A simple task manager focused on minimal management and some flow metrics.
Domain model:
> [!NOTE]
> Even though this diagram is intended to depict the domain model, attributes and packages from a class diagram are used, prioritizing information over simplicity. In a real project, the tradeoff may go in favor of simplicity, so that the diagram can be used to communicate with non-technical people, or because the application has a broader scope (and thus a larger chunk of the domain would be relevant to depict here).
Use cases:
Actor definitions:
- Stakeholder: anyone interested in the project.
- Product owner: the role responsible for having the final word on product-related decisions.
- User: anyone using the project manager application.
> [!NOTE]
> These are an up-front sketch of the ideal use cases. Not all of them are, nor will be, implemented.
There is a single frontend, for Windows desktop. It's called `Desktop` and is implemented in Windows Presentation Foundation (WPF).
Rather than using technological layering, I prefer a folder structure that reflects the domain. This is both to create a better mental model and to get more cohesion inside folders and less coupling between them.
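As a purely illustrative sketch (not the actual layout of this repository), a domain-oriented structure groups code by concept instead of by technical role (`Controllers/`, `Services/`, `Models/`):

```text
Backend/
  Tasks/          <- controller, repository and entities for tasks, together
  FlowMetrics/    <- cycle time, throughput, ...
Frontend.Desktop/
  Tasks/
  FlowMetrics/
```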
The backend is built with ASP.NET Core, using the Entity Framework ORM to handle persistence.
Conventions:
- REST. Since the application is currently a simple CRUD application, REST is a perfect match.
- To code directly in the ASP.NET controllers rather than decouple from them. Due to the simplicity of the use cases, it is much simpler to orchestrate the repository directly from them (a sketch follows this list). In a real-life scenario, I would have considered introducing a POCO controller class (in MVC terms) as an early abstraction. This would enable testing business logic without having to deal with any HTTP or ASP.NET concerns, and it would give a better separation of concerns to reduce cognitive complexity as the codebase inevitably grows.
- To have the Backend and Frontend in the same solution, referenced by project rather than by NuGet packages. Mainly, to keep things simple: all the projects are in the same place, changes are reflected instantly without needing to republish packages, and the extra CI/CD complexity for packaging and publishing NuGets is avoided. Again, in a real-life scenario things would be different: versioning the backend would be imperative, and to maintain the developer experience a simple mechanism in the `.csproj`s for replacing package references with project references in `Debug`/development builds would be put in place (also sketched after this list).
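As a hedged sketch of the controller convention above (type names, routes and the repository interface are made up for illustration, not taken from this codebase), orchestrating the repository straight from the controller looks roughly like this:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public record TaskItem(Guid Id, string Title);      // illustrative entity

public interface ITaskRepository                    // illustrative repository abstraction
{
    Task<TaskItem?> Find(Guid id);
    Task Add(TaskItem task);
}

[ApiController]
[Route("[controller]")]
public class TasksController : ControllerBase
{
    private readonly ITaskRepository repository;

    public TasksController(ITaskRepository repository) => this.repository = repository;

    [HttpGet("{id}")]
    public async Task<ActionResult<TaskItem>> Get(Guid id)
    {
        // No extra application layer: the controller orchestrates the repository directly.
        var task = await repository.Find(id);
        return task is null ? NotFound() : Ok(task);
    }

    [HttpPost]
    public async Task<ActionResult<TaskItem>> Create(TaskItem task)
    {
        await repository.Add(task);
        return CreatedAtAction(nameof(Get), new { id = task.Id }, task);
    }
}
```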
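And a hedged sketch of the `Debug`-time switch mentioned in the last bullet, with a hypothetical `Backend.Contracts` package standing in for whatever the backend would publish:

```xml
<!-- Inside a frontend .csproj: consume the backend as a project while developing,
     and as a versioned NuGet package otherwise (names and versions are illustrative). -->
<ItemGroup Condition="'$(Configuration)' == 'Debug'">
  <ProjectReference Include="..\Backend.Contracts\Backend.Contracts.csproj" />
</ItemGroup>
<ItemGroup Condition="'$(Configuration)' != 'Debug'">
  <PackageReference Include="Backend.Contracts" Version="1.2.3" />
</ItemGroup>
```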
The architecture is this simple because this is a learning project. In a real-world setting, back-of-the-envelope calculations for expected concurrent users, requests per second and other considerations would be taken into account to ensure scalability and to act on the consistency/availability tradeoff (given that partition tolerance is always desired) for the different use cases. This would probably require different architectures for different use cases (e.g. CQRS: separating the writes and the reads).
For instance, an API gateway would be in place between the frontend and the backend, to decouple the frontend from the backend's location. It would probably also be responsible for handling authentication and authorization. Likewise, it would be a great point at which to introduce a load balancer, in case we decided to go with horizontal scaling (though I prefer to squeeze vertical scaling as much as possible, due to cost and complexity concerns).
Again, simplicity has been the main driver for this learning project.
This project makes use of NuGet's Central Package Management (CPM) to remove duplicated versions of packages that are used across several projects (some ASP.NET and testing ones).
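For reference, CPM boils down to a `Directory.Packages.props` at the repository root; the package names and versions below are only illustrative:

```xml
<!-- Directory.Packages.props: every version is declared once, here. -->
<Project>
  <PropertyGroup>
    <ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
  </PropertyGroup>
  <ItemGroup>
    <PackageVersion Include="Microsoft.EntityFrameworkCore" Version="8.0.0" />
    <PackageVersion Include="xunit" Version="2.6.2" />
  </ItemGroup>
</Project>
```

Individual `.csproj`s then declare `<PackageReference Include="xunit" />` with no `Version` attribute.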
For trickier scenarios (or when working with versions older than .NET 6) where CPM is not a good fit, another solution (sketched below) would be:
- Central `.props` files in which the versions for packages are defined once by means of variables.
- Define a `Directory.Build.props` to import the previously created `.props`.
- On each `.csproj`, reference the given variables for the target packages.
> [!NOTE]
> If desired, this also allows having different versions of the same package in different projects, e.g. to gradually migrate to a new version of a third-party library that introduces breaking changes. Mind possible inconsistencies with transitive dependencies, though.
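A hedged sketch of that alternative (file and variable names are illustrative):

```xml
<!-- Packages.props: version variables defined once. -->
<Project>
  <PropertyGroup>
    <EntityFrameworkCoreVersion>8.0.0</EntityFrameworkCoreVersion>
    <XunitVersion>2.6.2</XunitVersion>
  </PropertyGroup>
</Project>

<!-- Directory.Build.props: imported automatically by every project below it. -->
<Project>
  <Import Project="Packages.props" />
</Project>

<!-- Any .csproj: reference the variable instead of hard-coding the version.
     A project that needs a different version can override the variable locally. -->
<ItemGroup>
  <PackageReference Include="xunit" Version="$(XunitVersion)" />
</ItemGroup>
```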
Conventions:
- Arrange, Act, Assert (AAA pattern); see the sketch after this list.
- Implicitly: by grouping statements and leaving empty lines between each part.
- Explicitly: when needed, by using comments to specify each part.
- Strong focus on integration tests. This decision was made because the project is a simple CRUD application, with almost no domain logic to test. The sparse control logic is tested alongside the main use cases. What remains is the `backend->persistence` and `frontend->backend` integration.
- Test API: any test code whose purpose is to make tests easier, more readable or more maintainable in general by encapsulating knowledge or boilerplate code.
  - It may well be a synonym of `test helpers`, `test utils`, etc.
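A hedged sketch of these conventions using xUnit and `WebApplicationFactory` (the `Program` entry point, route, and DTO are assumptions for illustration, not this repository's actual test code):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

// 'Program' is assumed to be the backend's entry point, made visible to the test project.
public class TasksEndpointShould : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly HttpClient client;   // part of the "Test API": hides factory plumbing

    public TasksEndpointShould(WebApplicationFactory<Program> factory)
        => client = factory.CreateClient();

    [Fact]
    public async Task Return_a_created_task()
    {
        // Arrange
        var newTask = new { Title = "Write the README" };

        // Act
        var creation = await client.PostAsJsonAsync("/tasks", newTask);
        var fetched = await client.GetFromJsonAsync<TaskDto>(creation.Headers.Location!);

        // Assert
        Assert.Equal("Write the README", fetched!.Title);
    }

    private sealed record TaskDto(Guid Id, string Title);
}
```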
Since this project is hosted on GitHub, the CI/CD pipeline requirements are quite simple, and I am familiar with the tool, I've decided to use GitHub Actions.
The idea is that after each `git push`, the CI pipelines are triggered to verify that all the projects in the solution build successfully and pass their tests. If, on the contrary, any pipeline fails, I stop and fix to return to a healthy state as soon as possible (see Continuous Delivery, Jez Humble and Dave Farley, 2010).
They live under `/.github/workflows/`.
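A hedged sketch of what such a workflow looks like; the actual files under `/.github/workflows/` may differ (runner, .NET version and step layout are assumptions):

```yaml
name: CI
on: push

jobs:
  build-and-test:
    runs-on: windows-latest        # WPF projects need a Windows runner
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: 8.0.x
      - run: dotnet build --configuration Release
      - run: dotnet test --configuration Release --no-build
```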
Many of the design choices (naming, not writing extensive documentation on APIs or public members of any sort) have been made with the idea that this is a "pet project", and that my default and preferred way of working is collaborative, with practices like pair and ensemble programming. In that scenario, unless stated otherwise to avoid interrupting flow, discussions occur on the go and decisions (as well as code reviews) are taken within seconds. Knowledge silos are also mostly mitigated this way.
- As I am the only person on the team, obviously neither pair nor ensemble programming has taken place.
- As with many things, I would discuss with the team if they prefer another way of working.
The UML usage style in this project is sketching, as described by Martin Fowler, e.g. in UML Distilled, Martin Fowler (2009). This basically means that UML is used as a quick sketch to clarify ideas in broad strokes, and it is neither intended to be kept up to date nor to describe the project fully.
> [!IMPORTANT]
> In a real-life scenario, I'd discuss with the team whether they are comfortable with this approach or whether another one is needed.
To keep track of pretty much any kind of decision the team takes, I like to use Architecture Decision Records (ADRs), as described by Michael Nygard: for example, why one technology is used in favor of another, why the API does things in a certain way, or even a convention the team has decided to follow.
Given the simplicity of this project and that I am the only contributor, I've decided not to use them.
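For reference, Nygard's format is just a short text file per decision, roughly like this (the title and wording are an invented example, drawn from a decision described elsewhere in this README):

```text
# 1. Use GitHub Actions for CI/CD

Status: Accepted
Context: The project is hosted on GitHub and the pipeline requirements are simple.
Decision: Use GitHub Actions workflows triggered on every push.
Consequences: Pipelines live next to the code; we depend on GitHub's availability.
```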
Precisely because I value documentation, I prefer to keep written documentation to the minimum that conveys the relevant bits and important information about the matter at hand.
This means that comments in code are used only when necessary: if the information can be conveyed through member naming (classes, methods, variables, etc.) or by a test (an executable specification), it is better documented that way. The same goes for any member documentation: `summary`, `remarks`, etc.
Basically, this gives a better signal-to-noise ratio: rather than having all members carry redundant summaries, only the essential ones are there. In other words, if you see a "green stain" in the code (or whatever colour you have configured in your IDE), you know it's something important and that you'd better read it. On the contrary, having redundant comments or summaries everywhere (the out-of-date docs issue aside) trains your brain to just ignore them, so it's easier for the relevant info to go unnoticed.
> [!NOTE]
> Just to be clear: this doesn't mean that I see written docs such as comments or summaries as wasteful. The issue is with summaries that say exactly what the method name already says (and if a summary adds new information to the name, I prefer to just improve the name with that info).
> [!IMPORTANT]
> Comments should talk about the why, not about what the code is doing; the latter is the responsibility of the code itself. If I find myself trying to explain what the code does, that's a symptom that it needs to be refactored to better convey its intent.
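A hedged, made-up illustration of both points (the names and the rule in the "why" comment are invented for the example, not taken from this project):

```csharp
using System;

public class TaskItem
{
    // Redundant summary: it only repeats what the member name already says.
    /// <summary>Gets the title of the task.</summary>
    public string Title { get; private set; }

    public TaskItem(string title) => Title = title;

    public void Rename(string newTitle)
    {
        // Why-comment worth keeping: flow-metric reports treat an empty title as a data
        // error, so renaming to an empty title is rejected here rather than at report time.
        if (string.IsNullOrWhiteSpace(newTitle))
            throw new ArgumentException("The title must not be empty.", nameof(newTitle));

        Title = newTitle;
    }
}
```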
Textual DSL for diagrams; refer to the official docs. It's my tool of choice given that it is free software (under the GPL-3.0) and supports changing the orientation of arrows (something Mermaid does not). It's also worth mentioning PlantText as an online PlantUML editor.
Alerts are used to highlight some information and to lighten the burden of having to read extensive paragraphs.