Fundamentals
- Styling: Modern practices for managing CSS.
- Routing: Plan and implement URL-driven navigation.
- Loading Data: Strategies for loading and rendering data.
- Data Mutations: Execute CRUD operations, safely.
- SEO: Ensure page content gets discovered, organically.
- Error Handling: Effective strategies, without the ambiguity.
- Input Validation: Real-time, schema-based user input validation.
- Accessibility: Making forms usable for everyone.
- File Uploads: Support more than just text in your forms.
- Complex Data Structures: Confidently handle the intricacies of nested data.
- Form Security: Prevent spam, XSS, and other malicious attacks.
- Database Schema: Craft a robust database architecture with future flexibility.
- Relationships: Know what and when for one-to-one, one-to-many, many-to-many.
- Migrations: Seamlessly transition your data.
- Seeding: Populate initial data for dev and test environments.
- Query Optimization: Fetch data as efficiently as possible.
- User Preferences: Store settings in the user’s browser.
- Session Management: Secure data storage done right the first time.
- Cookie-Based Identification: Identification that follows best practices.
- Password Storage: Safety beyond just hashing.
- Password Validation: Security without the inconvenience.
- Session Expiration: Auto-logout doesn't have to mean data loss.
- Permissions: Role-based access control.
- Verification: Verify user emails, support forgot password, 2FA; the works.
- Third Party Auth: OAuth, multi-connection, SSO-ready.
- Test Automation: Ditch manual test suites for scalable automatic ones.
- HTTP Mocks: Simulate server interactions for E2E tests.
- Authenticated Tests: Testing with user roles in mind.
- Unit Tests: Properly scoped and thoroughly executed.
- React Component Testing: Get into the UI specifics.
- Integration Testing: Strike a productive balance on test scope.
The Twelve-Factor App methodology covers development, operation, and scaling. It is a triangulation on ideal practices for app development, paying particular attention to the dynamics of an app's organic growth over time, the dynamics of collaboration between developers working on the app's codebase, and avoiding the cost of software erosion.
- I. Codebase: One codebase tracked in revision control, many deploys
- II. Dependencies: Explicitly declare and isolate dependencies
- III. Config: Store config in the environment
- IV. Backing services: Treat backing services as attached resources
- V. Build, release, run: Strictly separate build and run stages
- VI. Processes: Execute the app as one or more stateless processes
- VII. Port binding: Export services via port binding
- VIII. Concurrency: Scale out via the process model
- IX. Disposability: Maximize robustness with fast startup and graceful shutdown
- X. Dev/prod parity: Keep development, staging, and production as similar as possible
- XI. Logs: Treat logs as event streams
- XII. Admin processes: Run admin/management tasks as one-off processes
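Factor III (config in the environment) is the easiest to illustrate concretely. A minimal Python sketch; the variable names `DATABASE_URL`, `PORT`, and `DEBUG` and their defaults are illustrative assumptions, not part of the methodology itself:

```python
import os

# Factor III: read configuration from environment variables, not from
# code or checked-in config files, so the same build runs in any deploy.
def load_config() -> dict:
    return {
        "database_url": os.environ.get("DATABASE_URL", "sqlite:///dev.db"),
        "port": int(os.environ.get("PORT", "8080")),
        "debug": os.environ.get("DEBUG", "false").lower() == "true",
    }

config = load_config()
print(config["database_url"])  # default unless DATABASE_URL is set
```

Because the config lives outside the codebase, switching a deploy from the dev database to production is an environment change, not a code change.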
The response codes for HTTP are divided into five categories:
- Informational (100-199)
- Success (200-299)
- Redirection (300-399)
- Client Error (400-499)
- Server Error (500-599)
These codes are defined in RFC 9110. To save you from reading the entire document (which is about 200 pages), here is a summary of the most common ones.
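Because the class of a status code is simply its first digit, mapping a code to its category is a one-line lookup. A small Python sketch:

```python
def status_category(code: int) -> str:
    """Map an HTTP status code to its RFC 9110 class."""
    categories = {
        1: "Informational",
        2: "Success",
        3: "Redirection",
        4: "Client Error",
        5: "Server Error",
    }
    if code // 100 not in categories:
        raise ValueError(f"{code} is not a valid HTTP status code")
    return categories[code // 100]

print(status_category(404))  # Client Error
```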

Top 5 common ways to improve API performance.
Result Pagination:
Large result sets are broken into pages and returned to the client incrementally rather than all at once, reducing response size and latency and enhancing service responsiveness and user experience.
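As a sketch, here is offset-based pagination with SQLite standing in for a production database; the `items` table and the page size of 10 are illustrative assumptions:

```python
import sqlite3

# In-memory database with 100 hypothetical rows, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [(f"item-{i}",) for i in range(100)])

def fetch_page(conn, page: int, page_size: int = 10) -> list:
    # Return one page of results instead of the whole result set.
    cur = conn.execute(
        "SELECT id, name FROM items ORDER BY id LIMIT ? OFFSET ?",
        (page_size, page * page_size),
    )
    return cur.fetchall()

first_page = fetch_page(conn, page=0)
print(len(first_page))  # 10
```

For deep pages, keyset (cursor-based) pagination, filtering on `WHERE id > last_seen_id` instead of using `OFFSET`, avoids the cost of the database skipping over earlier rows.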
Asynchronous Logging:
This approach involves sending logs to a lock-free buffer and returning immediately, rather than dealing with the disk on every call. Logs are periodically flushed to the disk, significantly reducing I/O overhead.
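Python's standard library supports this pattern via `QueueHandler` and `QueueListener` (the buffer here is a thread-safe queue rather than a strictly lock-free one, but the effect is the same: the caller never waits on I/O):

```python
import logging
import logging.handlers
import queue

# Records are appended to an in-memory queue and the logging call returns
# immediately; a background listener thread drains the queue and writes to
# the slow destination, keeping I/O off the request path.
log_queue: queue.Queue = queue.Queue(-1)               # unbounded buffer
queue_handler = logging.handlers.QueueHandler(log_queue)
target_handler = logging.StreamHandler()               # stand-in for a disk/file handler
listener = logging.handlers.QueueListener(log_queue, target_handler)

logger = logging.getLogger("async-demo")
logger.setLevel(logging.INFO)
logger.addHandler(queue_handler)

listener.start()
logger.info("request handled")   # returns as soon as the record is enqueued
listener.stop()                  # drains and flushes remaining records on shutdown
```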
Data Caching:
Frequently accessed data can be stored in a cache to speed up retrieval. Clients check the cache before querying the database, with data storage solutions like Redis offering faster access due to in-memory storage.
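The check-the-cache-then-the-database flow described above is the cache-aside pattern. A minimal sketch with an in-process dict standing in for Redis; `query_database` and the 60-second TTL are illustrative assumptions:

```python
import time

cache: dict = {}
TTL_SECONDS = 60

def query_database(user_id: int) -> dict:
    # Stand-in for a slow database query.
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id: int) -> dict:
    entry = cache.get(user_id)
    if entry is not None and time.time() - entry["at"] < TTL_SECONDS:
        return entry["value"]                 # cache hit: skip the database
    value = query_database(user_id)           # cache miss: go to the database
    cache[user_id] = {"value": value, "at": time.time()}
    return value

print(get_user(42)["name"])  # user-42 (first call populates the cache)
```

The TTL bounds how stale cached data can get; picking it is the expiration-time trade-off discussed below under measuring cache effectiveness.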
Payload Compression:
To reduce data transmission time, requests and responses can be compressed (e.g., using gzip), making the upload and download processes quicker.
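A minimal illustration using Python's built-in `gzip` module; the JSON payload is synthetic, and real compression ratios depend on how repetitive the data is:

```python
import gzip
import json

# A repetitive JSON payload compresses very well.
payload = json.dumps([{"id": i, "status": "ok"} for i in range(1000)]).encode()
compressed = gzip.compress(payload)

print(len(payload), len(compressed))           # compressed is far smaller
assert gzip.decompress(compressed) == payload  # lossless round-trip
```

In HTTP this is negotiated with the `Accept-Encoding` request header and signalled with `Content-Encoding: gzip` on the response.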
Connection Pooling:
This technique involves using a pool of open connections to manage database interaction, which reduces the overhead associated with opening and closing connections each time data needs to be loaded. The pool manages the lifecycle of connections for efficient resource use.
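A minimal sketch of such a pool, using `sqlite3` as a stand-in for a networked database driver; a real application would normally rely on its driver's or framework's built-in pooling rather than hand-rolling one:

```python
import queue
import sqlite3

class ConnectionPool:
    """Connections are created once up front and reused, instead of being
    opened and closed on every request."""

    def __init__(self, size: int, db_path: str = ":memory:"):
        self._pool: queue.Queue = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(sqlite3.connect(db_path, check_same_thread=False))

    def acquire(self) -> sqlite3.Connection:
        return self._pool.get()        # blocks if every connection is in use

    def release(self, conn: sqlite3.Connection) -> None:
        self._pool.put(conn)           # return the connection for reuse

pool = ConnectionPool(size=2)
conn = pool.acquire()
print(conn.execute("SELECT 1").fetchone()[0])  # 1
pool.release(conn)
```

The blocking `acquire` also acts as natural back-pressure: when the pool is exhausted, callers wait instead of overwhelming the database with new connections.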
Scaling a system is an iterative process. Iterating on what we have learned in this chapter can get us far, but scaling beyond millions of users requires more fine-tuning and new strategies; for example, you might need to optimize your system and decouple it into even smaller services. The techniques covered in this chapter should provide a good foundation for tackling new challenges. To conclude this chapter, here is a summary of how we scale a system to support millions of users:
- Keep web tier stateless
- Build redundancy at every tier
- Cache data as much as you can
- Support multiple data centers
- Host static assets in CDN
- Scale your data tier by sharding
- Split tiers into individual services
- Monitor your system and use automation tools
The cache tier is a temporary data store layer, much faster than the database. The benefits of having a separate cache tier include better system performance, reduced database workload, and the ability to scale the cache tier independently.
- Cache Types (In-memory caching, Distributed caching, Client-side caching)
- Cache Strategies (Cache-Aside, Write-Through, Write-Behind, Read-Through)
- Measuring Cache Effectiveness (Calculate the cache hit rate, Analyse cache eviction rate, Monitor data consistency, Determine the right cache expiration time)
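Of the measurements above, the cache hit rate is the simplest to compute: the fraction of lookups served from the cache rather than the backing store.

```python
def hit_rate(hits: int, misses: int) -> float:
    """Cache hit rate = hits / (hits + misses) over a sampling window."""
    total = hits + misses
    return hits / total if total else 0.0

# e.g. 950 hits and 50 misses observed in one window:
print(f"{hit_rate(950, 50):.0%}")  # 95%
```

A persistently low hit rate suggests the cache is too small, the TTL is too short, or the access pattern has little reuse to exploit.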
Exception handling is a fundamental aspect of programming: managing the exceptions that may occur during the execution of a program. Left unhandled, these exceptions can cause the program to stop functioning. By handling exceptions, the program can continue to operate or shut down gracefully, preventing abrupt termination.
Suppose you have an exception caused by something like invalid user input, hardware malfunction, network failure, or programming error. How would you handle it?
Exceptions are handled by creating an object known as an "exception object." An exception object contains information about the type of error that occurred and the location of the error in the code. Exceptions can also be explicitly "thrown" by developers, using the 'throw' keyword to indicate a specific error condition in their code.
C# has a built-in mechanism for handling exceptions that occur during program execution. This mechanism allows developers to catch and manage exceptions using a try-catch block. The try block contains the code that may cause an exception, while the catch block specifies how to handle an exception.
When an exception occurs, you can manage it by logging an error message, displaying a user-friendly message, or taking corrective action. If the exception isn't caught, the program may terminate. In this module, you'll implement error handling while building the Langton’s Ant code.
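For illustration, here is the same pattern in Python, whose `try`/`except` mirrors C#'s try-catch; the `parse_age` function and its error policy are invented for this sketch and are not part of the Langton's Ant exercise:

```python
def parse_age(raw: str) -> int:
    # The "try" block holds code that may raise an exception; the
    # "except" clause handles one exception type, like a C# catch block.
    try:
        age = int(raw)                       # may raise ValueError
        if age < 0:
            # Explicitly "throwing" to signal a specific error condition.
            raise ValueError("age cannot be negative")
        return age
    except ValueError as exc:
        # The exception object carries details about what went wrong.
        print(f"Invalid input: {exc}")       # log a user-friendly message
        return -1                            # corrective action: sentinel value

print(parse_age("30"))   # 30
print(parse_age("abc"))  # prints an error message, then -1
```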
Benefits of Canary Deployments Why go to the trouble of implementing a canary strategy? The benefits are many:
A/B testing: we can use the canary to do A/B testing. In other words, we present two alternatives to the users and see which gets better reception.
Capacity test: it’s impossible to test the capacity of a large production environment. With canary deployments, capacity tests are built-in. Any performance issues we have in our system will begin to crop up as we slowly migrate the users to the canary.
Feedback: we get invaluable input from real users.
No cold-starts: new systems can take a while to start up. Canary deployments ramp traffic up gradually, warming the new version and avoiding cold-start slowness.
No downtime: like blue-green deployments, a canary deployment doesn’t generate downtime.
Easy rollback: if something goes wrong, we can easily roll back to the previous version.
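The mechanism behind several of these benefits is deterministic traffic splitting: each user is consistently routed to either the canary or the stable version. A minimal sketch using hash-based bucketing; the 5% rollout figure and string user IDs are illustrative assumptions:

```python
import hashlib

def routes_to_canary(user_id: str, canary_percent: int = 5) -> bool:
    # Hash the user ID to a stable bucket in [0, 100); the same user
    # always lands in the same bucket, so their experience is consistent.
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < canary_percent

share = sum(routes_to_canary(f"user-{i}") for i in range(10_000)) / 10_000
print(share)  # lands near the configured 0.05 for a large sample
```

Rolling forward means raising `canary_percent`; rolling back means setting it to zero, which instantly routes everyone to the stable version.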
A forward proxy, also referred to as a "proxy server" or simply a "proxy," is a server that sits in front of one or more client machines and serves as a conduit between the clients and the internet. The forward proxy receives each request from a client machine before it reaches the internet resource, sends the request to the internet on the client's behalf, and returns the response to the client.
A forward proxy is mostly used for:
- Client Anonymity
- Caching
- Traffic Control
- Logging
- Request/Response Transformation
- Encryption
A reverse proxy is a server that sits in front of one or more web servers and serves as a go-between for the web servers and the internet. The reverse proxy receives each client request before it reaches the web servers, forwards it to one of them, receives that server's response, and sends the response back to the client.
A reverse proxy is mostly used for:
- Server Anonymity
- Caching
- Load Balancing
- DDoS Protection
- Canary Experimentation
- URL/Content Rewriting

The rule of three states that two instances of similar code do not require refactoring, but when similar code appears a third time, it should be extracted into a new procedure. The rule was popularised by Martin Fowler in Refactoring and attributed to Don Roberts.
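A small illustration of the rule in Python; the price-formatting helper is invented for this example:

```python
# Once the same formatting logic appears a third time, the rule of three
# says to extract it into a single procedure.
def format_price(amount: float) -> str:
    return f"${amount:,.2f}"

# Before extraction, the f-string literal would be copy-pasted at each of
# these three call sites; after extraction, all three share one definition,
# so a change (say, switching currency) happens in exactly one place.
subtotal_line = f"Subtotal: {format_price(1234.5)}"
tax_line = f"Tax: {format_price(98.76)}"
total_line = f"Total: {format_price(1333.26)}"

print(subtotal_line)  # Subtotal: $1,234.50
```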