
Commit e9f254d

Bump version v1.1.4: README improvements
1 parent 1c463e4 commit e9f254d

File tree

3 files changed: 41 additions & 43 deletions


README.md

Lines changed: 37 additions & 40 deletions
@@ -1,8 +1,10 @@
 <h2 align="middle">zero-overhead-promise-lock</h2>

 The `ZeroOverheadLock` class implements a modern Promise-lock for Node.js projects, enabling users to ensure the **mutually exclusive execution** of specified asynchronous tasks. Key features include:
-* __Graceful Teardown__: The ability to await the completion of all currently executing or pending tasks, making it ideal for production applications that require smooth and controlled shutdowns.
-* __"Check-and-Abort" Friendly__: The `isAvailable` getter is designed for "check-and-abort" scenarios, enabling operations to be skipped or aborted if the lock is currently held by another task.
+* __Graceful Teardown :hourglass_flowing_sand:__: The ability to await the completion of all currently executing or pending tasks, making it ideal for production applications that require smooth and controlled shutdowns.
+* __"Check-and-Abort" Friendly :see_no_evil:__: The `isAvailable` getter is designed for "check-and-abort" scenarios, enabling operations to be skipped or aborted if the lock is currently held by another task.
+
+If your use case involves keyed tasks - where you need to ensure the mutually exclusive execution of tasks **associated with the same key** - consider using the keyed variant of this package: [zero-overhead-keyed-promise-lock](https://www.npmjs.com/package/zero-overhead-keyed-promise-lock). Effectively, a keyed lock functions as a temporary FIFO task queue per key.

 ## Table of Contents

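To ground the feature list above, here is a minimal usage sketch in TypeScript. Only the class name and the behavior described in this README diff are taken from the source; the `executeExclusive` method name and the stubbed persistence call are assumptions made purely for illustration.

```ts
import { ZeroOverheadLock } from 'zero-overhead-promise-lock';

// One lock instance guards one piece of shared state whose updates must not interleave.
const balanceLock = new ZeroOverheadLock();

let balance = 0; // shared state that is read and written across an `await` boundary

// Stub persistence call, assumed here only to make the example self-contained.
async function persistBalance(value: number): Promise<void> {
  await Promise.resolve(value); // e.g., a database write in a real application
}

async function deposit(amount: number): Promise<void> {
  // Assumed API: a method (called `executeExclusive` in this sketch) that runs the
  // given async task only once the lock is free, serving waiters in FIFO order.
  await balanceLock.executeExclusive(async () => {
    const snapshot = balance;                // read
    await persistBalance(snapshot + amount); // await: without the lock, another task could interleave here
    balance = snapshot + amount;             // write
  });
}
```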

@@ -12,21 +14,20 @@ The `ZeroOverheadLock` class implements a modern Promise-lock for Node.js projec
 * [Modern API Design](#modern-api-design)
 * [API](#api)
 * [Getter Methods](#getter-methods)
-* [Opt for Atomic Operations When Working Against External Resources](#opt-atomic-operations)
-* [Using Locks as a Semaphore with a Concurrency of 1](#lock-as-semaphore)
 * [Use Case Example: Aggregating Intrusion Detection Event Logs](#first-use-case-example)
 * [Check-and-Abort Example: Non-Overlapping Recurring Task](#second-use-case-example)
+* [Opt for Atomic Operations When Working Against External Resources](#opt-atomic-operations)
 * [License](#license)

 ## Key Features :sparkles:<a id="key-features"></a>

 - __Mutual Exclusiveness :lock:__: Ensures the mutually exclusive execution of asynchronous tasks, either to prevent potential race conditions caused by tasks spanning across multiple event-loop iterations, or for performance optimization.
 - __Graceful Teardown :hourglass_flowing_sand:__: Await the completion of all currently pending and executing tasks using the `waitForAllExistingTasksToComplete` method. Example use cases include application shutdowns (e.g., `onModuleDestroy` in Nest.js applications) or maintaining a clear state between unit-tests.
-- __Suitable for "check and abort" scenarios__: The `isAvailable` getter indicator enables to skip or abort operations if the lock is currently held by another task.
+- __Suitable for "Check and Abort" scenarios :see_no_evil:__: The `isAvailable` getter indicator enables to skip or abort operations if the lock is currently held by another task.
 - __Backpressure Metric :bar_chart:__: The `pendingTasksCount` getter provides a real-time metric indicating the current backpressure from tasks waiting for the lock to become available. Users can leverage this data to make informed decisions, such as throttling, load balancing, or managing system load. Additionally, this metric can aid in **internal resource management** within a containerized environment. If multiple locks exist - each tied to a unique key - a backpressure value of 0 may indicate that a lock is no longer needed and can be removed temporarily to optimize resource usage.
 - __High Efficiency :gear:__: Leverages the Node.js microtasks queue to serve tasks in FIFO order, eliminating the need for manually managing an explicit queue of pending tasks.
 - __Comprehensive documentation :books:__: The class is thoroughly documented, enabling IDEs to provide helpful tooltips that enhance the coding experience.
-- __Tests :test_tube:__: Fully covered by extensive unit tests.
+- __Thoroughly Tested :test_tube:__: Covered by extensive unit tests to ensure reliability.
 - __No external runtime dependencies__: Only development dependencies are used.
 - __ES2020 Compatibility__: The `tsconfig` target is set to ES2020.
 - TypeScript support.
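The "Graceful Teardown" and "Check and Abort" bullets above lean on the documented `isAvailable` getter and `waitForAllExistingTasksToComplete` method. A short sketch of both follows; the execution method name (`executeExclusive`) is again an assumption rather than something stated in this diff:

```ts
import { ZeroOverheadLock } from 'zero-overhead-promise-lock';

const refreshLock = new ZeroOverheadLock();

async function refreshCache(): Promise<void> {
  // placeholder for the actual refresh work
}

// Check-and-abort: skip this run entirely if a previous refresh still holds the lock.
async function refreshIfIdle(): Promise<void> {
  if (!refreshLock.isAvailable) {
    return; // documented getter: the lock is held, so abort instead of queueing another refresh
  }
  // `executeExclusive` is an assumed method name, used here only for illustration.
  await refreshLock.executeExclusive(refreshCache);
}

// Graceful teardown, e.g. from a Nest.js `onModuleDestroy` hook or between unit tests.
async function teardown(): Promise<void> {
  await refreshLock.waitForAllExistingTasksToComplete(); // documented method
}
```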
@@ -40,7 +41,7 @@ In contrast, asynchronous tasks that include at least one `await`, necessarily s

 ## Other Use Cases: Beyond Race Condition Prevention :arrow_right:<a id="other-use-cases"></a>

-Additionally, locks are sometimes employed **purely for performance optimization**, such as throttling, rather than for preventing race conditions. In such cases, the lock effectively functions as a semaphore with a concurrency of 1.
+Additionally, locks are sometimes employed **purely for performance optimization**, such as throttling, rather than for preventing race conditions. In such cases, the lock effectively functions as a semaphore with a concurrency of 1. For example, limiting concurrent access to a shared resource may be necessary to reduce contention or meet operational constraints.

 If your use case requires a concurrency greater than 1, consider using the semaphore variant of this package: [zero-backpressure-semaphore-typescript](https://www.npmjs.com/package/zero-backpressure-semaphore-typescript). While semaphores can emulate locks by setting their concurrency to 1, locks provide a more efficient implementation with reduced overhead.
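A small sketch of the semaphore-with-concurrency-1 usage described above, i.e. a lock used purely for throttling rather than for protecting shared in-memory state. The endpoint and the `executeExclusive` method name are assumptions for illustration:

```ts
import { ZeroOverheadLock } from 'zero-overhead-promise-lock';

// Hypothetical constraint: an upstream API that tolerates only one in-flight request.
const upstreamLock = new ZeroOverheadLock();

async function fetchQuote(symbol: string): Promise<unknown> {
  // No shared in-memory state is protected here; the lock only throttles concurrency to 1.
  // `executeExclusive` is an assumed method name, assumed to resolve with the task's result.
  return upstreamLock.executeExclusive(async () => {
    const response = await fetch(`https://api.example.com/quotes/${symbol}`); // hypothetical endpoint
    return response.json();
  });
}
```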

@@ -66,39 +67,6 @@ The `ZeroOverheadLock` class provides the following getter methods to reflect th
 * __isAvailable__: Indicates whether the lock is currently available to immediately begin executing a new task. This property is particularly useful in "check and abort" scenarios, where an operation should be **skipped or aborted** if the lock is currently held by another task.
 * __pendingTasksCount__: Returns the number of tasks that are currently pending execution due to the lock being held. These tasks are waiting for the lock to become available before they can proceed.

-## Opt for Atomic Operations When Working Against External Resources :key:<a id="opt-atomic-operations"></a>
-
-A common example of using locks is the READ-AND-UPDATE scenario, where concurrent reads of the same value can lead to erroneous updates. While such examples are intuitive, they are often less relevant in modern applications due to advancements in databases and external storage solutions. Modern databases, as well as caches like Redis, provide native support for atomic operations. **Always prioritize leveraging atomicity in external resources** before resorting to in-memory locks.
-
-### Example: Incrementing a Counter in MongoDB
-Consider the following function that increments the number of product views for the last hour in a MongoDB collection. Using two separate operations, this implementation introduces a race condition:
-```ts
-async function updateViews(products: Collection<IProductSchema>, productID: string): Promise<void> {
-  const product = await products.findOne({ _id: productID }); // Step 1: Read
-  if (!product) return;
-
-  const currentViews = product?.hourlyViews ?? 0;
-  await products.updateOne(
-    { _id: productID },
-    { $set: { hourlyViews: currentViews + 1 } } // Step 2: Update
-  );
-}
-```
-The race condition occurs when two or more processes or concurrent tasks (Promises within the same process) execute this function simultaneously, potentially leading to incorrect counter values. This can be mitigated by using MongoDB's atomic `$inc` operator, as shown below:
-```ts
-async function updateViews(products: Collection<IProductSchema>, productID: string): Promise<void> {
-  await products.updateOne(
-    { _id: productID },
-    { $inc: { hourlyViews: 1 } } // Atomic increment
-  );
-}
-```
-By combining the read and update into a single atomic operation, the code avoids the need for locks and improves both reliability and performance.
-
-## Using Locks as a Semaphore with a Concurrency of :one:<a id="lock-as-semaphore"></a>
-
-In scenarios where performance considerations require controlling access, in-memory locks can be useful. For example, limiting concurrent access to a shared resource may be necessary to reduce contention or meet operational constraints. In such cases, locks are employed as a semaphore with a concurrency limit of 1, ensuring that no more than one operation is executed at a time.
-
 ## Use Case Example: Aggregating Intrusion Detection Event Logs :shield:<a id="first-use-case-example"></a>

 In an Intrusion Detection System (IDS), it is common to aggregate non-critical alerts (e.g., low-severity anomalies) in memory and flush them to a database in bulk. This approach minimizes the load caused by frequent writes for non-essential data. The bulk writes occur either periodically or whenever the accumulated data reaches a defined threshold.
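Before the full IDS example, a brief sketch of how the `pendingTasksCount` getter described above can feed a backpressure or health metric; the report shape and threshold are illustrative assumptions:

```ts
import { ZeroOverheadLock } from 'zero-overhead-promise-lock';

// Lock guarding the in-memory alert buffer and its bulk flushes to the database.
const flushLock = new ZeroOverheadLock();

// Expose the documented `pendingTasksCount` getter as a simple load indicator,
// e.g. for a health endpoint or a periodic metrics push.
function getBackpressureReport(): { pendingFlushes: number; overloaded: boolean } {
  const pendingFlushes = flushLock.pendingTasksCount;
  return {
    pendingFlushes,
    overloaded: pendingFlushes > 10, // arbitrary threshold chosen for this sketch
  };
}
```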
@@ -275,6 +243,35 @@ export class NonOverlappingRecurringTask {
 }
 ```

+## Opt for Atomic Operations When Working Against External Resources :key:<a id="opt-atomic-operations"></a>
+
+A common example of using locks is the READ-AND-UPDATE scenario, where concurrent reads of the same value can lead to erroneous updates. While such examples are intuitive, they are often less relevant in modern applications due to advancements in databases and external storage solutions. Modern databases, as well as caches like Redis, provide native support for atomic operations. **Always prioritize leveraging atomicity in external resources** before resorting to in-memory locks.
+
+### Example: Incrementing a Counter in MongoDB
+Consider the following function that increments the number of product views for the last hour in a MongoDB collection. Using two separate operations, this implementation introduces a race condition:
+```ts
+async function updateViews(products: Collection<IProductSchema>, productID: string): Promise<void> {
+  const product = await products.findOne({ _id: productID }); // Step 1: Read
+  if (!product) return;
+
+  const currentViews = product?.hourlyViews ?? 0;
+  await products.updateOne(
+    { _id: productID },
+    { $set: { hourlyViews: currentViews + 1 } } // Step 2: Update
+  );
+}
+```
+The race condition occurs when two or more processes or concurrent tasks (Promises within the same process) execute this function simultaneously, potentially leading to incorrect counter values. This can be mitigated by using MongoDB's atomic `$inc` operator, as shown below:
+```ts
+async function updateViews(products: Collection<IProductSchema>, productID: string): Promise<void> {
+  await products.updateOne(
+    { _id: productID },
+    { $inc: { hourlyViews: 1 } } // Atomic increment
+  );
+}
+```
+By combining the read and update into a single atomic operation, the code avoids the need for locks and improves both reliability and performance.
+
 ## License :scroll:<a id="license"></a>

 [Apache 2.0](LICENSE)
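As a footnote to the relocated atomic-operations section above: when the updated counter value is also needed by the caller, the MongoDB driver's `findOneAndUpdate` can combine the `$inc` and the read in one atomic round trip. This is a sketch, not part of the commit, and the exact return shape varies between driver versions:

```ts
import { Collection } from 'mongodb';

interface IProductSchema {
  _id: string;
  hourlyViews?: number;
}

// Atomically increment the counter and read back the updated document.
async function incrementAndGetViews(
  products: Collection<IProductSchema>,
  productID: string
): Promise<number> {
  const updated = await products.findOneAndUpdate(
    { _id: productID },
    { $inc: { hourlyViews: 1 } },
    { returnDocument: 'after' } // return the post-update document
  );
  // In recent driver versions the updated document is returned directly (or null if no match).
  return updated?.hourlyViews ?? 0;
}
```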

package-lock.json

Lines changed: 2 additions & 2 deletions
Some generated files are not rendered by default.

package.json

Lines changed: 2 additions & 1 deletion
@@ -1,6 +1,6 @@
 {
   "name": "zero-overhead-promise-lock",
-  "version": "1.1.3",
+  "version": "1.1.4",
   "description": "An efficient Promise lock for Node.js projects, ensuring mutually exclusive execution of asynchronous tasks. Key features include a backpressure indicator and the ability to gracefully await the completion of all currently executing or pending tasks, making it ideal for robust production applications requiring smooth teardown.",
   "repository": {
     "type": "git",
@@ -19,6 +19,7 @@
   },
   "keywords": [
     "lock",
+    "async-lock",
     "mutex",
     "promise-lock",
     "event-loop-lock",

0 commit comments
