- Java
- Internet
- Spring Boot
- Java.pdf / Java / java.base / java.util / concurrent / locks / AbstractQueuedSynchronizer
- State Management
- Node-based Queuing
- Exclusive and Shared Modes
- Condition Support
AbstractQueuedSynchronizer (AQS) is a framework for building custom synchronizers in Java on top of a node-based wait queue. To define lock behavior, override tryAcquire/tryRelease for exclusive mode, or tryAcquireShared/tryReleaseShared for shared mode; these methods decide when the synchronizer can be acquired or released. The int state field tracks the synchronizer's status (read and updated via getState/setState/compareAndSetState), while AbstractQueuedSynchronizer itself handles queuing, blocking, and waking up waiting threads.
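To make the exclusive-mode hooks concrete, here is a minimal sketch of a lock built on AbstractQueuedSynchronizer; the class name SimpleMutex and the non-reentrant behavior are choices made for this example, not part of the notes.

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// Minimal sketch: a non-reentrant mutex built on AQS.
// state == 0 means unlocked, state == 1 means locked.
public class SimpleMutex {

    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            // Atomically flip state from 0 to 1; if this fails, AQS enqueues the thread.
            return compareAndSetState(0, 1);
        }

        @Override
        protected boolean tryRelease(int arg) {
            // Reset state to 0; AQS then wakes the next queued thread, if any.
            setState(0);
            return true;
        }

        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }
    public void unlock() { sync.release(1); }
}
```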
- Internet.pdf / Network Model / Application Layer / HTTP (Hypertext Transfer Protocol) / Cross-domain communication
- Same-Origin Policy (SOP)
- Cross-Origin Resource Sharing (CORS)
- Preflight Request
- Config.pdf / Work Environment / nginx / Core / Proxy
- Forward Proxy (Normal Proxy)
- Reverse Proxy
- Return the necessary CORS headers for all requests directly from the server (see the sketch after this list).
- Set up a reverse proxy server (e.g., Nginx) to intercept both preflight and regular requests, adding the necessary CORS headers.
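A minimal sketch of the first option, returning CORS headers directly from the application, here using Spring MVC's CorsRegistry; the mapping path and allowed origin are placeholder values for illustration.

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.CorsRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

// Sketch: serve CORS headers (including preflight responses) from the application itself.
@Configuration
public class CorsConfig implements WebMvcConfigurer {

    @Override
    public void addCorsMappings(CorsRegistry registry) {
        registry.addMapping("/api/**")                       // paths open to cross-origin calls (placeholder)
                .allowedOrigins("https://frontend.example")  // trusted origin (placeholder)
                .allowedMethods("GET", "POST", "PUT", "DELETE", "OPTIONS")
                .allowedHeaders("*")
                .allowCredentials(true)
                .maxAge(3600);                               // let browsers cache preflight results for an hour
    }
}
```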
- DatabaseAndMiddleware.pdf / Kafka / Configuration / server.properties / Leader Election
- Assigned Replicas (AR)
- In-Sync Replicas (ISR)
- Out-of-Sync Replicas (OSR)
-
Disable unclean leader election so that only in-sync replicas can become the leader; OSR nodes may lack the latest messages, and electing one as the leader risks data loss.
server.properties:
unclean.leader.election.enable = false
-
Keep message flushing to disk close to synchronous by tightening the flush thresholds, so little unflushed data is lost if a broker crashes.
server.properties:
# When the number of messages in a log segment reaches 10000, Kafka forces a flush to disk to persist the data.
log.flush.interval.messages = 10000
# Forces a flush operation after the specified time interval in milliseconds.
log.flush.interval.ms = 1000
# Sets the interval in milliseconds at which Kafka checks whether a flush is needed.
log.flush.scheduler.interval.ms = 3000
-
In the Kafka producer, setting acks=all ensures that the producer waits for acknowledgments from all in-sync replicas (ISR) of the partition before considering a message successfully sent. Additionally, register a callback for each sent message to handle success, failure, and retries: the callback delivers the broker's feedback, so you can log successes or respond to delivery failures (e.g., network issues or broker unavailability) with retries or other compensating actions.
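A minimal sketch of this setup with the Java client; the broker address, topic name, and retry count are placeholders, not values from the notes.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Sketch: producer with acks=all and a per-record callback.
public class ReliableProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ACKS_CONFIG, "all");   // wait for all in-sync replicas
        props.put(ProducerConfig.RETRIES_CONFIG, 3);    // let the client retry transient failures

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("demo-topic", "key", "value"); // placeholder topic

            // The callback reports the broker's acknowledgment or the failure for this record.
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    // Log and decide on compensation: retry, dead-letter, alert, etc.
                    System.err.println("Send failed: " + exception.getMessage());
                } else {
                    System.out.printf("Sent to %s-%d@%d%n",
                            metadata.topic(), metadata.partition(), metadata.offset());
                }
            });
        }
    }
}
```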
-
In the Kafka consumer, setting enable.auto.commit=false and committing offsets manually gives you control over when a message counts as processed. With automatic commits, offsets can be committed for records that were fetched but not yet fully processed, so a crash mid-processing means those records are skipped on restart and effectively lost.
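A minimal sketch of manual commits with the Java consumer; the broker address, group id, and topic are placeholders for illustration.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Sketch: consumer with auto-commit disabled; offsets are committed only after processing.
public class ManualCommitConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");              // placeholder group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");         // disable auto commit

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic"));      // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Process the record first...
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
                // ...then commit, so an unprocessed batch is re-delivered after a crash.
                if (!records.isEmpty()) {
                    consumer.commitSync();
                }
            }
        }
    }
}
```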