
High-performance-Cache

PhonePe Interview | Machine Coding round

PhonePe Assessor, PLEASE NOTE: This was the last commit at the time of submission. I have chosen to improve it further purely out of my own interest in learning Generics, so feel free to assess either this version or the above-linked commit.


Documenting all decisions/thoughts here.

First, since the description of put(key, value) says '...If the cache is at capacity, evict the least recently used item...', this is clearly supposed to be an LRU cache.

So the question is which data structure to use. An array/ArrayList is the first one that comes to mind, but every get/put would take O(n) time. Can we do better?

A hashmap is the next obvious answer, reducing get/put to O(1), but tracking the least recently used entry would still be O(n). We need something that tracks BOTH the LRU and MRU ends of the recency order; a (doubly linked) list used in combination with the map achieves this.
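The map-plus-list idea can be sketched in a few lines using Java's LinkedHashMap in access order, which internally pairs a hash table with a doubly linked list. This is an illustration of the technique, not the repo's implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal single-threaded LRU sketch: a LinkedHashMap in access order
// evicts the least recently used entry once capacity is exceeded.
class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    SimpleLruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true: get() moves an entry to the tail (MRU)
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // drop the head (LRU) when over capacity
    }
}
```

With a capacity of 2: after put(a), put(b), get(a), put(c), the entry b is the least recently used and gets evicted.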


I'm using Java, which provides some beautiful built-in implementations of an ordered map and a deque; however, they are not thread-safe (references: LinkedHashMap, ArrayDeque).

I found a thread-safe implementation of Deque and will be using it in conjunction with ConcurrentMap.
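Assuming the thread-safe deque in question is java.util.concurrent.ConcurrentLinkedDeque, a rough sketch of pairing it with a ConcurrentMap might look like the following. The per-operation synchronization here is deliberately coarse, just to keep the map and the recency deque consistent; a production version would need finer-grained coordination:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedDeque;

// Hedged sketch of the map + deque pairing: the map holds the entries,
// the deque holds keys in recency order (LRU at the head, MRU at the tail).
class ConcurrentLruCache<K, V> {
    private final int capacity;
    private final ConcurrentHashMap<K, V> map = new ConcurrentHashMap<>();
    private final ConcurrentLinkedDeque<K> recency = new ConcurrentLinkedDeque<>();

    ConcurrentLruCache(int capacity) { this.capacity = capacity; }

    public synchronized V get(K key) {
        V value = map.get(key);
        if (value != null) {       // touch: move key to the MRU position
            recency.remove(key);
            recency.addLast(key);
        }
        return value;
    }

    public synchronized V put(K key, V value) {
        V previous = map.put(key, value);
        recency.remove(key);       // re-inserting an existing key also refreshes recency
        recency.addLast(key);
        if (map.size() > capacity) {
            K eldest = recency.pollFirst(); // evict the LRU key from the head
            if (eldest != null) map.remove(eldest);
        }
        return previous;
    }
}
```

Note that ConcurrentLinkedDeque.remove(key) is O(n), so the recency "touch" is not free; that trade-off is one reason LinkedHashMap's intrusive linked nodes are attractive in the single-threaded case.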


Implemented a simple LRU cache with usual get/put/remove operations + related tests. Some design choices made include:

  1. Creating a separate cache-config class; as this scales, it may even be beneficial to provide the builder design pattern to incrementally inject custom inputs for the various configs
  2. While not specified, both put() and remove() return the previous value associated with the key, in line with Map behaviour
  3. Maintainable, extensible codebase: separate classes for utils and constants, with access modifiers as and where applicable
  4. Error and exception handling: checks for null/invalid values as and where applicable, throwing exceptions where deemed fit
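A hypothetical sketch of point 1, a config class fronted by a builder so that new settings can be added later without breaking existing call sites. The field names (capacity, ttlMillis) are illustrative, not the repo's actual API:

```java
// Hedged sketch of a cache-config class with a builder; each setter is
// optional, so new configs can be injected incrementally as the cache grows.
final class CacheConfig {
    private final int capacity;
    private final long ttlMillis;

    private CacheConfig(Builder b) {
        this.capacity = b.capacity;
        this.ttlMillis = b.ttlMillis;
    }

    int capacity() { return capacity; }
    long ttlMillis() { return ttlMillis; }

    static final class Builder {
        private int capacity = 16;   // defaults applied when a setter is skipped
        private long ttlMillis = -1; // -1: no TTL-based expiry

        Builder capacity(int capacity) { this.capacity = capacity; return this; }
        Builder ttlMillis(long ttlMillis) { this.ttlMillis = ttlMillis; return this; }
        CacheConfig build() { return new CacheConfig(this); }
    }
}
```

Usage would then read fluently, e.g. `new CacheConfig.Builder().capacity(100).ttlMillis(5000).build()`.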

The next challenge will be to implement TTL-based expiry, sync/async loading, writes, and refreshes.


23/01/2026: Improvements:

  1. Java Generics based cache
  2. Strategy Pattern for ttl-based expiries
  3. Auto-scheduled ttl-based expiry run to trigger periodically post object instantiation
  4. Builder pattern to create cacheNodes from storeNodes
  5. Async cache load from store on instantiation
  6. JUnit tests
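Points 2 and 3 above can be sketched together: a strategy interface decides whether an entry has expired, and a ScheduledExecutorService sweeps the map periodically once the cache is instantiated. The names (ExpiryStrategy, TimedValue, TtlAfterWrite) are illustrative assumptions, not the repo's actual classes:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Strategy: decides whether a timestamped entry should be considered expired.
interface ExpiryStrategy<V> {
    boolean isExpired(TimedValue<V> entry, long nowMillis);
}

// Value wrapper carrying the write timestamp the strategy needs.
final class TimedValue<V> {
    final V value;
    final long writtenAtMillis;
    TimedValue(V value, long writtenAtMillis) {
        this.value = value;
        this.writtenAtMillis = writtenAtMillis;
    }
}

// One concrete strategy: expire a fixed TTL after the entry was written.
final class TtlAfterWrite<V> implements ExpiryStrategy<V> {
    private final long ttlMillis;
    TtlAfterWrite(long ttlMillis) { this.ttlMillis = ttlMillis; }
    public boolean isExpired(TimedValue<V> e, long now) {
        return now - e.writtenAtMillis >= ttlMillis;
    }
}

final class ExpiringCache<K, V> {
    private final ConcurrentHashMap<K, TimedValue<V>> map = new ConcurrentHashMap<>();
    private final ExpiryStrategy<V> strategy;
    private final ScheduledExecutorService sweeper =
            Executors.newSingleThreadScheduledExecutor();

    ExpiringCache(ExpiryStrategy<V> strategy, long sweepPeriodMillis) {
        this.strategy = strategy;
        // Auto-scheduled expiry run, triggered periodically after instantiation.
        sweeper.scheduleAtFixedRate(this::sweep, sweepPeriodMillis,
                sweepPeriodMillis, TimeUnit.MILLISECONDS);
    }

    void put(K key, V value) {
        map.put(key, new TimedValue<>(value, System.currentTimeMillis()));
    }

    V get(K key) {
        TimedValue<V> e = map.get(key);
        // Lazy check on read, in addition to the background sweep.
        if (e == null || strategy.isExpired(e, System.currentTimeMillis())) return null;
        return e.value;
    }

    void sweep() {
        long now = System.currentTimeMillis();
        map.entrySet().removeIf(en -> strategy.isExpired(en.getValue(), now));
    }

    void shutdown() { sweeper.shutdown(); }
}
```

Swapping in a different ExpiryStrategy (e.g. expire-after-access) would not touch the cache class itself, which is the point of the pattern.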

Next, implement write strategies (async write-back or sync write-through) and refresh strategies (periodic async refresh from the store and/or periodic async refresh to the store).
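One possible shape for those planned write strategies, purely as a hedged sketch (the interface and class names are assumptions; the repo may end up with a different API): write-through persists to the store synchronously on put, while write-back defers the store write to another thread.

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.BiConsumer;

// Strategy for how a put is propagated to the backing store.
interface WriteStrategy<K, V> {
    void onPut(K key, V value, BiConsumer<K, V> storeWriter);
}

// Sync write-through: the put blocks until the store write completes.
final class WriteThrough<K, V> implements WriteStrategy<K, V> {
    public void onPut(K key, V value, BiConsumer<K, V> storeWriter) {
        storeWriter.accept(key, value);
    }
}

// Async write-back: fire-and-forget; a real cache would batch writes
// and handle failures/retries rather than discard them.
final class WriteBack<K, V> implements WriteStrategy<K, V> {
    public void onPut(K key, V value, BiConsumer<K, V> storeWriter) {
        CompletableFuture.runAsync(() -> storeWriter.accept(key, value));
    }
}
```

The cache would hold a WriteStrategy and call onPut after updating its in-memory state, mirroring how the expiry strategy is injected.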

