
Self-adjusting C++ cache simulator comparing LRU, LFU, and FIFO eviction policies under diverse workloads using runtime performance metrics.

Lambo-IITian/Self-Adjusting-Cache-Simulator

🧠 Self-Adjusting Cache Simulator (C++)

Overview

Modern systems rely heavily on caching, yet no single cache eviction policy performs optimally across all workloads.
This project implements and evaluates multiple cache replacement strategies and dynamically identifies the most effective policy based on runtime behavior.

The simulator compares LRU, LFU, and FIFO eviction policies under different access patterns and selects the best-performing strategy using real-time hit-rate analysis.


Key Features

  • Implements multiple cache eviction strategies:
    • Least Recently Used (LRU)
    • Least Frequently Used (LFU)
    • First-In First-Out (FIFO)
  • Unified polymorphic cache interface for clean design and extensibility
  • Realistic workload simulation:
    • Bursty (hot-key dominated access)
    • Sequential (streaming access)
    • Random (low locality access)
  • Runtime hit/miss tracking and performance evaluation
  • Adaptive selection of the best-performing cache strategy

Design Overview

Architecture

Access Pattern Generator
        ↓
Cache Simulator Core
        ↓
LRU / LFU / FIFO (parallel evaluation)
        ↓
Metrics Collector
        ↓
Adaptive Strategy Selector

Core Abstractions

  • Cache – abstract base interface
  • LRUCache, LFUCache, FIFOCache – concrete implementations
  • CacheWithMetrics – wrapper for runtime performance tracking

This separation ensures clean architecture, modularity, and ease of extension.


Cache Strategies Implemented

LRU (Least Recently Used)

  • Implemented using a doubly linked list and hash map
  • O(1) get and put operations
  • Performs well under workloads with strong temporal locality

LFU (Least Frequently Used)

  • Uses frequency-based buckets with linked lists
  • Protects frequently accessed items
  • Effective when hot keys remain stable over time

FIFO (First-In First-Out)

  • Simple queue-based eviction
  • Included as a baseline to highlight limitations
  • Performs poorly under most real-world workloads

Workload Patterns Tested

Pattern      Description
Bursty       Small set of hot keys accessed repeatedly
Sequential   Streaming access exceeding cache capacity
Random       Uniformly random access pattern

Observations & Insights

  • Bursty workloads favor LFU when hot keys remain stable
  • Sequential access patterns cause cache thrashing when the working set exceeds cache capacity
  • When the entire working set fits in cache, eviction policy becomes largely irrelevant
  • No single eviction strategy dominates across all workloads, motivating adaptive selection

These results align with real-world cache behavior in systems design.


How to Run

Compile the project using:

g++ -std=c++17 -O2 code.cpp -o cache_sim
./cache_sim

Possible Extensions

  • Sliding-window or decay-based performance metrics
  • Thread-safe cache implementations
  • Latency-aware eviction policies
  • Visualization of hit-rate trends

Author

Mohit Gunani, IIT BHU
