KEV Watch is a Python-powered dashboard that merges CISA’s Known Exploited Vulnerabilities (KEV) Catalog with NIST NVD CVSS data to help security teams quickly see which vendors and products have the most KEVs, how severe they are, and what to remediate by when.
It demonstrates:
- Clean ingestion of CISA KEV JSON and selective NVD API enrichment (cached).
- A fast, static HTML dashboard you can open locally (no server required).
- A compact CSV export ready for filtering/joins in other tools.
- Respectful API usage (throttling, caching, and incremental refresh) aligned to NVD guidance.
👉 Recruiters & Hiring Managers: Open the pre‑baked dashboard first (no long downloads), then try a refresh to see the full flow. Alternatively, download kev-watch-mvp-baseline.zip, extract it, and launch the dashboard by double‑clicking kev_dashboard.html.
Goal: Provide an at‑a‑glance view of exploited CVEs by Vendor → Product, with interactive filtering and clear remediation metadata (Description, Required Action, Vector, Date Added, Due Date).
MVP Scope (complete):
- Left pane: vendors sorted by total KEVs (ties → alphabetical); each product shows a segmented severity bar.
- Right pane: interactive CVE cards with pagination (4 per page). New link‑outs open CISA KEV and NIST NVD for each CVE in a new tab.
- Report date: shown in the header, derived from data/reports/kev_report_latest.csv.
- Prioritization order for the right pane: Vendor (most KEVs) → Product → CVSS severity → Due Date (tie‑breaker); the sketch below shows this as a composite sort key.
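A minimal sketch of that ordering in Python (illustrative only; the field names match the CSV schema described later, and the sample inputs are hypothetical):
# Illustrative sketch, not the project's actual code
SEVERITY_RANK = {"CRITICAL": 0, "HIGH": 1, "MEDIUM": 2, "LOW": 3}

def card_sort_key(card, vendor_totals):
    return (
        -vendor_totals[card["vendorProject"]],               # vendors with the most KEVs first
        card["vendorProject"],                                # alphabetical within a tie
        card["product"],
        SEVERITY_RANK.get(card.get("cvssSeverity", ""), 99),  # CRITICAL before HIGH, etc.
        card.get("dueDate", ""),                              # ISO dates sort lexicographically
    )

# Hypothetical example rows
cards = [
    {"vendorProject": "Cisco", "product": "IOS XE", "cvssSeverity": "HIGH", "dueDate": "2025-10-01"},
    {"vendorProject": "Microsoft", "product": "Windows", "cvssSeverity": "CRITICAL", "dueDate": "2025-09-15"},
]
vendor_totals = {"Cisco": 1, "Microsoft": 1}
cards.sort(key=lambda c: card_sort_key(c, vendor_totals))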
Why this matters: This gives analysts and leaders a simple, portable way to understand the current threat landscape that affects their environment and to take action.
repo-root/
│
├── data/
│ ├── cache/ # CVSS cache: cvss.json (NVD vector/score/severity/version)
│ ├── raw/ # Date‑stamped KEV snapshots: kev_YYYYMMDD_HHMMSS.json
│ └── reports/ # CSV reports + dashboard HTML (timestamped + latest pointer)
│
├── images/ # Screenshots of the KEV Watch dashboard functionality
│ # and linked NIST NVD and CISA KEV pages
├── src/
│ ├── __init__.py
│ ├── kev_dashboard.py # Build static HTML dashboard (main entry)
│ ├── kev_enrich.py # NVD CVSS enrichment + JSON cache (data/cache/cvss.json)
│ ├── kev_fetch.py # Download KEV snapshot into data/raw/
│ ├── kev_job.py # Orchestrates: load latest KEV → filter → enrich → CSV
│ ├── kev_output.py # CSV writer (compact, recruiter‑friendly columns)
│ ├── kev_parse.py # Normalize KEV items, helpers, smoke tests
│ ├── kev_viz.py # Optional treemap (Vendor → Product)
│ ├── nvd_client.py # Minimal NVD API client (single CVE → CVSS details)
│ └── severity.py # Formats NVD severity ratings
│
├── test/
│ ├── test_fetch.py
│ ├── test_output.py
│ ├── test_parse.py
│ └── test_score.py
│
├── .gitattributes
├── .gitignore
├── commands.txt
├── LICENSE
├── requirements.txt
└── README.md # (this file)
Run from anywhere: Scripts resolve PROJECT_ROOT from __file__, so you can run python -m src.module from any folder in the repo.
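Under the hood, the pattern looks roughly like this (illustrative sketch, not the project's exact code):
# Hypothetical module under src/ showing the run-from-anywhere pattern
from pathlib import Path

PROJECT_ROOT = Path(__file__).resolve().parent.parent   # repo-root/, regardless of the current working directory
REPORTS_DIR = PROJECT_ROOT / "data" / "reports"         # paths are built from the root, not the CWD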
To avoid long first‑run downloads, this repo includes baseline data so you can see the dashboard immediately.
- Create a virtual environment and install deps:
python3.13 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
- Build the HTML dashboard (from anywhere in the repo):
python -m src.kev_dashboard
- Open the generated file from data/reports/ — it will be named like kev_dashboard_YYYYMMDD_HHMMSS.html.
The baseline data may be outdated by design (so reviewers can explore instantly). See the next section to refresh to the latest data.
There are two common flows; pick one.
One‑command refresh
Runs the full pipeline: download a new KEV snapshot → filter by stack → enrich with CVSS (cached) → write CSV → update kev_report_latest.csv.
# Optional but recommended (raises your NVD rate limits)
export NVD_API_KEY="your_key_here"
# End‑to‑end job using the default vendor/product filter
python -c "import os; from src.kev_job import run_job; run_job(r'(microsoft|cisco|fortinet|fortigate|vmware|paloalto|linux|kernel)', api_key=os.getenv('NVD_API_KEY'))"
Then rebuild the dashboard:
python -m src.kev_dashboard
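If you prefer a small script to the python -c one‑liner, an equivalent sketch (assuming the run_job signature shown above) is:
# refresh_kev.py (hypothetical helper script, not part of the repo)
import os
from src.kev_job import run_job

STACK_FILTER = r"(microsoft|cisco|fortinet|fortigate|vmware|paloalto|linux|kernel)"

if __name__ == "__main__":
    # The API key is optional; without it the job runs at the lower unauthenticated NVD rate.
    run_job(STACK_FILTER, api_key=os.getenv("NVD_API_KEY"))
Run it from the repo root with python refresh_kev.py, then rebuild the dashboard as shown above.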
Step‑by‑step refresh
1) Download a new KEV snapshot → data/raw/kev_YYYYMMDD_HHMMSS.json
python -m src.kev_fetch
# You should see a new file in data/raw/, e.g. data/raw/kev_20250910_142300.json
2) Update the CVSS cache (data/cache/cvss.json) and write a new CSV
This enriches CVEs (using/expanding the cache), writes data/reports/kev_report_<TS>.csv, and updates kev_report_latest.csv.
export NVD_API_KEY="your_key_here" # optional
python -c "import os; from src.kev_job import run_job; run_job(r'(microsoft|cisco|fortinet|fortigate|vmware|paloalto|linux|kernel)', api_key=os.getenv('NVD_API_KEY'))"
3) Rebuild the dashboard HTML
python -m src.kev_dashboard
What to expect after a refresh
- KEV snapshot: data/raw/kev_YYYYMMDD_HHMMSS.json
- CVSS cache: data/cache/cvss.json (grows as new CVEs are enriched)
- Reports: data/reports/kev_report_<TS>.csv and data/reports/kev_report_latest.csv
- Dashboard: data/reports/kev_dashboard_<TS>.html
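To confirm a refresh produced new artifacts, a quick check (illustrative sketch, run from the repo root) is:
# Illustrative: print the newest timestamped artifact for each output type
from pathlib import Path

patterns = ["data/raw/kev_*.json", "data/reports/kev_report_*.csv", "data/reports/kev_dashboard_*.html"]
for pattern in patterns:
    files = sorted(Path(".").glob(pattern))   # timestamped names sort chronologically
    print(pattern, "->", files[-1].name if files else "none yet")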
Handy variations
- Process all KEV entries (ignore the stack filter):
python -c "import os; from src.kev_job import run_job; run_job(r'.*', api_key=os.getenv('NVD_API_KEY'))"
data/reports/kev_report_*.csv (and kev_report_latest.csv) uses a small, portable schema:
cveID,vendorProject,product,dueDate,cvssScore,cvssSeverity,priority,vulnerabilityName
- priority is reserved for future scoring and may be blank in the MVP.
- The CSV can be filtered/joined against an asset inventory to suppress non‑relevant KEVs (future enhancement); see the pandas sketch below for a simple filter.
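For example, a quick triage filter with pandas (illustrative; assumes the column names above and typical NVD severity labels):
# Illustrative: keep CRITICAL/HIGH findings due within the next 30 days
import pandas as pd

df = pd.read_csv("data/reports/kev_report_latest.csv", parse_dates=["dueDate"])
urgent = df[
    df["cvssSeverity"].isin(["CRITICAL", "HIGH"])                      # label casing may differ in your CSV
    & (df["dueDate"] <= pd.Timestamp.today() + pd.Timedelta(days=30))
]
print(urgent[["cveID", "vendorProject", "product", "dueDate"]].to_string(index=False))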
- Left pane: Vendors sorted by total KEVs (ties → alphabetical). Products show “Product : N KEVs” plus severity segments.
- Right pane: Click a severity segment to filter cards by Vendor → Product → Severity; 4 cards/page with a pager. Each card now includes CISA KEV ↗ and NIST NVD ↗ link‑outs (open in a new tab).
- Report date: Derived from kev_report_latest.csv and shown in the header.
- Without a key, NVD allows 5 requests per 30 seconds. With a key, the limit increases to 50 requests per 30 seconds.
- Best practice is to pull deltas using the NVD modified date parameters and to centralize requesters for enterprise use.
- This project is designed to be respectful:
  - Cache first (data/cache/cvss.json) to avoid repeat calls; a sketch of this pattern follows below.
  - Conservative throttling in the job (6.2s/request by default).
  - Incremental refresh of the KEV/NVD data instead of bulk re‑downloads.
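A minimal sketch of that cache‑then‑throttle pattern (illustrative only; the project's actual logic lives in src/nvd_client.py and src/kev_enrich.py and may differ):
# Illustrative: consult the local cache first, and throttle only on real API calls
import json
import time
import requests

CACHE_PATH = "data/cache/cvss.json"
NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"   # NVD CVE API 2.0
DELAY_SECONDS = 6.2   # conservative: stays under 5 requests / 30 s even without a key

def load_cache():
    try:
        with open(CACHE_PATH) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

def fetch_cvss(cve_id, cache, api_key=None):
    if cve_id in cache:                        # cache hit: no API call at all
        return cache[cve_id]
    headers = {"apiKey": api_key} if api_key else {}
    resp = requests.get(NVD_URL, params={"cveId": cve_id}, headers=headers, timeout=30)
    resp.raise_for_status()
    cache[cve_id] = resp.json()                # the real cache stores only vector/score/severity/version
    time.sleep(DELAY_SECONDS)                  # throttle only when we actually hit the API
    return cache[cve_id]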
API key: Set an environment variable and pass it through to the job:
export NVD_API_KEY="your_key_here"
python -c "import os; from src.kev_job import run_job; run_job(r'(microsoft|cisco|fortinet|fortigate|vmware|paloalto|linux|kernel)', api_key=os.getenv('NVD_API_KEY'))"
See commands.txt in the repo for a consolidated list. Highlights:
# Create & activate venv
python3.13 -m venv .venv && source .venv/bin/activate
# Install dependencies
pip install -r requirements.txt
# Build dashboard from latest CSV (no downloads)
python -m src.kev_dashboard
# --- Refresh data ---
# 1) Download a new KEV raw snapshot
python -m src.kev_fetch
# 2) Enrich CVEs (updates cvss.json and writes a fresh CSV)
export NVD_API_KEY="your_key_here" # optional
python -c "import os; from src.kev_job import run_job; run_job(r'(microsoft|cisco|fortinet|fortigate|vmware|paloalto|linux|kernel)', api_key=os.getenv('NVD_API_KEY'))"
# 3) Rebuild dashboard HTML
python -m src.kev_dashboard
# Variations:
# - All KEVs (no filter): python -c "import os; from src.kev_job import run_job; run_job(r'.*', api_key=os.getenv('NVD_API_KEY'))"
# - Force rebuild: rm data/state.json (or the Python snippet in README)
# - Only refresh cvss.json: see README for the short Python snippet
- Python scripting for data processing and API usage (requests, pandas)
- Caching & throttling strategies for rate‑limited APIs
- Data visualization (Plotly) and static HTML app generation
- Project structure, portability, and “run from anywhere” patterns
- GitHub‑friendly packaging (requirements, baseline data, screenshots)
- Solve a business problem: Build views that connect risk → remediation in one place for diverse audiences.
- Spec‑driven, AI‑assisted development: Write a clear spec, then iterate fast with AI to ship beyond your solo speed.
- Commit early & often: Use branches/PRs, protect secrets, and keep a clean history for teachable moments.
- Partnership mindset with AI: Ask for best practices and explanations; don’t offload all thinking.
- Debugging discipline: Read stack traces, work in small steps, and keep prompts focused on the outcome.
- CISA Known Exploited Vulnerabilities (KEV) Catalog
  - Feed: https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json
  - Catalog UI: Search for a CVE to view its listing.
- NIST National Vulnerability Database (NVD)
  - CVE Detail Pages: https://nvd.nist.gov/vuln/detail/CVE-YYYY-NNNN
  - Developer Docs & API key request: see NVD developer pages.
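For reference, the KEV feed is a single JSON download (illustrative sketch; the project's own downloader is src/kev_fetch.py):
# Illustrative: fetch the CISA KEV feed and report its size (field names per the published feed)
import requests

KEV_FEED = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"
catalog = requests.get(KEV_FEED, timeout=60).json()
print(catalog.get("catalogVersion"), "-", len(catalog.get("vulnerabilities", [])), "KEV entries")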
- Python: 3.13
- OS: macOS
- Browsers tested: Safari, Chrome, Firefox
Distributed under the MIT License. See LICENSE for details.