Welcome to the Algo Challenges repository: a collection of hand-picked algorithmic coding problems inspired by real company interviews I've personally taken part in (Amazon, Bolt, and more).
This repo is built to simulate real technical interviews, promote focused problem-solving, and help you improve your data structures and algorithms skills through disciplined practice.
Challenges are organized by company name in folders like:

```
amazon/WatchScore
bolt/SomeOtherChallenge
```
Each challenge contains:

- `README.md` – problem description and sample cases
- `__tests__/` – unit tests written using Jest
- `solution/` – an official solution you can check after solving
To make the most out of your practice, follow these principles, just like in a real interview:

- **Read the problem description carefully.** Pay attention to constraints, edge cases, and sample inputs/outputs.
- **Think first, code later.** Plan your approach and write your code fully before running any tests, just like a real submission.
- **Run tests only when ready.** Validate your logic using the provided Jest tests by running `yarn test`.
- **If tests fail, avoid peeking.** Try to debug without reading the test cases. Use `console.log` or walk through your code logically to find the issue.
- **Still stuck?** Then look at the failing test case(s). You can compare with the official solution in the `solution/` folder, but only after a solid personal attempt.
- **Add complexity analysis to your solution.** At the top of your solution file, add a quick comment like `// Time: O(n); Space: O(1)`. This helps build the habit of analyzing code performance.
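As a sketch, a solution file following that convention might start like this (the `sumOfArray` function and its problem are purely illustrative, not an actual challenge from this repo):

```typescript
// Time: O(n); Space: O(1)
// Hypothetical solution file shown only to illustrate the
// complexity-comment convention described above.
export function sumOfArray(nums: number[]): number {
  // Single pass over the input: linear time, constant extra space.
  let total = 0;
  for (const n of nums) {
    total += n;
  }
  return total;
}
```

Writing the comment before you look at the official solution forces you to justify your own bounds first.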
```
git clone https://github.com/bohdan-konovalov/algo-challenges.git
cd algo-challenges
yarn install
yarn test
```
Tests are written using Jest and scoped to individual challenges; for example, `yarn test WatchScore` runs only the test files whose path matches that pattern, since Jest treats positional arguments as test-path patterns.
To encourage debugging and mimic real interview conditions, test output is silenced by default. When running tests, you'll only see a summary like this:
```
Test Suites: 1 failed, 3 total
Tests:       5 failed, 12 total
Snapshots:   0 total
Time:        0.664 s
```
This gives you space to think before relying on test details.
Open the `jest.config.js` file and comment out the following line:

```js
reporters: ["<rootDir>/silentSummaryReporter.js"],
```
Then re-run your tests with:

```
yarn test
```
This will enable detailed output including test names, errors, and stack traces.
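For orientation, the relevant part of a `jest.config.js` might look like this after the change (a sketch only: apart from the `reporters` line quoted above, the options shown here, such as the `ts-jest` preset, are assumptions and may differ from the repo's actual config):

```javascript
// jest.config.js (sketch; the repo's real config may contain other options)
module.exports = {
  preset: "ts-jest", // assumption: TypeScript tests run via ts-jest

  // Comment this line back in to restore the silent summary reporter:
  // reporters: ["<rootDir>/silentSummaryReporter.js"],
};
```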
```
algo-challenges/
├── amazon/
│   └── WatchScore/
│       ├── __tests__/
│       │   └── index.test.ts
│       ├── solution/
│       │   └── index.solution.ts
│       ├── index.ts
│       └── README.md
├── bolt/
│   └── SomeOtherChallenge/
├── .gitignore
├── jest.config.js
├── package.json
├── README.md
├── silentSummaryReporter.js
├── tsconfig.json
└── yarn.lock
```
Have a cool challenge from an interview or a fun idea?
PRs are welcome; just follow the folder structure and testing style.
Happy hacking, and treat every challenge like it's your final interview round 💪