Your task is to extend an existing Python web application by adding new features that simulate a real-world scenario. You will:
- Add a new endpoint that allows users to receive personalized recommendations based on their interests.
- Save these recommendations in an in-memory cache.
- Add another endpoint to retrieve saved recommendations for each user.
- Integrate with a mock Large Language Model (LLM) agent using a provided Docker Compose setup.
This task assesses your ability to integrate external services, handle HTTP requests/responses, and write clean, maintainable code following best practices.
Note: you can change any file in this project, as long as you provide working code OR enough code that you can explain your choices.
You are provided with a basic project structure of a web application built using FastAPI. The application currently has a few endpoints set up. Your job is to:
- Add a new endpoint `/recommendations` that generates and saves personalized recommendations.
- Integrate with a mock LLM agent, accessible via Docker Compose, to generate recommendations.
- Save recommendations in an in-memory cache (bonus: persist them in a database).
- Add another endpoint `/users/:user_id/recommendations` to retrieve saved recommendations.
- Ensure proper error handling and input validation.
- Create a frontend using React that uses those two endpoints:
  - there is no restriction on the component library or tools you use
  - base it on the mockup provided
- Write unit tests for your new code.
- Document your work and provide instructions on how to run the application.
- Create a React app based on the mockup provided:
  - it is recommended to use NextJS and shadcn, but any component library will work.
  - this can be served using the existing server, or you can create another server to serve the new FE app.
  - if you decide to add a new app, make sure to add it to the Docker Compose file.
  - the frontend should be mobile compatible and look good on both desktop and mobile.
- Use the provided `docker-compose.yml` file to start the mock LLM agent.
- LLM Agent URL: the mock LLM agent will be accessible at `http://localhost:1080`.
- Endpoint: `POST /recommendations`
- Description: Accepts a JSON payload containing user preferences, generates personalized recommendations using the mock LLM agent, saves them in the cache, and returns them in the response.
- Request Body Example:

  ```json
  {
    "user_id": "12345",
    "preferences": ["science fiction", "artificial intelligence", "space exploration"]
  }
  ```

- Response Body Example:

  ```json
  {
    "user_id": "12345",
    "recommendations": [
      "Book: 'Dune' by Frank Herbert",
      "Article: 'The Future of AI in Space Travel'",
      "Movie: 'Interstellar'"
    ]
  }
  ```

- Real-World Scenario: This endpoint simulates a feature in a content platform where users receive content recommendations based on their interests.
- Data Persistence:
  - Save each user's recommendations in an in-memory cache.
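The in-memory cache can be as simple as a dict guarded by a lock. A minimal sketch follows; the class and method names are illustrative, not part of the provided starter project:

```python
import threading
from typing import Dict, List, Optional


class RecommendationCache:
    """Minimal thread-safe in-memory store, keyed by user_id.

    Illustrative sketch only -- the starter project does not define
    this class; swap in whatever structure fits your implementation.
    """

    def __init__(self) -> None:
        self._store: Dict[str, List[str]] = {}
        self._lock = threading.Lock()

    def save(self, user_id: str, recommendations: List[str]) -> None:
        # Copy the list so later mutation by the caller cannot corrupt the cache.
        with self._lock:
            self._store[user_id] = list(recommendations)

    def get(self, user_id: str) -> Optional[List[str]]:
        # Returns None when the user has no saved recommendations yet.
        with self._lock:
            return self._store.get(user_id)
```

A plain dict would also satisfy the requirement; the lock only matters if you expect concurrent writes from multiple worker threads.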
Bonus:
- Database Setup:
  - Add a MongoDB database service to the `docker-compose.yml` file.
  - Configure the database connection in your application.
  - Define models/schemas using Mongoose.
- Data Persistence:
  - Save each user's recommendations in the database with the `user_id` as a reference.
  - Ensure that data is stored securely and efficiently.
- Endpoint: `GET /users/:user_id/recommendations`
- Description: Retrieves saved recommendations for a given `user_id`. If the user hasn't requested any recommendations yet, return a 404 error.
- Response Body Example (Success):

  ```json
  {
    "user_id": "12345",
    "recommendations": [
      "Book: 'Dune' by Frank Herbert",
      "Article: 'The Future of AI in Space Travel'",
      "Movie: 'Interstellar'"
    ]
  }
  ```

- Error Response Example (User Not Found):

  ```json
  { "error": "No recommendations found for user_id 12345." }
  ```
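The retrieval logic above can be sketched framework-agnostically as a function returning a status code and body; in FastAPI you would instead raise `HTTPException(status_code=404, ...)`. The function name and tuple return shape are illustrative:

```python
from typing import Dict, List, Tuple, Union

ResponseBody = Dict[str, Union[str, List[str]]]


def fetch_recommendations(
    cache: Dict[str, List[str]], user_id: str
) -> Tuple[int, ResponseBody]:
    """Return (status_code, response_body) for the GET endpoint.

    Sketch only: the cache is modeled as a plain dict mapping
    user_id -> list of recommendation strings.
    """
    recommendations = cache.get(user_id)
    if recommendations is None:
        # User has never requested recommendations: 404 per the spec.
        return 404, {"error": f"No recommendations found for user_id {user_id}."}
    return 200, {"user_id": user_id, "recommendations": recommendations}
```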
- LLM Agent Interaction: Send a POST request to `http://localhost:8080/llm/generate` with the user's preferences.
- The mock LLM agent will return a list of recommendations based on the provided preferences.
- LLM Agent Request Example:

  ```json
  { "preferences": ["science fiction", "artificial intelligence", "space exploration"] }
  ```

- LLM Agent Response Example:

  ```json
  {
    "recommendations": [
      "Book: 'Dune' by Frank Herbert",
      "Article: 'The Future of AI in Space Travel'",
      "Movie: 'Interstellar'"
    ]
  }
  ```
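The request/response shaping for the agent call can be isolated into small, testable helpers; the HTTP transport itself (e.g. `httpx` or `requests`) is omitted here, and both function names are illustrative:

```python
import json
from typing import Any, Dict, List


def build_llm_payload(preferences: List[str]) -> Dict[str, List[str]]:
    """Build the JSON body expected by the mock LLM agent (see request example)."""
    return {"preferences": preferences}


def parse_llm_response(raw_body: str) -> List[str]:
    """Extract the recommendations list, failing loudly on malformed payloads."""
    data: Dict[str, Any] = json.loads(raw_body)
    recommendations = data.get("recommendations")
    if not isinstance(recommendations, list):
        # Treat a missing or malformed field as an upstream error.
        raise ValueError("LLM agent response missing 'recommendations' list")
    return recommendations
```

Keeping the payload shaping separate from the HTTP call makes it easy to unit-test without the MockServer running.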
- Input Validation:
  - Ensure that `user_id` is provided and is a non-empty string.
  - Ensure that `preferences` is a non-empty list of strings.
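With FastAPI, most of this validation comes for free from a Pydantic request model; a plain-Python equivalent of the two rules above might look like this (function name illustrative):

```python
from typing import List


def validate_request(user_id: object, preferences: object) -> List[str]:
    """Collect validation errors for the POST /recommendations body.

    Returns an empty list when the input is valid. Sketch only;
    a Pydantic model enforces the same rules declaratively.
    """
    errors: List[str] = []
    if not isinstance(user_id, str) or not user_id.strip():
        errors.append("user_id must be a non-empty string")
    if (
        not isinstance(preferences, list)
        or not preferences
        or not all(isinstance(p, str) and p.strip() for p in preferences)
    ):
        errors.append("preferences must be a non-empty list of strings")
    return errors
```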
- Error Handling:
- Handle scenarios where the LLM agent returns an error or is unreachable.
- Handle database errors (e.g., connection issues, query failures).
- Return appropriate HTTP status codes and error messages.
- Error Response Example:

  ```json
  { "error": "Unable to fetch recommendations at this time. Please try again later." }
  ```
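One way to centralize the error-to-status mapping is a single translation function; in FastAPI this would typically live in handlers registered with `@app.exception_handler`. The exception class, function name, and specific status-code choices here are assumptions, not requirements of the task:

```python
from typing import Dict, Tuple


class LLMAgentError(Exception):
    """Raised when the mock LLM agent is unreachable or returns an error."""


def error_response(exc: Exception) -> Tuple[int, Dict[str, str]]:
    """Translate known failures into (status_code, body) pairs. Sketch only."""
    if isinstance(exc, LLMAgentError):
        # Upstream dependency failed: 503 Service Unavailable is one
        # reasonable choice for an unreachable LLM agent.
        return 503, {
            "error": "Unable to fetch recommendations at this time. "
            "Please try again later."
        }
    if isinstance(exc, ValueError):
        # Bad client input: 422 matches FastAPI's default validation status.
        return 422, {"error": str(exc)}
    # Anything else is an unexpected server error.
    return 500, {"error": "Internal server error."}
```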
- Update `README.md`:
  - Provide instructions on how to set up and run the application, including starting the services via Docker Compose.
  - Explain how to run the tests.
  - Document any assumptions or decisions made during development.
- Programming Language: Python 3.8 or higher
- Web Framework: FastAPI
- Testing Framework: `pytest` or `unittest`
- External Tools: MockServer, accessible at `http://localhost:8080` via Docker Compose.
- Code Style: Adhere to PEP 8 guidelines.
- Version Control: Provide your solution in a Git repository format.
- Source Code: Submit all source code files, including the updated project structure.
- Documentation: Ensure the `README.md` is clear and provides all necessary instructions.
- Dependencies: Include a `requirements.txt` file listing all Python dependencies.
- Testing: All tests should be runnable using a single command (e.g., `pytest`).
- Docker Compose: Include your updated `docker-compose.yml` file with the database service added. Ensure that all services can be started with `docker-compose up`.
- Git Commits: Make meaningful commit messages that reflect the changes made.
- Functionality:
  - The endpoints work as specified.
  - Correct integration with the mock LLM agent.
- Code Quality:
  - Clean, readable, and well-organized code following best practices.
  - Proper use of comments and docstrings where appropriate.
- Error Handling:
  - Robust input validation and error management.
  - Appropriate HTTP status codes are returned.
- Testing:
  - Comprehensive unit tests covering various scenarios.
  - Tests run successfully without errors.
- Documentation:
  - Clear instructions and explanations in the `README.md`.
  - Any assumptions or design decisions are well-documented.
- Integration:
  - Effective use of the provided MockServer to simulate the LLM agent.
  - Bonus: Successful addition of a database service to Docker Compose.
- Additional Libraries: You may use additional Python libraries if necessary (e.g., SQLAlchemy for database interactions), but please justify their use in your documentation.
- Assumptions: If any part of the task is unclear, make reasonable assumptions and note them in the `README.md`.
- Focus Areas: Demonstrate your understanding of API development, external service integration, and testing.
- Time Management: While quality is important, this task is also an opportunity to show how you manage your time to deliver a working solution efficiently.
We look forward to reviewing your submission. If you have any questions, feel free to reach out. Good luck!
Below is a brief guide to help you set up the project environment.
```shell
git clone <repository-url>
cd <repository-directory>
```

- Python Dependencies:

  ```shell
  pip install -r requirements.txt
  ```
- Docker Services:

  Start all services (including the mock LLM agent) using Docker Compose:

  ```shell
  docker-compose up
  ```

  Start the FastAPI application:

  ```shell
  uvicorn main:app --reload
  ```
- Generate Recommendations:

  ```shell
  curl -X POST "http://localhost:8000/recommendations" \
    -H "Content-Type: application/json" \
    -d '{ "user_id": "12345", "preferences": ["science fiction", "artificial intelligence", "space exploration"] }'
  ```

- Retrieve Recommendations:

  ```shell
  curl -X GET "http://localhost:8000/users/12345/recommendations"
  ```
Run the tests with:

```shell
pytest
```

Note: Replace <repository-url> and <repository-directory> with the actual URL and directory name of your Git repository.
- Mock LLM Agent Endpoint: `http://localhost:8080/llm/generate`
- Database Service: If applicable, ensure the database is correctly configured in both `docker-compose.yml` and your application settings.
Feel free to enhance the project structure and configuration as you see fit, as long as the core requirements are met.
- it is recommended to look at the different options for the LLM Agent Endpoints!
Good luck!