Commit 4be2adc

Dev: refactor of boilerplate (simplification) (#30)

* refactoring: simplified folder structure
* fix: gitignore and cicd yml
* readme: update user and dev md
* fix: rm relative imports
* fix: add tests to dockerfile

1 parent c274bbe · commit 4be2adc

30 files changed: +188 −519 lines

.dockerignore

Lines changed: 1 addition & 8 deletions

```diff
@@ -147,11 +147,4 @@ data/
 reports/
 
 # Synthetic data conversations
-src/agents/utils/example_inputs/
-src/agents/utils/synthetic_conversations/
-src/agents/utils/synthetic_conversation_generation.py
-src/agents/utils/testbench_prompts.py
-src/agents/utils/langgraph_viz.py
-
-# development agents
-src/agents/student_agent/
+src/agents/utils/example_inputs/
```

.github/workflows/dev.yml

Lines changed: 1 addition & 0 deletions

```diff
@@ -50,6 +50,7 @@ jobs:
         if: always()
         run: |
           source .venv/bin/activate
+          export PYTHONPATH=$PYTHONPATH:.
           pytest --junit-xml=./reports/pytest.xml --tb=auto -v
 
       - name: Upload test results
```

.github/workflows/main.yml

Lines changed: 1 addition & 0 deletions

```diff
@@ -50,6 +50,7 @@ jobs:
         if: always()
         run: |
           source .venv/bin/activate
+          export PYTHONPATH=$PYTHONPATH:.
           pytest --junit-xml=./reports/pytest.xml --tb=auto -v
 
       - name: Upload test results
```

.gitignore

Lines changed: 1 addition & 0 deletions

```diff
@@ -50,6 +50,7 @@ coverage.xml
 *.py,cover
 .hypothesis/
 .pytest_cache/
+reports/
 
 # Translations
 *.mo
```

Dockerfile

Lines changed: 1 addition & 1 deletion

```diff
@@ -25,7 +25,7 @@ COPY src ./src
 
 COPY index.py .
 
-COPY index_test.py .
+COPY tests ./tests
 
 # Set the Lambda function handler
 CMD ["index.handler"]
```

README.md

Lines changed: 34 additions & 21 deletions

````diff
@@ -43,11 +43,11 @@ In GitHub, choose Use this template > Create a new repository in the repository
 
 Choose the owner, and pick a name for the new repository.
 
-> [!IMPORTANT] If you want to deploy the evaluation function to Lambda Feedback, make sure to choose the Lambda Feedback organization as the owner.
+> [!IMPORTANT] If you want to deploy the chat function to Lambda Feedback, make sure to choose the `Lambda Feedback` organization as the owner.
 
-Set the visibility to Public or Private.
+Set the visibility to `Public` or `Private`.
 
-> [!IMPORTANT] If you want to use GitHub deployment protection rules, make sure to set the visibility to Public.
+> [!IMPORTANT] If you want to use GitHub deployment protection rules, make sure to set the visibility to `Public`.
 
 Click on Create repository.
 
@@ -78,9 +78,9 @@ Also, don't forget to update or delete the Quickstart chapter from the `README.m
 
 ## Development
 
-You can create your own invocation to your own agents hosted anywhere. Copy or update the `base_agent` from `src/agents/` and edit it to match your LLM agent requirements. Import the new invocation in the `module.py` file.
+You can create your own invocation to your own agents hosted anywhere. Copy or update `agent.py` from `src/agent/` and edit it to match your LLM agent requirements. Import the new invocation in the `module.py` file.
 
-You agent can be based on an LLM hosted anywhere, you have available currently OpenAI, AzureOpenAI, and Ollama models but you can introduce your own API call in the `src/agents/llm_factory.py`.
+Your agent can be based on an LLM hosted anywhere. OpenAI, AzureOpenAI, and Ollama models are currently available, but you can introduce your own API call in `src/agent/utils/llm_factory.py`.
 
 ### Prerequisites
 
@@ -90,23 +90,37 @@ You agent can be based on an LLM hosted anywhere, you have available currently O
 ### Repository Structure
 
 ```bash
-.github/workflows/
-  dev.yml # deploys the DEV function to Lambda Feedback
-  main.yml # deploys the STAGING function to Lambda Feedback
-  test-report.yml # gathers Pytest Report of function tests
-
-docs/ # docs for devs and users
-
-src/module.py # chat_module function implementation
-src/module_test.py # chat_module function tests
-src/agents/ # find all agents developed for the chat functionality
-src/agents/utils/test_prompts.py # allows testing of any LLM agent on a couple of example inputs containing Lambda Feedback Questions and synthetic student conversations
+.
+├── .github/workflows/
+│   ├── dev.yml # deploys the DEV function to Lambda Feedback
+│   ├── main.yml # deploys the STAGING and PROD functions to Lambda Feedback
+│   └── test-report.yml # gathers Pytest Report of function tests
+├── docs/ # docs for devs and users
+├── src/
+│   ├── agent/
+│   │   ├── utils/ # utils for the agent, including the llm_factory
+│   │   ├── agent.py # the agent logic
+│   │   └── prompts.py # the system prompts defining the behaviour of the chatbot
+│   └── module.py
+└── tests/ # contains all tests for the chat function
+    ├── manual_agent_requests.py # allows testing of the docker container through API requests
+    ├── manual_agent_run.py # allows testing of any LLM agent on a couple of example inputs
+    ├── test_index.py # pytests
+    └── test_module.py # pytests
 ```
 
 
 ## Testing the Chat Function
 
-To test your function, you can either call the code directly through a python script. Or you can build the respective chat function docker container locally and call it through an API request. Below you can find details on those processes.
+To test your function, you can run the unit tests, call the code directly through a python script, or build the respective chat function docker container locally and call it through an API request. Below you can find details on those processes.
+
+### Run Unit Tests
+
+You can run the unit tests using `pytest`.
+
+```bash
+pytest
+```
 
 ### Run the Chat Script
 
@@ -116,9 +130,9 @@ You can run the Python function itself. Make sure to have a main function in eit
 python src/module.py
 ```
 
-You can also use the `testbench_agents.py` script to test the agents with example inputs from Lambda Feedback questions and synthetic conversations.
+You can also use the `manual_agent_run.py` script to test the agents with example inputs from Lambda Feedback questions and synthetic conversations.
 ```bash
-python src/agents/utils/testbench_agents.py
+python tests/manual_agent_run.py
 ```
 
 ### Calling the Docker Image Locally
@@ -156,7 +170,7 @@ curl --location 'http://localhost:8080/2015-03-31/functions/function/invocations
 #### Call Docker Container
 ##### A. Call Docker with Python Requests
 
-In the `src/agents/utils` folder you can find the `requests_testscript.py` script that calls the POST URL of the running docker container. It reads any kind of input files with the expected schema. You can use this to test your curl calls of the chatbot.
+In the `tests/` folder you can find the `manual_agent_requests.py` script that calls the POST URL of the running docker container. It reads any input file with the expected schema. You can use this to test your curl calls of the chatbot.
 
 ##### B. Call Docker Container through API request
 
@@ -183,7 +197,6 @@ Body with optional Params:
     "conversational_style":" ",
     "question_response_details": "",
     "include_test_data": true,
-    "agent_type": {agent_name}
 }
 }
 ```
````
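The last hunk drops `agent_type` from the optional request-body parameters. A short sketch of building that body in Python; the `"params"` wrapper key is an assumption (the README snippet only shows the inner fields and closing braces), and the field values are the placeholders shown above:

```python
import json

# Optional params from the README example after this commit;
# "agent_type" was removed from the schema.
params = {
    "conversational_style": " ",
    "question_response_details": "",
    "include_test_data": True,
}
# NOTE: "params" as the enclosing key is a hypothetical name for illustration.
body = json.dumps({"params": params}, indent=2)
print(body)
assert "agent_type" not in body  # the removed field must not be sent
```

Requests that still send `agent_type` are not part of the documented schema after this commit, so client scripts should be updated accordingly.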

docs/dev.md

Lines changed: 12 additions & 5 deletions

````diff
@@ -12,7 +12,15 @@
 
 ## Testing the Chat Function
 
-To test your function, you can either call the code directly through a python script. Or you can build the respective chat function docker container locally and call it through an API request. Below you can find details on those processes.
+To test your function, you can run the unit tests, call the code directly through a python script, or build the respective chat function docker container locally and call it through an API request. Below you can find details on those processes.
+
+### Run Unit Tests
+
+You can run the unit tests using `pytest`.
+
+```bash
+pytest
+```
 
 ### Run the Chat Script
 
@@ -22,9 +30,9 @@ You can run the Python function itself. Make sure to have a main function in eit
 python src/module.py
 ```
 
-You can also use the `testbench_agents.py` script to test the agents with example inputs from Lambda Feedback questions and synthetic conversations.
+You can also use the `manual_agent_run.py` script to test the agents with example inputs from Lambda Feedback questions and synthetic conversations.
 ```bash
-python src/agents/utils/testbench_agents.py
+python tests/manual_agent_run.py
 ```
 
 ### Calling the Docker Image Locally
@@ -62,7 +70,7 @@ curl --location 'http://localhost:8080/2015-03-31/functions/function/invocations
 #### Call Docker Container
 ##### A. Call Docker with Python Requests
 
-In the `src/agents/utils` folder you can find the `requests_testscript.py` script that calls the POST URL of the running docker container. It reads any kind of input files with the expected schema. You can use this to test your curl calls of the chatbot.
+In the `tests/` folder you can find the `manual_agent_requests.py` script that calls the POST URL of the running docker container. It reads any input file with the expected schema. You can use this to test your curl calls of the chatbot.
 
 ##### B. Call Docker Container through API request
 
@@ -89,7 +97,6 @@ Body with optional Params:
     "conversational_style":" ",
     "question_response_details": "",
     "include_test_data": true,
-    "agent_type": {agent_name}
 }
 }
 ```
````

index.py

Lines changed: 2 additions & 6 deletions

```diff
@@ -1,10 +1,6 @@
 import json
-try:
-    from .src.module import chat_module
-    from .src.agents.utils.types import JsonType
-except ImportError:
-    from src.module import chat_module
-    from src.agents.utils.types import JsonType
+from src.module import chat_module
+from src.agent.utils.types import JsonType
 
 def handler(event: JsonType, context):
     """
```

src/__init__.py

Whitespace-only changes.
Lines changed: 7 additions & 13 deletions

```diff
@@ -1,13 +1,7 @@
-try:
-    from ..llm_factory import OpenAILLMs, GoogleAILLMs
-    from .base_prompts import \
-        role_prompt, conv_pref_prompt, update_conv_pref_prompt, summary_prompt, update_summary_prompt, summary_system_prompt
-    from ..utils.types import InvokeAgentResponseType
-except ImportError:
-    from src.agents.llm_factory import OpenAILLMs, GoogleAILLMs
-    from src.agents.base_agent.base_prompts import \
-        role_prompt, conv_pref_prompt, update_conv_pref_prompt, summary_prompt, update_summary_prompt, summary_system_prompt
-    from src.agents.utils.types import InvokeAgentResponseType
+from src.agent.utils.llm_factory import OpenAILLMs, GoogleAILLMs
+from src.agent.prompts import \
+    role_prompt, conv_pref_prompt, update_conv_pref_prompt, summary_prompt, update_summary_prompt, summary_system_prompt
+from src.agent.utils.types import InvokeAgentResponseType
 
 from langgraph.graph import StateGraph, START, END
 from langchain_core.messages import SystemMessage, RemoveMessage, HumanMessage, AIMessage
@@ -62,7 +56,7 @@ def call_model(self, state: State, config: RunnableConfig) -> str:
         system_message = self.role_prompt
 
         # Adding external student progress and question context details from data queries
-        question_response_details = config["configurable"].get("question_response_details", "")
+        question_response_details = config.get("configurable", {}).get("question_response_details", "")
         if question_response_details:
             system_message += f"## Known Question Materials: {question_response_details} \n\n"
 
@@ -98,8 +92,8 @@ def summarize_conversation(self, state: State, config: RunnableConfig) -> dict:
         """Summarize the conversation."""
 
         summary = state.get("summary", "")
-        previous_summary = config["configurable"].get("summary", "")
-        previous_conversationalStyle = config["configurable"].get("conversational_style", "")
+        previous_summary = config.get("configurable", {}).get("summary", "")
+        previous_conversationalStyle = config.get("configurable", {}).get("conversational_style", "")
         if previous_summary:
             summary = previous_summary
```

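The switch from `config["configurable"].get(...)` to `config.get("configurable", {}).get(...)` makes the agent tolerate a config that omits the `configurable` key entirely. A minimal illustration of the difference, with plain dicts standing in for LangGraph's `RunnableConfig`:

```python
config = {}  # a config that omits "configurable" entirely

# Old pattern: indexing raises KeyError when "configurable" is missing.
try:
    config["configurable"].get("summary", "")
except KeyError as exc:
    print(f"old pattern fails: {exc}")

# New pattern: falls back to an empty dict, then to the default value.
summary = config.get("configurable", {}).get("summary", "")
print(repr(summary))  # → ''
```

The same defensive pattern is applied to `question_response_details`, `summary`, and `conversational_style`, so the agent degrades to empty defaults instead of crashing when invoked without those configuration entries.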