Commit b779cc6

Merge remote-tracking branch 'template/main'

2 parents: cb9170b + 4be2adc

30 files changed: +187 -518 lines

.dockerignore

Lines changed: 1 addition & 8 deletions

```diff
@@ -147,11 +147,4 @@ data/
 reports/
 
 # Synthetic data conversations
-src/agents/utils/example_inputs/
-src/agents/utils/synthetic_conversations/
-src/agents/utils/synthetic_conversation_generation.py
-src/agents/utils/testbench_prompts.py
-src/agents/utils/langgraph_viz.py
-
-# development agents
-src/agents/student_agent/
+src/agents/utils/example_inputs/
```

.github/workflows/dev.yml

Lines changed: 1 addition & 0 deletions

```diff
@@ -50,6 +50,7 @@ jobs:
         if: always()
         run: |
           source .venv/bin/activate
+          export PYTHONPATH=$PYTHONPATH:.
           pytest --junit-xml=./reports/pytest.xml --tb=auto -v
 
       - name: Upload test results
```

.github/workflows/main.yml

Lines changed: 1 addition & 0 deletions

```diff
@@ -50,6 +50,7 @@ jobs:
         if: always()
         run: |
           source .venv/bin/activate
+          export PYTHONPATH=$PYTHONPATH:.
           pytest --junit-xml=./reports/pytest.xml --tb=auto -v
 
       - name: Upload test results
```
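Both workflow files gain the same line just before pytest runs. The tests import the function code absolutely (e.g. `from src.module import chat_module`, the import style visible in `index.py` further down), and that only resolves when the repository root is on Python's module search path; `export PYTHONPATH=$PYTHONPATH:.` adds the checkout directory. A minimal sketch of the equivalent effect from inside Python, assuming it runs from the repository root:

```python
import sys

# Equivalent of `export PYTHONPATH=$PYTHONPATH:.`: put the repository root
# (the current working directory) on the module search path.
sys.path.insert(0, ".")

# Absolute imports of the function code now resolve without installing it
# as a package; this is the import style index.py uses.
from src.module import chat_module
```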

.gitignore

Lines changed: 1 addition & 0 deletions

```diff
@@ -50,6 +50,7 @@ coverage.xml
 *.py,cover
 .hypothesis/
 .pytest_cache/
+reports/
 
 # Translations
 *.mo
```

Dockerfile

Lines changed: 1 addition & 1 deletion

```diff
@@ -25,7 +25,7 @@ COPY src ./src
 
 COPY index.py .
 
-COPY index_test.py .
+COPY tests ./tests
 
 # Set the Lambda function handler
 CMD ["index.handler"]
```

README.md

Lines changed: 36 additions & 22 deletions

````diff
@@ -1,6 +1,7 @@
 # reflectiveChatFunction
 
-This repository contains the code for a modular chatbot to be used on Lambda-Feedback platform [written in Python].
+This repository contains the code for a modular Socratic chatbot to be used on the Lambda-Feedback platform [written in Python].
+More details about the chatbot's behaviour can be found in the [User Documentation](docs/user.md).
 
 ## Quickstart
 
@@ -43,11 +44,11 @@ In GitHub, choose Use this template > Create a new repository in the repository
 
 Choose the owner, and pick a name for the new repository.
 
-> [!IMPORTANT] If you want to deploy the evaluation function to Lambda Feedback, make sure to choose the Lambda Feedback organization as the owner.
+> [!IMPORTANT] If you want to deploy the chat function to Lambda Feedback, make sure to choose the `Lambda Feedback` organization as the owner.
 
-Set the visibility to Public or Private.
+Set the visibility to `Public` or `Private`.
 
-> [!IMPORTANT] If you want to use GitHub deployment protection rules, make sure to set the visibility to Public.
+> [!IMPORTANT] If you want to use GitHub deployment protection rules, make sure to set the visibility to `Public`.
 
 Click on Create repository.
 
@@ -78,9 +79,9 @@ Also, don't forget to update or delete the Quickstart chapter from the `README.m
 
 ## Development
 
-You can create your own invocation to your own agents hosted anywhere. Copy or update the `base_agent` from `src/agents/` and edit it to match your LLM agent requirements. Import the new invocation in the `module.py` file.
+You can create your own invocation for your own agents hosted anywhere. Copy or update `agent.py` from `src/agent/` and edit it to match your LLM agent requirements. Import the new invocation in the `module.py` file.
 
-You agent can be based on an LLM hosted anywhere, you have available currently OpenAI, AzureOpenAI, and Ollama models but you can introduce your own API call in the `src/agents/llm_factory.py`.
+Your agent can be based on an LLM hosted anywhere; OpenAI, AzureOpenAI, and Ollama models are currently available, but you can introduce your own API call in `src/agent/utils/llm_factory.py`.
 
 ### Prerequisites
 
@@ -90,23 +91,37 @@ You agent can be based on an LLM hosted anywhere, you have available currently O
 ### Repository Structure
 
 ```bash
-.github/workflows/
-  dev.yml              # deploys the DEV function to Lambda Feedback
-  main.yml             # deploys the STAGING function to Lambda Feedback
-  test-report.yml      # gathers Pytest Report of function tests
-
-docs/                  # docs for devs and users
-
-src/module.py          # chat_module function implementation
-src/module_test.py     # chat_module function tests
-src/agents/            # find all agents developed for the chat functionality
-src/agents/utils/test_prompts.py  # allows testing of any LLM agent on a couple of example inputs containing Lambda Feedback Questions and synthetic student conversations
+.
+├── .github/workflows/
+│   ├── dev.yml          # deploys the DEV function to Lambda Feedback
+│   ├── main.yml         # deploys the STAGING and PROD functions to Lambda Feedback
+│   └── test-report.yml  # gathers Pytest Report of function tests
+├── docs/                # docs for devs and users
+├── src/
+│   ├── agent/
+│   │   ├── utils/       # utils for the agent, including the llm_factory
+│   │   ├── agent.py     # the agent logic
+│   │   └── prompts.py   # the system prompts defining the behaviour of the chatbot
+│   └── module.py
+└── tests/               # contains all tests for the chat function
+    ├── manual_agent_requests.py  # allows testing of the docker container through API requests
+    ├── manual_agent_run.py       # allows testing of any LLM agent on a couple of example inputs
+    ├── test_index.py             # pytests
+    └── test_module.py            # pytests
 ```
 
 
 ## Testing the Chat Function
 
-To test your function, you can either call the code directly through a python script. Or you can build the respective chat function docker container locally and call it through an API request. Below you can find details on those processes.
+To test your function, you can run the unit tests, call the code directly through a Python script, or build the respective chat function docker container locally and call it through an API request. Below you can find details on those processes.
+
+### Run Unit Tests
+
+You can run the unit tests using `pytest`.
+
+```bash
+pytest
+```
 
 ### Run the Chat Script
 
@@ -116,9 +131,9 @@ You can run the Python function itself. Make sure to have a main function in eit
 python src/module.py
 ```
 
-You can also use the `testbench_agents.py` script to test the agents with example inputs from Lambda Feedback questions and synthetic conversations.
+You can also use the `manual_agent_run.py` script to test the agents with example inputs from Lambda Feedback questions and synthetic conversations.
 ```bash
-python src/agents/utils/testbench_agents.py
+python tests/manual_agent_run.py
 ```
 
 ### Calling the Docker Image Locally
@@ -156,7 +171,7 @@ curl --location 'http://localhost:8080/2015-03-31/functions/function/invocations
 #### Call Docker Container
 ##### A. Call Docker with Python Requests
 
-In the `src/agents/utils` folder you can find the `requests_testscript.py` script that calls the POST URL of the running docker container. It reads any kind of input files with the expected schema. You can use this to test your curl calls of the chatbot.
+In the `tests/` folder you can find the `manual_agent_requests.py` script that calls the POST URL of the running docker container. It reads any kind of input file with the expected schema. You can use this to test your curl calls of the chatbot.
 
 ##### B. Call Docker Container through API request
 
@@ -183,7 +198,6 @@ Body with optional Params:
     "conversational_style":" ",
    "question_response_details": "",
     "include_test_data": true,
-    "agent_type": {agent_name}
   }
 }
 ```
````
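For the docker-based flow above, here is a hedged sketch of the kind of request `tests/manual_agent_requests.py` might issue: the invocation URL and the three optional params are taken verbatim from this README, while the `message` field and the `params` wrapper key are assumptions about the request schema rather than confirmed names.

```python
import requests

# Local Lambda runtime-interface-emulator endpoint, verbatim from the README.
URL = "http://localhost:8080/2015-03-31/functions/function/invocations"

payload = {
    "message": "Can you give me a hint for part (a)?",  # assumed field name
    "params": {                                         # assumed wrapper key
        # Optional params shown in the README body above:
        "conversational_style": " ",
        "question_response_details": "",
        "include_test_data": True,
    },
}

response = requests.post(URL, json=payload, timeout=60)
print(response.status_code, response.json())
```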

docs/dev.md

Lines changed: 12 additions & 5 deletions

````diff
@@ -38,7 +38,15 @@ Body:
 
 ## Testing the Chat Function
 
-To test your function, you can either call the code directly through a python script. Or you can build the respective chat function docker container locally and call it through an API request. Below you can find details on those processes.
+To test your function, you can run the unit tests, call the code directly through a Python script, or build the respective chat function docker container locally and call it through an API request. Below you can find details on those processes.
+
+### Run Unit Tests
+
+You can run the unit tests using `pytest`.
+
+```bash
+pytest
+```
 
 ### Run the Chat Script
 
@@ -48,9 +56,9 @@ You can run the Python function itself. Make sure to have a main function in eit
 python src/module.py
 ```
 
-You can also use the `testbench_agents.py` script to test the agents with example inputs from Lambda Feedback questions and synthetic conversations.
+You can also use the `manual_agent_run.py` script to test the agents with example inputs from Lambda Feedback questions and synthetic conversations.
 ```bash
-python src/agents/utils/testbench_agents.py
+python tests/manual_agent_run.py
 ```
 
 ### Calling the Docker Image Locally
@@ -88,7 +96,7 @@ curl --location 'http://localhost:8080/2015-03-31/functions/function/invocations
 #### Call Docker Container
 ##### A. Call Docker with Python Requests
 
-In the `src/agents/utils` folder you can find the `requests_testscript.py` script that calls the POST URL of the running docker container. It reads any kind of input files with the expected schema. You can use this to test your curl calls of the chatbot.
+In the `tests/` folder you can find the `manual_agent_requests.py` script that calls the POST URL of the running docker container. It reads any kind of input file with the expected schema. You can use this to test your curl calls of the chatbot.
 
 ##### B. Call Docker Container through API request
 
@@ -115,7 +123,6 @@ Body with optional Params:
     "conversational_style":" ",
     "question_response_details": "",
     "include_test_data": true,
-    "agent_type": {agent_name}
   }
 }
 ```
````
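Both documents now point readers at the pytest suite under `tests/`. The tests themselves are not shown in this commit view; purely as a hypothetical illustration of the kind of test `tests/test_module.py` could contain (the `chat_module` call signature and response shape are assumptions):

```python
from src.module import chat_module

def test_chat_module_smoke():
    # Hypothetical: assumes chat_module accepts a dict-like body and returns
    # a non-empty response; adjust to the real signature in src/module.py.
    response = chat_module({"message": "What is the chain rule?", "params": {}})
    assert response is not None
```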

index.py

Lines changed: 2 additions & 6 deletions

```diff
@@ -1,10 +1,6 @@
 import json
-try:
-    from .src.module import chat_module
-    from .src.agents.utils.types import JsonType
-except ImportError:
-    from src.module import chat_module
-    from src.agents.utils.types import JsonType
+from src.module import chat_module
+from src.agent.utils.types import JsonType
 
 def handler(event: JsonType, context):
     """
```

src/__init__.py

Whitespace-only changes.

Lines changed: 7 additions & 13 deletions

```diff
@@ -1,13 +1,7 @@
-try:
-    from ..llm_factory import OpenAILLMs, GoogleAILLMs
-    from .base_prompts import \
-        role_prompt, conv_pref_prompt, update_conv_pref_prompt, summary_prompt, update_summary_prompt, summary_system_prompt
-    from ..utils.types import InvokeAgentResponseType
-except ImportError:
-    from src.agents.llm_factory import OpenAILLMs, GoogleAILLMs
-    from src.agents.base_agent.base_prompts import \
-        role_prompt, conv_pref_prompt, update_conv_pref_prompt, summary_prompt, update_summary_prompt, summary_system_prompt
-    from src.agents.utils.types import InvokeAgentResponseType
+from src.agent.utils.llm_factory import OpenAILLMs, GoogleAILLMs
+from src.agent.prompts import \
+    role_prompt, conv_pref_prompt, update_conv_pref_prompt, summary_prompt, update_summary_prompt, summary_system_prompt
+from src.agent.utils.types import InvokeAgentResponseType
 
 from langgraph.graph import StateGraph, START, END
 from langchain_core.messages import SystemMessage, RemoveMessage, HumanMessage, AIMessage
@@ -62,7 +56,7 @@ def call_model(self, state: State, config: RunnableConfig) -> str:
         system_message = self.role_prompt
 
         # Adding external student progress and question context details from data queries
-        question_response_details = config["configurable"].get("question_response_details", "")
+        question_response_details = config.get("configurable", {}).get("question_response_details", "")
         if question_response_details:
             system_message += f"## Known Question Materials: {question_response_details} \n\n"
 
@@ -98,8 +92,8 @@ def summarize_conversation(self, state: State, config: RunnableConfig) -> dict:
         """Summarize the conversation."""
 
         summary = state.get("summary", "")
-        previous_summary = config["configurable"].get("summary", "")
-        previous_conversationalStyle = config["configurable"].get("conversational_style", "")
+        previous_summary = config.get("configurable", {}).get("summary", "")
+        previous_conversationalStyle = config.get("configurable", {}).get("conversational_style", "")
         if previous_summary:
             summary = previous_summary
 
```
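The two code hunks above repeat one pattern: `config["configurable"].get(...)` becomes `config.get("configurable", {}).get(...)`. This is a defensive-access fix, since the LangGraph `RunnableConfig` passed to these methods may omit the `configurable` section entirely, and subscripting a missing key raises `KeyError`. A minimal self-contained illustration:

```python
config = {}  # a RunnableConfig-like dict with no "configurable" section

# Old form: config["configurable"].get("summary", "") raises KeyError here,
# because the "configurable" key itself is missing.

# New form: falls back to an empty dict, then to the field's default value.
summary = config.get("configurable", {}).get("summary", "")
assert summary == ""
```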
