**README.md** (36 additions, 22 deletions)
# reflectiveChatFunction

This repository contains the code for a modular Socratic chatbot to be used on the Lambda-Feedback platform, written in Python.

More details about the chatbot's behaviour can be found in the [User Documentation](docs/user.md).

## Quickstart
In GitHub, choose Use this template > Create a new repository in the repository.

Choose the owner, and pick a name for the new repository.

> [!IMPORTANT]
> If you want to deploy the chat function to Lambda Feedback, make sure to choose the `Lambda Feedback` organization as the owner.

Set the visibility to `Public` or `Private`.

> [!IMPORTANT]
> If you want to use GitHub deployment protection rules, make sure to set the visibility to `Public`.

Click on Create repository.
## Development

You can create your own invocation to your own agents hosted anywhere. Copy or update `agent.py` from `src/agent/` and edit it to match your LLM agent requirements. Import the new invocation in the `module.py` file.

Your agent can be based on an LLM hosted anywhere. OpenAI, AzureOpenAI, and Ollama models are currently available, but you can introduce your own API call in `src/agent/utils/llm_factory.py`.
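If you do add your own back end, one common pattern is a small provider registry. The sketch below is illustrative only — the function names and the registry are assumptions, not the actual contents of `llm_factory.py`:

```python
# Sketch of a provider registry for an LLM factory. All names here are
# illustrative assumptions, not the real llm_factory.py API.
from typing import Callable, Dict

# Each provider name maps to a constructor that returns a prompt -> reply callable.
_PROVIDERS: Dict[str, Callable[..., Callable[[str], str]]] = {}

def register_provider(name: str):
    # Decorator so a new back end (e.g. your own HTTP API) plugs in cleanly.
    def wrap(ctor):
        _PROVIDERS[name] = ctor
        return ctor
    return wrap

def get_llm(name: str, **kwargs):
    # Look up and construct the requested client; fail loudly on typos.
    try:
        return _PROVIDERS[name](**kwargs)
    except KeyError:
        raise ValueError(f"Unknown LLM provider: {name!r}") from None

@register_provider("echo")
def _echo_client(prefix: str = "") -> Callable[[str], str]:
    # Toy back end used only to demonstrate the registration pattern.
    return lambda prompt: prefix + prompt
```

A real provider would construct the corresponding API client instead of the toy `echo` closure.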
### Prerequisites
### Repository Structure
```bash
.
├── .github/workflows/
│   ├── dev.yml                  # deploys the DEV function to Lambda Feedback
│   ├── main.yml                 # deploys the STAGING and PROD functions to Lambda Feedback
│   └── test-report.yml          # gathers the Pytest report of the function tests
├── docs/                        # docs for devs and users
├── src/
│   ├── agent/
│   │   ├── utils/               # utils for the agent, including the llm_factory
│   │   ├── agent.py             # the agent logic
│   │   └── prompts.py           # the system prompts defining the behaviour of the chatbot
│   └── module.py
└── tests/                       # contains all tests for the chat function
    ├── manual_agent_requests.py # allows testing of the docker container through API requests
    ├── manual_agent_run.py      # allows testing of any LLM agent on a couple of example inputs
    ├── test_index.py            # pytests
    └── test_module.py           # pytests
```
## Testing the Chat Function
To test your function, you can run the unit tests, call the code directly through a Python script, or build the chat function docker container locally and call it through an API request. Below you can find details on those processes.

### Run Unit Tests

You can run the unit tests using `pytest`:

```bash
pytest
```
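pytest will collect any `test_*.py` file under `tests/`, so the suite can be extended with plain assert-style tests. For illustration, a self-contained example of the kind of test it picks up (the `normalise` helper is a stand-in, not a function from this repository):

```python
# Illustrative test file only -- not part of this repository.
# Real tests would import the code under test from src/ instead.

def normalise(text: str) -> str:
    # Stand-in helper: collapse whitespace and lowercase.
    return " ".join(text.split()).lower()

def test_normalise_collapses_whitespace():
    assert normalise("  Hello   World ") == "hello world"

def test_normalise_is_idempotent():
    once = normalise("A  B")
    assert normalise(once) == once
```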
### Run the Chat Script
You can run the Python function itself; make sure a main function is present:

```bash
python src/module.py
```
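For reference, a minimal sketch of what such a main entry point could look like — the `chat_module` signature and the response fields here are assumptions for illustration, not the repository's actual API:

```python
# Sketch: a main entry point so `python src/module.py` runs the chat function once.
# `chat_module` below is a stand-in; the real signature may differ.

def chat_module(message: str, params: dict) -> dict:
    # Stand-in for the repository's chat_module implementation.
    return {"chatbot_response": f"You said: {message}"}

def main() -> dict:
    # Invoke the chat function once with a sample input and show the reply.
    result = chat_module("What is a derivative?", {"conversation_history": []})
    print(result["chatbot_response"])
    return result

if __name__ == "__main__":
    main()
```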
You can also use the `manual_agent_run.py` script to test the agents with example inputs from Lambda Feedback questions and synthetic conversations.

In the `tests/` folder you can find the `manual_agent_requests.py` script, which calls the POST URL of the running docker container. It reads any input file with the expected schema. You can use this to test your curl calls to the chatbot.
##### B. Call Docker Container through API request
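As a sketch of such a request from Python: the URL below is the default AWS Lambda runtime-interface-emulator endpoint, and the payload fields are assumptions about the expected schema rather than the documented contract — adapt both to your setup:

```python
# Sketch: POST a chat message to the locally running docker container.
# The URL is the AWS Lambda runtime interface emulator default; the payload
# fields ("message", "params") are assumed, not the documented schema.
import json
import urllib.request

URL = "http://localhost:8080/2015-03-31/functions/function/invocations"

def build_payload(message: str) -> bytes:
    # Hypothetical input schema -- check docs/dev.md for the real one.
    return json.dumps({
        "message": message,
        "params": {"conversation_history": []},
    }).encode("utf-8")

def invoke(url: str, payload: bytes) -> dict:
    # Send the JSON body and decode the container's JSON response.
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example (requires the container to be running):
#   invoke(URL, build_payload("Hi, can you help me with this question?"))
```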
**docs/dev.md** (12 additions, 5 deletions)
## Testing the Chat Function

To test your function, you can run the unit tests, call the code directly through a Python script, or build the chat function docker container locally and call it through an API request. Below you can find details on those processes.

### Run Unit Tests

You can run the unit tests using `pytest`:

```bash
pytest
```
### Run the Chat Script
You can run the Python function itself; make sure a main function is present:

```bash
python src/module.py
```

You can also use the `manual_agent_run.py` script to test the agents with example inputs from Lambda Feedback questions and synthetic conversations.

In the `tests/` folder you can find the `manual_agent_requests.py` script, which calls the POST URL of the running docker container. It reads any input file with the expected schema. You can use this to test your curl calls to the chatbot.
##### B. Call Docker Container through API request