README.md (+25 −17)

````diff
@@ -123,30 +123,38 @@ fr.compare(url)
 ### Generating Text with LLMs
 
-FireRequests supports generating responses from LLMs like OpenAI’s and Google’s generative models in parallel batches. This currently doesn't work in Colab.
+FireRequests allows you to run LLM API calls (like OpenAI or Google) in parallel batches using a decorator. This keeps the library lightweight and lets users supply their own logic for calling APIs. This approach currently doesn't work in Colab.
 
 ```python
 from firerequests import FireRequests
 
 # Initialize FireRequests
 fr = FireRequests()
 
-# Set parameters
-provider = "openai"
-model = "gpt-4o-mini"
-system_prompt = "Provide concise answers."
-user_prompts = ["What is AI?", "Explain quantum computing.", "What is Bitcoin?", "Explain neural networks."]
-parallel_requests = 2
-
-# Generate responses
-responses = fr.generate(
-    provider=provider,
-    model=model,
-    system_prompt=system_prompt,
-    user_prompts=user_prompts,
-    parallel_requests=parallel_requests
-)
+# Use the decorator to define your own prompt function
````
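The hunk above is cut off before the new decorator-based example finishes, so here is a minimal self-contained sketch of the pattern the updated text describes: a decorator that wraps a user-supplied single-prompt function and fans a batch of prompts out across a fixed number of parallel workers. The decorator below is hand-rolled with `asyncio` purely for illustration — the name `parallel`, its signature, and the stubbed `ask` function are assumptions, not FireRequests' actual API.

```python
import asyncio
from typing import Awaitable, Callable

# Illustrative only: a hand-rolled stand-in for the kind of decorator the
# updated README describes. FireRequests' real decorator name and signature
# may differ; nothing below is the library's confirmed API.
def parallel(parallel_requests: int = 2):
    def wrap(fn: Callable[[str], Awaitable[str]]) -> Callable[[list[str]], list[str]]:
        async def run_batch(prompts: list[str]) -> list[str]:
            sem = asyncio.Semaphore(parallel_requests)

            async def guarded(prompt: str) -> str:
                async with sem:  # cap in-flight calls at parallel_requests
                    return await fn(prompt)

            # gather returns results in the same order as the input prompts
            return await asyncio.gather(*(guarded(p) for p in prompts))

        def run(prompts: list[str]) -> list[str]:
            # asyncio.run fails inside an already-running event loop,
            # which is likely why this pattern doesn't work in Colab.
            return asyncio.run(run_batch(prompts))

        return run

    return wrap

@parallel(parallel_requests=2)
async def ask(prompt: str) -> str:
    # Supply your own OpenAI/Google API call here; stubbed for the sketch.
    await asyncio.sleep(0.1)
    return f"answer to: {prompt}"

print(ask(["What is AI?", "Explain quantum computing."]))
```

Using a semaphore rather than fixed-size chunks keeps up to `parallel_requests` calls in flight at all times instead of waiting on the slowest call in each chunk.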