Commit 24ff700

release: 0.31.1 (#249)

Authored by stainless-app[bot], Cole McCracken, and Graden Rea.
Co-authored-by: stainless-app[bot] <142633134+stainless-app[bot]@users.noreply.github.com>
Co-authored-by: Cole McCracken <colemccracken@gmail.com>
Co-authored-by: Graden Rea <grea@groq.com>

1 parent aa6d8b1, commit 24ff700

29 files changed: +862 / −239 lines

.github/workflows/ci.yml (4 additions, 2 deletions)

```diff
@@ -36,13 +36,13 @@ jobs:
         run: ./scripts/lint

   build:
-    if: github.repository == 'stainless-sdks/groqcloud-python' && (github.event_name == 'push' || github.event.pull_request.head.repo.fork)
+    if: github.event_name == 'push' || github.event.pull_request.head.repo.fork
     timeout-minutes: 10
     name: build
     permissions:
       contents: read
       id-token: write
-    runs-on: depot-ubuntu-24.04
+    runs-on: ${{ github.repository == 'stainless-sdks/groqcloud-python' && 'depot-ubuntu-24.04' || 'ubuntu-latest' }}
     steps:
       - uses: actions/checkout@v4

@@ -61,12 +61,14 @@ jobs:
         run: rye build

       - name: Get GitHub OIDC Token
+        if: github.repository == 'stainless-sdks/groqcloud-python'
         id: github-oidc
         uses: actions/github-script@v6
         with:
           script: core.setOutput('github_token', await core.getIDToken());

       - name: Upload tarball
+        if: github.repository == 'stainless-sdks/groqcloud-python'
         env:
           URL: https://pkg.stainless.com/s
           AUTH: ${{ steps.github-oidc.outputs.github_token }}
```
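The new `runs-on` value relies on the GitHub Actions expression idiom `cond && a || b`, which behaves like a ternary whenever `a` is truthy (as both runner labels are here). A minimal Python sketch of the same selection logic, using the repository name and runner labels from the diff:

```python
def pick_runner(repository: str) -> str:
    """Mirror `${{ repo-check && 'depot-ubuntu-24.04' || 'ubuntu-latest' }}`."""
    return (
        "depot-ubuntu-24.04"
        if repository == "stainless-sdks/groqcloud-python"
        else "ubuntu-latest"
    )

print(pick_runner("stainless-sdks/groqcloud-python"))  # depot-ubuntu-24.04
print(pick_runner("groq/groq-python"))                 # ubuntu-latest
```

The net effect of the two hunks: the job now runs on forks and public mirrors (on the default `ubuntu-latest`), while the Stainless-internal repo keeps its Depot runner and the upload steps remain gated to that repo.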

.release-please-manifest.json (1 addition, 1 deletion)

```diff
@@ -1,3 +1,3 @@
 {
-  ".": "0.31.0"
+  ".": "0.31.1"
 }
```

.stats.yml (3 additions, 3 deletions)

```diff
@@ -1,4 +1,4 @@
 configured_endpoints: 17
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/groqcloud%2Fgroqcloud-4543b558a0a546fc45d3300535b9b535f9cf251f4284bc255d3bc337727e5a50.yml
-openapi_spec_hash: 09235cb11f84f84a07819c2b3f0a6d6a
-config_hash: 6b1c374dcc1ffa3165dd22f52a77ff89
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/groqcloud%2Fgroqcloud-7b46be9995c02d5d7f1e18b2f29704c54a25664c6c9b36330d133883d1ad3fce.yml
+openapi_spec_hash: 210b0cc34ee0750530d872c12e39cc31
+config_hash: 961b4995e909aef11a454befa56ad3d2
```

CHANGELOG.md (35 additions, 0 deletions)

```diff
@@ -1,5 +1,40 @@
 # Changelog

+## 0.31.1 (2025-09-04)
+
+Full Changelog: [v0.31.0...v0.31.1](https://github.com/groq/groq-python/compare/v0.31.0...v0.31.1)
+
+### Features
+
+* **api:** api update ([0a8a5c0](https://github.com/groq/groq-python/commit/0a8a5c0be7709857fdff72d03a34dce723f13b71))
+* **api:** api update ([7896dbd](https://github.com/groq/groq-python/commit/7896dbd7b60828795d2e9780cdf326b001fc839c))
+* **api:** api update ([f9b5dca](https://github.com/groq/groq-python/commit/f9b5dca378b9916ca2ab592922b9ffba7247a1ae))
+* **api:** api update ([e34ec28](https://github.com/groq/groq-python/commit/e34ec28c9705485684293cfb97eec63b16257bd7))
+* **api:** api update ([cd14d0a](https://github.com/groq/groq-python/commit/cd14d0ab7eb0b161ffeef69a7b89cce062aeb078))
+* **api:** api update ([7d02cd3](https://github.com/groq/groq-python/commit/7d02cd32852d47016cf3b6cc7ca0f898dd0c99c5))
+* **api:** api update ([437f5fb](https://github.com/groq/groq-python/commit/437f5fb2040d345614ffe9ff15c525feb2ab19de))
+* improve future compat with pydantic v3 ([42a6cb9](https://github.com/groq/groq-python/commit/42a6cb96e631ad48c371a1ab58b6adea07897685))
+* **types:** replace List[str] with SequenceNotStr in params ([d1ee744](https://github.com/groq/groq-python/commit/d1ee74483e6d2e2353985d760837e72af3143be8))
+
+### Bug Fixes
+
+* avoid newer type syntax ([21f93a7](https://github.com/groq/groq-python/commit/21f93a72c50f49187e09310df41e5dbb984298c3))
+* compeltions streaming overloads ([201e137](https://github.com/groq/groq-python/commit/201e137d9b336b4fef847713e6e743ac47e733be))
+* update example model from decommissioned models to gpt-oss-20b ([1570583](https://github.com/groq/groq-python/commit/1570583818c7355c1ea6cc4460075f60d3cd29bb))
+* update example model from decommissioned models to gpt-oss-20b ([#250](https://github.com/groq/groq-python/issues/250)) ([edb9132](https://github.com/groq/groq-python/commit/edb91324ebff8d10f993492dd68ed05ac030bc9f))
+
+### Chores
+
+* **internal:** add Sequence related utils ([01104d8](https://github.com/groq/groq-python/commit/01104d8c4a179a2ad347d0e12f935d6547f61b31))
+* **internal:** change ci workflow machines ([e796cb9](https://github.com/groq/groq-python/commit/e796cb9599db99fa9e29f97e54ffe519bd23bf1b))
+* **internal:** fix ruff target version ([b58149d](https://github.com/groq/groq-python/commit/b58149d9584a5c9b0afc935af6aef2a2c64c1187))
+* **internal:** update comment in script ([bd20c49](https://github.com/groq/groq-python/commit/bd20c49c957e990aeb8c3bb8bb9c3dc9bcc31b86))
+* **internal:** update pyright exclude list ([124e838](https://github.com/groq/groq-python/commit/124e8387b8ba18853f93dede16ffba8c49c441a0))
+* update @stainless-api/prism-cli to v5.15.0 ([b2f49cc](https://github.com/groq/groq-python/commit/b2f49cc1301696827eb00d5db91895d71c3da965))
+* update github action ([c7662e0](https://github.com/groq/groq-python/commit/c7662e01641af2daa758eb23bb19c9b65d0d5d20))
+
 ## 0.31.0 (2025-08-05)

 Full Changelog: [v0.30.0...v0.31.0](https://github.com/groq/groq-python/compare/v0.30.0...v0.31.0)
```
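The `SequenceNotStr` entry above addresses a well-known typing pitfall: `str` is itself a `Sequence[str]` (a sequence of one-character strings), so a parameter annotated `Sequence[str]` silently accepts a bare string like `"stop"` where `["stop"]` was intended. A minimal sketch of a runtime guard with the same intent (the helper name is illustrative, not the SDK's actual implementation, which does this at the type level):

```python
from typing import Sequence


def ensure_sequence_not_str(value: Sequence[str]) -> list:
    """Reject a bare str, which type checkers would otherwise accept
    for Sequence[str] because str is a sequence of 1-char strings."""
    if isinstance(value, str):
        raise TypeError("expected a sequence of strings, not a bare str")
    return list(value)


print(ensure_sequence_not_str(["</s>", "\n\n"]))  # ['</s>', '\n\n']
# ensure_sequence_not_str("</s>") would raise TypeError
```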

README.md (10 additions, 10 deletions)

````diff
@@ -39,7 +39,7 @@ chat_completion = client.chat.completions.create(
             "content": "Explain the importance of low latency LLMs",
         }
     ],
-    model="llama3-8b-8192",
+    model="openai/gpt-oss-20b",
 )
 print(chat_completion.choices[0].message.content)
 ```
@@ -71,7 +71,7 @@ async def main() -> None:
             "content": "Explain the importance of low latency LLMs",
         }
     ],
-    model="llama3-8b-8192",
+    model="openai/gpt-oss-20b",
 )
 print(chat_completion.choices[0].message.content)

@@ -112,7 +112,7 @@ async def main() -> None:
             "content": "Explain the importance of low latency LLMs",
         }
     ],
-    model="llama3-8b-8192",
+    model="openai/gpt-oss-20b",
 )
 print(chat_completion.id)

@@ -146,9 +146,9 @@ chat_completion = client.chat.completions.create(
         }
     ],
     model="meta-llama/llama-4-scout-17b-16e-instruct",
-    search_settings={},
+    compound_custom={},
 )
-print(chat_completion.search_settings)
+print(chat_completion.compound_custom)
 ```

 ## File uploads
@@ -196,7 +196,7 @@ try:
             "content": "Explain the importance of low latency LLMs",
         },
     ],
-    model="llama3-8b-8192",
+    model="openai/gpt-oss-20b",
 )
 except groq.APIConnectionError as e:
     print("The server could not be reached")
@@ -251,7 +251,7 @@ client.with_options(max_retries=5).chat.completions.create(
             "content": "Explain the importance of low latency LLMs",
         },
     ],
-    model="llama3-8b-8192",
+    model="openai/gpt-oss-20b",
 )
 ```

@@ -286,7 +286,7 @@ client.with_options(timeout=5.0).chat.completions.create(
             "content": "Explain the importance of low latency LLMs",
         },
     ],
-    model="llama3-8b-8192",
+    model="openai/gpt-oss-20b",
 )
 ```

@@ -336,7 +336,7 @@ response = client.chat.completions.with_raw_response.create(
         "role": "user",
         "content": "Explain the importance of low latency LLMs",
     }],
-    model="llama3-8b-8192",
+    model="openai/gpt-oss-20b",
 )
 print(response.headers.get('X-My-Header'))

@@ -366,7 +366,7 @@ with client.chat.completions.with_streaming_response.create(
             "content": "Explain the importance of low latency LLMs",
         },
     ],
-    model="llama3-8b-8192",
+    model="openai/gpt-oss-20b",
 ) as response:
     print(response.headers.get("X-My-Header"))
````

pyproject.toml (3 additions, 2 deletions)

```diff
@@ -1,6 +1,6 @@
 [project]
 name = "groq"
-version = "0.31.0"
+version = "0.31.1"
 description = "The official Python library for the groq API"
 dynamic = ["readme"]
 license = "Apache-2.0"
@@ -148,6 +148,7 @@ exclude = [
     "_dev",
     ".venv",
     ".nox",
+    ".git",
 ]

 reportImplicitOverride = true
@@ -159,7 +160,7 @@ reportPrivateUsage = false
 [tool.ruff]
 line-length = 120
 output-format = "grouped"
-target-version = "py37"
+target-version = "py38"

 [tool.ruff.format]
 docstring-code-format = true
```

scripts/mock (2 additions, 2 deletions)

```diff
@@ -21,7 +21,7 @@ echo "==> Starting mock server with URL ${URL}"

 # Run prism mock on the given spec
 if [ "$1" == "--daemon" ]; then
-    npm exec --package=@stainless-api/prism-cli@5.8.5 -- prism mock "$URL" &> .prism.log &
+    npm exec --package=@stainless-api/prism-cli@5.15.0 -- prism mock "$URL" &> .prism.log &

     # Wait for server to come online
     echo -n "Waiting for server"
@@ -37,5 +37,5 @@ if [ "$1" == "--daemon" ]; then

     echo
 else
-    npm exec --package=@stainless-api/prism-cli@5.8.5 -- prism mock "$URL"
+    npm exec --package=@stainless-api/prism-cli@5.15.0 -- prism mock "$URL"
 fi
```

scripts/test (1 addition, 1 deletion)

```diff
@@ -43,7 +43,7 @@ elif ! prism_is_running ; then
     echo -e "To run the server, pass in the path or url of your OpenAPI"
     echo -e "spec to the prism command:"
     echo
-    echo -e "  \$ ${YELLOW}npm exec --package=@stoplight/prism-cli@~5.3.2 -- prism mock path/to/your.openapi.yml${NC}"
+    echo -e "  \$ ${YELLOW}npm exec --package=@stainless-api/prism-cli@5.15.0 -- prism mock path/to/your.openapi.yml${NC}"
     echo

     exit 1
```

src/groq/_base_client.py (3 additions, 3 deletions)

```diff
@@ -59,7 +59,7 @@
     ModelBuilderProtocol,
 )
 from ._utils import is_dict, is_list, asyncify, is_given, lru_cache, is_mapping
-from ._compat import PYDANTIC_V2, model_copy, model_dump
+from ._compat import PYDANTIC_V1, model_copy, model_dump
 from ._models import GenericModel, FinalRequestOptions, validate_type, construct_type
 from ._response import (
     APIResponse,
@@ -232,7 +232,7 @@ def _set_private_attributes(
         model: Type[_T],
         options: FinalRequestOptions,
     ) -> None:
-        if PYDANTIC_V2 and getattr(self, "__pydantic_private__", None) is None:
+        if (not PYDANTIC_V1) and getattr(self, "__pydantic_private__", None) is None:
             self.__pydantic_private__ = {}

         self._model = model
@@ -320,7 +320,7 @@ def _set_private_attributes(
         client: AsyncAPIClient,
         options: FinalRequestOptions,
     ) -> None:
-        if PYDANTIC_V2 and getattr(self, "__pydantic_private__", None) is None:
+        if (not PYDANTIC_V1) and getattr(self, "__pydantic_private__", None) is None:
             self.__pydantic_private__ = {}

         self._model = model
```
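Inverting the gate from `if PYDANTIC_V2` to `if not PYDANTIC_V1` is what makes this change forward-compatible: the branch now stays active for any future pydantic major version (v3 and beyond), not just v2. A minimal sketch of the difference, deriving flags from sample version strings (the flag derivation is an assumption for illustration, not the SDK's exact `_compat` code):

```python
def flags(version: str) -> tuple:
    """Derive (is_v1, is_v2) from a pydantic version string."""
    major = int(version.split(".")[0])
    return (major == 1, major == 2)

for ver in ("1.10.15", "2.8.2", "3.0.0"):
    is_v1, is_v2 = flags(ver)
    # Old gate `if PYDANTIC_V2` skips the v3 case entirely;
    # new gate `if not PYDANTIC_V1` covers v2 and every later major.
    print(ver, "old gate:", is_v2, "new gate:", not is_v1)
```

For `"3.0.0"` the old gate evaluates to `False` while the new gate evaluates to `True`, so the `__pydantic_private__` initialization above would still run under a future pydantic v3.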
