Commit 1e529df

feat: officially support generic openai compatible api providers
1 parent 8141704 commit 1e529df

File tree

8 files changed: +147 -2 lines changed

Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
---
'token.js': patch
---

Officially support generic OpenAI compatible providers

README.md

Lines changed: 4 additions & 1 deletion
@@ -4,7 +4,7 @@ Integrate 200+ LLMs with one TypeScript SDK using OpenAI's format. Free and open
 
 ## Features
 
-* Use OpenAI's format to call 200+ LLMs from 10 providers.
+* Use OpenAI's format to call 200+ LLMs from 10+ providers.
 * Supports tools, JSON outputs, image inputs, streaming, and more.
 * Runs completely on the client side. No proxy server needed.
 * Free and open source under MIT.
@@ -21,6 +21,7 @@ Integrate 200+ LLMs with one TypeScript SDK using OpenAI's format. Free and open
 * OpenAI
 * Perplexity
 * OpenRouter
+* Any other model provider with an OpenAI compatible API
 
 ## [Documentation](https://docs.tokenjs.ai/)
 
@@ -91,6 +92,8 @@ OPENROUTER_API_KEY=
 AWS_REGION_NAME=
 AWS_ACCESS_KEY_ID=
 AWS_SECRET_ACCESS_KEY=
+# OpenAI Compatible
+OPENAI_COMPATIBLE_API_KEY=
 ```
 
 ### Streaming

Lines changed: 51 additions & 0 deletions
@@ -0,0 +1,51 @@
# Generic OpenAI compatible model providers
Token.js can be used with any model provider whose API is compatible with the OpenAI v1 API spec. This integration has been tested specifically with Ollama; if you run into problems with other providers, please let us know by opening an issue.

## Usage
This example demonstrates using Ollama running locally with Token.js.

Optionally, you may specify an API key with an environment variable. You typically do not need one for Ollama, but you may need to provide one for other model providers.
{% code title=".env" %}
```bash
OPENAI_COMPATIBLE_API_KEY=
```
{% endcode %}

```typescript
import { TokenJS } from 'token.js'

// Create the Token.js client and specify the baseURL for the OpenAI v1 API compatible provider
const tokenjs = new TokenJS({
  baseURL: 'http://127.0.0.1:11434/v1/',
})

async function main() {
  // Create a model response
  const completion = await tokenjs.chat.completions.create({
    // Specify the provider and model
    provider: 'openai-compatible',
    model: 'llama3.2',
    // Define your messages
    messages: [
      {
        role: 'user',
        content: 'Hello!',
      },
    ],
  })
  console.log(completion.choices[0])
}
main()
```

## Compatibility
You can use any model provider that is compatible with the OpenAI v1 API spec with Token.js. However, support for individual features varies widely across providers and models. If you run into problems with Token.js features such as image input, tool use, or streaming, we recommend reviewing the documentation of the provider you are using, as these features may not always be supported.
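
For instance, here is a minimal sketch of streaming with a generic provider (assuming your provider and model actually support streaming; the chunks follow the OpenAI streaming format, which the handler passes through unchanged):

```typescript
import { TokenJS } from 'token.js'

// Point the client at a local Ollama server, as in the usage example above.
const tokenjs = new TokenJS({ baseURL: 'http://127.0.0.1:11434/v1/' })

async function main() {
  // Passing `stream: true` returns an async iterable of OpenAI-format chunks.
  const stream = await tokenjs.chat.completions.create({
    provider: 'openai-compatible',
    model: 'llama3.2',
    stream: true,
    messages: [{ role: 'user', content: 'Tell me a short story.' }],
  })
  for await (const chunk of stream) {
    // Each chunk carries an incremental delta; print the text as it arrives.
    process.stdout.write(chunk.choices[0]?.delta?.content ?? '')
  }
}
main()
```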

## OpenAI Compatible Providers
A variety of API providers are compatible with the OpenAI v1 API spec and can be used with Token.js. Here are a few commonly used ones. If you often use a different OpenAI compatible provider, please open a PR to expand this list and help others.

* [LocalAI](https://localai.io/)
* [Ollama](https://github.com/ollama/ollama)

## Official Support for Specific Providers
While Token.js may be used with any model provider that is compatible with the OpenAI API spec, we do not consider such providers to be officially supported. Token.js aims to provide official support for model providers commonly used by our community. Official support includes dedicated integration testing, better documentation of the features each provider supports, and improved type support. If you are using an OpenAI API compatible provider and would like it to be officially supported by Token.js, please let us know by opening an issue on GitHub.

docs/providers/openrouter.md

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-# Perplexity
+# OpenRouter
 
 [Get an OpenRouter API key](https://openrouter.ai/settings/keys)

src/chat/index.ts

Lines changed: 3 additions & 0 deletions
@@ -18,6 +18,7 @@ export type MistralModel = (typeof models.mistral.models)[number]
 export type PerplexityModel = (typeof models.perplexity.models)[number]
 export type GroqModel = (typeof models.groq.models)[number]
 export type OpenRouterModel = string
+export type OpenAICompatibleModel = string
 
 export type LLMChatModel =
   | OpenAIModel
@@ -30,6 +31,7 @@ export type LLMChatModel =
   | PerplexityModel
   | GroqModel
   | OpenRouterModel
+  | OpenAICompatibleModel
 
 export type LLMProvider = keyof typeof models
 
@@ -44,6 +46,7 @@ type ProviderModelMap = {
   perplexity: PerplexityModel
   groq: GroqModel
   openrouter: OpenRouterModel
+  'openai-compatible': OpenAICompatibleModel
 }
 
 type CompletionBase<P extends LLMProvider> = Pick<
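
As a simplified sketch (illustrative literals only, not the library's exact types), this is how a provider-to-model map like the one above narrows the `model` field: most providers constrain it to a union of known names, while `openrouter` and `openai-compatible` leave it as plain `string`:

```typescript
// Hypothetical, reduced version of ProviderModelMap.
type ModelMap = {
  openai: 'gpt-4o' | 'gpt-4o-mini' // illustrative literals, not the real model list
  'openai-compatible': string // any model name type-checks for a generic provider
}

type Request<P extends keyof ModelMap> = {
  provider: P
  model: ModelMap[P]
}

// Compiles: the generic provider accepts any model string.
const ok: Request<'openai-compatible'> = {
  provider: 'openai-compatible',
  model: 'llama3.2',
}
console.log(ok.model)
```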

src/handlers/openai-compatible.ts

Lines changed: 62 additions & 0 deletions
@@ -0,0 +1,62 @@
import OpenAI from 'openai'
import { Stream } from 'openai/streaming'

import {
  CompletionParams,
  OpenAICompatibleModel,
  ProviderCompletionParams,
} from '../chat/index.js'
import {
  CompletionResponse,
  StreamCompletionResponse,
} from '../userTypes/index.js'
import { BaseHandler } from './base.js'
import { InputError } from './types.js'

async function* streamOpenAI(
  response: Stream<OpenAI.Chat.Completions.ChatCompletionChunk>
): StreamCompletionResponse {
  for await (const chunk of response) {
    yield chunk
  }
}

// To support a new provider, we create a handler for it that extends the BaseHandler class and implements the create method.
// Then we update the Handlers object in src/handlers/utils.ts to include the new handler.
export class OpenAICompatibleHandler extends BaseHandler<OpenAICompatibleModel> {
  protected validateInputs(body: CompletionParams): void {
    super.validateInputs(body)

    if (!this.opts.baseURL) {
      throw new InputError(
        'No baseURL option provided for openai compatible provider. You must define a baseURL option to use a generic openai compatible API with Token.js'
      )
    }
  }

  async create(
    body: ProviderCompletionParams<'openai'>
  ): Promise<CompletionResponse | StreamCompletionResponse> {
    this.validateInputs(body)

    // Uses the OPENAI_COMPATIBLE_API_KEY environment variable if the apiKey is not provided.
    // This makes the UX better for switching between providers because you can just
    // define all the environment variables and then change the model field without doing anything else.
    const apiKey = this.opts.apiKey ?? process.env.OPENAI_COMPATIBLE_API_KEY
    const openai = new OpenAI({
      ...this.opts,
      apiKey,
    })

    // We have to delete the provider field because it's not a valid parameter for the OpenAI API.
    const params: any = body
    delete params.provider

    if (body.stream) {
      const stream = await openai.chat.completions.create(body)
      return streamOpenAI(stream)
    } else {
      return openai.chat.completions.create(body)
    }
  }
}
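
As a quick illustration of the baseURL check in `validateInputs` above, here is a sketch of the failure mode it guards against: constructing the client without a `baseURL` and requesting an `openai-compatible` completion rejects with the `InputError`.

```typescript
import { TokenJS } from 'token.js'

// No baseURL is set, so the openai-compatible handler's validateInputs should throw.
const client = new TokenJS()

client.chat.completions
  .create({
    provider: 'openai-compatible',
    model: 'llama3.2',
    messages: [{ role: 'user', content: 'Hi' }],
  })
  .catch((err) => {
    // Expected message: "No baseURL option provided for openai compatible provider. ..."
    console.error(err.message)
  })
```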

src/handlers/utils.ts

Lines changed: 11 additions & 0 deletions
@@ -12,6 +12,7 @@ import { CohereHandler } from './cohere.js'
 import { GeminiHandler } from './gemini.js'
 import { GroqHandler } from './groq.js'
 import { MistralHandler } from './mistral.js'
+import { OpenAICompatibleHandler } from './openai-compatible.js'
 import { OpenAIHandler } from './openai.js'
 import { OpenRouterHandler } from './openrouter.js'
 import { PerplexityHandler } from './perplexity.js'
@@ -118,6 +119,16 @@ export const Handlers: Record<string, (opts: ConfigOptions) => any> = {
     models.openrouter.supportsN,
     models.openrouter.supportsStreaming
   ),
+  ['openai-compatible']: (opts: ConfigOptions) =>
+    new OpenAICompatibleHandler(
+      opts,
+      models.openrouter.models,
+      models.openrouter.supportsJSON,
+      models.openrouter.supportsImages,
+      models.openrouter.supportsToolCalls,
+      models.openrouter.supportsN,
+      models.openrouter.supportsStreaming
+    ),
 }
 
 export const getHandler = (
src/models.ts

Lines changed: 10 additions & 0 deletions
@@ -424,4 +424,14 @@ export const models = {
     supportsN: true,
     generateDocs: false,
   },
+  'openai-compatible': {
+    models: true,
+    supportsCompletion: true,
+    supportsStreaming: true,
+    supportsJSON: true,
+    supportsImages: true,
+    supportsToolCalls: true,
+    supportsN: true,
+    generateDocs: false,
+  },
 }
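
Note that, unlike the other provider entries, whose `models` field lists concrete model names, this entry sets `models: true`, presumably signaling that any model name is accepted. A minimal sketch of how such a flag could be consumed (a hypothetical helper, not part of token.js):

```typescript
// Hypothetical helper: `true` means "accept any model"; an array is an allow-list.
type ModelSupport = true | readonly string[]

const supportsModel = (models: ModelSupport, model: string): boolean =>
  models === true || models.includes(model)

console.log(supportsModel(true, 'llama3.2')) // true: generic provider
console.log(supportsModel(['gpt-4o'], 'llama3.2')) // false: not in the allow-list
```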
