Integrate 60+ LLMs with one TypeScript SDK using OpenAI's format. Free and open source. No proxy server required.

### [Documentation](http://tokenjs.ai)
## Features
* Use OpenAI's format to call 60+ LLMs from 9 providers.
* Supports tools, JSON outputs, image inputs, streaming, and more.
* Runs completely on the client side. No proxy server needed.
* Free and open source under GPLv3.
## Supported Providers
* AI21
* Anthropic
* AWS Bedrock
* Cohere
* Gemini
* Groq
* Mistral
* OpenAI
* Perplexity
## Setup
```bash
npm install token.js
```
### Usage
Import the Token.js client and call the `create` function with a prompt in OpenAI's format. Specify the model and LLM provider using their respective fields.
We recommend using environment variables to configure the credentials for each LLM provider.
```bash
# OpenAI
OPENAI_API_KEY=
# AI21
AI21_API_KEY=
# Anthropic
ANTHROPIC_API_KEY=
# Cohere
COHERE_API_KEY=
# Gemini
GEMINI_API_KEY=
# Groq
GROQ_API_KEY=
# Mistral
MISTRAL_API_KEY=
# Perplexity
PERPLEXITY_API_KEY=
# AWS Bedrock
AWS_REGION_NAME=
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
```
### Streaming
Token.js supports streaming responses for all providers that offer it.
```ts
import { TokenJS } from 'token.js'

const tokenjs = new TokenJS()

async function main() {
  const result = await tokenjs.chat.completions.create({
    stream: true,
    provider: 'openai',
    model: 'gpt-4o',
    messages: [
      {
        role: 'user',
        content: `Tell me about yourself.`,
      },
    ],
  })

  // Streamed responses arrive as chunks in OpenAI's format.
  for await (const part of result) {
    process.stdout.write(part.choices[0]?.delta?.content || '')
  }
}

main()
```

### Function Calling

Token.js supports tool calls in OpenAI's format. Define your tools, then pass them to the `create` function along with a `tool_choice`:

```ts
import { TokenJS } from 'token.js'

const tokenjs = new TokenJS()

// A tool definition in OpenAI's function-calling format.
const tools = [
  {
    type: 'function',
    function: {
      name: 'get_current_weather',
      description: 'Get the current weather in a given location',
      parameters: {
        type: 'object',
        properties: {
          location: {
            type: 'string',
            description: 'The city and state, e.g. San Francisco, CA',
          },
        },
        required: ['location'],
      },
    },
  },
]

async function main() {
  const result = await tokenjs.chat.completions.create({
    provider: 'gemini',
    model: 'gemini-1.5-pro',
    messages: [
      {
        role: 'user',
        content: `What's the weather like in San Francisco?`,
      },
    ],
    tools,
    tool_choice: 'auto',
  })

  console.log(result.choices[0].message.tool_calls)
}

main()
```
## Feature Compatibility
This table provides an overview of the features that Token.js supports from each LLM provider.
| Symbol | Description |
| --- | --- |
| :heavy_minus_sign: | Not supported by the LLM provider, so Token.js cannot support it |
**Note**: Certain LLMs, particularly older or weaker models, do not support some features in this table. For details about these restrictions, see our [LLM provider documentation](https://docs.tokenjs.ai/providers).
## Contributing
```bash
pnpm lint
```
## Contact Us
Please reach out if there's any way that we can improve Token.js!
Here are a few ways you can reach us:
* [Discord](TODO)
* [Schedule a meeting](https://calendly.com/sam_goldman/tokenjs)
* Call or text: [+1 (516) 206-6928](tel:+15162066928)
* Email: [sam@glade.so](mailto:sam@glade.so)
## License
Token.js is free and open source software licensed under [GPLv3](https://github.com/token-js/token.js/blob/main/LICENSE).