Commit f44584c

docs: add contact page, improve tables and examples

1 parent 9fc3a33 commit f44584c

17 files changed: +765 −564 lines changed

CHANGELOG.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -1,4 +1,4 @@
-# token.js
+# Token.js
 
 ## 0.0.1
 
````

README.md

Lines changed: 126 additions & 117 deletions

````diff
@@ -1,16 +1,25 @@
-# token.js
+# Token.js
 
-Integrate 9 LLM providers with a single Typescript SDK using OpenAIs format. Free and opensource with no proxy server required.
-
-### [Documentation](http://tokenjs.ai)
+Integrate 60+ LLMs with one TypeScript SDK using OpenAI's format. Free and open source. No proxy server required.
 
 ## Features
 
-* Define prompts in OpenAIs format and have them translated automatially for each LLM provider.
-* Support for tools, JSON output, image inputs, streaming, and more.
-* Support for 9 popular LLM providers: AI21, Anthropic, AWS Bedrock, Cohere, Gemini, Groq, Mistral, OpenAI, and Perplexity with more coming soon.
-* Free and opensource under GPLv3.
-* No proxy server required.
+* Use OpenAI's format to call 60+ LLMs from 9 providers.
+* Supports tools, JSON outputs, image inputs, streaming, and more.
+* Runs completely on the client side. No proxy server needed.
+* Free and open source under GPLv3.
+
+## Supported Providers
+
+* AI21
+* Anthropic
+* AWS Bedrock
+* Cohere
+* Gemini
+* Groq
+* Mistral
+* OpenAI
+* Perplexity
 
 ## Setup
 
````
````diff
@@ -20,82 +29,62 @@ Integrate 9 LLM providers with a single Typescript SDK using OpenAIs format. Fre
 npm install token.js
 ```
 
-### Environment Variables
-
-```env
-OPENAI_API_KEY=<your openai api key>
-GEMINI_API_KEY=<your gemini api key>
-ANTHROPIC_API_KEY=<your api>
-```
-
 ### Usage
 
+Import the Token.js client and call the `create` function with a prompt in OpenAI's format. Specify the model and LLM provider using their respective fields.
+
+```bash
+OPENAI_API_KEY=<openai api key>
+```
 ```ts
-import { TokenJS, ChatCompletionMessageParam } from 'token.js'
+import { TokenJS } from 'token.js'
 
+// Create the Token.js client
 const tokenjs = new TokenJS()
 
-const messages: ChatCompletionMessageParam[] = [
-  {
-    role: 'user',
-    content: `How are you?`,
-  },
-]
-
-// Call OpenAI
-const result = await tokenjs.chat.completions.create({
-  provider: 'openai',
-  model: 'gpt-4o',
-  messages,
-})
-
-// Call Gemini
-const result = await tokenjs.chat.completions.create({
-  provider: 'gemini',
-  model: 'gemini-1.5-pro',
-  messages,
-})
-
-// Call Anthropic
-const result = await tokenjs.chat.completions.create({
-  provider: 'anthropic',
-  model: 'claude-2.0',
-  messages,
-})
+async function main() {
+  // Create a model response
+  const completion = await tokenjs.chat.completions.create({
+    // Specify the provider and model
+    provider: 'openai',
+    model: 'gpt-4o',
+    // Define your message
+    messages: [
+      {
+        role: 'user',
+        content: 'Hello!',
+      },
+    ],
+  })
+  console.log(completion.choices[0])
+}
+main()
 ```
 
-## Access Credential Configuration
+### Access Credentials
 
-token.js uses environment variables to configure access to different LLM providers. Configure your api keys using the following environment variables:
+We recommend using environment variables to configure the credentials for each LLM provider.
 
-```
+```bash
 # OpenAI
 OPENAI_API_KEY=
-
 # AI21
 AI21_API_KEY=
-
 # Anthropic
 ANTHROPIC_API_KEY=
-
 # Cohere
 COHERE_API_KEY=
-
 # Gemini
 GEMINI_API_KEY=
-
 # Groq
 GROQ_API_KEY=
-
 # Mistral
 MISTRAL_API_KEY=
-
 # Perplexity
 PERPLEXITY_API_KEY=
-
 # AWS Bedrock
 AWS_REGION_NAME=
 AWS_ACCESS_KEY_ID=
 AWS_SECRET_ACCESS_KEY=
 ```
 
````
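
The credentials block above lists the environment variables Token.js reads, but not how a Node script might load them. A minimal sketch using `dotenv` to populate `process.env` from a local `.env` file; dotenv is an illustrative assumption here, not something this commit mentions:

```ts
// Illustrative sketch: dotenv is an assumption, not part of this commit.
// It loads KEY=value pairs from a local .env file into process.env, where
// Token.js looks up credentials such as OPENAI_API_KEY.
import 'dotenv/config'
import { TokenJS } from 'token.js'

// The client reads provider credentials from the environment at call time.
const tokenjs = new TokenJS()
```
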
````diff
@@ -102,91 +91,103 @@
-Then you can select the `provider` and `model` you would like to use when calling the `create` function, and token.js will use the correct access credentials for the provider.
-
-## Streaming
+### Streaming
 
-token.js supports streaming for all providers that support it.
+Token.js supports streaming responses for all providers that offer it.
 
 ```ts
 import { TokenJS } from 'token.js'
 
 const tokenjs = new TokenJS()
-const result = await tokenjs.chat.completions.create({
-  stream: true,
-  provider: 'gemini',
-  model: 'gemini-1.5-pro',
-  messages: [
-    {
-      role: 'user',
-      content: `How are you?`,
-    },
-  ],
-})
 
-for await (const part of result) {
-  process.stdout.write(part.choices[0]?.delta?.content || '')
+async function main() {
+  const result = await tokenjs.chat.completions.create({
+    stream: true,
+    provider: 'openai',
+    model: 'gpt-4o',
+    messages: [
+      {
+        role: 'user',
+        content: `Tell me about yourself.`,
+      },
+    ],
+  })
+
+  for await (const part of result) {
+    process.stdout.write(part.choices[0]?.delta?.content || '')
+  }
 }
+main()
 ```
 
-## Tools
+### Function Calling
 
-token.js supports tools for all providers and models that support it.
+Token.js supports the function calling tool for all providers and models that offer it.
 
 ```ts
 import { TokenJS, ChatCompletionTool } from 'token.js'
 
 const tokenjs = new TokenJS()
 
-const tools: ChatCompletionTool[] = [
-  {
-    type: 'function',
-    function: {
-      name: 'getCurrentWeather',
-      description: 'Get the current weather in a given location',
-      parameters: {
-        type: 'object',
-        properties: {
-          location: {
-            type: 'string',
-            description: 'The city and state, e.g. San Francisco, CA',
-          },
-          unit: { type: 'string', enum: ['celsius', 'fahrenheit'] },
-        },
-        required: ['location', 'unit'],
-      },
-    },
-  },
-]
+async function main() {
+  const tools: ChatCompletionTool[] = [
+    {
+      type: 'function',
+      function: {
+        name: 'get_current_weather',
+        description: 'Get the current weather in a given location',
+        parameters: {
+          type: 'object',
+          properties: {
+            location: {
+              type: 'string',
+              description: 'The city and state, e.g. San Francisco, CA',
+            },
+          },
+          required: ['location'],
+        },
+      },
+    },
+  ]
+
+  const result = await tokenjs.chat.completions.create({
+    provider: 'gemini',
+    model: 'gemini-1.5-pro',
+    messages: [
+      {
+        role: 'user',
+        content: `What's the weather like in San Francisco?`,
+      },
+    ],
+    tools,
+    tool_choice: 'auto',
+  })
 
-const result = await tokenjs.chat.completions.create({
-  provider: 'gemini',
-  model: 'gemini-1.5-pro',
-  messages: [
-    {
-      role: 'user',
-      content: `What's the weather like in San Francisco?`,
-    },
-  ],
-  tools,
-  tool_choice: 'auto',
-})
+  console.log(result.choices[0].message.tool_calls)
+}
+main()
 ```
 
-## Providers
+## Feature Compatibility
 
-Not every feature is supported by every provider and model. This table provides a general overview of what features are supported by each provider. For details on which features are supported by individual models from different providers see the [provider documentation](todo(md)/).
+This table provides an overview of the features that Token.js supports from each LLM provider.
 
-| Provider   | Completion         | Streaming          | Tools              | JSON Output        | Image Input        |
+| Provider   | Chat Completion    | Streaming          | Function Calling Tool | JSON Output     | Image Input        |
 | ---------- | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ |
-| openai     | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
-| anthropic  | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
-| bedrock    | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
-| mistral    | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |                    |
-| cohere     | :white_check_mark: | :white_check_mark: | :white_check_mark: |                    |                    |
-| AI21       | :white_check_mark: | :white_check_mark: |                    |                    |                    |
+| OpenAI     | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| Anthropic  | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| Bedrock    | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| Mistral    | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :heavy_minus_sign: |
+| Cohere     | :white_check_mark: | :white_check_mark: | :white_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: |
+| AI21       | :white_check_mark: | :white_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: |
 | Gemini     | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
-| Groq       | :white_check_mark: | :white_check_mark: |                    | :white_check_mark: |                    |
-| Perplexity | :white_check_mark: | :white_check_mark: |                    |                    |                    |
+| Groq       | :white_check_mark: | :white_check_mark: | :heavy_minus_sign: | :white_check_mark: | :heavy_minus_sign: |
+| Perplexity | :white_check_mark: | :white_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: |
+
+### Legend
+| Symbol             | Description                           |
+|--------------------|---------------------------------------|
+| :white_check_mark: | Supported by Token.js                 |
+| :heavy_minus_sign: | Not supported by the LLM provider, so Token.js cannot support it |
 
-If there are more providers or features you would like to see implemented in token.js please let us know by opening an issue!
+**Note**: Certain LLMs, particularly older or weaker models, do not support some features in this table. For details about these restrictions, see our [LLM provider documentation](https://docs.tokenjs.ai/providers).
 
 ## Contributing
 
````
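
The compatibility table above marks JSON output as supported for several providers, but the README diff never shows it in use. Since Token.js adopts OpenAI's request format, a sketch would presumably pass OpenAI's `response_format` parameter; that Token.js forwards this field is an assumption, not something this commit confirms:

```ts
import { TokenJS } from 'token.js'

const tokenjs = new TokenJS()

async function main() {
  const completion = await tokenjs.chat.completions.create({
    provider: 'openai',
    model: 'gpt-4o',
    // OpenAI's JSON mode parameter; assumed (not confirmed by this commit)
    // to be forwarded by Token.js to providers that support JSON output.
    response_format: { type: 'json_object' },
    messages: [
      {
        role: 'user',
        content: 'List three primary colors as a JSON array under the key "colors".',
      },
    ],
  })
  console.log(completion.choices[0].message.content)
}
main()
```
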

````diff
@@ -216,8 +217,16 @@ pnpm test
 pnpm lint
 ```
 
-### Open a pull request!
+## Contact Us
+
+Please reach out if there's any way that we can improve Token.js!
+
+Here are a few ways you can reach us:
+* [Discord](TODO)
+* [Schedule a meeting](https://calendly.com/sam_goldman/tokenjs)
+* Call or text: [+1 (516) 206-6928](tel:+15162066928)
+* Email: [sam@glade.so](mailto:sam@glade.so)
 
 ## License
 
-token.js is free and open source under the GPLv3 license.
+Token.js is free and open source software licensed under [GPLv3](https://github.com/token-js/token.js/blob/main/LICENSE).
````
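
Likewise, the table marks image inputs as supported for OpenAI, Anthropic, Bedrock, and Gemini, with no example in the README. In OpenAI's chat format, which Token.js adopts, an image is sent as an `image_url` content part; that Token.js accepts this message shape unchanged is an assumption for illustration:

```ts
import { TokenJS } from 'token.js'

const tokenjs = new TokenJS()

async function main() {
  const completion = await tokenjs.chat.completions.create({
    provider: 'openai',
    model: 'gpt-4o',
    messages: [
      {
        role: 'user',
        // A multimodal message in OpenAI's format: one text part and one
        // image part. The URL is a placeholder for illustration.
        content: [
          { type: 'text', text: 'What is in this image?' },
          { type: 'image_url', image_url: { url: 'https://example.com/photo.jpg' } },
        ],
      },
    ],
  })
  console.log(completion.choices[0].message.content)
}
main()
```
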
