Merged
2 changes: 1 addition & 1 deletion docs/guides/profiles.mdx
@@ -44,7 +44,7 @@ interpreter.loop = True
 
 ```YAML
 llm:
-  model: "gpt-4-o"
+  model: "gpt-4o"
   temperature: 0
   # api_key: ... # Your API key, if the API requires it
   # api_base: ... # The URL where an OpenAI-compatible server is running to handle LLM API requests
3 changes: 1 addition & 2 deletions docs/language-models/hosted-models/anyscale.mdx
@@ -13,7 +13,7 @@ interpreter --model anyscale/<model-name>
 ```python Python
 from interpreter import interpreter
 
-# Set the model to use from AWS Bedrock:
+# Set the model to use from Anyscale:
 interpreter.llm.model = "anyscale/<model-name>"
 interpreter.chat()
 ```
@@ -46,7 +46,6 @@ interpreter.llm.model = "anyscale/meta-llama/Llama-2-13b-chat-hf"
 interpreter.llm.model = "anyscale/meta-llama/Llama-2-70b-chat-hf"
 interpreter.llm.model = "anyscale/mistralai/Mistral-7B-Instruct-v0.1"
 interpreter.llm.model = "anyscale/codellama/CodeLlama-34b-Instruct-hf"
-
 ```
 
 </CodeGroup>
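All of the identifiers in this file follow a provider-prefix convention (`anyscale/` followed by the upstream model path). A hypothetical helper, purely for illustration, that applies the prefix:

```python
def anyscale_model_id(model_path: str) -> str:
    """Prefix an upstream model path with the 'anyscale/' provider tag.

    Hypothetical helper; not part of the interpreter API. Leaves
    already-prefixed identifiers unchanged.
    """
    if model_path.startswith("anyscale/"):
        return model_path  # already prefixed
    return f"anyscale/{model_path}"

print(anyscale_model_id("meta-llama/Llama-2-70b-chat-hf"))
```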
5 changes: 2 additions & 3 deletions docs/language-models/hosted-models/aws-sagemaker.mdx
@@ -37,14 +37,13 @@ We support the following completion models from AWS Sagemaker:
 <CodeGroup>
 
 ```bash Terminal
-
 interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-7b
 interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-7b-f
 interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-13b
 interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-13b-f
 interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-70b
 interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-70b-b-f
-interpreter --model sagemaker/<your-hugginface-deployment-name>
+interpreter --model sagemaker/<your-huggingface-deployment-name>
 ```
 
 ```python Python
@@ -54,7 +53,7 @@ interpreter.llm.model = "sagemaker/jumpstart-dft-meta-textgeneration-llama-2-13b
 interpreter.llm.model = "sagemaker/jumpstart-dft-meta-textgeneration-llama-2-13b-f"
 interpreter.llm.model = "sagemaker/jumpstart-dft-meta-textgeneration-llama-2-70b"
 interpreter.llm.model = "sagemaker/jumpstart-dft-meta-textgeneration-llama-2-70b-b-f"
-interpreter.llm.model = "sagemaker/<your-hugginface-deployment-name>"
+interpreter.llm.model = "sagemaker/<your-huggingface-deployment-name>"
 ```
 
 </CodeGroup>
7 changes: 1 addition & 6 deletions docs/language-models/hosted-models/baseten.mdx
@@ -30,20 +30,15 @@ We support the following completion models from Baseten:
 <CodeGroup>
 
 ```bash Terminal
-
 interpreter --model baseten/qvv0xeq
 interpreter --model baseten/q841o8w
 interpreter --model baseten/31dxrj3
-
-
 ```
 
 ```python Python
 interpreter.llm.model = "baseten/qvv0xeq"
 interpreter.llm.model = "baseten/q841o8w"
 interpreter.llm.model = "baseten/31dxrj3"
-
-
 ```
 
 </CodeGroup>
@@ -54,4 +49,4 @@ Set the following environment variables [(click here to learn how)](https://chat
 
 | Environment Variable | Description | Where to Find |
 | -------------------- | --------------- | -------------------------------------------------------------------------------------------------------- |
-| BASETEN_API_KEY'` | Baseten API key | [Baseten Dashboard -> Settings -> Account -> API Keys](https://app.baseten.co/settings/account/api_keys) |
+| `BASETEN_API_KEY` | Baseten API key | [Baseten Dashboard -> Settings -> Account -> API Keys](https://app.baseten.co/settings/account/api_keys) |
5 changes: 1 addition & 4 deletions docs/language-models/hosted-models/cloudflare.mdx
@@ -31,20 +31,17 @@ We support the following completion models from Cloudflare Workers AI:
 <CodeGroup>
 
 ```bash Terminal
-
 interpreter --model cloudflare/@cf/meta/llama-2-7b-chat-fp16
 interpreter --model cloudflare/@cf/meta/llama-2-7b-chat-int8
 interpreter --model @cf/mistral/mistral-7b-instruct-v0.1
 interpreter --model @hf/thebloke/codellama-7b-instruct-awq
-
 ```
 
 ```python Python
 interpreter.llm.model = "cloudflare/@cf/meta/llama-2-7b-chat-fp16"
 interpreter.llm.model = "cloudflare/@cf/meta/llama-2-7b-chat-int8"
 interpreter.llm.model = "@cf/mistral/mistral-7b-instruct-v0.1"
 interpreter.llm.model = "@hf/thebloke/codellama-7b-instruct-awq"
-
 ```
 
 </CodeGroup>
@@ -55,5 +52,5 @@ Set the following environment variables [(click here to learn how)](https://chat
 
 | Environment Variable | Description | Where to Find |
 | ----------------------- | -------------------------- | ---------------------------------------------------------------------------------------------- |
-| `CLOUDFLARE_API_KEY'` | Cloudflare API key | [Cloudflare Profile Page -> API Tokens](https://dash.cloudflare.com/profile/api-tokens) |
+| `CLOUDFLARE_API_KEY` | Cloudflare API key | [Cloudflare Profile Page -> API Tokens](https://dash.cloudflare.com/profile/api-tokens) |
 | `CLOUDFLARE_ACCOUNT_ID` | Your Cloudflare account ID | [Cloudflare Dashboard -> Grab the Account ID from the url like: https://dash.cloudflare.com/{CLOUDFLARE_ACCOUNT_ID}?account= ](https://dash.cloudflare.com/) |
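The table above tells the reader to grab the account ID out of the dashboard URL by eye. The same extraction can be sketched programmatically with the standard library; the example URL and its account ID are hypothetical, chosen only to match the pattern shown in the table:

```python
from urllib.parse import urlparse

def account_id_from_dashboard_url(url: str) -> str:
    """Return the first path segment of a Cloudflare dashboard URL,
    which holds the account ID per the pattern
    https://dash.cloudflare.com/{CLOUDFLARE_ACCOUNT_ID}?account=
    """
    segments = [s for s in urlparse(url).path.split("/") if s]
    if not segments:
        raise ValueError(f"no account ID found in URL: {url}")
    return segments[0]

# Hypothetical example URL matching the pattern in the table above.
print(account_id_from_dashboard_url(
    "https://dash.cloudflare.com/0123456789abcdef?account="
))  # → 0123456789abcdef
```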
5 changes: 1 addition & 4 deletions docs/language-models/hosted-models/deepinfra.mdx
@@ -33,14 +33,12 @@ We support the following completion models from DeepInfra:
 <CodeGroup>
 
 ```bash Terminal
-
 interpreter --model deepinfra/meta-llama/Llama-2-70b-chat-hf
 interpreter --model deepinfra/meta-llama/Llama-2-7b-chat-hf
 interpreter --model deepinfra/meta-llama/Llama-2-13b-chat-hf
 interpreter --model deepinfra/codellama/CodeLlama-34b-Instruct-hf
 interpreter --model deepinfra/mistral/mistral-7b-instruct-v0.1
 interpreter --model deepinfra/jondurbin/airoboros-l2-70b-gpt4-1.4.1
-
 ```
 
 ```python Python
@@ -50,7 +48,6 @@ interpreter.llm.model = "deepinfra/meta-llama/Llama-2-13b-chat-hf"
 interpreter.llm.model = "deepinfra/codellama/CodeLlama-34b-Instruct-hf"
 interpreter.llm.model = "deepinfra/mistral-7b-instruct-v0.1"
 interpreter.llm.model = "deepinfra/jondurbin/airoboros-l2-70b-gpt4-1.4.1"
-
 ```
 
 </CodeGroup>
@@ -61,4 +58,4 @@ Set the following environment variables [(click here to learn how)](https://chat
 
 | Environment Variable | Description | Where to Find |
 | -------------------- | ----------------- | ---------------------------------------------------------------------- |
-| `DEEPINFRA_API_KEY'` | DeepInfra API key | [DeepInfra Dashboard -> API Keys](https://deepinfra.com/dash/api_keys) |
+| `DEEPINFRA_API_KEY` | DeepInfra API key | [DeepInfra Dashboard -> API Keys](https://deepinfra.com/dash/api_keys) |
4 changes: 2 additions & 2 deletions docs/language-models/hosted-models/gpt-4-setup.mdx
@@ -27,7 +27,7 @@ or
 3. **Add Environment Variable**: In the editor, add the line below, replacing `your-api-key-here` with your actual API key:
 
 ```
-export OPENAI\_API\_KEY='your-api-key-here'
+export OPENAI_API_KEY='your-api-key-here'
 ```
 
 4. **Save and Exit**: Press Ctrl+O to write the changes, followed by Ctrl+X to close the editor.
@@ -40,7 +40,7 @@ or
 2. **Set environment variable in the current session**: To set the environment variable in the current session, use the command below, replacing `your-api-key-here` with your actual API key:
 
 ```
-setx OPENAI\_API\_KEY "your-api-key-here"
+setx OPENAI_API_KEY "your-api-key-here"
 ```
 
 This command will set the OPENAI_API_KEY environment variable for the current session.
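The fix above removes stray backslash escapes from the variable name, so tooling that reads `OPENAI_API_KEY` can actually find it. A minimal standard-library sketch of reading the corrected variable back, failing loudly when it is unset (the placeholder value is only for the demo):

```python
import os

def get_openai_api_key() -> str:
    """Read the OPENAI_API_KEY environment variable, failing loudly if unset."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set; export it as shown above.")
    return key

# Placeholder for the demo; in practice the shell export sets this.
os.environ["OPENAI_API_KEY"] = "your-api-key-here"
print(get_openai_api_key())
```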
1 change: 0 additions & 1 deletion docs/language-models/hosted-models/mistral-api.mdx
@@ -30,7 +30,6 @@ We support the following completion models from the Mistral API:
 <CodeGroup>
 
 ```bash Terminal
-
 interpreter --model mistral/mistral-tiny
 interpreter --model mistral/mistral-small
 interpreter --model mistral/mistral-medium
2 changes: 1 addition & 1 deletion docs/language-models/hosted-models/nlp-cloud.mdx
@@ -25,4 +25,4 @@ Set the following environment variables [(click here to learn how)](https://chat
 
 | Environment Variable | Description | Where to Find |
 | -------------------- | ----------------- | ----------------------------------------------------------------- |
-| `NLP_CLOUD_API_KEY'` | NLP Cloud API key | [NLP Cloud Dashboard -> API KEY](https://nlpcloud.com/home/token) |
+| `NLP_CLOUD_API_KEY` | NLP Cloud API key | [NLP Cloud Dashboard -> API KEY](https://nlpcloud.com/home/token) |
3 changes: 1 addition & 2 deletions docs/language-models/hosted-models/perplexity.mdx
@@ -39,7 +39,6 @@ We support the following completion models from the Perplexity API:
 <CodeGroup>
 
 ```bash Terminal
-
 interpreter --model perplexity/pplx-7b-chat
 interpreter --model perplexity/pplx-70b-chat
 interpreter --model perplexity/pplx-7b-online
@@ -77,4 +76,4 @@ Set the following environment variables [(click here to learn how)](https://chat
 
 | Environment Variable | Description | Where to Find |
 | ----------------------- | ------------------------------------ | ----------------------------------------------------------------- |
-| `PERPLEXITYAI_API_KEY'` | The Perplexity API key from pplx-api | [Perplexity API Settings](https://www.perplexity.ai/settings/api) |
+| `PERPLEXITYAI_API_KEY` | The Perplexity API key from pplx-api | [Perplexity API Settings](https://www.perplexity.ai/settings/api) |
2 changes: 1 addition & 1 deletion docs/language-models/hosted-models/togetherai.mdx
@@ -29,4 +29,4 @@ Set the following environment variables [(click here to learn how)](https://chat
 
 | Environment Variable | Description | Where to Find |
 | --------------------- | --------------------------------------------- | ------------------------------------------------------------------------------------------- |
-| `TOGETHERAI_API_KEY'` | The TogetherAI API key from the Settings page | [TogetherAI -> Profile -> Settings -> API Keys](https://api.together.xyz/settings/api-keys) |
+| `TOGETHERAI_API_KEY` | The TogetherAI API key from the Settings page | [TogetherAI -> Profile -> Settings -> API Keys](https://api.together.xyz/settings/api-keys) |
2 changes: 1 addition & 1 deletion docs/language-models/hosted-models/vertex-ai.mdx
@@ -4,7 +4,7 @@ title: Google (Vertex AI)
 
 ## Pre-requisites
 * `pip install google-cloud-aiplatform`
-* Authentication: 
+* Authentication:
 * run `gcloud auth application-default login` See [Google Cloud Docs](https://cloud.google.com/docs/authentication/external/set-up-adc)
 * Alternatively you can set `application_default_credentials.json`
 
4 changes: 2 additions & 2 deletions docs/language-models/hosted-models/vllm.mdx
@@ -27,13 +27,13 @@ interpreter.chat()
 <CodeGroup>
 
 ```bash Terminal
-interpreter --model vllm/<perplexity-model>
+interpreter --model vllm/<vllm-model>
 ```
 
 ```python Python
 from interpreter import interpreter
 
-interpreter.llm.model = "vllm/<perplexity-model>"
+interpreter.llm.model = "vllm/<vllm-model>"
 interpreter.chat()
 ```
 