API Integration
This guide explains how the Acode AI CLI Assistant Plugin integrates with various AI providers and APIs.
The plugin uses a unified approach to integrate with multiple AI providers, allowing users to switch between services seamlessly while maintaining consistent functionality.
The plugin integrates with OpenAI's API using the @langchain/openai package:
import { ChatOpenAI } from "@langchain/openai";
const openai = new ChatOpenAI({
  apiKey: "your-api-key",
  model: "gpt-4"
});
Key features:
- Support for all GPT models
- Streaming responses for better user experience
- Rate limit handling
- Error management
API endpoints:
- Models: https://api.openai.com/v1/models
- Chat Completion: https://api.openai.com/v1/chat/completions
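Once configured, the instance is called like any LangChain chat model. A minimal sketch, assuming the openai instance above (the prompt text is illustrative):
// One-shot request
const reply = await openai.invoke("Explain what a closure is in JavaScript.");
console.log(reply.content);

// Streaming request, accumulating chunks as they arrive
let partial = "";
const stream = await openai.stream("Summarize this file in one sentence.");
for await (const chunk of stream) {
  partial += chunk.content; // update the UI with `partial` here
}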
Google Gemini support uses the @langchain/google-genai package:
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
const gemini = new ChatGoogleGenerativeAI({
  model: "gemini-pro",
  apiKey: "your-api-key"
});
Key features:
- Support for Gemini Pro and Gemini 1.5 Pro
- Safety settings configuration
- Context-aware responses
- Code analysis capabilities
API endpoints:
- Models: https://generativelanguage.googleapis.com/v1/models
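A hedged sketch of the safety settings configuration mentioned above, using the category and threshold enums from the underlying @google/generative-ai package (the specific category and threshold chosen here are illustrative):
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { HarmCategory, HarmBlockThreshold } from "@google/generative-ai";

const gemini = new ChatGoogleGenerativeAI({
  apiKey: "your-api-key",
  model: "gemini-pro",
  // Tighten filtering for one harm category; others keep provider defaults
  safetySettings: [
    {
      category: HarmCategory.HARM_CATEGORY_HARASSMENT,
      threshold: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE
    }
  ]
});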
For local AI models, the plugin uses Ollama via the @langchain/community package:
import { ChatOllama } from "@langchain/community/chat_models/ollama";
const ollama = new ChatOllama({
  baseUrl: "http://localhost:11434",
  model: "codellama"
});
Key features:
- Local model execution
- No internet required
- Privacy-focused processing
- Custom model support
Prerequisites:
- Ollama installed locally
- Models pulled locally (e.g., ollama pull codellama)
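Before sending requests, it can help to confirm that the local server is running and the model has been pulled. A sketch using Ollama's /api/tags endpoint, which lists locally available models:
// List locally pulled models via Ollama's REST API
const res = await fetch("http://localhost:11434/api/tags");
const { models } = await res.json();
const hasCodellama = models.some((m) => m.name.startsWith("codellama"));
if (!hasCodellama) {
  console.warn("Run `ollama pull codellama` before using this provider.");
}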
Groq support uses the @langchain/groq package:
import { ChatGroq } from "@langchain/groq";
const groq = new ChatGroq({
  apiKey: "your-api-key",
  model: "llama3-70b-8192"
});
Key features:
- Extremely fast inference
- Support for Llama3 and Mixtral models
- Cost-effective processing
- Streaming responses
API endpoints:
- Models: https://api.groq.com/openai/v1/models
- Chat Completion: https://api.groq.com/openai/v1/chat/completions
Anthropic Claude support uses the @langchain/anthropic package:
import { ChatAnthropic } from "@langchain/anthropic";
const claude = new ChatAnthropic({
  apiKey: "your-api-key",
  model: "claude-3-sonnet"
});
Key features:
- Advanced reasoning capabilities
- Excellent code explanation
- Context window management
- Safety and ethical AI practices
API endpoints:
- Messages: https://api.anthropic.com/v1/messages
OpenRouter support is a custom integration, built on the OpenAI-compatible client, that provides access to multiple models:
const openRouter = new ChatOpenAI({
  apiKey: "your-api-key",
  model: "selected-model",
  configuration: {
    baseURL: "https://openrouter.ai/api/v1",
    defaultHeaders: {
      "HTTP-Referer": "https://acode.foxdebug.com",
      "X-Title": "Renz Ai Cli"
    }
  }
});
Key features:
- Access to 100+ AI models
- Unified API interface
- Automatic model selection
- Cost comparison tools
API endpoints:
- Models: https://openrouter.ai/api/v1/models
- Chat Completion: https://openrouter.ai/api/v1/chat/completions
Integration with Alibaba's Qwen models:
const qwen = new ChatOpenAI({
  apiKey: "your-api-key",
  model: "qwen-turbo",
  configuration: {
    baseURL: "https://dashscope.aliyuncs.com/compatible-mode/v1"
  }
});
Key features:
- Chinese language optimization
- Multiple Qwen model variants
- Alibaba Cloud integration
- Regional language support
API endpoints:
- Chat Completion: https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions
Generic integration for any OpenAI-compatible API:
const openaiLike = new ChatOpenAI({
  apiKey: "your-api-key",
  model: "selected-model",
  configuration: {
    baseURL: "your-custom-endpoint"
  }
});
Key features:
- Custom endpoint support
- Model flexibility
- Self-hosted AI compatibility
- Enterprise API integration
The plugin allows seamless switching between providers:
initiateModel(providerName, token, model) {
  switch (providerName) {
    case "OpenAI":
      this.modelInstance = new ChatOpenAI({ apiKey: token, model });
      break;
    case "Google":
      this.modelInstance = new ChatGoogleGenerativeAI({ apiKey: token, model });
      break;
    // ... other providers
  }
}
When switching providers:
- Conversation history maintained across provider switches (a sketch follows this list)
- Context preserved when possible
- Settings saved per provider
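A sketch of how history can survive a switch: the conversation lives outside any model instance, so rebuilding the model leaves it intact. The assistant object and ask helper here are illustrative, not the plugin's exact internals:
import { HumanMessage, AIMessage } from "@langchain/core/messages";

// History lives outside any model instance, so it survives provider switches
const chatHistory = [];

async function ask(chain, input) {
  const answer = await chain.invoke({ input, chat_history: chatHistory });
  chatHistory.push(new HumanMessage(input), new AIMessage(answer));
  return answer;
}

// Rebuilding the model (and its chain) leaves chatHistory untouched
assistant.initiateModel("Google", token, "gemini-pro");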
All requests follow a consistent format using LangChain:
const prompt = ChatPromptTemplate.fromMessages([
  ["system", systemPromptWithContext],
  ["placeholder", "{chat_history}"],
  ["human", "{input}"]
]);
const parser = new StringOutputParser();
const chain = prompt.pipe(this.modelInstance).pipe(parser);
Response handling:
- Streaming responses for better UX (a sketch follows this list)
- Markdown formatting support
- Code block detection and styling
- Error handling and user feedback
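A sketch of the streaming path, assuming the chain built above (the renderMarkdown callback is a hypothetical UI hook):
// Stream the chain's output chunk-by-chunk for responsive UI updates
const stream = await chain.stream({
  input: userMessage,
  chat_history: chatHistory
});

let buffer = "";
for await (const chunk of stream) {
  buffer += chunk;        // StringOutputParser yields plain text chunks
  renderMarkdown(buffer); // hypothetical UI update callback
}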
Rate limit handling:
- Automatic retry logic (a sketch follows this list)
- Exponential backoff
- User notifications for rate limiting
- Provider-specific handling
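A minimal sketch of retry with exponential backoff, assuming rate-limited responses surface as errors carrying an HTTP 429 status (how the status is exposed varies by provider SDK):
async function withRetry(requestFn, maxRetries = 3) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await requestFn();
    } catch (err) {
      const rateLimited = err?.status === 429;
      if (!rateLimited || attempt >= maxRetries) throw err;
      const delayMs = 1000 * 2 ** attempt; // 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Usage: wrap any provider call
const reply = await withRetry(() => chain.invoke({ input, chat_history: [] }));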
API key security:
- AES-GCM encryption for all API keys (a sketch follows this list)
- PBKDF2 key derivation from user passphrase
- Secure storage in Acode's local storage
- No plaintext key storage
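A hedged sketch of that scheme using the Web Crypto API; the iteration count and key length are illustrative choices, not necessarily the plugin's exact parameters:
async function deriveKey(passphrase, salt) {
  const material = await crypto.subtle.importKey(
    "raw", new TextEncoder().encode(passphrase), "PBKDF2", false, ["deriveKey"]
  );
  return crypto.subtle.deriveKey(
    { name: "PBKDF2", salt, iterations: 100000, hash: "SHA-256" }, // illustrative iteration count
    material,
    { name: "AES-GCM", length: 256 },
    false,
    ["encrypt", "decrypt"]
  );
}

async function encryptApiKey(apiKey, passphrase) {
  const salt = crypto.getRandomValues(new Uint8Array(16));
  const iv = crypto.getRandomValues(new Uint8Array(12)); // AES-GCM needs a fresh IV per encryption
  const key = await deriveKey(passphrase, salt);
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv }, key, new TextEncoder().encode(apiKey)
  );
  // Store salt, IV, and ciphertext; none of them reveal the key in plaintext
  return { salt, iv, ciphertext };
}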
Network security:
- All API communications over HTTPS
- Certificate validation
- Secure header management
- Data integrity checks
Privacy:
- User control over data transmission
- Clear indication of when AI is processing
- Option to use local AI (Ollama)
- No telemetry or analytics collection
Connection management:
- Reuse connections where possible
- Efficient request handling
- Concurrent request management
Caching:
- Response caching for common queries (a sketch follows this list)
- Context-aware cache invalidation
- Cache expiration management
- Storage optimization
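A minimal in-memory sketch of response caching with expiration; the TTL and key scheme are illustrative assumptions:
const responseCache = new Map();
const CACHE_TTL_MS = 5 * 60 * 1000; // assumed 5-minute expiry

function cacheKey(provider, model, prompt) {
  return `${provider}:${model}:${prompt}`;
}

function getCachedResponse(key) {
  const entry = responseCache.get(key);
  if (!entry) return null;
  if (Date.now() - entry.createdAt > CACHE_TTL_MS) {
    responseCache.delete(key); // expired entries are evicted lazily
    return null;
  }
  return entry.response;
}

function setCachedResponse(key, response) {
  responseCache.set(key, { response, createdAt: Date.now() });
}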
Token usage optimization:
- Content filtering to reduce token usage
- Smart context inclusion (a sketch follows this list)
- Prompt optimization
- Usage tracking and monitoring
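As one illustration of smart context inclusion, a rough sketch that keeps only the most recent messages within an approximate token budget (the four-characters-per-token heuristic is a common approximation, not an exact count):
// Keep the most recent messages that fit within an approximate token budget
function trimChatHistory(messages, maxTokens = 4000) {
  const approxTokens = (text) => Math.ceil(text.length / 4);
  const kept = [];
  let used = 0;
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = approxTokens(messages[i].content);
    if (used + cost > maxTokens) break;
    kept.unshift(messages[i]); // preserve chronological order
    used += cost;
  }
  return kept;
}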
To add support for new AI providers:
1. Install the required packages:
npm install @langchain/your-provider-package
2. Add the provider to the constants:
export const AI_PROVIDERS = ["OpenAI", "Google", "Ollama", "Groq", "Anthropic", "YourNewProvider"];
3. Implement the provider logic in initiateModel:
case AI_PROVIDERS[5]: // "YourNewProvider" (the last entry in the array above)
  this.modelInstance = new YourNewProvider({ apiKey: token, model });
  break;
4. Add model fetching:
async function getModelsFromProvider(provider, apiKey) {
  switch (provider) {
    case "YourNewProvider":
      // Fetch available models from the provider's API
      break;
  }
}
The plugin accounts for several categories of errors:
Authentication errors:
- Invalid API keys
- Expired credentials
- Provider-specific auth requirements
Network errors:
- Network connectivity issues
- API endpoint downtime
- Firewall or proxy interference
Model availability errors:
- Selected model not available
- Provider model changes
- Regional restrictions
Rate limit errors:
- Exceeded request quotas
- Provider-specific limits
- Burst vs. sustained rate limits
To verify API integration is working:
- Configure API key for a provider
- Send a test request
- Verify response format and content
- Check error handling for invalid requests
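A minimal smoke test along those lines; any configured provider instance can stand in for the ChatOpenAI example here:
// Send a trivial request and check that a well-formed response comes back
const model = new ChatOpenAI({ apiKey: "your-api-key", model: "gpt-4" });
try {
  const res = await model.invoke("Reply with the single word: pong");
  console.log("Response:", res.content); // expect "pong"
} catch (err) {
  // Invalid keys and network failures surface here
  console.error("API integration check failed:", err?.message ?? err);
}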
If you encounter API integration issues:
- Check the Common Issues documentation
- Visit our GitHub Issues page
- Join our Discussions community for support