
🔌 API Integration

This guide explains how the Acode AI CLI Assistant Plugin integrates with various AI providers and APIs.

🎯 Overview

The plugin uses a unified approach to integrate with multiple AI providers, allowing users to switch between services seamlessly while maintaining consistent functionality.

🤖 AI Provider Integration

OpenAI Integration

The plugin integrates with OpenAI's API using the @langchain/openai package:

import { ChatOpenAI } from "@langchain/openai";

const openai = new ChatOpenAI({
  apiKey: "your-api-key",
  model: "gpt-4"
});

Features

  • Support for all GPT chat models available to your API key
  • Streaming responses for better user experience
  • Rate limit handling
  • Error management

API Endpoints

  • Models: https://api.openai.com/v1/models
  • Chat Completion: https://api.openai.com/v1/chat/completions

Google Gemini Integration

Uses the @langchain/google-genai package:

import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

const gemini = new ChatGoogleGenerativeAI({
  model: "gemini-pro",
  apiKey: "your-api-key"
});

Features

  • Support for Gemini Pro and Gemini 1.5 Pro
  • Safety settings configuration
  • Context-aware responses
  • Code analysis capabilities

API Endpoints

  • Models: https://generativelanguage.googleapis.com/v1/models

Ollama Integration

For local AI models using the @langchain/community package:

import { ChatOllama } from "@langchain/community/chat_models/ollama";

const ollama = new ChatOllama({
  baseUrl: "http://localhost:11434",
  model: "codellama"
});

Features

  • Local model execution
  • No internet required
  • Privacy-focused processing
  • Custom model support

Requirements

  • Ollama installed locally and running (default port 11434)
  • Models pulled locally (e.g., ollama pull codellama); a quick reachability check is sketched below
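
To confirm a local Ollama server is reachable and see which models have been pulled, you can query its /api/tags endpoint, as in this quick sketch:

const res = await fetch("http://localhost:11434/api/tags");
const { models } = await res.json();
console.log(models.map((m) => m.name)); // e.g. ["codellama:latest"]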

Groq Integration

Uses the @langchain/groq package:

import { ChatGroq } from "@langchain/groq";

const groq = new ChatGroq({
  apiKey: "your-api-key",
  model: "llama3-70b-8192"
});

Features

  • Extremely fast inference
  • Support for Llama3 and Mixtral models
  • Cost-effective processing
  • Streaming responses

API Endpoints

  • Models: https://api.groq.com/openai/v1/models
  • Chat Completion: https://api.groq.com/openai/v1/chat/completions

Anthropic Claude Integration

Uses the @langchain/anthropic package:

import { ChatAnthropic } from "@langchain/anthropic";

const claude = new ChatAnthropic({
  apiKey: "your-api-key",
  model: "claude-3-sonnet"
});

Features

  • Advanced reasoning capabilities
  • Excellent code explanation
  • Context window management
  • Safety and ethical AI practices

API Endpoints

  • Messages: https://api.anthropic.com/v1/messages

OpenRouter Integration

Custom integration that provides access to multiple models:

const openRouter = new ChatOpenAI({
  apiKey: "your-api-key",
  model: "selected-model",
  configuration: {
    baseURL: "https://openrouter.ai/api/v1",
    defaultHeaders: {
      "HTTP-Referer": "https://acode.foxdebug.com",
      "X-Title": "Renz Ai Cli"
    }
  }
});

Features

  • Access to 100+ AI models
  • Unified API interface
  • Automatic model selection
  • Cost comparison tools

API Endpoints

  • Models: https://openrouter.ai/api/v1/models
  • Chat Completion: https://openrouter.ai/api/v1/chat/completions

Qwen Integration

Integration with Alibaba's Qwen models:

const qwen = new ChatOpenAI({
  apiKey: "your-api-key",
  model: "qwen-turbo",
  configuration: {
    baseURL: "https://dashscope.aliyuncs.com/compatible-mode/v1"
  }
});

Features

  • Chinese language optimization
  • Multiple Qwen model variants
  • Alibaba Cloud integration
  • Regional language support

API Endpoints

  • Chat Completion: https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions

OpenAI-Like Integration

Generic integration for any OpenAI-compatible API:

const openaiLike = new ChatOpenAI({
  apiKey: "your-api-key",
  model: "selected-model",
  configuration: {
    baseURL: "your-custom-endpoint"
  }
});

Features

  • Custom endpoint support
  • Model flexibility
  • Self-hosted AI compatibility
  • Enterprise API integration

🔄 Provider Switching

Implementation

The plugin allows seamless switching between providers:

initiateModel(providerName, token, model) {
  switch (providerName) {
    case "OpenAI":
      this.modelInstance = new ChatOpenAI({ apiKey: token, model });
      break;
    case "Google":
      this.modelInstance = new ChatGoogleGenerativeAI({ apiKey: token, model });
      break;
    // ... other providers
  }
}

State Management

  • Conversation history maintained across provider switches
  • Context preserved when possible
  • Settings saved per provider

📡 API Communication

Request Format

All requests follow a consistent format using LangChain:

const prompt = ChatPromptTemplate.fromMessages([
  ["system", systemPromptWithContext],
  ["placeholder", "{chat_history}"],
  ["human", "{input}"]
]);

const parser = new StringOutputParser();
const chain = prompt.pipe(this.modelInstance).pipe(parser);
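
Once the chain is assembled, sending a request is a single invoke call with the template variables filled in. A rough usage sketch (the variable names here are illustrative, not the plugin's actual ones):

// previousMessages is the stored conversation history,
// e.g. [["human", "Explain this function"], ["ai", "It parses..."]]
const reply = await chain.invoke({
  chat_history: previousMessages,
  input: "Refactor this loop into a map call"
});
console.log(reply); // a plain string, thanks to StringOutputParser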

Response Handling

  • Streaming responses for better UX (see the sketch below)
  • Markdown formatting support
  • Code block detection and styling
  • Error handling and user feedback
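
LangChain chains also expose a stream() method, so the streaming variant might look like this minimal sketch (the UI-update call is a hypothetical stand-in for whatever the plugin renders into):

let answer = "";
const stream = await chain.stream({
  chat_history: previousMessages,
  input: userMessage
});
for await (const chunk of stream) {
  answer += chunk; // StringOutputParser yields string chunks
  renderPartialResponse(answer); // hypothetical UI hook
}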

Rate Limit Management

  • Automatic retry logic
  • Exponential backoff (see the sketch below)
  • User notifications for rate limiting
  • Provider-specific handling
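
A retry wrapper with exponential backoff can be quite small. In this sketch the delays, attempt count, and the 429 status check are illustrative assumptions, not the plugin's exact policy:

async function withRetry(fn, maxAttempts = 3) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      // Only retry rate-limit style failures; rethrow everything else
      const isRateLimit = err?.status === 429;
      if (!isRateLimit || attempt === maxAttempts - 1) throw err;
      const delay = 1000 * 2 ** attempt; // 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}

// Usage: const reply = await withRetry(() => chain.invoke({ input, chat_history }));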

๐Ÿ” Security in API Integration

Key Management

  • AES-GCM encryption for all API keys (see the sketch below)
  • PBKDF2 key derivation from user passphrase
  • Secure storage in Acode's local storage
  • No plaintext key storage
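
With the standard Web Crypto API, the PBKDF2-to-AES-GCM flow described above looks roughly like this sketch (the iteration count and the salt/IV handling are assumptions for illustration; in practice both values are random and stored alongside the ciphertext):

async function encryptApiKey(apiKey, passphrase, salt, iv) {
  const enc = new TextEncoder();
  // Derive an AES-GCM key from the user's passphrase via PBKDF2
  const keyMaterial = await crypto.subtle.importKey(
    "raw", enc.encode(passphrase), "PBKDF2", false, ["deriveKey"]
  );
  const aesKey = await crypto.subtle.deriveKey(
    { name: "PBKDF2", salt, iterations: 100000, hash: "SHA-256" },
    keyMaterial,
    { name: "AES-GCM", length: 256 },
    false,
    ["encrypt"]
  );
  // Returns the ciphertext as an ArrayBuffer; never persist the plaintext key
  return crypto.subtle.encrypt({ name: "AES-GCM", iv }, aesKey, enc.encode(apiKey));
}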

HTTPS Enforcement

  • All API communications over HTTPS
  • Certificate validation
  • Secure header management
  • Data integrity checks

Privacy Considerations

  • User control over data transmission
  • Clear indication of when AI is processing
  • Option to use local AI (Ollama)
  • No telemetry or analytics collection

📈 Performance Optimization

Connection Pooling

  • Reuse connections where possible
  • Efficient request handling
  • Concurrent request management

Caching Strategy

  • Response caching for common queries (see the sketch below)
  • Context-aware cache invalidation
  • Cache expiration management
  • Storage optimization
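
A minimal in-memory cache with expiration might look like this sketch (the Map-based storage and five-minute TTL are assumptions, not the plugin's actual values):

const cache = new Map();

function setCached(key, value) {
  cache.set(key, { value, at: Date.now() });
}

function getCached(key, ttlMs = 5 * 60 * 1000) {
  const entry = cache.get(key);
  if (!entry) return undefined;
  if (Date.now() - entry.at > ttlMs) {
    cache.delete(key); // expired; drop the stale entry
    return undefined;
  }
  return entry.value;
}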

Token Efficiency

  • Content filtering to reduce token usage
  • Smart context inclusion
  • Prompt optimization
  • Usage tracking and monitoring

🛠️ Custom Provider Integration

Adding New Providers

To add support for new AI providers:

  1. Install Required Packages

    npm install @langchain/your-provider-package
  2. Add Provider to Constants

    export const AI_PROVIDERS = [
      "OpenAI",
      "Google",
      "Ollama",
      "Groq",
      "Anthropic",
      "YourNewProvider"
    ];
  3. Implement Provider Logic

    case AI_PROVIDERS[5]: // "YourNewProvider" is the sixth entry (index 5)
      this.modelInstance = new YourNewProvider({
        apiKey: token,
        model: model
      });
      break;
  4. Add Model Fetching

    async function getModelsFromProvider(provider, apiKey) {
      switch (provider) {
        case "YourNewProvider": {
          // Hypothetical endpoint; substitute your provider's models API
          const res = await fetch("https://api.yourprovider.com/v1/models", {
            headers: { Authorization: `Bearer ${apiKey}` }
          });
          const { data } = await res.json();
          return data.map((m) => m.id);
        }
        default:
          return [];
      }
    }

๐Ÿ› Troubleshooting API Integration

Common Issues

Authentication Failures

  • Invalid API keys
  • Expired credentials
  • Provider-specific auth requirements

Connection Problems

  • Network connectivity issues
  • API endpoint downtime
  • Firewall or proxy interference

Model Availability

  • Selected model not available
  • Provider model changes
  • Regional restrictions

Rate Limiting

  • Exceeded request quotas
  • Provider-specific limits
  • Burst vs sustained rate limits

Integration Verification

To verify API integration is working:

  1. Configure API key for a provider
  2. Send a test request (see the sketch below)
  3. Verify response format and content
  4. Check error handling for invalid requests
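
A quick smoke test from the console can reuse any of the chat model instances shown earlier; every LangChain chat model exposes invoke():

const res = await openai.invoke("Reply with the single word: pong");
console.log(res.content); // "pong" if authentication and routing work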

📞 Getting Help

If you encounter API integration issues:

  1. Check the Common Issues documentation
  2. Visit our GitHub Issues page
  3. Join our Discussions community for support
