From 72ca8827c5287b96f647f7bd134679778c0fcd86 Mon Sep 17 00:00:00 2001 From: Edgars Adamoics Date: Wed, 17 Dec 2025 13:55:46 +0000 Subject: [PATCH 1/6] Adds comprehensive README documentation Introduces a detailed README file to provide users with a comprehensive guide to the Speechmatics Python SDK. The README includes: - Quick start instructions for installation and basic usage - Information on key features, use cases, and integration examples - Documentation links and migration guides - Information about Speechmatics technology - Links to resources and community support --- README.md | 1351 ++++++++++++++++++++++-- logo/speechmatics-dark-theme-logo.png | Bin 0 -> 24494 bytes logo/speechmatics-light-theme-logo.png | Bin 0 -> 26219 bytes 3 files changed, 1275 insertions(+), 76 deletions(-) create mode 100644 logo/speechmatics-dark-theme-logo.png create mode 100644 logo/speechmatics-light-theme-logo.png diff --git a/README.md b/README.md index c66cf2d..aec24a5 100644 --- a/README.md +++ b/README.md @@ -1,127 +1,1326 @@ -# Speechmatics Python SDK +
-[![License](https://img.shields.io/badge/license-MIT-yellow.svg)](https://github.com/speechmatics/speechmatics-python-sdk/blob/master/LICENSE) -[![PythonSupport](https://img.shields.io/badge/Python-3.9%2B-green)](https://www.python.org/) + + + + Speechmatics Logo + -A collection of Python clients for Speechmatics APIs packaged as separate installable packages. These packages replace the old [speechmatics-python](https://pypi.org/project/speechmatics-python) package, which will be deprecated soon. +
-Each client targets a specific Speechmatics API (e.g. real-time, batch transcription), making it easier to install only what you need and keep dependencies minimal. +**Speechmatics Python SDK provides convenient access to enterprise-grade speech-to-text APIs from Python applications.** -## Packages +[![PyPI - batch](https://img.shields.io/pypi/v/speechmatics-batch?label=batch)](https://pypi.org/project/speechmatics-batch/) +[![PyPI - rt](https://img.shields.io/pypi/v/speechmatics-rt?label=rt)](https://pypi.org/project/speechmatics-rt/) +[![PyPI - voice](https://img.shields.io/pypi/v/speechmatics-voice?label=voice)](https://pypi.org/project/speechmatics-voice/) +[![Python Versions](https://img.shields.io/pypi/pyversions/speechmatics-batch.svg)](https://pypi.org/project/speechmatics-batch/) +[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://github.com/speechmatics/speechmatics-python-sdk/blob/main/LICENSE) +[![Build Status](https://github.com/speechmatics/speechmatics-python-sdk/actions/workflows/test.yaml/badge.svg)](https://github.com/speechmatics/speechmatics-python-sdk/actions/workflows/test.yaml) -This repository contains the following packages: -### Real-Time Client (`speechmatics-rt`) +**Fully typed** with type definitions for all request params and response fields. **Modern Python** with async/await patterns, type hints, and context managers for production-ready code. -A Python client for Speechmatics Real-Time API. +**55+ Languages • Real-time & Batch • Custom Vocabularies • Speaker Diarization • Speaker ID** + +[Get API Key](https://portal.speechmatics.com/) • [Documentation](https://docs.speechmatics.com) • [Academy Examples](https://github.com/speechmatics/speechmatics-academy) + + +
+ +--- + +## 📋 Table of Contents + +- [Quick Start](#quick-start) +- [Why Speechmatics?](#-why-speechmatics) +- [Use Cases](#-use-cases) +- [Key Features](#-key-features) +- [Authentication](#authentication) +- [Advanced Configuration](#advanced-configuration) +- [Deployment Options](#deployment-options) +- [Community & Support](#community--support) + +--- + +

⚡ Quick Start

+ +### Installation ```bash +# Choose the package for your use case: + +# Batch transcription +pip install speechmatics-batch + +# Real-time streaming pip install speechmatics-rt + +# Voice agents +pip install speechmatics-voice + +# Text-to-speech +pip install speechmatics-tts ``` -### Batch Client (`speechmatics-batch`) +
+📦 Package Details • Click to see what's included in each package + +
+ +**[speechmatics-batch](./sdk/batch/README.md)** - Async batch transcription API +- Upload audio files for processing +- Get transcripts with timestamps, speakers, entities +- Supports all audio intelligence features + +**[speechmatics-rt](./sdk/rt/README.md)** - Real-time WebSocket streaming +- Stream audio for live transcription +- Ultra-low latency (150ms p95) +- Partial and final transcripts + +**[speechmatics-voice](./sdk/voice/README.md)** - Voice agent SDK +- Build conversational AI applications +- Speaker diarization and turn detection +- Optional ML-based smart turn: `pip install speechmatics-voice[smart]` + +**[speechmatics-tts](./sdk/tts/README.md)** - Text-to-speech +- Convert text to natural-sounding speech +- Multiple voices and languages +- Streaming and batch modes -An async Python client for Speechmatics Batch API. +
+
+### Setting Up Development Environment
+
```bash
-pip install speechmatics-batch
+git clone https://github.com/speechmatics/speechmatics-python-sdk.git
+cd speechmatics-python-sdk
+
+python -m venv .venv
+.venv\Scripts\activate
+# On Mac/Linux: source .venv/bin/activate
+
+# Install development dependencies for all SDKs
+make install-dev
+
+# Install pre-commit hooks
+pre-commit install
```

-### Voice Agent Client (`speechmatics-voice`)
+### Your First Transcription

-A Voice Agent Python client for Speechmatics Real-Time API.
+**Minimal example** (simplest - start here):
+```python
+import asyncio
+from speechmatics.batch import AsyncClient
+
-```bash
-# Standard installation
-pip install speechmatics-voice
+async def main():
+    async with AsyncClient(api_key="YOUR_API_KEY") as client:
+        result = await client.transcribe("audio.wav")
+        print(result.transcript_text)
+
+asyncio.run(main())
+```
+
+**Real-time Streaming** (for live audio):
+```python
+import asyncio
+from speechmatics.rt import AsyncClient, ServerMessageType, TranscriptResult
+
+async def main():
+    async with AsyncClient(api_key="YOUR_API_KEY") as client:
+        @client.on(ServerMessageType.ADD_TRANSCRIPT)
+        def handle_transcript(message):
+            result = TranscriptResult.from_message(message)
+            print(f"Transcript: {result.metadata.transcript}")
+
+        await client.start_session()
+        # Stream audio here...
+
+asyncio.run(main())
+```
+
+**Simple and Pythonic!** Built with modern async/await patterns. Get your API key at [portal.speechmatics.com](https://portal.speechmatics.com/)
+
+> [!TIP]
+> **Ready for more?** Explore 20+ working examples at **[Speechmatics Academy](https://github.com/speechmatics/speechmatics-academy)** — voice agents, integrations, use cases, and migration guides.
+
+---
+
+## 🏆 Why Speechmatics?
+
+### Accuracy That Matters
+
+When 1% WER improvement translates to millions in revenue, you need the best. 
+ +| Metric | Speechmatics | Deepgram | +|--------|--------------|----------| +| **Word Error Rate (WER)** | **6.8%** | 16.5% | +| **Languages Supported** | **55+** | 30+ | +| **Custom Dictionary** | **1,000 words** | 100 words | +| **Speaker Diarization** | **Included** | Extra charge | +| **Real-time Translation** | **30+ languages** | ❌ | +| **Sentiment Analysis** | ✅ | ❌ | +| **On-Premises** | ✅ | Limited | +| **On-Device** | ✅ | ❌ | +| **Air-Gapped Deployment** | ✅ | ❌ | + + + +### Built for Production + +- **99.9% Uptime SLA** - Enterprise-grade reliability +- **SOC 2 Type II Certified** - Your data is secure +- **Flexible Deployment** - SaaS, on-premises, or air-gapped + +--- + +## 🚀 Key Features + +### Real-time Transcription +Stream audio and get instant transcriptions with ultra-low latency. Perfect for voice agents, live captioning, and conversational AI. + +```python +import asyncio +from speechmatics.rt import ( + AsyncClient, + ServerMessageType, + TranscriptionConfig, + TranscriptResult, + AudioFormat, + AudioEncoding, + Microphone, +) + +async def main(): + # Configure audio format for microphone input + audio_format = AudioFormat( + encoding=AudioEncoding.PCM_S16LE, + chunk_size=4096, + sample_rate=16000, + ) + + # Configure transcription with partials enabled + transcription_config = TranscriptionConfig( + language="en", + enable_partials=True, + ) + + async with AsyncClient(api_key="YOUR_API_KEY") as client: + # Handle final transcripts + @client.on(ServerMessageType.ADD_TRANSCRIPT) + def handle_transcript(message): + result = TranscriptResult.from_message(message) + print(f"[final]: {result.metadata.transcript}") + + # Handle partial transcripts (interim results) + @client.on(ServerMessageType.ADD_PARTIAL_TRANSCRIPT) + def handle_partial(message): + result = TranscriptResult.from_message(message) + print(f"[partial]: {result.metadata.transcript}") + + # Initialize microphone (requires: pip install pyaudio) + mic = 
Microphone(sample_rate=audio_format.sample_rate, chunk_size=audio_format.chunk_size) + if not mic.start(): + print("PyAudio not available - install with: pip install pyaudio") + return + + # Start transcription session + await client.start_session( + transcription_config=transcription_config, + audio_format=audio_format, + ) + + try: + # Stream audio continuously + while True: + frame = await mic.read(audio_format.chunk_size) + await client.send_audio(frame) + except KeyboardInterrupt: + mic.stop() + +asyncio.run(main()) +``` + +### Batch Transcription +Upload audio files and get accurate transcripts with speaker labels, timestamps, and more. + +```python +import asyncio +from speechmatics.batch import AsyncClient, TranscriptionConfig, FormatType + +async def main(): + async with AsyncClient(api_key="YOUR_API_KEY") as client: + # Submit job with advanced features + job = await client.submit_job( + "meeting.mp3", + transcription_config=TranscriptionConfig( + language="en", + diarization="speaker", + enable_entities=True, + punctuation_overrides={ + "permitted_marks": [".", "?", "!"] + } + ) + ) + + # Wait for completion + result = await client.wait_for_completion(job.id, format_type=FormatType.JSON) + + # Access results + print(f"Transcript: {result.transcript_text}") + +asyncio.run(main()) +``` + +### Speaker Diarization +Automatically detect and label different speakers in your audio. 
+ +```python +import asyncio +from speechmatics.batch import AsyncClient, TranscriptionConfig + +async def main(): + async with AsyncClient(api_key="YOUR_API_KEY") as client: + job = await client.submit_job( + "meeting.wav", + transcription_config=TranscriptionConfig( + language="en", + diarization="speaker", + speaker_diarization_config={ + "max_speakers": 4 + } + ) + ) + result = await client.wait_for_completion(job.id) + + # Access full transcript with speaker labels + print(f"Full transcript:\n{result.transcript_text}\n") + + # Access individual results with speaker information + for result_item in result.results: + if result_item.alternatives: + alt = result_item.alternatives[0] + speaker = alt.speaker or "Unknown" + content = alt.content + print(f"Speaker {speaker}: {content}") + +asyncio.run(main()) +``` + +### Custom Dictionary +Add domain-specific terms, names, and acronyms for perfect accuracy. + +```python +import asyncio +from speechmatics.batch import AsyncClient, TranscriptionConfig + +async def main(): + async with AsyncClient(api_key="YOUR_API_KEY") as client: + job = await client.submit_job( + "audio.wav", + transcription_config=TranscriptionConfig( + language="en", + additional_vocab=[ + {"content": "Speechmatics", "sounds_like": ["speech mat ics"]}, + {"content": "API", "sounds_like": ["A P I", "A. P. I."]}, + {"content": "kubernetes", "sounds_like": ["koo ber net ees"]} + ] + ) + ) + +asyncio.run(main()) +``` -# With SMART_TURN (ML-based turn detection) -pip install speechmatics-voice[smart] +### 55+ Languages +Native models for major languages, not just multilingual Whisper. 
+ +```python +import asyncio +from speechmatics.batch import AsyncClient, TranscriptionConfig + +async def main(): + async with AsyncClient(api_key="YOUR_API_KEY") as client: + # Automatic language detection + job = await client.submit_job( + "audio.wav", + transcription_config=TranscriptionConfig( + language="auto" + ) + ) + + # Or specify language directly (e.g., Japanese) + job = await client.submit_job( + "audio.wav", + transcription_config=TranscriptionConfig(language="ja") + ) + +asyncio.run(main()) +``` + +
+📂 More Features • Click to explore Audio Intelligence and Translation examples + +
+ +### Audio Intelligence +Get sentiment, topics, summaries, and more. + +```python +import asyncio +from speechmatics.batch import ( + AsyncClient, + JobConfig, + JobType, + TranscriptionConfig, + SentimentAnalysisConfig, + TopicDetectionConfig, + SummarizationConfig, + AutoChaptersConfig +) + +async def main(): + async with AsyncClient(api_key="YOUR_API_KEY") as client: + # Configure job with all audio intelligence features + config = JobConfig( + type=JobType.TRANSCRIPTION, + transcription_config=TranscriptionConfig(language="en"), + sentiment_analysis_config=SentimentAnalysisConfig(), + topic_detection_config=TopicDetectionConfig(), + summarization_config=SummarizationConfig(), + auto_chapters_config=AutoChaptersConfig() + ) + + job = await client.submit_job("podcast.mp3", config=config) + result = await client.wait_for_completion(job.id) + + # Access all results + print(f"Transcript: {result.transcript_text}") + if result.sentiment_analysis: + print(f"Sentiment: {result.sentiment_analysis}") + if result.topics: + print(f"Topics: {result.topics}") + if result.summary: + print(f"Summary: {result.summary}") + if result.chapters: + print(f"Chapters: {result.chapters}") + +asyncio.run(main()) +``` + +### Translation +Transcribe and translate simultaneously to 50+ languages. 
+ +```python +import asyncio +from speechmatics.batch import ( + AsyncClient, + JobConfig, + JobType, + TranscriptionConfig, + TranslationConfig +) + +async def main(): + async with AsyncClient(api_key="YOUR_API_KEY") as client: + config = JobConfig( + type=JobType.TRANSCRIPTION, + transcription_config=TranscriptionConfig(language="en"), + translation_config=TranslationConfig(target_languages=["es", "fr", "de"]) + ) + + job = await client.submit_job("video.mp4", config=config) + result = await client.wait_for_completion(job.id) + + # Access original transcript + print(f"Original (English): {result.transcript_text}\n") + + # Access translations + if result.translations: + for lang_code, translation_data in result.translations.items(): + translated_text = translation_data.get("content", "") + print(f"Translated ({lang_code}): {translated_text}") + +asyncio.run(main()) ``` -### TTS Client (`speechmatics-tts`) +
-An async Python client for Speechmatics TTS API. +--- +## 🔌 Framework Integrations + +### LiveKit Agents (Voice Assistants) + +Build real-time voice assistants with [LiveKit Agents](https://github.com/livekit/agents) - a framework for building voice AI applications with WebRTC. + +```python +from dotenv import load_dotenv +from livekit import agents +from livekit.agents import AgentSession, Agent, RoomInputOptions +from livekit.plugins import openai, silero, speechmatics, elevenlabs + +load_dotenv() + + +class VoiceAssistant(Agent): + """Voice assistant agent with Speechmatics STT.""" + + def __init__(self) -> None: + super().__init__(instructions="You are a helpful voice assistant. Be concise and friendly.") + + +async def entrypoint(ctx: agents.JobContext): + """ + Main entrypoint for the voice assistant. + + Pipeline: LiveKit Room - Speechmatics STT - OpenAI LLM - ElevenLabs TTS - LiveKit Room + """ + await ctx.connect() + + # Speech-to-Text: Speechmatics with speaker diarization + stt = speechmatics.STT( + enable_diarization=True, + speaker_active_format="<{speaker_id}>{text}", + focus_speakers=["S1"], + ) + + # Language Model: OpenAI + llm = openai.LLM(model="gpt-4o-mini") + + # Text-to-Speech: ElevenLabs + tts = elevenlabs.TTS(voice_id="21m00Tcm4TlvDq8ikWAM") + + # Voice Activity Detection: Silero + vad = silero.VAD.load() + + # Create and start Agent Session + session = AgentSession(stt=stt, llm=llm, tts=tts, vad=vad) + await session.start( + room=ctx.room, + agent=VoiceAssistant(), + room_input_options=RoomInputOptions(), + ) + + # Send initial greeting + await session.generate_reply(instructions="Say a short hello and ask how you can help.") + + +if __name__ == "__main__": + agents.cli.run_app(agents.WorkerOptions(entrypoint_fnc=entrypoint)) +``` + +**Installation:** ```bash -pip install speechmatics-tts +pip install livekit-agents livekit-plugins-speechmatics livekit-plugins-openai livekit-plugins-elevenlabs livekit-plugins-silero ``` -## Development - 
-### Repository Structure - -``` -speechmatics-python-sdk/ -├── sdk/ -│ ├── batch/ -│ │ ├── pyproject.toml -│ │ └── README.md -│ │ -│ ├── rt/ -│ │ ├── pyproject.toml -│ │ └── README.md -│ │ -│ ├── voice/ -│ │ ├── pyproject.toml -│ │ └── README.md -│ │ -│ ├── tts/ -│ │ ├── pyproject.toml -│ │ └── README.md -│ -├── tests/ -│ ├── batch/ -│ ├── rt/ -│ ├── voice/ -│ └── tts/ -│ -├── examples/ -├── Makefile -├── pyproject.toml -└── LICENSE +**Key Features:** +- Real-time WebRTC audio streaming +- Speechmatics STT with speaker diarization +- Configurable LLM and TTS providers +- Voice Activity Detection (VAD) + +### Pipecat AI (Voice Agents) + +Build real-time voice bots with [Pipecat](https://github.com/pipecat-ai/pipecat) - a framework for voice and multimodal conversational AI. + +```python +import asyncio +import os +from pipecat.pipeline.pipeline import Pipeline +from pipecat.pipeline.runner import PipelineRunner +from pipecat.pipeline.task import PipelineTask +from pipecat.services.openai.llm import OpenAILLMService, OpenAILLMContext +from pipecat.services.speechmatics.stt import SpeechmaticsSTTService, Language +from pipecat.services.speechmatics.tts import SpeechmaticsTTSService +from pipecat.transports.local.audio import LocalAudioTransport + +async def main(): + # Configure Speechmatics STT with speaker diarization + stt = SpeechmaticsSTTService( + api_key=os.getenv("SPEECHMATICS_API_KEY"), + params=SpeechmaticsSTTService.InputParams( + language=Language.EN, + enable_partials=True, + enable_diarization=True, + speaker_active_format="@{speaker_id}: {text}" + ) + ) + + # Configure Speechmatics TTS + tts = SpeechmaticsTTSService( + api_key=os.getenv("SPEECHMATICS_API_KEY"), + voice_id="sarah" + ) + + # Configure LLM (OpenAI, Anthropic, etc.) 
+    llm = OpenAILLMService(
+        api_key=os.getenv("OPENAI_API_KEY"),
+        model="gpt-4o"
+    )
+
+    # Set up conversation context
+    context = OpenAILLMContext([
+        {"role": "system", "content": "You are a helpful AI assistant."}
+    ])
+    context_aggregator = llm.create_context_aggregator(context)
+
+    # Build pipeline: Audio Input -> STT -> LLM -> TTS -> Audio Output
+    transport = LocalAudioTransport()
+    pipeline = Pipeline([
+        transport.input(),
+        stt,
+        context_aggregator.user(),
+        llm,
+        tts,
+        transport.output(),
+        context_aggregator.assistant(),
+    ])
+
+    # Run the voice bot
+    runner = PipelineRunner()
+    task = PipelineTask(pipeline)
+
+    print("Voice bot ready! Speak into your microphone...")
+    await runner.run(task)
+
+asyncio.run(main())
```

+**Installation:**
+```bash
+pip install "pipecat-ai[speechmatics,openai]" pyaudio
+```
+
+**Key Features:**
+- Real-time STT with speaker diarization
+- Natural-sounding TTS with multiple voices
+- Interruption handling (users can interrupt bot responses)
+- Works with any LLM provider (OpenAI, Anthropic, etc.)
+
+For more integration examples including Django, Next.js, and production patterns, visit the [Speechmatics Academy](https://github.com/speechmatics/speechmatics-academy). 
+ +--- + +## 📚 Documentation + +### Package Documentation + +Each SDK package includes detailed documentation: + +| Package | Documentation | Description | +|---------|---------------|-------------| +| **speechmatics-batch** | [README](./sdk/batch/README.md) • [Migration Guide](./sdk/batch/MIGRATION.md) | Async batch transcription | +| **speechmatics-rt** | [README](./sdk/rt/README.md) • [Migration Guide](./sdk/rt/MIGRATION.md) | Real-time streaming | +| **speechmatics-voice** | [README](./sdk/voice/README.md) | Voice agent SDK | +| **speechmatics-tts** | [README](./sdk/tts/README.md) | Text-to-speech | + +### Speechmatics Academy + +Comprehensive collection of working examples, integrations, and templates: [github.com/speechmatics/speechmatics-academy](https://github.com/speechmatics/speechmatics-academy) + +#### Fundamentals +| Example | Description | Package | +|---------|-------------|---------| +| [Hello World](https://github.com/speechmatics/speechmatics-academy/tree/main/basics/01-hello-world) | Simplest transcription example | Batch | +| [Batch vs Real-time](https://github.com/speechmatics/speechmatics-academy/tree/main/basics/02-batch-vs-realtime) | Learn the difference between API modes | Batch, RT | +| [Configuration Guide](https://github.com/speechmatics/speechmatics-academy/tree/main/basics/03-configuration-guide) | Common configuration options | Batch | +| [Audio Intelligence](https://github.com/speechmatics/speechmatics-academy/tree/main/basics/04-audio-intelligence) | Sentiment, topics, and summaries | Batch | +| [Multilingual & Translation](https://github.com/speechmatics/speechmatics-academy/tree/main/basics/05-multilingual-translation) | 50+ languages and real-time translation | RT | +| [Text-to-Speech](https://github.com/speechmatics/speechmatics-academy/tree/main/basics/06-text-to-speech) | Convert text to natural-sounding speech | TTS | +| [Turn Detection](https://github.com/speechmatics/speechmatics-academy/tree/main/basics/07-turn-detection) 
| Silence-based turn detection | RT | +| [Voice Agent Turn Detection](https://github.com/speechmatics/speechmatics-academy/tree/main/basics/08-voice-agent-turn-detection) | Smart turn detection with presets | Voice | +| [Speaker ID & Focus](https://github.com/speechmatics/speechmatics-academy/tree/main/basics/09-voice-agent-speaker-id) | Speaker identification and focus control | Voice | +| [Channel Diarization](https://github.com/speechmatics/speechmatics-academy/tree/main/basics/10-channel-diarization) | Multi-channel transcription | Voice, RT | + +#### Integrations +| Integration | Example | Features | +|-------------|---------|----------| +| **LiveKit** | [Simple Voice Assistant](https://github.com/speechmatics/speechmatics-academy/tree/main/integrations/livekit/01-simple-voice-assistant) | WebRTC, VAD, diarization, LLM, TTS | +| **LiveKit** | [Telephony with Twilio](https://github.com/speechmatics/speechmatics-academy/tree/main/integrations/livekit/02-telephony-twilio) | Phone calls via SIP, Krisp noise cancellation | +| **Pipecat** | [Simple Voice Bot](https://github.com/speechmatics/speechmatics-academy/tree/main/integrations/pipecat/01-simple-voice-bot) | Local audio, VAD, LLM, TTS | +| **Pipecat** | [Voice Bot (Web)](https://github.com/speechmatics/speechmatics-academy/tree/main/integrations/pipecat/02-simple-voice-bot-web) | Browser-based WebRTC | +| **Twilio** | [Outbound Dialer](https://github.com/speechmatics/speechmatics-academy/tree/main/integrations/twilio/01-outbound-dialer) | Media Streams, ElevenLabs TTS | +| **VAPI** | [Voice Assistant](https://github.com/speechmatics/speechmatics-academy/tree/main/integrations/vapi/01-voice-assistant) | Voice AI platform integration | + +#### Use Cases +| Industry | Example | Features | +|----------|---------|----------| +| **Healthcare** | [Medical Transcription](https://github.com/speechmatics/speechmatics-academy/tree/main/use-cases/01-medical-transcription-realtime) | Real-time, custom medical vocabulary | 
+| **Media** | [Video Captioning](https://github.com/speechmatics/speechmatics-academy/tree/main/use-cases/02-video-captioning) | SRT generation, batch processing | +| **Contact Center** | [Call Analytics](https://github.com/speechmatics/speechmatics-academy/tree/main/use-cases/03-call-center-analytics) | Channel diarization, sentiment, topics | +| **Business** | [AI Receptionist](https://github.com/speechmatics/speechmatics-academy/tree/main/use-cases/04-voice-agent-calendar) | LiveKit, Twilio SIP, Google Calendar | + +#### Migration Guides +| From | Guide | Status | +|------|-------|--------| +| **Deepgram** | [Migration Guide](https://github.com/speechmatics/speechmatics-academy/tree/main/guides/migration-guides/deepgram) | Available | + +### Official Documentation +- [API Reference](https://docs.speechmatics.com/api-ref/) - Complete API documentation +- [SDK Repository](https://github.com/speechmatics/speechmatics-python-sdk) - Python SDK source code +- [Developer Portal](https://portal.speechmatics.com) - Get your API key + +--- + +## 🔄 Migrating from speechmatics-python? + +The legacy `speechmatics-python` package has been deprecated. 
This new SDK offers: + +✅ **Cleaner API** - More Pythonic, better type hints +✅ **More features** - Sentiment, translation, summarization +✅ **Better docs** - Comprehensive examples and guides + +### Migration Guide + +**speechmatics-python:** +```python +from speechmatics.models import BatchTranscriptionConfig +from speechmatics.batch_client import BatchClient + +with BatchClient("API_KEY") as client: + job_id = client.submit_job("audio.wav", BatchTranscriptionConfig("en")) + transcript = client.wait_for_completion(job_id, transcription_format='txt') + print(transcript) +``` + +**speechmatics-python-sdk:** +```python +import asyncio +from speechmatics.batch import AsyncClient, TranscriptionConfig, FormatType + +async def main(): + async with AsyncClient(api_key="API_KEY") as client: + job = await client.submit_job( + "audio.wav", + transcription_config=TranscriptionConfig(language="en") + ) + result = await client.wait_for_completion(job.id, format_type=FormatType.TXT) + print(result) + +asyncio.run(main()) +``` + +📖 **Full Migration Guides:** [Batch Migration Guide](https://github.com/speechmatics/speechmatics-python-sdk/blob/main/sdk/batch/MIGRATION.md) • [Real-time Migration Guide](https://github.com/speechmatics/speechmatics-python-sdk/blob/main/sdk/rt/MIGRATION.md) + +--- + +## 💡 Use Cases + + +### Healthcare & Medical +HIPAA-compliant transcription for clinical notes, patient interviews, and telemedicine. 
+ +```python +import asyncio +import os +from dotenv import load_dotenv +from speechmatics.batch import AsyncClient, TranscriptionConfig + +load_dotenv() + +async def main(): + api_key = os.getenv("SPEECHMATICS_API_KEY") + + async with AsyncClient(api_key=api_key) as client: + # Add medical terminology for better accuracy + job = await client.submit_job( + "patient_interview.wav", + transcription_config=TranscriptionConfig( + language="en", + additional_vocab=[ + {"content": "hypertension"}, + {"content": "metformin"}, + {"content": "echocardiogram"}, + {"content": "MRI", "sounds_like": ["M R I"]}, + {"content": "CT scan", "sounds_like": ["C T scan"]} + ] + ) + ) + + result = await client.wait_for_completion(job.id) + print(f"Transcript:\n{result.transcript_text}") + +asyncio.run(main()) +``` + +### Voice Agents & Conversational AI +Build Alexa-like experiences with real-time transcription and speaker detection. + +```python +import asyncio +import os +from dotenv import load_dotenv +from speechmatics.rt import Microphone +from speechmatics.voice import ( + VoiceAgentClient, + VoiceAgentConfigPreset, + AgentServerMessageType, +) + +load_dotenv() + +async def main(): + api_key = os.getenv("SPEECHMATICS_API_KEY") + + # Load a preset configuration (options: adaptive, scribe, captions, external, fast) + config = VoiceAgentConfigPreset.load("adaptive") + + # Initialize microphone + mic = Microphone(sample_rate=16000, chunk_size=320) + if not mic.start(): + print("PyAudio not available - install with: pip install pyaudio") + return + + # Create voice agent client + client = VoiceAgentClient(api_key=api_key, config=config) + + @client.on(AgentServerMessageType.ADD_SEGMENT) + def on_segment(message): + for segment in message.get("segments", []): + speaker_id = segment.get("speaker_id", "S1") + text = segment.get("text", "") + print(f"[{speaker_id}]: {text}") + + @client.on(AgentServerMessageType.END_OF_TURN) + def on_turn_end(message): + print("[END OF TURN]") + + try: + 
await client.connect() + print("Voice agent started. Speak into your microphone (Ctrl+C to stop)...") + + while True: + audio_chunk = await mic.read(320) + await client.send_audio(audio_chunk) + + except KeyboardInterrupt: + print("\nStopping...") + finally: + mic.stop() + await client.disconnect() + +asyncio.run(main()) +``` + +### Call Center Analytics +Transcribe calls with speaker diarization, sentiment analysis, and topic detection. + +```python +import asyncio +import os +from dotenv import load_dotenv +from speechmatics.batch import ( + AsyncClient, + JobConfig, + JobType, + TranscriptionConfig, + SummarizationConfig, + SentimentAnalysisConfig, + TopicDetectionConfig +) + +load_dotenv() + +async def main(): + api_key = os.getenv("SPEECHMATICS_API_KEY") + + async with AsyncClient(api_key=api_key) as client: + config = JobConfig( + type=JobType.TRANSCRIPTION, + transcription_config=TranscriptionConfig( + language="en", + diarization="speaker" + ), + sentiment_analysis_config=SentimentAnalysisConfig(), + topic_detection_config=TopicDetectionConfig(), + summarization_config=SummarizationConfig( + content_type="conversational", + summary_length="brief" + ) + ) + + job = await client.submit_job("call_recording.wav", config=config) + result = await client.wait_for_completion(job.id) + + # Print results + print(f"Transcript:\n{result.transcript_text}\n") + + if result.sentiment_analysis: + sentiment = result.sentiment_analysis.get('sentiment', 'neutral') + score = result.sentiment_analysis.get('score', 0) + print(f"Sentiment: {sentiment} (score: {score})") + + if result.topics and 'summary' in result.topics: + overall = result.topics['summary']['overall'] + topics = [topic for topic, count in overall.items() if count > 0] + print(f"Topics: {', '.join(topics)}") + + if result.summary: + print(f"Summary: {result.summary.get('content')}") + +asyncio.run(main()) +``` + +
+📂 More Use Cases • Click to explore Healthcare, Media & Entertainment, Education, and Meetings examples + +
+ +### Media & Entertainment +Add captions, create searchable archives, generate clips from keywords. + +```python +import asyncio +import os +from dotenv import load_dotenv +from speechmatics.batch import AsyncClient, TranscriptionConfig, FormatType + +load_dotenv() + +async def main(): + api_key = os.getenv("SPEECHMATICS_API_KEY") + + async with AsyncClient(api_key=api_key) as client: + job = await client.submit_job( + "movie.mp4", + transcription_config=TranscriptionConfig(language="en") + ) + + # Get SRT captions + captions = await client.wait_for_completion(job.id, format_type=FormatType.SRT) + + # Save captions + with open("movie.srt", "w", encoding="utf-8") as f: + f.write(captions) + + print("Captions saved to movie.srt") + +asyncio.run(main()) +``` + +### Education & E-Learning +Auto-generate lecture transcripts, searchable course content, and accessibility captions. + +```python +import asyncio +import os +from dotenv import load_dotenv +from speechmatics.batch import AsyncClient, TranscriptionConfig, FormatType + +load_dotenv() + +async def main(): + api_key = os.getenv("SPEECHMATICS_API_KEY") + + async with AsyncClient(api_key=api_key) as client: + job = await client.submit_job( + "lecture_recording.wav", + transcription_config=TranscriptionConfig( + language="en", + diarization="speaker", + enable_entities=True + ) + ) + + result = await client.wait_for_completion(job.id) + + # Save transcript + with open("lecture_transcript.txt", "w", encoding="utf-8") as f: + f.write(result.transcript_text) + + # Save SRT captions for accessibility + captions = await client.wait_for_completion(job.id, format_type=FormatType.SRT) + with open("lecture_captions.srt", "w", encoding="utf-8") as f: + f.write(captions) + + print("Transcript and captions saved") + +asyncio.run(main()) +``` + +### Meetings +Turn meetings into searchable, actionable summaries with action items and key decisions. 
+ +```python +import asyncio +import os +from dotenv import load_dotenv +from speechmatics.batch import ( + AsyncClient, + JobConfig, + JobType, + TranscriptionConfig, + SummarizationConfig, + AutoChaptersConfig +) + +load_dotenv() + +async def main(): + api_key = os.getenv("SPEECHMATICS_API_KEY") + + async with AsyncClient(api_key=api_key) as client: + config = JobConfig( + type=JobType.TRANSCRIPTION, + transcription_config=TranscriptionConfig( + language="en", + diarization="speaker" + ), + summarization_config=SummarizationConfig(), + auto_chapters_config=AutoChaptersConfig() + ) + + job = await client.submit_job("board_meeting.mp4", config=config) + result = await client.wait_for_completion(job.id) + + # Display results + print(f"Transcript:\n{result.transcript_text}\n") + + if result.summary: + summary = result.summary.get('content', 'N/A') + print(f"Summary:\n{summary}\n") + + if result.chapters: + print("Chapters:") + for i, chapter in enumerate(result.chapters, 1): + print(f"{i}. {chapter}") + +asyncio.run(main()) +``` + +
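The diarized examples above ultimately produce speaker-labelled text. Extracting word-level speaker tags depends on the SDK's result objects, but the grouping step itself is plain Python. A minimal sketch over hypothetical `(speaker, word)` pairs (not the SDK's actual types):

```python
from itertools import groupby

def speaker_turns(words):
    """Collapse consecutive words from the same speaker into one turn."""
    return [
        (speaker, " ".join(word for _, word in group))
        for speaker, group in groupby(words, key=lambda pair: pair[0])
    ]

# Hypothetical diarized output: (speaker_label, word) pairs in time order.
words = [
    ("S1", "Welcome"), ("S1", "everyone."),
    ("S2", "Thanks!"),
    ("S1", "Let's"), ("S1", "begin."),
]
for speaker, text in speaker_turns(words):
    print(f"{speaker}: {text}")
```

`itertools.groupby` only merges *adjacent* items with the same key, which is exactly what a turn-by-turn transcript needs.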
+ +--- + +## Architecture + +### Real-time Flow + +```mermaid +sequenceDiagram + participant App as Your App + participant SM as Speechmatics RT + + App->>SM: Connect WebSocket (WSS) + App->>SM: StartRecognition (config, audio format) + SM->>App: RecognitionStarted + + loop Stream Audio + App->>SM: Audio Chunks (binary) + SM->>App: AudioAdded (ack) + SM->>App: AddPartialTranscript (JSON) + SM->>App: AddTranscript (JSON, final) + end + + App->>SM: EndOfStream + SM->>App: EndOfTranscript +``` + +### Batch Flow + +```mermaid +sequenceDiagram + participant App as Your App + participant API as Batch API + participant Queue as Job Queue + participant Engine as Transcription Engine + + App->>API: POST /jobs (upload audio) + API->>Queue: Enqueue job + API->>App: Return job_id + + Queue->>Engine: Process audio + Engine->>Queue: Store results + + loop Poll Status + App->>API: GET /jobs/{id} + API->>App: Status: running/done + end + + App->>API: GET /jobs/{id}/transcript + API->>App: Return transcript (JSON/TXT/SRT) +``` + +--- + +## Authentication + +> [!CAUTION] +> **Security Best Practice**: Never hardcode API keys in your source code. Always use environment variables or secure secret management systems. + +### Environment Variable (Recommended) ```bash -git clone https://github.com/speechmatics/speechmatics-python-sdk.git -cd speechmatics-python-sdk +export SPEECHMATICS_API_KEY="your_api_key_here" +``` -python -m venv .venv -source .venv/bin/activate +```python +import asyncio +import os +from speechmatics.batch import AsyncClient -# Install development dependencies for SDKs -make install-dev +async def main(): + async with AsyncClient(api_key=os.getenv("SPEECHMATICS_API_KEY")) as client: + # Use client here + pass + +asyncio.run(main()) +``` + +### JWT Token (Temporary Keys) + +> [!WARNING] +> **Browser Security**: For browser-based transcription, always use temporary JWT tokens to avoid exposing your long-lived API key. 
Pass the token as a query parameter: `wss://eu2.rt.speechmatics.com/v2?jwt=` + +```python +import asyncio +from speechmatics.batch import AsyncClient, JWTAuth + +async def main(): + # Generate temporary token (expires after ttl seconds) + auth = JWTAuth(api_key="your_api_key", ttl=3600) + async with AsyncClient(auth=auth) as client: + # Use client here + pass + +asyncio.run(main()) +``` + +--- + +## Advanced Configuration + +### Connection Settings + +```python +import asyncio +from speechmatics.rt import AsyncClient, ConnectionConfig + +async def main(): + # Configure WebSocket connection parameters + conn_config = ConnectionConfig( + ping_timeout=60.0, # Timeout waiting for pong response (seconds) + ping_interval=20.0, # Interval for WebSocket ping frames (seconds) + open_timeout=30.0, # Timeout for establishing connection (seconds) + close_timeout=10.0 # Timeout for closing connection (seconds) + ) + + async with AsyncClient( + api_key="KEY", + url="wss://eu2.rt.speechmatics.com/v2", + conn_config=conn_config + ) as client: + # Use client here + pass + +asyncio.run(main()) +``` + +### Retry & Error Handling + +```python +import asyncio +from speechmatics.batch import AsyncClient, TranscriptionConfig +from speechmatics.batch import BatchError, JobError, AuthenticationError +from tenacity import retry, stop_after_attempt, wait_exponential + +@retry( + stop=stop_after_attempt(3), + wait=wait_exponential(multiplier=1, min=2, max=10) +) +async def transcribe_with_retry(audio_file): + async with AsyncClient(api_key="YOUR_API_KEY") as client: + try: + job = await client.submit_job( + audio_file, + transcription_config=TranscriptionConfig(language="en") + ) + return await client.wait_for_completion(job.id) + except AuthenticationError: + print("Authentication failed") + raise + except (BatchError, JobError) as e: + print(f"Transcription failed: {e}") + raise + +asyncio.run(transcribe_with_retry("audio.wav")) +``` + +### Custom HTTP Client (Batch) + +```python +import 
asyncio +from speechmatics.batch import AsyncClient, ConnectionConfig + +async def main(): + # Configure HTTP connection settings for batch API + conn_config = ConnectionConfig( + connect_timeout=30.0, # Timeout for connection establishment + operation_timeout=300.0 # Default timeout for API operations + ) + + async with AsyncClient( + api_key="KEY", + conn_config=conn_config + ) as client: + # Use client here + pass + +asyncio.run(main()) +``` + +--- + +## Deployment Options + +### Cloud (SaaS) +Zero infrastructure - just sign up and start transcribing. + +```python +import asyncio +from speechmatics.batch import AsyncClient + +async def main(): + async with AsyncClient(api_key="YOUR_API_KEY") as client: + # Uses global SaaS endpoints automatically + pass + +asyncio.run(main()) ``` -On Windows: +### Docker Container +Run Speechmatics on your own hardware. ```bash -.venv\Scripts\activate +docker pull speechmatics/transcription-engine:latest +docker run -p 9000:9000 speechmatics/transcription-engine ``` -### Install pre-commit hooks +```python +import asyncio +from speechmatics.batch import AsyncClient + +async def main(): + async with AsyncClient( + api_key="YOUR_LICENSE_KEY", + url="http://localhost:9000/v2" + ) as client: + # Use on-premises instance + pass + +asyncio.run(main()) +``` + +### Kubernetes +Scale transcription with k8s orchestration. ```bash -pre-commit install +# Install the sm-realtime chart +helm upgrade --install speechmatics-realtime \ + oci://speechmaticspublic.azurecr.io/sm-charts/sm-realtime \ + --version 0.7.0 \ + --set proxy.ingress.url="speechmatics.example.com" ``` -## Installation +[Full Deployment Guide →](https://docs.speechmatics.com/deployments/kubernetes/) + +--- -Each package can be installed separately: +## 🧪 Testing Your Integration + +**The 5-Minute Test**: Can you install, authenticate, and run a successful transcription in under 5 minutes? 
```bash -pip install speechmatics-rt -pip install speechmatics-batch -pip install speechmatics-voice[smart] -pip install speechmatics-tts +# 1. Install (30 seconds) +pip install speechmatics-batch python-dotenv + +# 2. Set API key (30 seconds) +export SPEECHMATICS_API_KEY="your_key_here" + +# 3. Run test (4 minutes) +python3 << 'EOF' +import asyncio +import os +from speechmatics.batch import AsyncClient, TranscriptionConfig, AuthenticationError +from dotenv import load_dotenv + +load_dotenv() + +async def test(): + api_key = os.getenv("SPEECHMATICS_API_KEY") + + # Replace with your audio file path + audio_file = "your_audio_file.wav" + + try: + async with AsyncClient(api_key=api_key) as client: + print("Submitting transcription job...") + job = await client.submit_job(audio_file, transcription_config=TranscriptionConfig(language="en")) + print(f"Job submitted: {job.id}") + + print("Waiting for completion...") + result = await client.wait_for_completion(job.id) + + print(f"\nTranscript: {result.transcript_text}") + print("\nTest completed successfully!") + + except AuthenticationError as e: + print(f"\nAuthentication Error: {e}") + +asyncio.run(test()) +EOF ``` -## Docs +If this fails, [open an issue](https://github.com/speechmatics/speechmatics-python-sdk/issues/new) - we prioritize developer experience. 
+ +--- + +## Community & Support + +### Get Help + +- **GitHub Discussions**: [Ask questions, share projects](https://github.com/speechmatics/speechmatics-python-sdk/discussions) +- **Stack Overflow**: Tag with `speechmatics` +- **Email Support**: devrel@speechmatics.com + + +### Show Your Support + +Share what you built: +- Tweet with [@Speechmatics](https://twitter.com/speechmatics) +- Post in [Show & Tell](https://github.com/speechmatics/speechmatics-python-sdk/discussions/categories/show-and-tell) + +--- + +## 📄 License + +This project is licensed under the MIT License - see the [LICENSE](https://github.com/speechmatics/speechmatics-python-sdk/blob/main/LICENSE) file for details. + +--- + +## 🔗 Links + +- **Website**: [speechmatics.com](https://www.speechmatics.com) +- **Documentation**: [docs.speechmatics.com](https://docs.speechmatics.com) +- **Portal**: [portal.speechmatics.com](https://portal.speechmatics.com) +- **Status Page**: [status.speechmatics.com](https://status.speechmatics.com) +- **Blog**: [speechmatics.com/blog](https://www.speechmatics.com/blog) +- **GitHub**: [@speechmatics](https://github.com/speechmatics) + +--- + +## 🎯 What's Next? + +1. **[Get your free API key →](https://portal.speechmatics.com/)** +2. **[Try the quickstart ↑](#quick-start)** +3. **[Explore examples →](https://github.com/speechmatics/speechmatics-academy)** +4. **[Read the docs →](https://docs.speechmatics.com)** + + +--- + +
-The Speechmatics API and product documentation can be found at https://docs.speechmatics.com +**Built with ❤️ by the Speechmatics Team** -## License +[Twitter](https://twitter.com/speechmatics) • [LinkedIn](https://linkedin.com/company/speechmatics) • [YouTube](https://youtube.com/@speechmatics) -[MIT](LICENSE) +
diff --git a/logo/speechmatics-dark-theme-logo.png b/logo/speechmatics-dark-theme-logo.png
new file mode 100644
index 0000000000000000000000000000000000000000..402ed207bc1f47099f7a73b2b47d5566a9e78618
zueuZbm*o}$vkFze;5!96Sddw1l7YtIq7>ORFw&<^%Z~T3^mZi%Vr*DlI^sXcJ3JU? zk1NAP&OPNXdbt5~uBv<8;09(2O4DY9N<_AKcapYMQYIxLP`mWq*!em_x7osjn7LYT z);^aO@_2K{X7oH@MQ+Dn0(N&e4oLNdi@k!=k6`3;H`_IVRNT0c*;?0i!5 zUfrH#(%H$AWw1(SOEjW`@{?;=&?l14)_pN8*LQ>$%_0t4G&;1-G-+bXonml?;L<|8 z9T8(k*?R8#(U_&#Xq;andf$JyR&-(;&4Nd2s!5gNKrcqR!RFZr^#YX#99=g`)9fX? zX?wMbO<&Z-YL|8i9>FYvn%^GxZcv&RsIl&OV?px5doYWral&#wxV*`ys>sC`TD3u$ z1Zco0n;M3dB*h}hv7am;kOgEa1__t^wHDSdqdA+u{7 z3gry4wE%2jwe<@GdX8V`TdsAh&y3v~+05l4q9bMd4%<8dT%dj%zn zSP^={3`M?TKcUih4cj|U!Eqb|L$3Dbg)@k!vmt5f<|})PZuBWL+X!8o>s-slL~a>8 zV7jtfV(Vo_)z1&MU)f3%D)J&TT%)I>hC}5f@L;3hzS6pfRXN}Epfx7X6$lHGzU7SB0qhoo~m zY0k;QBVLCDMiK9P0_C-};NNqfOg|~%THBs7>=+Sl!6*eOg$f9XC^RMU=(Pq+!Ye)= zDL0rE9jo!?hiBHhMYDl6hytCk3J|w~e#S~=xD>Un^6UJ#dJN^Nv9Bs&nEDj-i$6(| zf~B`hr%lco*Ix;kq#M~=r#Q{mP>Ygx15k-2+czK+`X~a0n@E3a=hbud+*1 zLuP=~nYu;H2qDc1TPh30;ZTPHe8FoK944TUk;Gu?KK_7Pv?`+d)JXkL^^;jtRF7oJ z_+&$)sXUZxzX~>?dKU(H;%xru%2IAhI7}(LN}JA4x-M8sh3T1XI=6?%u+7MB#V6^K z3OpRH+LO%j2!J6)X{Q?vMwjqbiNn^I-)2ncbv?wQ-lC+3BaV(ei(~D3s*% z=bY@cJpQ1aV*ZXU`_$1dzgB?OI;TLWqy$<16?@@ZW(8ItGttb+bO6`Mqb8BGn)`@M z?dQW59Hewpx7&6-y1~w2u!GyO6ZG19Y1tSd$S&#h#u`RFsQXQJDOKgt3P|)I%T;nG zyR&vRg`Ft-jVw#-J;#M!m<;?iA_OMos!TR`{(Gi{Ggi&N+y+R(D5oh=ljOqiwObrw zwgOmOraW@-Ln9xK`1@q{&f~injQcYh%zA;S0z_#q19BZehF=aC*#1e4Rn)Bn;bg#9ISN>-vi6*e8)j`?oqS>3W1{aG2vO4LQ@! 
zB8t>m=CJ5**V!MrFs%D*-BFUmoeo!+05q$QF{mS;K8#I1b$4-s1j693%PV#y60I7N zY;PH^!6)4NIrhUUw@cvhF1arE*wD}0UfsqK?gLP7}u%qJ4m!l%qG)u z{-|#rpoqiqU|i2AEJ{;nkCq=mue#*5C>E)bcVj{Bi5p_kWJM5?9E6&u7|`ElIT?9x zl~-uJUWO~yTD^7j2}qjIA@js}-0TQz;N_suIa7FZBs1qi z$I9+{8w*LB>5aD=3ZCeYlEa@=t)jFE*BFo1bz_hAVO>|6f7PS_qWc(CB4uOtxbz+j zdbtpqNlvgBpDDs}WWFwi|7xE^^EM6?e@|!-3sQ+>l6D!2^Y+r4!>jn34R~ImhKNK3 zZwk40cO{Q5+I#?-f{<8D(y!rW92ZAJS`FkqMI!vk+>BHDA9E-)_!lV1w8>KvjyfBX z$eyxgrkTB8CrYv9I|76W1jQ!(V332L@0d+1-e~0nutlkfUMFQ(X zqsby%Ec$Agy1V3uIc=y}VmJOr-?*K)e_jAXaN|A~?H$mPI_##|fAGh@ zQte|C`v`SGWF@N7@dPtK?p%lm6N8}RcpG88?Pm~jTphJGZcjVmo5kx`Gx}>B%!C$a zzoKn^H{;z?jU!ZtmBcxHQQ{c~eIQrr3fPUsscMRMouBCbI8J~DF@;_nxH~M#IYb%` zwz4J8rsthT3y!SUpq-^Pp`h7<4i~``W?m8VmE>T*0nPQi(%Z^svD z^$P5QyP~$~*dwx^l=dD`YW7bzfDN8L2YVRE7T%R%sXdbh5R~EW`!|-KM0RB2+ePlw z^Vb#??tqIW@;18vM*tC4fv&Kn9tX3aD7A_K2!*_wd5!`aD=Mvb;iZIrMZC|F(*Xf3~+P*ZduqCF&Slt2x z$XCpMy0;Mv-)E~U0Z@dH=IY7*x76mTlOx?qskDn+Pd|b@5cN&uk`jU}c=oCXA4rH7 z*)XqXq)ph5e`|@#3fIBpnm;OQ2R`_;Q`Oq|3rhG;7&T6dxAfGD&bq}(3R&F)GE%33 z6A#(ZihsT+D8}FJ6z2*-kC^1m0? 
z1J%q75@Kod&T>viSOt^avnhvn;(TY>M{c>ehIeTIr?dX3l^DL;_Gob)3`9wHKWg3; z5D*x7aN;AVkRb0t)ohB55#{UfE2R6?5D`tEiHI+lC#HUQd1XuaP{CXW9Mhq zdI&rQ`zeVpFdROMXqPKUPgUa2R`8fD$LKS6G=KH z9?49qir?aUIzMO!mjmMu{1)T|MgK$}Ad|@O?#t3Abnkt2e#~}^wqpQ(cG8N}=ZEX7 zc~uIupcDc~n%BhkVV9JZx1x<03ZX}S!hQLzl{!X)QKS=uu3TckfD{#8y;n)1m&_Ri zlS|l+u3(JD^FKEdiWKvp~=vLWY?zl3`{) zN}pZ!J1S@5?tXJ$-qSlwTi?gC5aNlgIfVU^!3021R6OBaFLPn@oTX53qOKnJ&y^eM zrd08!qu8z+1v-0UfLr7O%#mIEINPK_Cj0{*P+&J*1b6FUm)~2}Sz&>Utphxrb&^l6 z{sB*;HSSaAPjPJdd7nq(qCYenh_YYN>4f#DpL zvHxHuN%wGQP)DNy%icS|6(mcD-WEibx!pO89x%SqucgtdD7EDDTWY0%uAO4`m!1XL z5ak7rB7~;Zd=c{g*Mdmp49kCiUNwx;dH<#mRk_&?c)O^w)vE_PuO*gVts^?HKqQb6 zHJ?dhk5*F}D!onA?S>}a1Xhd;=7H!e8bBy->^peZW1?xE1gd_@3d(E$3;)B9`Y z%6Cz?a!G>qPk&twTqm{O6UEyYQiMICKo5+%v#!^UX&NVO>1FCXXq<9|pt@%uF3|w| z3~Ulj(y6n&dbfCQ=SDdOidt@7HiR+wJh; zY$(1N)a>gHm9(4Omlw#KK|9X%PSI%fZT>GMgUOfHuLc&KuyU_ifdrA%y>fq&AsEE_ zcQdOMnC0#v;nVJ}}$XGMO3KJ?TQJV=8X&d~uWcTr?Q ze^P|y)6w(7o3@)CJpLLJj2L-PpTgE1VM2z?(ndaA_+%VeQBE|D`w@xUuLC0J#kVow zRM!Y=F)(JtZdp!@GOam&{NyD%uQ~`m31+fid-p+zC)M#j@VhI4`uNw8fu`}ue zrwPMnn7Mb+vZnw~Aq0c*fCk0FyB>%LA=ze+-JCeAfBZ0yemkto2OBc?ED+fS{q-?B zyPpJ4f2oW;gC~P{YfHMb%?~gNgrrO!``wHoQ$c6A8X%aCTh4v9Z=GXscuj#<38ezW zx3rdwZWvL$%9V8(e!NL~l5><3OhRe>-Q)q78$#XeH#%n!Xx4LSa?ZCtWQBRWwBBK7 zlM4m2VCV=72x{7;UkB|KgXqDnXdimO&@lx22>j(-PxRnaI?eRtlg#W^uTFAF(M;Hg zK3Q;1_0>+BPMW!gG`(wH#RrzyXEw+)rmMn_nQSeaIbQD9NQ@0dlE|g@bj6j< zc?f&qGdye!lKB9D`bTONu@hLnBa>j`<_}RAB+`>O#uw+= z2+*KUgXc`NWVN>_w!+gm{zR+eg0o~_xA6@BGxJ4Was6hN%L~y`5Rs2%_&`a?!#-~d z1b}k@8UP)E;M@3O&+wB0QKG_sQU{!U`MeKcN|{nQeBj^rgB@Acub3|k#tc;A+TJ`X zGQR4!MW9yz{vriEHlLaSZw3``r&WFVYy)p%vr3$=_REwPvGQ${W83AgI)o}&)Oz5j zLf>{#!CJn>X=JO!`}Onip&nJA7CM6D^|kR4i5$xya4wk_gJFsg|0k^`gaXgRLs{(6 zJdsO3!Oy?F?OQfB@VtU2&C;gb&d)BI>A^tY{aU4?6I}Sop%j>LYi$s%qUx(qEO&PC zcP~A4!zKmeilBHdop<5BI|Rny_;m!nJBkZra8$i!CTFhwCmsvg4O^MrWwFHSGZ^^u zhD<=*Y93mfytk;|>pd5c!#2eC#hja>U85@IVe;8*ZV9+)KtLgPdP^=fjO0B@CvcmD zOY%kiyFXN!?1>Y4B*w9!GjcL$(7^0OC~!cu`11uvTD)9vD_NjMD%NWsk9IA;#TB9$d$} 
z&a)WI21Nh{9Y7X_U;&l({|vcs@=%r8j+cR+_`-h8WIS>alO?KisAd=E%GhK7zJD?s zfbn;p^V{vLm`jYFDt@&Yx}y-1R6W^2YDGJmv-kR5M`xo4T$$9?1Nc24IzKA|+a*vm z_fCMequ@=OP(!%zo4IWg)*}(2WU}wnVDmb~uhUc1h-{0+K0>IjVtUy!hxF?wV5o}- zTx|m?ef&j!AIcTk)6!c;^wNmbh1j!scbX_VWpYDHFF?AW)ct33XvT?ZL>w6Hxx>qR zZWPG!-|o$SN6OY0HS9#kJnPjv{zg|w?+vobMzOGX?hPKq3+;_+4TLc&T7IK|v~oR( zHSAqC3}EC|iKzz7`>{AGR=;8N1{m=oe3*Ozxu2*y&E%+l(mkzV84CrCUiPz5UEam) zIlSfk+-q!lEqt2egW^~3>x7kwWo3C<%9}dPTz=3(@u{G`H!*d2hXrZ9D;=9A@gmPs zsz#tRrdE>J^&w|+TS1+9;tbG?_{9qppk)1Yk!S5}qM}hWgg_TwHg(7Nc+(}_>%~jg zOHh-cM7JE+3Z6udaIE^gz;Mv3Dr2CfE6~)51(DY8!Yl|u84pD!O|9uE7FkjOEZc_- zjM1155;RH_+?6;7S1%cYRn=Y6EeRF}?GjXFO=D3HmA~|Xb}c6`xHI{^?D%g z6A%xMz`xV;#3E_l-_-I)=2j+_& zPcT9b16Ngx=5MBQgS4Dr_GJe`Cr%ln91NqL(D|A}OKf2{L95n}yzsDcC2d2T+SQs| zV5`&_gy{UHO*Zn?PuO^%ICG@$8OP!ZM=VxO3<{5c{yVsq%=SGon{eU1p(*e~ZfE5) zfBxDU%q`ZsAum^s0-YLt)1WuHT~feDAdAON>aLrJURbd!`q}`QTmvz~L*|Qh#FxO~ zkKmf-_ob|yK_LOfo&~K&)StKi?<|17yYd#q6?sxtQ;0XO40(W3?M57XFvz?BCR-?h zj0idn0ACwi(Nurh^9!^GD285~)s+`Jz7+mlku{%;RjV8z5cN+DGk3nV%ooXLj*oPq z3d2rh(ww=%K>~%6ShC-yvDM~_M0S2nFu}~-!*51HD`e zbpkSIHS>eW+)2bot1mklFsl860O6;iPUUypgp@;>{(j&^f%<))U*Cqi&g=@+AcNLw z{HXXCBZ=)>zKF5Pq871@F~!wm{Dp+XLx7W(oFA9nzKIR7X^;E^{LJ(dz~DK(+N#qG zH}sULpr(}6>@YC@nIXBy2VGgymeFA`nDyA+W`T^g&|d5U7{bPaiXu-c)RtZ3uOB`w*2XL!v;5iw^)RM;7bCrN#5>EHmWsA?@5YiIm zW5#V3z%$61@f7Tj&-?M&JbB<9&-ak$pr_aO+;9N#56%`vcufnLw(qG7Zet^#O1v|Aby2#C-JpX1p&f&BwQp`f7ZG?qC5YB@LAk!h`{Oq(4&5xsx`F00RB1w zaGvE>lgh+8i}8vi)a}CZUH7zBTGJ-xYAig_v?~Q3@So=(!NBDV)2Cw!%VN=uTWfCG z#ULyzfrcs4H+A}<4&u49Re+0|$Gn`eL(=e(+a7vVO)(Q2s%|IgBo_GcQqm*e8X;5C z5Su!!Hchz$7+WlFTxrl*eglr^Y+IgB*fKEUx8JfGbeip%EIW_WD6Aoz0*3h5jb7`PgZ2U2(+dy z!7H77SQL_y;p|A%9GTGuurZrzkpxQ=)X`-82jeYy-{VNI6@GI?Zsv!i%ctEVNB!P0 zRySYgq#WSMTI|=}_u2g$`%wR!ex{CYKvR@TH1Dx!z@vnA5&GL@3TWlA$9(JIlGUIs z5A~Utj?_!$SJ)avm%m6!GgCb~9tbd|K=DaYK;ENcYvawQVjY#-6B^IuES4E#7jThw z-P#BJacsT|RuVIz$Ky863_Ne5erTKGl(&VJ*eH%@w4nQ^00_4~<_?c4wlPlk$lCyx ziciHKIl?-9lejshFf;09bo!@%aK-MJ91uPr3dx{B=z$|2JXNz?-SX~wl?U!Oo$0Ud 
zUSKH}p-Eid%2J)0V)bdqh7o|{NGe@Fvs>!1P_otv|g04atL+ZmuCy4c!fT0Ve{Z%1!o2G|a+!=Y%cPgn{ZP z^z`;j(sR%me;RCa8wU24>SrYZU*)@|?&~0%D&@<=KSr97xQqTdz5D2WTyqS_i_mVe zQ;JB$gQYpk*ZUdnJfS9{%eFvh>GAmme5>2sa#&!sYfpI|GuI&^lM)lvbR6^icovnO z76LONeiP+wR^PQ_6r(j%e@gz94}k{tdi@o9wV6Jmm3u$M4bAL;hJJ1xh)Np>$R&TX ztLiK4@LvDNoGch~E({EA3mD}zMc(LrrPOjwkM+tL+VbZ-2LaCX{VOLTZQeNY>nJn-?IDp{21QU=^2NKIZM933~o&Z=N4X|KSY0% zluZbcv+aVB(8Aq?pes!OBiqmHZ9RpkaD2BwC{@r zm59e$m}QPlu9&4rxqV)rsMK;gwevvjJC!H|F2sp3|R_PTh?FMOZ!KOL<(Y^#9fZE4b2vU;C%;-N#V|0;ax z7i8q^3q7P&;^}a5f?ZHClL{_;z!OCF5wu}g1(99q%bqz^?6FFwjNKPF^mC1Ci?}aV z?CVM2LO&s$QB>%P)8YSyOH2tg}POv0cMZ$LLOp-ZBKTj zrx_g4=Kw+v(NNJcBa~5{mnBS>mqBX9d5Km20^8Ie#613U3yH5nNWi6hwnrD?2o6}? zrHAJ(h+v-jnLG+MC{|0EuXedtrY72Iy7mz5t)RLGVri)$y3Eg}+LlYa;d2g(<$yZi zkcj}80x|!7M>ZsXdu1$q=irCBR=QO%ckD{qXI|Ph$9qcxZ@S|e*P>)==bGr7n=H}B z9wj71AMrptEqSy=$XMPE_q8{?nBt@$=iQ%&$ddP_5yJddrvNl{U=+1s8 z`ngwsxcE-0?LiJKCoj8M4pyNo`i}kz7L{{cprVuiEz88ehXE2g>N~Go>34az#tPiz ztlZk^=7iMn(X(r&H!KW;qQWjYm=8P6bWD)KULt4|q7b7d8gY6g?aN@|^)0TAvdP08 zGr1MR)S+Zg>44Xn^8vHER6O1Wv^=6#YTQ9CyEbxjM6&v)d8%cq{$@1pWaLg-AkT5` zVUJmwyYUKTWy0}`Ia;7?0jYwpl-k}BsNd=8=k7UTpefTaZ5{U&niZB6KX?+q z(8>Qm&-%}-C7?eCl`5=Ov}p%coHdh{K{CyS~M?Mc|G^?78g*3j(SaLAuUDjWeJX%mR+uS$F6JC#Um13qL5lo%_Fr+^1FZc zScCD=>O5GWRM)4iB7b@H(Ps|pGn}`D4C=4m{;+RFaa~>e@*}bjV@+FWFBR_l)OB;0 zGj@F23Tjy1#o%`Wb5r|M4qY(^0w~-TQEe2aY>{kTE=X08o#26(jIpGi>zR#ZwSQi8 zH(XBNwWC;#x3D)ZiK)I>b2+^9hnu0R!-clQLcE-k`8K_?#-or*6wYSt`rO9^4?O0q zY_$``(AGB}m<$b>)*nV3k z#W$Pl2GgylZaA}czcg?H%Q($1d^BP$2^oQ8~|7vm!!rCGL(t!@7Eie|k}vq+prP!cG;1=hlQb9Xsv74sUp_ zNtJHmVU&78c%som&_{cPUrX#CnykhD_ zl=w^~c6sm<=5B`{Y?~f?VKBZwRrJ&V@OQ_g+NCj6W$f`|C*#^vFg4>7Er&3nk_9NR z41(~{_`W^wY;H44U1K`T?%7?UqrmibwT6o^QDEjazdu8XQt?zbsd{#A{GeO5))ToS z-P%N-xtWjii+z1Nx29u#Doxw=FCcG>mAFgp&|*I^Px1ONE=XNYd8!@{lhoA)&l-VT zz#@=KZca(^o%4Ddawl(HBxQZxiEDU2QA$v0^pvaX_vi zY8Fi>;=MC1wh44Vgyb3Yo!NhDR-c~oaZ}xY9R6gqonMAMpa;=dyIo{O@g92t=%te- zR>m2jfJ<)VY>d?~%b5P80a6nxzKnGnY!^*mAdDCy#-0%c%oU=*DxDMpH20BkR`s4% 
zZHw&EWMz_t&-6jTajAX;+T_)g>5VG7H>t5cpXW$;@0B`TSpD*$O<<8!$=>w4(k% z!!}??rM&T`yH~491;d+tfrsw#KNUv>0tQOC!1F>buq5^`o@0{`$X)>|qa_by487LS zfefO+t*1BIo0PXHlJR}{syWx{deY$~u??8zmE4H!-wpChK?&h7c+cGBz@EQ1rC;K4WeKV zO5mj3hqAVG&(YT6S!l*gDLx9?@LtwfjX#q;Boh%I7Tk;zfZ8fs{BCm3+U?e0Sr&a*w!kW2EwTk1XDf+`YA{IiIpAMxNK^onOS{ zW%lsFj+5Q%H!)_0Kokp{_l|}!)QXcquO>kV^yFyQ0o4|fGsX;LujQ;M%H0W4cAc&z zr8R6a1sIA!Oq2$7Ni-jA(g81BC9-Laj}BgwJ3oCJB{aI03}=MGCBFTY;Bl?2ou;!O z)d2#T)^(K8+{}ZsdU-!}M7dCe9!f5o)SU&wDj~1Ec)4=H1X3p;anE<15FaQvS<^s= zOLFQq5j#ka6QE66xd%dux${;Exd}GmLZhSUB!N(sXWIgySpQypHw401S}{brIkK$R z?R9_?J0uI0v5S~*33fXQLhe++FHtO*ADs9<|cyw?_UUI@?Kb zrrpI;%KKO9>u;wYnl%8S9#5}+3>@xSj>M2H}KB@IX@rd5{ zn?Xi&bqvU`z>wJZtDfHRGv)A05QwxUbDTF~m94`{+6BR~&q4z+QRjSAezI0a3Y|Q+ zj|Y#Lk1q%rt)>JbW`gFm24gZEkx>N^lX^Jnsv{hR-e>MpjQ0^={JF*(D##t@ijj;h zUBvNwt**-LKni4nN1tNOQ6Zc^z$?=}Fw)j~cxOeT_qSX3oQv?wcsd9qqKMVnz#2ob zwczT#xm$69m+XccGX{V$IuTR#Ie5v5{1l-M-}Mf%C&%j!929ujj6Dm$X4w3oTcj5k zWav)s5=#b(rc`FWq9y4-OZ68&fw{cME#lsF?^$GwS4JVuf26tz``X3blLPbs6%ujw z{P`nn5u>>YgYsmOnZL+$D>?Wc91`M8g`*?3hq#$tlAt3h!KV3Ie~n`1if!HmuNqU1 zY^(?w3zwc~o{byhx!zHZJc)yK*+p)GQb}d5@NWCob8#q|-%588_HE#(L0}HfGcR`b z?jaR(1OgbyArvnEQ4@7aK>az{Q9jTNMLrv0zx&WTCAbL1)`9K7@e`yK-u5WLAuOAaDh)xZtOn9B###lL!;slAXc7FBw@ZBO>W%H0l zDaYj#&L1mQsu9s(zq@PH;MV%+l>6<~S8FO2piom65t5`OpvKE%6o@ytm3s#<3Oh%` z_W1&P>`<8Da<2HS$}SS;&j)zx#t4j>uov9tn*XR?Ds~=w0py3OXGh^4;JMifeLh<7 zQMBZ*@0}gw+vkcel>%u%n74StB0Et<-4C9lg!q~vy-3fV=ja)$&s!3$u13L2?1+Q> zgw3J^kMi01j5MvbPYE8WGb9$gZ=W}5gDirz5GCMc&TBwEQfiCj}Dz~>QanWs`vszS^ zE84CAz~M|cKh6byh=J+5n~}_nRI0qOa5Qs=Tm6_%CEEI2^I?9BmdVP*t`cxybW|k; za|0teMypAy8mvSZKQfrE9*x%nM} zfj1;W_2X0xoD5?1i>L&(46#ls5TkJKIu zk6&$r>y`;3WR{Z!ZmKGYpCkoT0LJ3ZyIg&-uZb97I#xku1beGS-~Egd7`a%vmYujP zR@HhIQdfrZ-h1kF;L^KyZLGi+!dRu%rV^evJ_x4{@n(eN>8VL|dd3ml8`6cG**Z}V;1C!$0BgBkcZo;tC59C9Sd}r%L@yu(y zxh$3QQjU|A?Km<*92FTl1a@w@v}&I_9~%{_foj|vK^3{YSx+{e$H6AZFX)p;do#R`uyFkuS3*sP?|5AtYFpILx8CoGMNti+9ykapX8B*EhW zyS{IRr;5r@FN;lUQ>#?+IY!&Y{R4aP4#;+`HVjo%Z`n$v_Aq>PBcJ9~Zi4l11<%8U 
z@mlED;omaQO$ljI15qtU61CW+15kf&k-PJ&t?LSPo|M^?#MvMOy!jJ=+ceHG-)`lm z**we4Y}yMP3kpw7cXi$*uglTkr1R>dFOxn&`V3qZp{PQpx0!lqcz3wKSw zceQ6nPviG~4@J<=O}%y-RS9Qyi$J4mT1%d9c4>%#WK+BSORn)X&dXb6EX3xa$Kg9w zF{Y3Y5hrlvrlqyIwf$#>jAxs7^io1K}_GnvU7&f2`$z`SmQXfORnruj31Dr)QQ7G};d=1{cL*UNPk zo5gEf$P_n%KeKKd2Tf#~x0w0PLR}PSoV|RyQrD_>ZX9?=*Ih)8#%GOX9W`t;T7Hkh zMnj-2+rzS5=n%cvhu>8`qrN$sOqmk7DIdv!8ra>F{|G3uImN?79_T5O=h&&;Y&xE{ zJ5;6{chih@c5;0}C6Dz^b$i2M`>(T+zkT>8&`xpEls{N)s{?}c{RYP|D@JWFRU`_2 zF&oH7t6ORM_kt_m#!L`auCnVN$0F~yDJT=uz}EVKU?2-~eWmGOkdsYSJl$`l{=B4tYn#^X6ohRe%ky>1*fqetQ+1-*6i7TgjeD*jgFzm-H)ytq>=ST@$> z0ry*238^CP_&_H$o}-JOs7k7B7UR83u8d!3s3)T2H-70KjeGflx`N{Jit{ylDbh|U zH@Zh?zqK~})!0v#I6^FO0l8EV#rZ1%+{>LZ2tmyz!|^LBY(xbFJ;&w1>?G8jDD(Zs zfkOFF{ad-CiVN;w1n={r|Lo4`mV&-f8aw&Zyr1{7lEVMRf{v!bU?EE z14!~8+P1Y`GE2Yd_X~4Qc3Ggcpy;1{sFPxyI$HVyCkw>6s>J>~$IzC)Ps&qONr1rS zg?)!j5RXqn6aVr=0`Stdx*?|b^X*Za|5^41XbGvoJF9?Tz%EZ@r&5_rMk|3>&)@V- zIRc#roL38UDOz(A1f!n)D<^{q@}n=o+LVQ(m1Mx_Uo|(WAihdm12&s)$jkWLizv+ zC&MOcL9bI)YQFfFFHn|14uMwS$5TvQZyf`>&A^tN5sxb511g9iRdpVQzHL~zW75E zey67@zIea;qe5+(FC}2qJW@G-SM2#C_dyKgZxUi4cMs4~^-JdDNTM>CMQ}>M%ZbIT z^UEzIcy0q4lu1Q#C<}dyaUC Date: Wed, 17 Dec 2025 13:59:05 +0000 Subject: [PATCH 2/6] Removes bold formatting from migration guide links Updates the README to remove bold formatting from the "Full Migration Guides" section. This improves the visual consistency of the document and avoids unnecessary emphasis on the links. 
--- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index aec24a5..2c112af 100644 --- a/README.md +++ b/README.md @@ -698,7 +698,7 @@ async def main(): asyncio.run(main()) ``` -📖 **Full Migration Guides:** [Batch Migration Guide](https://github.com/speechmatics/speechmatics-python-sdk/blob/main/sdk/batch/MIGRATION.md) • [Real-time Migration Guide](https://github.com/speechmatics/speechmatics-python-sdk/blob/main/sdk/rt/MIGRATION.md) +**Full Migration Guides:** [Batch Migration Guide](https://github.com/speechmatics/speechmatics-python-sdk/blob/main/sdk/batch/MIGRATION.md) • [Real-time Migration Guide](https://github.com/speechmatics/speechmatics-python-sdk/blob/main/sdk/rt/MIGRATION.md) --- From 7e5acd52794de8d3af14b24dcf3611dc1459af9b Mon Sep 17 00:00:00 2001 From: Edgars Adamoics Date: Thu, 18 Dec 2025 09:05:00 +0000 Subject: [PATCH 3/6] Updates examples and adds env variable Refactors the examples in the README to use environment variables for the API key and includes an async close on the client in the batch example. Also adds prefer_current_speaker to the speaker diarization config example. 
--- README.md | 45 ++++++++++++++++++++++++++++++++------------- 1 file changed, 32 insertions(+), 13 deletions(-) diff --git a/README.md b/README.md index 2c112af..fc13374 100644 --- a/README.md +++ b/README.md @@ -108,35 +108,54 @@ pre-commit install ### Your First Transcription -**5-line example** (simplest - start here): +**Batch Transcription** (simplest - start here): ```python import asyncio +import os from speechmatics.batch import AsyncClient -async with AsyncClient(api_key="YOUR_API_KEY") as client: - result = await client.transcribe("audio.wav") +async def main(): + client = AsyncClient(api_key=os.getenv("SPEECHMATICS_API_KEY")) + result = await client.transcribe("sample.wav") print(result.transcript_text) + await client.close() + +asyncio.run(main()) ``` **Real-time Streaming** (for live audio): ```python import asyncio -from speechmatics.rt import AsyncClient, ServerMessageType, TranscriptResult +import os +from speechmatics.rt import ( + AsyncClient, ServerMessageType, TranscriptResult, Microphone, + AudioFormat, AudioEncoding, TranscriptionConfig +) async def main(): - async with AsyncClient(api_key="YOUR_API_KEY") as client: - @client.on(ServerMessageType.ADD_TRANSCRIPT) - def handle_transcript(message): - result = TranscriptResult.from_message(message) - print(f"Transcript: {result.metadata.transcript}") + client = AsyncClient(api_key=os.getenv("SPEECHMATICS_API_KEY")) + mic = Microphone(sample_rate=16000, chunk_size=4096) + + @client.on(ServerMessageType.ADD_TRANSCRIPT) + def on_transcript(message): + result = TranscriptResult.from_message(message) + if result.metadata.transcript: + print(result.metadata.transcript) + + mic.start() + await client.start_session( + transcription_config=TranscriptionConfig(language="en"), + audio_format=AudioFormat(encoding=AudioEncoding.PCM_S16LE, sample_rate=16000) + ) - await client.start_session() - # Stream audio here... 
+ while True: + audio = await mic.read(4096) + await client.send_audio(audio) asyncio.run(main()) ``` -**Simple and Pythonic!** Built with modern async/await patterns. Get your API key at [portal.speechmatics.com](https://portal.speechmatics.com/) +**Simple and Pythonic!** Get your API key at [portal.speechmatics.com](https://portal.speechmatics.com/) > [!TIP] > **Ready for more?** Explore 20+ working examples at **[Speechmatics Academy](https://github.com/speechmatics/speechmatics-academy)** — voice agents, integrations, use cases, and migration guides. @@ -284,7 +303,7 @@ async def main(): language="en", diarization="speaker", speaker_diarization_config={ - "max_speakers": 4 + "prefer_current_speaker": True } ) ) From 8cc65d6795a08d51216aa1c6abc6134858bf546b Mon Sep 17 00:00:00 2001 From: Edgars Adamoics Date: Fri, 19 Dec 2025 15:29:32 +0000 Subject: [PATCH 4/6] Updates README with usage examples and features Enhances the README with detailed examples for batch, realtime, TTS, and voice agent functionalities. Also, includes installation instructions, key features, and use cases for the Speechmatics Python SDK. --- README.md | 935 ++++++++++++++++++++++++++++++++++++------------------ 1 file changed, 631 insertions(+), 304 deletions(-) diff --git a/README.md b/README.md index fc13374..0c37093 100644 --- a/README.md +++ b/README.md @@ -20,7 +20,7 @@ **Fully typed** with type definitions for all request params and response fields. **Modern Python** with async/await patterns, type hints, and context managers for production-ready code. 
-**55+ Languages • Real-time & Batch • Custom Vocabularies • Speaker Diarization • Speaker ID** +**55+ Languages • Realtime & Batch • Custom Vocabularies • Speaker diarization • Speaker ID** [Get API Key](https://portal.speechmatics.com/) • [Documentation](https://docs.speechmatics.com) • [Academy Examples](https://github.com/speechmatics/speechmatics-academy) @@ -33,8 +33,9 @@ - [Quick Start](#quick-start) - [Why Speechmatics?](#-why-speechmatics) -- [Use Cases](#-use-cases) - [Key Features](#-key-features) +- [Use Cases](#-use-cases) +- [Documentation](#-documentation) - [Authentication](#authentication) - [Advanced Configuration](#advanced-configuration) - [Deployment Options](#deployment-options) @@ -52,7 +53,7 @@ # Batch transcription pip install speechmatics-batch -# Real-time streaming +# Realtime streaming pip install speechmatics-rt # Voice agents @@ -72,7 +73,7 @@ pip install speechmatics-tts - Get transcripts with timestamps, speakers, entities - Supports all audio intelligence features -**[speechmatics-rt](./sdk/rt/README.md)** - Real-time WebSocket streaming +**[speechmatics-rt](./sdk/rt/README.md)** - Realtime WebSocket streaming - Stream audio for live transcription - Ultra-low latency (150ms p95) - Partial and final transcripts @@ -108,53 +109,169 @@ pre-commit install ### Your First Transcription -**Batch Transcription** (simplest - start here): +**Batch Transcription** - transcribe audio files: + ```python import asyncio import os +from dotenv import load_dotenv from speechmatics.batch import AsyncClient +load_dotenv() + async def main(): client = AsyncClient(api_key=os.getenv("SPEECHMATICS_API_KEY")) - result = await client.transcribe("sample.wav") + result = await client.transcribe("audio.wav") print(result.transcript_text) - await client.close() + await client.close() asyncio.run(main()) ``` -**Real-time Streaming** (for live audio): +**Installation:** +```bash +pip install speechmatics-batch python-dotenv +``` + +**Realtime streaming** - live 
microphone transcription: + ```python import asyncio import os +from dotenv import load_dotenv from speechmatics.rt import ( - AsyncClient, ServerMessageType, TranscriptResult, Microphone, - AudioFormat, AudioEncoding, TranscriptionConfig + AsyncClient, + ServerMessageType, + TranscriptionConfig, + TranscriptResult, + AudioFormat, + AudioEncoding, + Microphone, ) +load_dotenv() + async def main(): client = AsyncClient(api_key=os.getenv("SPEECHMATICS_API_KEY")) mic = Microphone(sample_rate=16000, chunk_size=4096) @client.on(ServerMessageType.ADD_TRANSCRIPT) - def on_transcript(message): + def on_final(message): result = TranscriptResult.from_message(message) if result.metadata.transcript: - print(result.metadata.transcript) + print(f"[final]: {result.metadata.transcript}") + + @client.on(ServerMessageType.ADD_PARTIAL_TRANSCRIPT) + def on_partial(message): + result = TranscriptResult.from_message(message) + if result.metadata.transcript: + print(f"[partial]: {result.metadata.transcript}") mic.start() - await client.start_session( - transcription_config=TranscriptionConfig(language="en"), - audio_format=AudioFormat(encoding=AudioEncoding.PCM_S16LE, sample_rate=16000) + + try: + await client.start_session( + transcription_config=TranscriptionConfig(language="en", enable_partials=True), + audio_format=AudioFormat(encoding=AudioEncoding.PCM_S16LE, sample_rate=16000), + ) + print("Speak now...") + + while True: + await client.send_audio(await mic.read(4096)) + finally: + mic.stop() + await client.close() + + +asyncio.run(main()) +``` + +**Installation:** +```bash +pip install speechmatics-rt python-dotenv pyaudio +``` + +**Text-to-Speech** - convert text to audio: + +```python +import asyncio +import os +from dotenv import load_dotenv +from speechmatics.tts import AsyncClient, Voice, OutputFormat + +load_dotenv() + +async def main(): + client = AsyncClient(api_key=os.getenv("SPEECHMATICS_API_KEY")) + + response = await client.generate( + text="Hello! 
Welcome to Speechmatics text to speech.", + voice=Voice.SARAH, + output_format=OutputFormat.WAV_16000 ) - while True: - audio = await mic.read(4096) - await client.send_audio(audio) + audio_data = await response.read() + with open("output.wav", "wb") as f: + f.write(audio_data) + print("Audio saved to output.wav") + + await client.close() + +asyncio.run(main()) +``` + +**Installation:** +```bash +pip install speechmatics-tts python-dotenv +``` + +**Voice agent** - real-time transcription with speaker diarization and turn detection: + +```python +import asyncio +import os +from dotenv import load_dotenv +from speechmatics.rt import Microphone +from speechmatics.voice import VoiceAgentClient, VoiceAgentConfigPreset, AgentServerMessageType + +load_dotenv() + +async def main(): + client = VoiceAgentClient( + api_key=os.getenv("SPEECHMATICS_API_KEY"), + config=VoiceAgentConfigPreset.load("adaptive") + ) + + @client.on(AgentServerMessageType.ADD_SEGMENT) + def on_segment(message): + for segment in message.get("segments", []): + print(f"[{segment.get('speaker_id', 'S1')}]: {segment.get('text', '')}") + + @client.on(AgentServerMessageType.END_OF_TURN) + def on_turn_end(message): + print("[END OF TURN]") + + mic = Microphone(sample_rate=16000, chunk_size=320) + mic.start() + + try: + await client.connect() + print("Voice agent ready. Speak now...") + + while True: + await client.send_audio(await mic.read(320)) + finally: + mic.stop() + await client.disconnect() asyncio.run(main()) ``` +**Installation:** +```bash +pip install speechmatics-voice speechmatics-rt python-dotenv pyaudio +``` + **Simple and Pythonic!** Get your API key at [portal.speechmatics.com](https://portal.speechmatics.com/) > [!TIP] @@ -172,13 +289,13 @@ When 1% WER improvement translates to millions in revenue, you need the best. 
|--------|--------------|----------| | **Word Error Rate (WER)** | **6.8%** | 16.5% | | **Languages Supported** | **55+** | 30+ | -| **Custom Dictionary** | **1,000 words** | 100 words | -| **Speaker Diarization** | **Included** | Extra charge | -| **Real-time Translation** | **30+ languages** | ❌ | -| **Sentiment Analysis** | ✅ | ❌ | -| **On-Premises** | ✅ | Limited | -| **On-Device** | ✅ | ❌ | -| **Air-Gapped Deployment** | ✅ | ❌ | +| **Custom dictionary** | **1,000 words** | 100 words | +| **Speaker diarization** | **Included** | Extra charge | +| **Realtime translation** | **30+ languages** | ❌ | +| **Sentiment analysis** | ✅ | ❌ | +| **On-premises** | ✅ | Limited | +| **On-device** | ✅ | ❌ | +| **Air-gapped deployment** | ✅ | ❌ | @@ -192,11 +309,13 @@ When 1% WER improvement translates to millions in revenue, you need the best. ## 🚀 Key Features -### Real-time Transcription +### Realtime transcription Stream audio and get instant transcriptions with ultra-low latency. Perfect for voice agents, live captioning, and conversational AI. 
```python import asyncio +import os +from dotenv import load_dotenv from speechmatics.rt import ( AsyncClient, ServerMessageType, @@ -207,6 +326,8 @@ from speechmatics.rt import ( Microphone, ) +load_dotenv() + async def main(): # Configure audio format for microphone input audio_format = AudioFormat( @@ -221,158 +342,279 @@ async def main(): enable_partials=True, ) - async with AsyncClient(api_key="YOUR_API_KEY") as client: - # Handle final transcripts - @client.on(ServerMessageType.ADD_TRANSCRIPT) - def handle_transcript(message): - result = TranscriptResult.from_message(message) + # Create client + client = AsyncClient(api_key=os.getenv("SPEECHMATICS_API_KEY")) + + # Handle final transcripts + @client.on(ServerMessageType.ADD_TRANSCRIPT) + def handle_transcript(message): + result = TranscriptResult.from_message(message) + if result.metadata.transcript: print(f"[final]: {result.metadata.transcript}") - # Handle partial transcripts (interim results) - @client.on(ServerMessageType.ADD_PARTIAL_TRANSCRIPT) - def handle_partial(message): - result = TranscriptResult.from_message(message) + # Handle partial transcripts (interim results) + @client.on(ServerMessageType.ADD_PARTIAL_TRANSCRIPT) + def handle_partial(message): + result = TranscriptResult.from_message(message) + if result.metadata.transcript: print(f"[partial]: {result.metadata.transcript}") - # Initialize microphone (requires: pip install pyaudio) - mic = Microphone(sample_rate=audio_format.sample_rate, chunk_size=audio_format.chunk_size) - if not mic.start(): - print("PyAudio not available - install with: pip install pyaudio") - return + # Initialize microphone (requires: pip install pyaudio) + mic = Microphone(sample_rate=audio_format.sample_rate, chunk_size=audio_format.chunk_size) + if not mic.start(): + print("PyAudio not available - install with: pip install pyaudio") + return - # Start transcription session + try: + # start_session() establishes WebSocket connection and starts transcription await 
client.start_session( transcription_config=transcription_config, audio_format=audio_format, ) + print("Speak now...") - try: - # Stream audio continuously - while True: - frame = await mic.read(audio_format.chunk_size) - await client.send_audio(frame) - except KeyboardInterrupt: - mic.stop() + # Stream audio continuously + while True: + frame = await mic.read(audio_format.chunk_size) + await client.send_audio(frame) + except KeyboardInterrupt: + pass + finally: + mic.stop() + await client.close() asyncio.run(main()) ``` +**Installation:** +```bash +pip install speechmatics-rt python-dotenv pyaudio +``` + ### Batch Transcription Upload audio files and get accurate transcripts with speaker labels, timestamps, and more. ```python import asyncio +import os +from dotenv import load_dotenv from speechmatics.batch import AsyncClient, TranscriptionConfig, FormatType +load_dotenv() + async def main(): - async with AsyncClient(api_key="YOUR_API_KEY") as client: - # Submit job with advanced features - job = await client.submit_job( - "meeting.mp3", - transcription_config=TranscriptionConfig( - language="en", - diarization="speaker", - enable_entities=True, - punctuation_overrides={ - "permitted_marks": [".", "?", "!"] - } - ) + client = AsyncClient(api_key=os.getenv("SPEECHMATICS_API_KEY")) + + # Submit job with advanced features + job = await client.submit_job( + "example.wav", + transcription_config=TranscriptionConfig( + language="en", + diarization="speaker", + enable_entities=True, + punctuation_overrides={ + "permitted_marks": [".", "?", "!"] + } ) + ) + + # Wait for completion + result = await client.wait_for_completion(job.id, format_type=FormatType.JSON) - # Wait for completion - result = await client.wait_for_completion(job.id, format_type=FormatType.JSON) + # Access results + print(f"Transcript: {result.transcript_text}") - # Access results - print(f"Transcript: {result.transcript_text}") + await client.close() asyncio.run(main()) ``` -### Speaker Diarization 
+**Installation:** +```bash +pip install speechmatics-batch python-dotenv +``` + +### Speaker diarization Automatically detect and label different speakers in your audio. ```python import asyncio +import os +from dotenv import load_dotenv from speechmatics.batch import AsyncClient, TranscriptionConfig +load_dotenv() + async def main(): - async with AsyncClient(api_key="YOUR_API_KEY") as client: - job = await client.submit_job( - "meeting.wav", - transcription_config=TranscriptionConfig( - language="en", - diarization="speaker", - speaker_diarization_config={ - "prefer_current_speaker": True - } - ) + client = AsyncClient(api_key=os.getenv("SPEECHMATICS_API_KEY")) + + job = await client.submit_job( + "example.wav", + transcription_config=TranscriptionConfig( + language="en", + diarization="speaker", + speaker_diarization_config={ + "prefer_current_speaker": True + } ) - result = await client.wait_for_completion(job.id) + ) + result = await client.wait_for_completion(job.id) + + # Access full transcript with speaker labels + print(f"Full transcript:\n{result.transcript_text}\n") - # Access full transcript with speaker labels - print(f"Full transcript:\n{result.transcript_text}\n") + # Access individual results with speaker information + for result_item in result.results: + if result_item.alternatives: + alt = result_item.alternatives[0] + speaker = alt.speaker or "Unknown" + content = alt.content + print(f"Speaker {speaker}: {content}") - # Access individual results with speaker information - for result_item in result.results: - if result_item.alternatives: - alt = result_item.alternatives[0] - speaker = alt.speaker or "Unknown" - content = alt.content - print(f"Speaker {speaker}: {content}") + await client.close() asyncio.run(main()) ``` -### Custom Dictionary +**Installation:** +```bash +pip install speechmatics-batch python-dotenv +``` + +### Custom dictionary Add domain-specific terms, names, and acronyms for perfect accuracy. 
 ```python
 import asyncio
-from speechmatics.batch import AsyncClient, TranscriptionConfig
+import os
+from dotenv import load_dotenv
+from speechmatics.rt import (
+    AsyncClient,
+    ServerMessageType,
+    TranscriptionConfig,
+    TranscriptResult,
+    AudioFormat,
+    AudioEncoding,
+    Microphone,
+    ConversationConfig,
+)
+
+load_dotenv()
+
 async def main():
-    async with AsyncClient(api_key="YOUR_API_KEY") as client:
-        job = await client.submit_job(
-            "audio.wav",
-            transcription_config=TranscriptionConfig(
-                language="en",
-                additional_vocab=[
-                    {"content": "Speechmatics", "sounds_like": ["speech mat ics"]},
-                    {"content": "API", "sounds_like": ["A P I", "A. P. I."]},
-                    {"content": "kubernetes", "sounds_like": ["koo ber net ees"]}
-                ]
-            )
+    api_key = os.getenv("SPEECHMATICS_API_KEY")
+    if not api_key:
+        print("Error: SPEECHMATICS_API_KEY not set")
+        return
+
+    transcript_parts = []
+
+    audio_format = AudioFormat(
+        encoding=AudioEncoding.PCM_S16LE,
+        chunk_size=4096,
+        sample_rate=16000,
+    )
+
+    transcription_config = TranscriptionConfig(
+        language="en",
+        enable_partials=True,
+        additional_vocab=[
+            {"content": "Speechmatics", "sounds_like": ["speech mat ics"]},
+            {"content": "API", "sounds_like": ["A P I", "A. P. I."]},
+            {"content": "kubernetes", "sounds_like": ["koo ber net ees"]},
+            {"content": "Anthropic", "sounds_like": ["an throp ik", "an throw pick"]},
+            {"content": "OAuth", "sounds_like": ["oh auth", "O auth", "O. Auth"]},
+            {"content": "PostgreSQL", "sounds_like": ["post gres Q L", "post gres sequel"]},
+            {"content": "Nginx", "sounds_like": ["engine X", "N jinx"]},
+            {"content": "GraphQL", "sounds_like": ["graph Q L", "graph quel"]},
+        ],
+        conversation_config=ConversationConfig(
+            end_of_utterance_silence_trigger=0.5,  # seconds of silence to trigger end of utterance
+        ),
+    )
+
+    mic = Microphone(sample_rate=16000, chunk_size=4096)
+    if not mic.start():
+        print("PyAudio not installed")
+        return
+
+    client = AsyncClient(api_key=api_key)
+
+    @client.on(ServerMessageType.ADD_TRANSCRIPT)
+    def on_final(message):
+        result = TranscriptResult.from_message(message)
+        if result.metadata.transcript:
+            print(f"[final]: {result.metadata.transcript}")
+            transcript_parts.append(result.metadata.transcript)
+
+    @client.on(ServerMessageType.ADD_PARTIAL_TRANSCRIPT)
+    def on_partial(message):
+        result = TranscriptResult.from_message(message)
+        if result.metadata.transcript:
+            print(f"[partial]: {result.metadata.transcript}")
+
+    @client.on(ServerMessageType.END_OF_UTTERANCE)
+    def on_utterance_end(message):
+        print("[END OF UTTERANCE]\n")
+
+    try:
+        await client.start_session(
+            transcription_config=transcription_config,
+            audio_format=audio_format,
        )
+        print("Speak now...")
+
+        while True:
+            await client.send_audio(await mic.read(4096))
+    except KeyboardInterrupt:
+        pass
+    finally:
+        mic.stop()
+        await client.close()
+        print(f"\nFull transcript: {' '.join(transcript_parts)}")
+
 asyncio.run(main())
 ```

+**Installation:**
+```bash
+pip install speechmatics-rt python-dotenv pyaudio
+```
+
 ### 55+ Languages

 Native models for major languages, not just multilingual Whisper.
 ```python
 import asyncio
+import os
+from dotenv import load_dotenv
 from speechmatics.batch import AsyncClient, TranscriptionConfig

+load_dotenv()
+
 async def main():
-    async with AsyncClient(api_key="YOUR_API_KEY") as client:
-        # Automatic language detection
-        job = await client.submit_job(
-            "audio.wav",
-            transcription_config=TranscriptionConfig(
-                language="auto"
-            )
-        )
+    client = AsyncClient(api_key=os.getenv("SPEECHMATICS_API_KEY"))

-        # Or specify language directly (e.g., Japanese)
-        job = await client.submit_job(
-            "audio.wav",
-            transcription_config=TranscriptionConfig(language="ja")
-        )
+    # Automatic language detection
+    job = await client.submit_job(
+        "audio.wav",
+        transcription_config=TranscriptionConfig(language="auto")
+    )
+    result = await client.wait_for_completion(job.id)
+    print(f"Detected language transcript: {result.transcript_text}")
+
+    await client.close()

 asyncio.run(main())
 ```

+**Installation:**
+```bash
+pip install speechmatics-batch python-dotenv
+```
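+
+To pin a specific language instead of auto-detecting, pass its language code to `TranscriptionConfig` (Japanese shown here, reusing the client from the example above):
+
+```python
+# Or specify the language directly (e.g., Japanese)
+job = await client.submit_job(
+    "audio.wav",
+    transcription_config=TranscriptionConfig(language="ja")
+)
+```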

 📂 More Features • Click to explore Audio Intelligence and Translation examples

@@ -383,6 +625,8 @@
 Get sentiment, topics, summaries, and more.

 ```python
 import asyncio
+import os
+from dotenv import load_dotenv
 from speechmatics.batch import (
     AsyncClient,
     JobConfig,
@@ -394,40 +638,52 @@ from speechmatics.batch import (
     AutoChaptersConfig
 )

+load_dotenv()
+
 async def main():
-    async with AsyncClient(api_key="YOUR_API_KEY") as client:
-        # Configure job with all audio intelligence features
-        config = JobConfig(
-            type=JobType.TRANSCRIPTION,
-            transcription_config=TranscriptionConfig(language="en"),
-            sentiment_analysis_config=SentimentAnalysisConfig(),
-            topic_detection_config=TopicDetectionConfig(),
-            summarization_config=SummarizationConfig(),
-            auto_chapters_config=AutoChaptersConfig()
-        )
+    client = AsyncClient(api_key=os.getenv("SPEECHMATICS_API_KEY"))

-        job = await client.submit_job("podcast.mp3", config=config)
-        result = await client.wait_for_completion(job.id)
+    # Configure job with all audio intelligence features
+    config = JobConfig(
+        type=JobType.TRANSCRIPTION,
+        transcription_config=TranscriptionConfig(language="en"),
+        sentiment_analysis_config=SentimentAnalysisConfig(),
+        topic_detection_config=TopicDetectionConfig(),
+        summarization_config=SummarizationConfig(),
+        auto_chapters_config=AutoChaptersConfig()
+    )
+
+    job = await client.submit_job("example.wav", config=config)
+    result = await client.wait_for_completion(job.id)

-        # Access all results
-        print(f"Transcript: {result.transcript_text}")
-        if result.sentiment_analysis:
-            print(f"Sentiment: {result.sentiment_analysis}")
-        if result.topics:
-            print(f"Topics: {result.topics}")
-        if result.summary:
-            print(f"Summary: {result.summary}")
-        if result.chapters:
-            print(f"Chapters: {result.chapters}")
+    # Access all results
+    print(f"Transcript: {result.transcript_text}")
+    if result.sentiment_analysis:
+        print(f"Sentiment: {result.sentiment_analysis}")
+    if result.topics:
+        print(f"Topics: {result.topics}")
+    if result.summary:
+        print(f"Summary: {result.summary}")
+    if result.chapters:
+        print(f"Chapters: {result.chapters}")
+
+    await client.close()

 asyncio.run(main())
 ```

+**Installation:**
+```bash
+pip install speechmatics-batch python-dotenv
+```
+
 ### Translation

 Transcribe and translate simultaneously to 50+ languages.

 ```python
 import asyncio
+import os
+from dotenv import load_dotenv
 from speechmatics.batch import (
     AsyncClient,
     JobConfig,
@@ -436,29 +692,39 @@ from speechmatics.batch import (
     TranslationConfig
 )

+load_dotenv()
+
 async def main():
-    async with AsyncClient(api_key="YOUR_API_KEY") as client:
-        config = JobConfig(
-            type=JobType.TRANSCRIPTION,
-            transcription_config=TranscriptionConfig(language="en"),
-            translation_config=TranslationConfig(target_languages=["es", "fr", "de"])
-        )
+    client = AsyncClient(api_key=os.getenv("SPEECHMATICS_API_KEY"))

-        job = await client.submit_job("video.mp4", config=config)
-        result = await client.wait_for_completion(job.id)
+    config = JobConfig(
+        type=JobType.TRANSCRIPTION,
+        transcription_config=TranscriptionConfig(language="en"),
+        translation_config=TranslationConfig(target_languages=["es", "fr", "de"])
+    )
+
+    job = await client.submit_job("sample.mp4", config=config)
+    result = await client.wait_for_completion(job.id)

-        # Access original transcript
-        print(f"Original (English): {result.transcript_text}\n")
+    # Access original transcript
+    print(f"Original (English): {result.transcript_text}\n")

-        # Access translations
-        if result.translations:
-            for lang_code, translation_data in result.translations.items():
-                translated_text = translation_data.get("content", "")
-                print(f"Translated ({lang_code}): {translated_text}")
+    # Access translations
+    if result.translations:
+        for lang_code, segments in result.translations.items():
+            translated_text = " ".join(seg.get("content", "") for seg in segments)
+            print(f"Translated ({lang_code}): {translated_text}")
+
+    await client.close()
 asyncio.run(main())
 ```

+**Installation:**
+```bash
+pip install speechmatics-batch python-dotenv
+```
+
 ---

@@ -493,7 +759,7 @@ async def entrypoint(ctx: agents.JobContext):
     """
     await ctx.connect()

-    # Speech-to-Text: Speechmatics with speaker diarization
+    # Speech to text: Speechmatics with speaker diarization
     stt = speechmatics.STT(
         enable_diarization=True,
         speaker_active_format="<{speaker_id}>{text}",
@@ -531,7 +797,7 @@ pip install livekit-agents livekit-plugins-speechmatics livekit-plugins-openai l
 ```

 **Key Features:**
-- Real-time WebRTC audio streaming
+- Realtime WebRTC audio streaming
 - Speechmatics STT with speaker diarization
 - Configurable LLM and TTS providers
 - Voice Activity Detection (VAD)
@@ -543,6 +809,7 @@ Build real-time voice bots with [Pipecat](https://github.com/pipecat-ai/pipecat)
 ```python
 import asyncio
 import os
+from dotenv import load_dotenv
 from pipecat.pipeline.pipeline import Pipeline
 from pipecat.pipeline.runner import PipelineRunner
 from pipecat.pipeline.task import PipelineTask
@@ -551,6 +818,8 @@ from pipecat.services.speechmatics.stt import SpeechmaticsSTTService, Language
 from pipecat.services.speechmatics.tts import SpeechmaticsTTSService
 from pipecat.transports.local.audio import LocalAudioTransport

+load_dotenv()
+
 async def main():
     # Configure Speechmatics STT with speaker diarization
     stt = SpeechmaticsSTTService(
@@ -609,7 +878,7 @@ pip install pipecat-ai[speechmatics, openai] pyaudio
 ```

 **Key Features:**
-- Real-time STT with speaker diarization
+- Realtime STT with speaker diarization
 - Natural-sounding TTS with multiple voices
 - Interruption handling (users can interrupt bot responses)
 - Works with any LLM provider (OpenAI, Anthropic, etc.)
@@ -627,7 +896,7 @@ Each SDK package includes detailed documentation:

 | Package | Documentation | Description |
 |---------|---------------|-------------|
 | **speechmatics-batch** | [README](./sdk/batch/README.md) • [Migration Guide](./sdk/batch/MIGRATION.md) | Async batch transcription |
-| **speechmatics-rt** | [README](./sdk/rt/README.md) • [Migration Guide](./sdk/rt/MIGRATION.md) | Real-time streaming |
+| **speechmatics-rt** | [README](./sdk/rt/README.md) • [Migration Guide](./sdk/rt/MIGRATION.md) | Realtime streaming |
 | **speechmatics-voice** | [README](./sdk/voice/README.md) | Voice agent SDK |
 | **speechmatics-tts** | [README](./sdk/tts/README.md) | Text-to-speech |

@@ -639,7 +908,7 @@ Comprehensive collection of working examples, integrations, and templates: [gith

 | Example | Description | Package |
 |---------|-------------|---------|
 | [Hello World](https://github.com/speechmatics/speechmatics-academy/tree/main/basics/01-hello-world) | Simplest transcription example | Batch |
-| [Batch vs Real-time](https://github.com/speechmatics/speechmatics-academy/tree/main/basics/02-batch-vs-realtime) | Learn the difference between API modes | Batch, RT |
+| [Batch vs Realtime](https://github.com/speechmatics/speechmatics-academy/tree/main/basics/02-batch-vs-realtime) | Learn the difference between API modes | Batch, RT |
 | [Configuration Guide](https://github.com/speechmatics/speechmatics-academy/tree/main/basics/03-configuration-guide) | Common configuration options | Batch |
 | [Audio Intelligence](https://github.com/speechmatics/speechmatics-academy/tree/main/basics/04-audio-intelligence) | Sentiment, topics, and summaries | Batch |
 | [Multilingual & Translation](https://github.com/speechmatics/speechmatics-academy/tree/main/basics/05-multilingual-translation) | 50+ languages and real-time translation | RT |
@@ -662,10 +931,11 @@

 #### Use Cases

 | Industry | Example | Features |
 |----------|---------|----------|
-| **Healthcare** | [Medical Transcription](https://github.com/speechmatics/speechmatics-academy/tree/main/use-cases/01-medical-transcription-realtime) | Real-time, custom medical vocabulary |
+| **Healthcare** | [Medical Transcription](https://github.com/speechmatics/speechmatics-academy/tree/main/use-cases/01-medical-transcription-realtime) | Realtime, custom medical vocabulary |
 | **Media** | [Video Captioning](https://github.com/speechmatics/speechmatics-academy/tree/main/use-cases/02-video-captioning) | SRT generation, batch processing |
 | **Contact Center** | [Call Analytics](https://github.com/speechmatics/speechmatics-academy/tree/main/use-cases/03-call-center-analytics) | Channel diarization, sentiment, topics |
 | **Business** | [AI Receptionist](https://github.com/speechmatics/speechmatics-academy/tree/main/use-cases/04-voice-agent-calendar) | LiveKit, Twilio SIP, Google Calendar |
+| **Seasonal** | [Santa Voice Agent](https://github.com/speechmatics/speechmatics-academy/tree/main/use-cases/05-santa-voice-agent) | LiveKit, Twilio SIP, ElevenLabs TTS, custom voice |

 #### Migration Guides

 | From | Guide | Status |
@@ -706,18 +976,21 @@ import asyncio
 from speechmatics.batch import AsyncClient, TranscriptionConfig, FormatType

 async def main():
-    async with AsyncClient(api_key="API_KEY") as client:
-        job = await client.submit_job(
-            "audio.wav",
-            transcription_config=TranscriptionConfig(language="en")
-        )
-        result = await client.wait_for_completion(job.id, format_type=FormatType.TXT)
-        print(result)
+    client = AsyncClient(api_key="API_KEY")
+
+    job = await client.submit_job(
+        "audio.wav",
+        transcription_config=TranscriptionConfig(language="en")
+    )
+    result = await client.wait_for_completion(job.id, format_type=FormatType.TXT)
+    print(result)
+
+    await client.close()

 asyncio.run(main())
 ```

-**Full Migration Guides:** [Batch Migration Guide](https://github.com/speechmatics/speechmatics-python-sdk/blob/main/sdk/batch/MIGRATION.md) • [Real-time Migration Guide](https://github.com/speechmatics/speechmatics-python-sdk/blob/main/sdk/rt/MIGRATION.md)
+**Full Migration Guides:** [Batch Migration Guide](https://github.com/speechmatics/speechmatics-python-sdk/blob/main/sdk/batch/MIGRATION.md) • [Realtime Migration Guide](https://github.com/speechmatics/speechmatics-python-sdk/blob/main/sdk/rt/MIGRATION.md)

 ---

@@ -737,29 +1010,37 @@ load_dotenv()

 async def main():
     api_key = os.getenv("SPEECHMATICS_API_KEY")
-
-    async with AsyncClient(api_key=api_key) as client:
-        # Add medical terminology for better accuracy
-        job = await client.submit_job(
-            "patient_interview.wav",
-            transcription_config=TranscriptionConfig(
-                language="en",
-                additional_vocab=[
-                    {"content": "hypertension"},
-                    {"content": "metformin"},
-                    {"content": "echocardiogram"},
-                    {"content": "MRI", "sounds_like": ["M R I"]},
-                    {"content": "CT scan", "sounds_like": ["C T scan"]}
-                ]
-            )
+    client = AsyncClient(api_key=api_key)
+
+    # Use medical domain for better accuracy with clinical terminology
+    job = await client.submit_job(
+        "patient_interview.wav",
+        transcription_config=TranscriptionConfig(
+            language="en",
+            domain="medical",
+            additional_vocab=[
+                {"content": "hypertension"},
+                {"content": "metformin"},
+                {"content": "echocardiogram"},
+                {"content": "MRI", "sounds_like": ["M R I"]},
+                {"content": "CT scan", "sounds_like": ["C T scan"]}
+            ]
        )
+    )

-        result = await client.wait_for_completion(job.id)
-        print(f"Transcript:\n{result.transcript_text}")
+    result = await client.wait_for_completion(job.id)
+    print(f"Transcript:\n{result.transcript_text}")
+
+    await client.close()

 asyncio.run(main())
 ```

+**Installation:**
+```bash
+pip install speechmatics-batch python-dotenv
+```
+
 ### Voice Agents & Conversational AI

 Build Alexa-like experiences with real-time transcription and speaker detection.
@@ -819,6 +1100,16 @@ async def main():

 asyncio.run(main())
 ```

+**Installation:**
+```bash
+pip install speechmatics-voice speechmatics-rt python-dotenv pyaudio
+```
+
+
+
+📂 More Use Cases • Click to explore Call Center, Healthcare, Media & Entertainment, Education, and Meetings examples
+
+
+

 ### Call Center Analytics

 Transcribe calls with speaker diarization, sentiment analysis, and topic detection.

@@ -840,48 +1131,56 @@ load_dotenv()

 async def main():
     api_key = os.getenv("SPEECHMATICS_API_KEY")
-
-    async with AsyncClient(api_key=api_key) as client:
-        config = JobConfig(
-            type=JobType.TRANSCRIPTION,
-            transcription_config=TranscriptionConfig(
-                language="en",
-                diarization="speaker"
-            ),
-            sentiment_analysis_config=SentimentAnalysisConfig(),
-            topic_detection_config=TopicDetectionConfig(),
-            summarization_config=SummarizationConfig(
-                content_type="conversational",
-                summary_length="brief"
-            )
+    client = AsyncClient(api_key=api_key)
+
+    config = JobConfig(
+        type=JobType.TRANSCRIPTION,
+        transcription_config=TranscriptionConfig(
+            language="en",
+            diarization="speaker"
+        ),
+        sentiment_analysis_config=SentimentAnalysisConfig(),
+        topic_detection_config=TopicDetectionConfig(),
+        summarization_config=SummarizationConfig(
+            content_type="conversational",
+            summary_length="brief"
        )
+    )

-        job = await client.submit_job("call_recording.wav", config=config)
-        result = await client.wait_for_completion(job.id)
+    job = await client.submit_job("call_recording.wav", config=config)
+    result = await client.wait_for_completion(job.id)

-        # Print results
-        print(f"Transcript:\n{result.transcript_text}\n")
+    # Print results
+    print(f"Transcript:\n{result.transcript_text}\n")

-        if result.sentiment_analysis:
-            sentiment = result.sentiment_analysis.get('sentiment', 'neutral')
-            score = result.sentiment_analysis.get('score', 0)
-            print(f"Sentiment: {sentiment} (score: {score})")
+    if result.sentiment_analysis:
+        segments = result.sentiment_analysis.get("segments", [])
+        counts = {"positive": 0, "negative": 0, "neutral": 0}
+        for seg in segments:
+            sentiment = seg.get("sentiment", "").lower()
+            if sentiment in counts:
+                counts[sentiment] += 1
+        overall = max(counts, key=counts.get)
+        print(f"Sentiment: {overall.capitalize()}")
+        print(f"Breakdown: {counts['positive']} positive, {counts['neutral']} neutral, {counts['negative']} negative")

-        if result.topics and 'summary' in result.topics:
-            overall = result.topics['summary']['overall']
-            topics = [topic for topic, count in overall.items() if count > 0]
-            print(f"Topics: {', '.join(topics)}")
+    if result.topics and 'summary' in result.topics:
+        overall = result.topics['summary']['overall']
+        topics = [topic for topic, count in overall.items() if count > 0]
+        print(f"Topics: {', '.join(topics)}")

-        if result.summary:
-            print(f"Summary: {result.summary.get('content')}")
+    if result.summary:
+        print(f"Summary: {result.summary.get('content')}")
+
+    await client.close()

 asyncio.run(main())
 ```

-
-
-📂 More Use Cases • Click to explore Healthcare, Media & Entertainment, Education, and Meetings examples
-
-
+**Installation:**
+```bash
+pip install speechmatics-batch python-dotenv
+```

 ### Media & Entertainment

 Add captions, create searchable archives, generate clips from keywords.

@@ -896,25 +1195,32 @@ load_dotenv()

 async def main():
     api_key = os.getenv("SPEECHMATICS_API_KEY")
+    client = AsyncClient(api_key=api_key)

-    async with AsyncClient(api_key=api_key) as client:
-        job = await client.submit_job(
-            "movie.mp4",
-            transcription_config=TranscriptionConfig(language="en")
-        )
+    job = await client.submit_job(
+        "movie.mp4",
+        transcription_config=TranscriptionConfig(language="en")
+    )
+
+    # Get SRT captions
+    captions = await client.wait_for_completion(job.id, format_type=FormatType.SRT)

-        # Get SRT captions
-        captions = await client.wait_for_completion(job.id, format_type=FormatType.SRT)
+    # Save captions
+    with open("movie.srt", "w", encoding="utf-8") as f:
+        f.write(captions)

-        # Save captions
-        with open("movie.srt", "w", encoding="utf-8") as f:
-            f.write(captions)
+    print("Captions saved to movie.srt")

-        print("Captions saved to movie.srt")
+    await client.close()

 asyncio.run(main())
 ```

+**Installation:**
+```bash
+pip install speechmatics-batch python-dotenv
+```
+
 ### Education & E-Learning

 Auto-generate lecture transcripts, searchable course content, and accessibility captions.
@@ -928,33 +1234,40 @@ load_dotenv()

 async def main():
     api_key = os.getenv("SPEECHMATICS_API_KEY")
-
-    async with AsyncClient(api_key=api_key) as client:
-        job = await client.submit_job(
-            "lecture_recording.wav",
-            transcription_config=TranscriptionConfig(
-                language="en",
-                diarization="speaker",
-                enable_entities=True
-            )
+    client = AsyncClient(api_key=api_key)
+
+    job = await client.submit_job(
+        "lecture_recording.wav",
+        transcription_config=TranscriptionConfig(
+            language="en",
+            diarization="speaker",
+            enable_entities=True
        )
+    )

-        result = await client.wait_for_completion(job.id)
+    result = await client.wait_for_completion(job.id)

-        # Save transcript
-        with open("lecture_transcript.txt", "w", encoding="utf-8") as f:
-            f.write(result.transcript_text)
+    # Save transcript
+    with open("lecture_transcript.txt", "w", encoding="utf-8") as f:
+        f.write(result.transcript_text)

-        # Save SRT captions for accessibility
-        captions = await client.wait_for_completion(job.id, format_type=FormatType.SRT)
-        with open("lecture_captions.srt", "w", encoding="utf-8") as f:
-            f.write(captions)
+    # Save SRT captions for accessibility
+    captions = await client.wait_for_completion(job.id, format_type=FormatType.SRT)
+    with open("lecture_captions.srt", "w", encoding="utf-8") as f:
+        f.write(captions)

-        print("Transcript and captions saved")
+    print("Transcript and captions saved")
+
+    await client.close()

 asyncio.run(main())
 ```

+**Installation:**
+```bash
+pip install speechmatics-batch python-dotenv
+```
+
 ### Meetings

 Turn meetings into searchable, actionable summaries with action items and key decisions.
@@ -975,43 +1288,50 @@ load_dotenv()

 async def main():
     api_key = os.getenv("SPEECHMATICS_API_KEY")
+    client = AsyncClient(api_key=api_key)
+
+    config = JobConfig(
+        type=JobType.TRANSCRIPTION,
+        transcription_config=TranscriptionConfig(
+            language="en",
+            diarization="speaker"
+        ),
+        summarization_config=SummarizationConfig(),
+        auto_chapters_config=AutoChaptersConfig()
+    )

-    async with AsyncClient(api_key=api_key) as client:
-        config = JobConfig(
-            type=JobType.TRANSCRIPTION,
-            transcription_config=TranscriptionConfig(
-                language="en",
-                diarization="speaker"
-            ),
-            summarization_config=SummarizationConfig(),
-            auto_chapters_config=AutoChaptersConfig()
-        )
+    job = await client.submit_job("board_meeting.mp4", config=config)
+    result = await client.wait_for_completion(job.id)

-        job = await client.submit_job("board_meeting.mp4", config=config)
-        result = await client.wait_for_completion(job.id)
+    # Display results
+    print(f"Transcript:\n{result.transcript_text}\n")

-        # Display results
-        print(f"Transcript:\n{result.transcript_text}\n")
+    if result.summary:
+        summary = result.summary.get('content', 'N/A')
+        print(f"Summary:\n{summary}\n")

-        if result.summary:
-            summary = result.summary.get('content', 'N/A')
-            print(f"Summary:\n{summary}\n")
+    if result.chapters:
+        print("Chapters:")
+        for i, chapter in enumerate(result.chapters, 1):
+            print(f"{i}. {chapter}")

-        if result.chapters:
-            print("Chapters:")
-            for i, chapter in enumerate(result.chapters, 1):
-                print(f"{i}. {chapter}")
+    await client.close()

 asyncio.run(main())
 ```

+**Installation:**
+```bash
+pip install speechmatics-batch python-dotenv
+```
+
 ---

 ## Architecture

-### Real-time Flow
+### Realtime flow

 ```mermaid
 sequenceDiagram
@@ -1074,12 +1394,15 @@ export SPEECHMATICS_API_KEY="your_api_key_here"

 ```python
 import asyncio
 import os
+from dotenv import load_dotenv
 from speechmatics.batch import AsyncClient

+load_dotenv()
+
 async def main():
-    async with AsyncClient(api_key=os.getenv("SPEECHMATICS_API_KEY")) as client:
-        # Use client here
-        pass
+    client = AsyncClient(api_key=os.getenv("SPEECHMATICS_API_KEY"))
+    # Use client here
+    await client.close()

 asyncio.run(main())
 ```

@@ -1096,9 +1419,9 @@ from speechmatics.batch import AsyncClient, JWTAuth

 async def main():
     # Generate temporary token (expires after ttl seconds)
     auth = JWTAuth(api_key="your_api_key", ttl=3600)
-    async with AsyncClient(auth=auth) as client:
-        # Use client here
-        pass
+    client = AsyncClient(auth=auth)
+    # Use client here
+    await client.close()

 asyncio.run(main())
 ```

@@ -1122,13 +1445,13 @@ async def main():
         close_timeout=10.0  # Timeout for closing connection (seconds)
     )

-    async with AsyncClient(
+    client = AsyncClient(
         api_key="KEY",
         url="wss://eu2.rt.speechmatics.com/v2",
         conn_config=conn_config
-    ) as client:
-        # Use client here
-        pass
+    )
+    # Use client here
+    await client.close()

 asyncio.run(main())
 ```

@@ -1146,19 +1469,21 @@ from tenacity import retry, stop_after_attempt, wait_exponential
     wait=wait_exponential(multiplier=1, min=2, max=10)
 )
 async def transcribe_with_retry(audio_file):
-    async with AsyncClient(api_key="YOUR_API_KEY") as client:
-        try:
-            job = await client.submit_job(
-                audio_file,
-                transcription_config=TranscriptionConfig(language="en")
-            )
-            return await client.wait_for_completion(job.id)
-        except AuthenticationError:
-            print("Authentication failed")
-            raise
-        except (BatchError, JobError) as e:
-            print(f"Transcription failed: {e}")
-            raise
+    client = AsyncClient(api_key="YOUR_API_KEY")
+    try:
+        job = await client.submit_job(
+            audio_file,
+            transcription_config=TranscriptionConfig(language="en")
+        )
+        return await client.wait_for_completion(job.id)
+    except AuthenticationError:
+        print("Authentication failed")
+        raise
+    except (BatchError, JobError) as e:
+        print(f"Transcription failed: {e}")
+        raise
+    finally:
+        await client.close()

 asyncio.run(transcribe_with_retry("audio.wav"))
 ```

@@ -1176,12 +1501,12 @@ async def main():
         operation_timeout=300.0  # Default timeout for API operations
     )

-    async with AsyncClient(
+    client = AsyncClient(
         api_key="KEY",
         conn_config=conn_config
-    ) as client:
-        # Use client here
-        pass
+    )
+    # Use client here
+    await client.close()

 asyncio.run(main())
 ```

@@ -1198,9 +1523,9 @@ import asyncio
 from speechmatics.batch import AsyncClient

 async def main():
-    async with AsyncClient(api_key="YOUR_API_KEY") as client:
-        # Uses global SaaS endpoints automatically
-        pass
+    client = AsyncClient(api_key="YOUR_API_KEY")
+    # Uses global SaaS endpoints automatically
+    await client.close()

 asyncio.run(main())
 ```

@@ -1218,12 +1543,12 @@ import asyncio
 from speechmatics.batch import AsyncClient

 async def main():
-    async with AsyncClient(
+    client = AsyncClient(
         api_key="YOUR_LICENSE_KEY",
         url="http://localhost:9000/v2"
-    ) as client:
-        # Use on-premises instance
-        pass
+    )
+    # Use on-premises instance
+    await client.close()

 asyncio.run(main())
 ```

@@ -1269,20 +1594,22 @@ async def test():
     # Replace with your audio file path
     audio_file = "your_audio_file.wav"

+    client = AsyncClient(api_key=api_key)
     try:
-        async with AsyncClient(api_key=api_key) as client:
-            print("Submitting transcription job...")
-            job = await client.submit_job(audio_file, transcription_config=TranscriptionConfig(language="en"))
-            print(f"Job submitted: {job.id}")
+        print("Submitting transcription job...")
+        job = await client.submit_job(audio_file, transcription_config=TranscriptionConfig(language="en"))
+        print(f"Job submitted: {job.id}")

-            print("Waiting for completion...")
-            result = await client.wait_for_completion(job.id)
+        print("Waiting for completion...")
+        result = await client.wait_for_completion(job.id)

-            print(f"\nTranscript: {result.transcript_text}")
-            print("\nTest completed successfully!")
+        print(f"\nTranscript: {result.transcript_text}")
+        print("\nTest completed successfully!")
     except AuthenticationError as e:
         print(f"\nAuthentication Error: {e}")
+    finally:
+        await client.close()

 asyncio.run(test())
 EOF

From 5b22d7d23a29f892c9459dd991aff21cd6f54918 Mon Sep 17 00:00:00 2001
From: Edgars Adamoics
Date: Fri, 19 Dec 2025 15:39:16 +0000
Subject: [PATCH 5/6] Fixed broken status page link to README

---
 README.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 0c37093..39d0e8a 100644
--- a/README.md
+++ b/README.md
@@ -1626,6 +1626,7 @@ If this fails, [open an issue](https://github.com/speechmatics/speechmatics-pyth
 - **GitHub Discussions**: [Ask questions, share projects](https://github.com/speechmatics/speechmatics-python-sdk/discussions)
 - **Stack Overflow**: Tag with `speechmatics`
 - **Email Support**: devrel@speechmatics.com
+- **Status Page**: [status.speechmatics.com](https://status.speechmatics.com/)

 ### Show Your Support

@@ -1647,7 +1648,7 @@ This project is licensed under the MIT License - see the [LICENSE](https://githu
 - **Website**: [speechmatics.com](https://www.speechmatics.com)
 - **Documentation**: [docs.speechmatics.com](https://docs.speechmatics.com)
 - **Portal**: [portal.speechmatics.com](https://portal.speechmatics.com)
-- **Status Page**: [status.speechmatics.com](https://status.speechmatics.com)
+- **Status Page**: [status.speechmatics.com](https://status.speechmatics.com/)
 - **Blog**: [speechmatics.com/blog](https://www.speechmatics.com/blog)
 - **GitHub**: [@speechmatics](https://github.com/speechmatics)

From c3a577a63915450b17b49106844e1ef1c6d84e43 Mon Sep 17 00:00:00 2001
From: Edgars Adamoics
Date: Mon, 22 Dec 2025 12:02:06 +0000
Subject: [PATCH 6/6] Enhances README with examples and details

Updates the README to include more detailed examples for batch
transcription, realtime streaming, text-to-speech, and voice agent
functionalities. Adds sections on key features like speaker diarization,
custom dictionaries, audio intelligence, and translation with
corresponding code snippets. Provides information on framework
integrations, focusing on LiveKit Agents and Pipecat AI, improving user
understanding and adoption.
---
 README.md | 201 ++++++++++++++++++++++++++++++------------------------
 1 file changed, 110 insertions(+), 91 deletions(-)

diff --git a/README.md b/README.md
index 39d0e8a..d4ad600 100644
--- a/README.md
+++ b/README.md
@@ -20,7 +20,7 @@

 **Fully typed** with type definitions for all request params and response fields. **Modern Python** with async/await patterns, type hints, and context managers for production-ready code.

-**55+ Languages • Realtime & Batch • Custom Vocabularies • Speaker diarization • Speaker ID**
+**55+ Languages • Realtime & Batch • Custom vocabularies • Speaker diarization • Speaker ID**

 [Get API Key](https://portal.speechmatics.com/) • [Documentation](https://docs.speechmatics.com) • [Academy Examples](https://github.com/speechmatics/speechmatics-academy)

@@ -31,7 +31,7 @@

 ## 📋 Table of Contents

-- [Quick Start](#quick-start)
+- [Quickstart](#quick-start)
 - [Why Speechmatics?](#-why-speechmatics)
 - [Key Features](#-key-features)
 - [Use Cases](#-use-cases)
@@ -107,9 +107,23 @@ make install-dev
 pre-commit install
 ```

+**Simple and Pythonic!** Get your API key at [portal.speechmatics.com](https://portal.speechmatics.com/)
+
 ### Your First Transcription

+> [!NOTE]
+> All examples use `load_dotenv()` to load your API key from a `.env` file. Create a `.env` file with `SPEECHMATICS_API_KEY=your_key_here`.
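+>
+> For example, from a POSIX shell in your project root:
+>
+> ```bash
+> echo 'SPEECHMATICS_API_KEY=your_key_here' > .env
+> ```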
+
+There are several different methods of generating your first transcription:
+
+- **Batch Transcription** - transcribe audio files
+- **Realtime Streaming** - live microphone transcription
+- **Text-to-Speech** - convert text to audio
+- **Voice Agent** - real-time transcription with speaker diarization and turn detection
+
+#### Batch Transcription
+
+Transcribe audio files:

 ```python
 import asyncio
@@ -133,7 +147,9 @@ asyncio.run(main())
 pip install speechmatics-batch python-dotenv
 ```

-**Realtime streaming** - live microphone transcription:
+#### Realtime Streaming
+
+Live microphone transcription:

 ```python
 import asyncio
@@ -151,9 +167,11 @@ from speechmatics.rt import (

 load_dotenv()

+CHUNK_SIZE = 4096
+
 async def main():
     client = AsyncClient(api_key=os.getenv("SPEECHMATICS_API_KEY"))
-    mic = Microphone(sample_rate=16000, chunk_size=4096)
+    mic = Microphone(sample_rate=16000, chunk_size=CHUNK_SIZE)

@@ -177,7 +195,7 @@ async def main():
         print("Speak now...")

         while True:
-            await client.send_audio(await mic.read(4096))
+            await client.send_audio(await mic.read(CHUNK_SIZE))
     finally:
         mic.stop()
         await client.close()
@@ -191,7 +209,9 @@ asyncio.run(main())
 pip install speechmatics-rt python-dotenv pyaudio
 ```

-**Text-to-Speech** - convert text to audio:
+#### Text-to-Speech
+
+Convert text to audio:

 ```python
 import asyncio
@@ -205,7 +225,7 @@ async def main():
     client = AsyncClient(api_key=os.getenv("SPEECHMATICS_API_KEY"))

     response = await client.generate(
-        text="Hello! Welcome to Speechmatics text to speech.",
+        text="Hello! Welcome to Speechmatics Text-to-Speech",
         voice=Voice.SARAH,
         output_format=OutputFormat.WAV_16000
     )
@@ -225,7 +245,9 @@ asyncio.run(main())
 pip install speechmatics-tts python-dotenv
 ```

-**Voice agent** - real-time transcription with speaker diarization and turn detection:
+#### Voice Agent
+
+Real-time transcription with speaker diarization and turn detection:

 ```python
 import asyncio
@@ -272,8 +294,6 @@ asyncio.run(main())
 pip install speechmatics-voice speechmatics-rt python-dotenv pyaudio
 ```

-**Simple and Pythonic!** Get your API key at [portal.speechmatics.com](https://portal.speechmatics.com/)
-
 > [!TIP]
 > **Ready for more?** Explore 20+ working examples at **[Speechmatics Academy](https://github.com/speechmatics/speechmatics-academy)** — voice agents, integrations, use cases, and migration guides.

@@ -281,6 +301,12 @@ pip install speechmatics-voice speechmatics-rt python-dotenv pyaudio

 ## 🏆 Why Speechmatics?

+### Built for Production
+
+- **99.9% Uptime SLA** - Enterprise-grade reliability
+- **SOC 2 Type II Certified** - Your data is secure
+- **Flexible Deployment** - SaaS, on-premises, or air-gapped
+
 ### Accuracy That Matters

 When 1% WER improvement translates to millions in revenue, you need the best.

@@ -299,101 +325,72 @@ When 1% WER improvement translates to millions in revenue, you need the best.

-### Built for Production
-
-- **99.9% Uptime SLA** - Enterprise-grade reliability
-- **SOC 2 Type II Certified** - Your data is secure
-- **Flexible Deployment** - SaaS, on-premises, or air-gapped
-
 ---

 ## 🚀 Key Features

-### Realtime transcription
+### Realtime Transcription

 Stream audio and get instant transcriptions with ultra-low latency. Perfect for voice agents, live captioning, and conversational AI.

+
+**Code example:**
+
 ```python
 import asyncio
 import os
 from dotenv import load_dotenv
-from speechmatics.rt import (
-    AsyncClient,
-    ServerMessageType,
-    TranscriptionConfig,
-    TranscriptResult,
-    AudioFormat,
-    AudioEncoding,
-    Microphone,
-)
+from speechmatics.rt import Microphone
+from speechmatics.voice import VoiceAgentClient, VoiceAgentConfigPreset, AgentServerMessageType
 
 load_dotenv()
 
 async def main():
-    # Configure audio format for microphone input
-    audio_format = AudioFormat(
-        encoding=AudioEncoding.PCM_S16LE,
-        chunk_size=4096,
-        sample_rate=16000,
-    )
-
-    # Configure transcription with partials enabled
-    transcription_config = TranscriptionConfig(
-        language="en",
-        enable_partials=True,
+    # Voice SDK with adaptive turn detection - optimised for conversational AI
+    client = VoiceAgentClient(
+        api_key=os.getenv("SPEECHMATICS_API_KEY"),
+        config=VoiceAgentConfigPreset.load("adaptive")
     )
 
-    # Create client
-    client = AsyncClient(api_key=os.getenv("SPEECHMATICS_API_KEY"))
-
-    # Handle final transcripts
-    @client.on(ServerMessageType.ADD_TRANSCRIPT)
-    def handle_transcript(message):
-        result = TranscriptResult.from_message(message)
-        if result.metadata.transcript:
-            print(f"[final]: {result.metadata.transcript}")
+    # Handle transcription segments with speaker labels
+    @client.on(AgentServerMessageType.ADD_SEGMENT)
+    def on_segment(message):
+        for segment in message.get("segments", []):
+            print(f"[{segment.get('speaker_id', 'S1')}]: {segment.get('text', '')}")
 
-    # Handle partial transcripts (interim results)
-    @client.on(ServerMessageType.ADD_PARTIAL_TRANSCRIPT)
-    def handle_partial(message):
-        result = TranscriptResult.from_message(message)
-        if result.metadata.transcript:
-            print(f"[partial]: {result.metadata.transcript}")
+    # Detect when speaker finishes their turn
+    @client.on(AgentServerMessageType.END_OF_TURN)
+    def on_turn_end(message):
+        print("[END OF TURN]")
 
-    # Initialize microphone (requires: pip install pyaudio)
-    mic = 
Microphone(sample_rate=audio_format.sample_rate, chunk_size=audio_format.chunk_size) - if not mic.start(): - print("PyAudio not available - install with: pip install pyaudio") - return + mic = Microphone(sample_rate=16000, chunk_size=320) + mic.start() try: - # start_session() establishes WebSocket connection and starts transcription - await client.start_session( - transcription_config=transcription_config, - audio_format=audio_format, - ) - print("Speak now...") + await client.connect() + print("Voice agent ready. Speak now...") - # Stream audio continuously while True: - frame = await mic.read(audio_format.chunk_size) - await client.send_audio(frame) - except KeyboardInterrupt: - pass + await client.send_audio(await mic.read(320)) finally: mic.stop() - await client.close() + await client.disconnect() asyncio.run(main()) ``` **Installation:** ```bash -pip install speechmatics-rt python-dotenv pyaudio +pip install speechmatics-voice speechmatics-rt python-dotenv pyaudio ``` +
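The streaming examples above use 4096-byte chunks for plain transcription and 320-byte chunks for the voice agent. For 16-bit mono PCM, the chunk size maps directly onto how much latency each send adds. A small SDK-independent sketch of that arithmetic (the helper name is illustrative, not part of any Speechmatics package):

```python
# Arithmetic behind the chunk_size values used in the streaming examples.
# Not part of the SDK - pure PCM bookkeeping.

BYTES_PER_SAMPLE = 2  # PCM_S16LE: 16-bit signed samples, mono

def chunk_duration_ms(chunk_size_bytes: int, sample_rate: int = 16000) -> float:
    """Milliseconds of audio carried by one chunk of 16-bit mono PCM."""
    samples = chunk_size_bytes / BYTES_PER_SAMPLE
    return 1000.0 * samples / sample_rate

print(chunk_duration_ms(4096))  # 128.0 - fine for plain transcription
print(chunk_duration_ms(320))   # 10.0  - low latency for voice agents
```

Smaller chunks reduce the latency each send adds but increase message overhead, which is why the voice-agent example streams much smaller chunks than the transcription one.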
+ ### Batch Transcription Upload audio files and get accurate transcripts with speaker labels, timestamps, and more. +
+**Code example:**
+
 ```python
 import asyncio
 import os
@@ -434,9 +431,14 @@ asyncio.run(main())
 pip install speechmatics-batch python-dotenv
 ```
 
-### Speaker diarization
+
+ +### Speaker Diarization Automatically detect and label different speakers in your audio. +
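Diarization labels each word or segment with a speaker ID; downstream code typically merges consecutive results from the same speaker into utterances. A minimal post-processing sketch, using an illustrative `(speaker, text)` tuple shape rather than the SDK's actual result objects:

```python
# Merge word-level diarization output into per-speaker utterances.
# The (speaker, text) tuples are illustrative - adapt the field access
# to the result objects your SDK version returns.

def group_by_speaker(words: list) -> list:
    utterances = []
    for speaker, text in words:
        if utterances and utterances[-1][0] == speaker:
            # Same speaker as the previous word: extend the utterance.
            utterances[-1] = (speaker, utterances[-1][1] + " " + text)
        else:
            utterances.append((speaker, text))
    return utterances

words = [("S1", "hello"), ("S1", "there"), ("S2", "hi"),
         ("S1", "how"), ("S1", "are"), ("S1", "you")]
print(group_by_speaker(words))
# [('S1', 'hello there'), ('S2', 'hi'), ('S1', 'how are you')]
```

The same fold works for realtime segments if you apply it incrementally as results arrive.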
+**Code example:**
+
 ```python
 import asyncio
 import os
@@ -481,9 +483,14 @@ asyncio.run(main())
 pip install speechmatics-batch python-dotenv
 ```
 
-### Custom dictionary
+
+
+### Custom Dictionary

-Add domain-specific terms, names, and acronyms for perfect accuracy.
+Add domain-specific terms, names, and acronyms to improve recognition accuracy.
+
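Custom dictionary entries are supplied to the transcription config as an `additional_vocab` list of objects with a `content` field and optional `sounds_like` pronunciations. A small helper sketch for assembling that payload (the helper itself is illustrative, not an SDK function):

```python
# Build the additional_vocab payload for a Speechmatics transcription config.
# The "content"/"sounds_like" keys follow the documented custom dictionary
# format; build_additional_vocab itself is an illustrative helper.

def build_additional_vocab(terms: dict) -> list:
    vocab = []
    for content, sounds_like in terms.items():
        entry = {"content": content}
        if sounds_like:  # omit sounds_like when no pronunciation hints given
            entry["sounds_like"] = sounds_like
        vocab.append(entry)
    return vocab

vocab = build_additional_vocab({
    "Speechmatics": [],              # plain term, no pronunciation hints
    "gnocchi": ["nyohki", "nokey"],  # term plus how it may sound
})
print(vocab)
# [{'content': 'Speechmatics'}, {'content': 'gnocchi', 'sounds_like': ['nyohki', 'nokey']}]
```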
+**Code example:**
+
 ```python
 import asyncio
 import os
@@ -583,9 +590,14 @@ asyncio.run(main())
 pip install speechmatics-rt python-dotenv pyaudio
 ```
+
+ ### 55+ Languages Native models for major languages, not just multilingual Whisper. +
+**Code example:**
+
 ```python
 import asyncio
 import os
@@ -615,13 +627,13 @@ asyncio.run(main())
 pip install speechmatics-batch python-dotenv
 ```
-
-📂 More Features • Click to explore Audio Intelligence and Translation examples - -
+
### Audio Intelligence -Get sentiment, topics, summaries, and more. +Get sentiment, topics, summaries, and chapters from your audio. + +
+**Code example:**
 
 ```python
 import asyncio
@@ -677,8 +689,13 @@ asyncio.run(main())
 pip install speechmatics-batch python-dotenv
 ```
+
+ ### Translation -Transcribe and translate simultaneously to 50+ languages. +Transcribe and translate simultaneously to multiple languages. + +
+**Code example:**
+
 ```python
 import asyncio
@@ -731,6 +748,8 @@ pip install speechmatics-batch python-dotenv
 
 ## 🔌 Framework Integrations
 
+For more integration examples including Django, Next.js, and production patterns, visit the [Speechmatics Academy](https://github.com/speechmatics/speechmatics-academy).
+
 ### LiveKit Agents (Voice Assistants)
 
 Build real-time voice assistants with [LiveKit Agents](https://github.com/livekit/agents) - a framework for building voice AI applications with WebRTC.
 
@@ -797,14 +816,14 @@ pip install livekit-agents livekit-plugins-speechmatics livekit-plugins-openai l
 ```
 
 **Key Features:**
-- real-time WebRTC audio streaming
+- Real-time WebRTC audio streaming
 - Speechmatics STT with speaker diarization
 - Configurable LLM and TTS providers
 - Voice Activity Detection (VAD)
 
 ### Pipecat AI (Voice Agents)
 
 Build real-time voice bots with [Pipecat](https://github.com/pipecat-ai/pipecat) - a framework for voice and multimodal conversational AI.
 
 ```python
 import asyncio
@@ -878,13 +897,11 @@ pip install pipecat-ai[speechmatics, openai] pyaudio
 ```
 
 **Key Features:**
-- real-time STT with speaker diarization
+- Real-time STT with speaker diarization
 - Natural-sounding TTS with multiple voices
 - Interruption handling (users can interrupt bot responses)
 - Works with any LLM provider (OpenAI, Anthropic, etc.)
 
-For more integration examples including Django, Next.js, and production patterns, visit the [Speechmatics Academy](https://github.com/speechmatics/speechmatics-academy).
- --- ## 📚 Documentation @@ -896,7 +913,7 @@ Each SDK package includes detailed documentation: | Package | Documentation | Description | |---------|---------------|-------------| | **speechmatics-batch** | [README](./sdk/batch/README.md) • [Migration Guide](./sdk/batch/MIGRATION.md) | Async batch transcription | -| **speechmatics-rt** | [README](./sdk/rt/README.md) • [Migration Guide](./sdk/rt/MIGRATION.md) | Realtime streaming | +| **speechmatics-rt** | [README](./sdk/rt/README.md) • [Migration Guide](./sdk/rt/MIGRATION.md) | Realtime Streaming | | **speechmatics-voice** | [README](./sdk/voice/README.md) | Voice agent SDK | | **speechmatics-tts** | [README](./sdk/tts/README.md) | Text-to-speech | @@ -953,9 +970,9 @@ Comprehensive collection of working examples, integrations, and templates: [gith The legacy `speechmatics-python` package has been deprecated. This new SDK offers: -✅ **Cleaner API** - More Pythonic, better type hints -✅ **More features** - Sentiment, translation, summarization -✅ **Better docs** - Comprehensive examples and guides +- **Cleaner API** - More Pythonic, better type hints +- **More features** - Sentiment, translation, summarization +- **Better docs** - Comprehensive examples and guides ### Migration Guide @@ -1150,7 +1167,6 @@ async def main(): job = await client.submit_job("call_recording.wav", config=config) result = await client.wait_for_completion(job.id) - # Print results print(f"Transcript:\n{result.transcript_text}\n") if result.sentiment_analysis: @@ -1303,7 +1319,6 @@ async def main(): job = await client.submit_job("board_meeting.mp4", config=config) result = await client.wait_for_completion(job.id) - # Display results print(f"Transcript:\n{result.transcript_text}\n") if result.summary: @@ -1331,7 +1346,7 @@ pip install speechmatics-batch python-dotenv ## Architecture -### Realtime flow +### Realtime Flow ```mermaid sequenceDiagram @@ -1402,6 +1417,7 @@ load_dotenv() async def main(): client = 
AsyncClient(api_key=os.getenv("SPEECHMATICS_API_KEY")) # Use client here + # ... await client.close() asyncio.run(main()) @@ -1421,6 +1437,7 @@ async def main(): auth = JWTAuth(api_key="your_api_key", ttl=3600) client = AsyncClient(auth=auth) # Use client here + # ... await client.close() asyncio.run(main()) @@ -1451,6 +1468,7 @@ async def main(): conn_config=conn_config ) # Use client here + # ... await client.close() asyncio.run(main()) @@ -1506,6 +1524,7 @@ async def main(): conn_config=conn_config ) # Use client here + # ... await client.close() asyncio.run(main())