A Comprehensive Accessibility Framework for Generative AI
AccessibleDeepAgent is a complete, production-ready accessibility toolkit addressing the top accessibility challenges in generative AI systems. Built for the Humane Intelligence Accessibility Bias Bounty, it provides modular, platform-agnostic accessibility features that can be integrated into any AI application.
We've dramatically expanded AccessibleDeepAgent with 5 new comprehensive accessibility modules addressing ALL major accessibility challenges in generative AI:
- VisualAccessibilityAgent - Alt text, color contrast, screen readers, color blindness
- CognitiveAccessibilityAgent - Text simplification, reading levels, summaries, cognitive load
- ContentBiasDetector - Bias detection, inclusive language, fairness analysis
- MultimodalOutputAgent - TTS, audio descriptions, structured formats, Braille
- NeurodiversitySupportAgent - Autism, ADHD, dyslexia, sensory support
Try all features instantly in our Accessibility Features Demo Notebook - works in Google Colab, Jupyter, and any environment!
- Overview
- Top Accessibility Challenges Addressed
- Key Features
- Quick Start
- Installation
- Comprehensive Feature Guide
- Architecture
- Platform Integration
- DeepAgent Framework
- ADK: Neuroadaptive Accessibility
- Fairness & Bias Mitigation
- Documentation
- Contributing
- Citation
- License
AccessibleDeepAgent serves three primary purposes:
-
Comprehensive Accessibility Toolkit: 5 production-ready agents covering visual, cognitive, fairness, multi-modal, and neurodiversity accessibility
-
DeepAgent Framework: Research-grade multi-tool reasoning agent for benchmarks (ToolBench, GAIA, API-Bank, etc.)
-
ADK (Accessibility Development Kit): Neuroadaptive accessibility system with emotion AI bias mitigation
✅ Complete Coverage - Addresses ALL major accessibility challenges ✅ Production-Ready - Fully tested, documented, ready for integration ✅ Platform-Agnostic - Works with any AI model or platform ✅ Demonstrable - Complete Jupyter notebook with real examples ✅ Research-Backed - Built on accessibility research and WCAG standards ✅ Modular - Use just what you need, or integrate everything
- ❌ Problem: AI-generated images lack alt text
- ✅ Solution: Automated alt text generation with quality scoring (3 detail levels)
- ❌ Problem: Poor color contrast in AI outputs
- ✅ Solution: Real-time WCAG AA/AAA contrast checking
- ❌ Problem: Color-blind users can't distinguish AI visualizations
- ✅ Solution: Color blindness simulation for 4 types (protanopia, deuteranopia, tritanopia, achromatopsia)
- ❌ Problem: Complex language excludes users with cognitive disabilities
- ✅ Solution: Automatic text simplification to target reading levels
- ❌ Problem: Information overload from lengthy AI responses
- ✅ Solution: Smart summarization (brief, medium, detailed) and chunking
- ❌ Problem: No way to assess content complexity
- ✅ Solution: Reading level analysis (Flesch-Kincaid) and cognitive load prediction
- ❌ Problem: AI perpetuates gender and demographic bias
- ✅ Solution: Real-time bias detection with inclusive alternatives (50+ terms)
- ❌ Problem: Emotion AI discriminates against alexithymic users
- ✅ Solution: Bidirectional verification (40% false negative reduction)
- ❌ Problem: Stereotypical associations in AI outputs
- ✅ Solution: Pattern matching for stereotypes with recommendations
- ❌ Problem: Content only available in one format
- ✅ Solution: Export to TTS (with SSML), audio descriptions, JSON/XML/Markdown, Braille
- ❌ Problem: Screen readers struggle with AI-generated content
- ✅ Solution: Optimization and semantic structure analysis
- ❌ Problem: No print-friendly formats
- ✅ Solution: Dyslexia-friendly formatting with spacing and font recommendations
- ❌ Problem: Idioms and ambiguity confuse autistic users
- ✅ Solution: Literal language conversion and explicit structure
- ❌ Problem: Long text blocks overwhelm ADHD users
- ✅ Solution: Chunking with focus aids and time estimates
- ❌ Problem: Complex text is hard for dyslexic users
- ✅ Solution: Font recommendations, spacing optimization, formatting guides
- ❌ Problem: Sensory overload from animations/colors
- ✅ Solution: Risk assessment and mitigation strategies
from adk.agents.visual_accessibility_agent import VisualAccessibilityAgent
visual_agent = VisualAccessibilityAgent()
# Generate alt text at 3 detail levels
alt_text = await visual_agent.generate_alt_text(
"A bar chart showing quarterly sales growth",
detail_level="medium"
)
# Check WCAG color contrast
contrast = await visual_agent.check_color_contrast(
foreground_color=(0, 0, 0),
background_color=(255, 255, 255),
text_size=14.0
)
# Returns: contrast_ratio, meets_aa, meets_aaa, wcag_level
# Simulate color blindness
colorblind = await visual_agent.simulate_colorblindness(
color_rgb=(220, 20, 60),
colorblind_type="deuteranopia"
)
# Optimize for screen readers
sr_check = await visual_agent.optimize_for_screen_reader(
content,
has_headings=True,
has_alt_text=True
)Features:
- Alt text generation (brief, medium, detailed) with quality scoring
- WCAG 2.1 AA/AAA color contrast verification
- Color blindness simulation (4 types)
- Screen reader compatibility analysis
- Visual complexity assessment
from adk.agents.cognitive_accessibility_agent import CognitiveAccessibilityAgent
cognitive_agent = CognitiveAccessibilityAgent()
# Analyze reading level
analysis = await cognitive_agent.analyze_reading_level(text)
# Returns: grade_level, reading_ease, category, interpretation
# Simplify to target level
simplified = await cognitive_agent.simplify_text(
complex_text,
target_level="middle_school"
)
# Generate summaries
summary = await cognitive_agent.generate_summary(
long_text,
summary_type="brief",
max_sentences=2
)
# Predict cognitive load
load = await cognitive_agent.predict_cognitive_load(
content,
has_images=5,
has_interactions=3
)
# Convert to plain language (federal guidelines)
plain = await cognitive_agent.convert_to_plain_language(text)
# Chunk content for progressive disclosure
chunks = await cognitive_agent.chunk_content(text, chunk_size=200)Features:
- Text simplification with reading level targeting
- Reading level analysis (Flesch-Kincaid, Reading Ease)
- Content summarization (brief, medium, detailed)
- Cognitive load prediction and recommendations
- Progressive disclosure chunking
- Plain language conversion
from adk.agents.content_bias_detector import ContentBiasDetector
bias_detector = ContentBiasDetector()
# Comprehensive bias detection
result = await bias_detector.detect_bias(
ai_content,
check_types=["gender", "inclusive_language", "stereotypes", "cultural"]
)
# Returns: bias_score, fairness_score, issues, recommendations
# Get inclusive alternatives
alternatives = await bias_detector.suggest_inclusive_alternatives(content)
# Generate full report with grade
report = await bias_detector.generate_bias_report(content)
# Returns: grade (A-F), summary, actionable_stepsFeatures:
- Gender bias detection and balance analysis
- 50+ non-inclusive term replacements
- Stereotype pattern matching
- Cultural sensitivity checking
- Representation analysis
- Comprehensive bias reports with grades
from adk.agents.multimodal_output_agent import MultimodalOutputAgent
multimodal_agent = MultimodalOutputAgent()
# Prepare for text-to-speech
tts = await multimodal_agent.prepare_for_tts(
text,
speech_rate="medium",
add_pauses=True
)
# Returns: tts_text, ssml, estimated_duration
# Generate audio descriptions
audio_desc = await multimodal_agent.generate_audio_description(
visual_content,
detail_level="standard"
)
# Export to structured formats
json_output = await multimodal_agent.generate_structured_output(
content,
format_type="json",
include_metadata=True
)
# Generate print-friendly format
print_friendly = await multimodal_agent.generate_print_friendly(
content,
font_size=14,
line_spacing=1.5
)
# Prepare for Braille translation
braille = await multimodal_agent.generate_braille_ready(text, grade=2)Features:
- TTS preparation with SSML, abbreviation expansion, pause insertion
- Audio descriptions for visual content
- Structured format export (JSON, XML, YAML, Markdown)
- Print-friendly formatting (dyslexia-optimized)
- Braille-ready text preparation
from adk.agents.neurodiversity_support_agent import NeurodiversitySupportAgent
neurodiversity_agent = NeurodiversitySupportAgent()
# Autism-friendly adaptation
autism = await neurodiversity_agent.adapt_for_autism(content)
# Returns: adapted_content, autism_friendly_score, changes_made
# ADHD support with chunking
adhd = await neurodiversity_agent.adapt_for_adhd(
content,
chunk_size=150,
add_focus_aids=True
)
# Dyslexia formatting recommendations
dyslexia = await neurodiversity_agent.adapt_for_dyslexia(content)
# Returns: formatting_recommendations, challenges_identified
# Sensory overload detection
sensory = await neurodiversity_agent.detect_sensory_overload_risk(
content,
has_animations=True,
color_count=10
)
# Executive function support
ef_support = await neurodiversity_agent.provide_executive_function_support(
task_description
)
# Returns: checklist, time_estimates, aidsFeatures:
- Autism: Literal language, reduced ambiguity, explicit structure
- ADHD: Chunking, focus aids, time estimates, progress tracking
- Dyslexia: Font recommendations, spacing, word length analysis
- Sensory: Overload risk assessment and mitigation
- Executive function: Task breakdown, checklists, time management
# Clone repository
git clone https://github.com/Tuesdaythe13th/AccessibleDeepAgent.git
cd AccessibleDeepAgent
# Install dependencies
pip install -r requirements.txt
# Run demo notebook
jupyter notebook accessibility_features_demo.ipynbfrom adk.agents.visual_accessibility_agent import VisualAccessibilityAgent
from adk.agents.cognitive_accessibility_agent import CognitiveAccessibilityAgent
from adk.agents.content_bias_detector import ContentBiasDetector
# Initialize agents
visual_agent = VisualAccessibilityAgent()
cognitive_agent = CognitiveAccessibilityAgent()
bias_detector = ContentBiasDetector()
# Make your AI output accessible
ai_output = "Your AI-generated content here..."
# Check reading level
reading = await cognitive_agent.analyze_reading_level(ai_output)
print(f"Reading level: Grade {reading['grade_level']}")
# Simplify if needed
if reading['grade_level'] > 8:
simplified = await cognitive_agent.simplify_text(ai_output)
ai_output = simplified['simplified_text']
# Check for bias
bias = await bias_detector.detect_bias(ai_output)
print(f"Fairness score: {bias['fairness_score']:.2f}")
# Fix bias if detected
if bias['total_issues'] > 0:
fixed = await bias_detector.suggest_inclusive_alternatives(ai_output)
ai_output = fixed['inclusive_content']
# Prepare for TTS
tts = await multimodal_agent.prepare_for_tts(ai_output)
print(f"Ready for screen readers: {tts['ssml']}")- Python 3.8+
- pydantic
- torch (optional, for emotion classification)
- numpy (optional, for advanced features)
git clone https://github.com/Tuesdaythe13th/AccessibleDeepAgent.git
cd AccessibleDeepAgent
# Create virtual environment
python3 -m venv .venv
source .venv/bin/activate # Windows: .venv\Scripts\activate
# Install dependencies
pip install -r requirements.txtpip install pydanticAll accessibility agents work without ML dependencies!
# Additional dependencies for benchmark evaluation
pip install -r requirements-adk.txtSee the complete accessibility_features_demo.ipynb for:
- ✅ 40+ working code examples
- ✅ All features demonstrated end-to-end
- ✅ Google Colab compatible
- ✅ Copy-paste ready for your projects
- ✅ Detailed explanations and best practices
AccessibleDeepAgent/
├── src/adk/agents/
│ ├── visual_accessibility_agent.py # Visual features
│ ├── cognitive_accessibility_agent.py # Cognitive features
│ ├── content_bias_detector.py # Bias detection
│ ├── multimodal_output_agent.py # Multi-modal output
│ ├── neurodiversity_support_agent.py # Neurodiversity support
│ ├── core.py # Core coordination
│ └── ui_adaptation_agent.py # UI adaptations
├── src/adk/
│ ├── bidirectional_reasoning.py # Emotion AI fairness
│ ├── neuroadaptive_wrapper.py # Complete integration
│ └── utils/ # Shared utilities
├── accessibility_features_demo.ipynb # 🆕 Complete demo
└── src/ # DeepAgent framework
Use Individual Agents:
from adk.agents.cognitive_accessibility_agent import CognitiveAccessibilityAgent
cognitive_agent = CognitiveAccessibilityAgent()
result = await cognitive_agent.simplify_text(ai_output)Use Complete Wrapper:
from adk.neuroadaptive_wrapper import NeuroadaptiveWrapper
wrapper = NeuroadaptiveWrapper()
await wrapper.initialize()
result = await wrapper.process_interaction_with_emotion(...)class AccessibleChatbot:
def __init__(self):
self.cognitive_agent = CognitiveAccessibilityAgent()
self.bias_detector = ContentBiasDetector()
self.multimodal_agent = MultimodalOutputAgent()
async def generate_accessible_response(self, user_input, user_profile):
# Generate AI response
ai_response = await your_llm.generate(user_input)
# 1. Check for bias
bias_check = await self.bias_detector.detect_bias(ai_response)
if bias_check['total_issues'] > 0:
alt = await self.bias_detector.suggest_inclusive_alternatives(ai_response)
ai_response = alt['inclusive_content']
# 2. Simplify if needed
if user_profile.get('reading_level') == 'simple':
simp = await self.cognitive_agent.simplify_text(ai_response)
ai_response = simp['simplified_text']
# 3. Prepare multi-modal outputs
tts = await self.multimodal_agent.prepare_for_tts(ai_response)
return {
'text': ai_response,
'tts_ssml': tts['ssml'],
'fairness_score': bias_check['fairness_score']
}from fastapi import FastAPI
from adk.agents import CognitiveAccessibilityAgent, ContentBiasDetector
app = FastAPI()
cognitive = CognitiveAccessibilityAgent()
bias = ContentBiasDetector()
@app.post("/api/accessible")
async def make_accessible(content: str, features: List[str]):
result = {}
if "simplify" in features:
result['simplified'] = await cognitive.simplify_text(content)
if "bias" in features:
result['bias'] = await bias.detect_bias(content)
return resultAccessibleDeepAgent also includes a research-grade multi-tool reasoning agent for benchmark evaluation.
| Benchmark | Type | Status |
|---|---|---|
| ToolBench | Tool use | ✅ Supported |
| GAIA | QA with tools | ✅ Supported |
| API-Bank | API calling | ✅ Supported |
| RestBench | REST API use | ✅ Supported |
| ToolHop | Multi-hop tool use | ✅ Supported |
| ALFWorld | Embodied tasks | ✅ Supported |
| WebShop | Web navigation | ✅ Supported |
# Run on GAIA benchmark
python src/run_deep_agent.py \
--config_path ./config/base_config.yaml \
--dataset_name gaia \
--enable_tool_search \
--evalSee original sections below for full DeepAgent documentation.
The ADK (Accessibility Development Kit) provides real-time neuroadaptive accessibility with emotion AI bias mitigation.
- Real-time Signal Processing: Eye tracking, interaction patterns, mouse movement
- Cognitive State Estimation: Load, attention, fatigue, stress, comprehension
- Bidirectional Reasoning: 40% FNR reduction for alexithymic users
- BeTaL Testing: Automated fairness benchmark generation (5.8% gap vs 12.5% baseline)
- Memory System: Persistent profiles with mem0.ai
- UI Adaptation: Dynamic accessibility adjustments
import asyncio
from adk.agents.core import AccessibilityCoordinator
from adk.utils import SignalType
async def main():
coordinator = AccessibilityCoordinator()
await coordinator.initialize()
# Process interaction
raw_signals = [
(SignalType.EYE_TRACKING, 0.7, {}),
(SignalType.INTERACTION_TIMING, 0.65, {}),
]
result = await coordinator.process_user_interaction(
raw_signals=raw_signals,
user_id="user123",
content_to_refine="Your content..."
)
print(f"Cognitive Load: {result['cognitive_state']['cognitive_load']:.2f}")
await coordinator.close()
asyncio.run(main())Problem: Emotion AI discriminates against alexithymic users (40% higher false negatives)
Solution: Bidirectional verification instead of unidirectional classification
Audio → Encoder → Emotion → Decoder → Reconstructed Audio
↓ ↓
Embedding 1 ←── Contrastive Loss ──→ Embedding 2
Result: 40% reduction in false negative rate
LLM-guided benchmark generation achieving 5.8% fairness gap (vs 12.5% baseline)
Designer LLM → proposes scenarios
Student LLM → evaluated on fairness
Feedback Loop → optimizes for gaps
- 🆕 Accessibility Demo: accessibility_features_demo.ipynb - Complete feature showcase
- Main README: README.md - This file
- ADK Docs: src/adk/docs/README.md
- Bidirectional Reasoning: src/adk/docs/BIDIRECTIONAL_REASONING.md
- BeTaL Framework: src/adk/docs/BETAL.md
- ✅ WCAG 2.1 AA/AAA
- ✅ Federal Plain Language Guidelines
- ✅ Person-First Language Standards
- ✅ Section 508 Accessibility
- ✅ Flesch-Kincaid Readability
- AI Content Generators - Ensure all generated content is accessible
- Chatbots & Assistants - Adapt responses based on user profiles
- Educational Platforms - Simplify content to appropriate levels
- CMS Systems - Check content for bias before publishing
- Documentation - Generate multi-modal versions
- Customer Support - Provide accessible responses
We welcome contributions! Priority areas:
- Additional language support
- More bias detection patterns
- Enhanced neurodiversity features
- New output formats
- Accessibility testing tools
- Documentation improvements
See CONTRIBUTING.md for guidelines.
@software{accessibledeepagent2025,
title={AccessibleDeepAgent: Comprehensive Accessibility Framework for Generative AI},
author={Tuesday, ARTIFEX Labs},
year={2025},
url={https://github.com/Tuesdaythe13th/AccessibleDeepAgent},
note={Production-ready accessibility toolkit with fairness-focused AI agents}
}MIT License - see LICENSE file for details.
- Humane Intelligence Accessibility Bias Bounty - Motivating force
- WCAG & W3C - Accessibility standards
- Plain Language Community - Guidelines and best practices
- Neurodiversity Advocates - User-centered design insights
- mem0.ai - Memory system infrastructure
- OpenAI, Anthropic - LLM providers
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Email: general@artifex.fun
🚀 Active Development | ⭐ Production-Ready Accessibility Features
- ✅ NEW: 5 comprehensive accessibility agents (Visual, Cognitive, Bias, Multi-Modal, Neurodiversity)
- ✅ NEW: Complete Jupyter demo notebook with 40+ examples
- ✅ NEW: Platform-agnostic integration guides
- ✅ Bidirectional reasoning for emotion AI fairness
- ✅ BeTaL automated fairness testing
- ✅ Multi-benchmark evaluation harness
- Browser extension for signal collection
- Mobile app support
- Multi-language accessibility
- A/B testing framework
- Extended benchmark coverage
- User feedback integration
Making Generative AI Accessible to Everyone ♿️
Built with accessibility at the core. Production-ready. Platform-agnostic. Open source.