Skip to content

Neural interface system that uses machine learning to decode brain signals and generate immersive virtual worlds. Combines EEG/fMRI processing, deep learning models, and real-time procedural generation to transform neural activity into explorable dreamscapes.

License

Notifications You must be signed in to change notification settings

zaydabash/dreamwalk

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

18 Commits
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 

Repository files navigation

DreamWalk - Neural Dreamscape Generator

Neural interface system that uses machine learning to decode brain signals and generate immersive virtual worlds

DreamWalk is a neural interface platform that bridges neuroscience and artificial intelligence to create immersive virtual experiences. The system uses machine learning algorithms to decode real-time EEG and fMRI signals, translating brain activity patterns into dynamic, procedurally generated virtual worlds that you can explore in VR.

Built on state-of-the-art neuroscience research and deep learning models, DreamWalk combines real-time neural signal processing, emotion classification, and AI-powered world generation to transform consciousness into explorable environments.

DreamWalk Neural Interface

Features

  • Real-time EEG Processing: Live neural signal ingestion with artifact removal and feature extraction
  • AI-Powered World Generation: Neural decoders map brain activity to CLIP embeddings for world generation
  • Procedural Dreamscapes: Unity-powered environments that react to emotional states
  • VR Integration: Full OpenXR support with smooth locomotion and interaction
  • Texture Generation: Stable Diffusion-powered procedural textures and skyboxes
  • Narrative Layer: LLM-generated ambient narration based on neural patterns
  • Web Dashboard: Real-time monitoring and visualization of neural states
  • Mock Mode: Complete demo experience without hardware

Screenshots

Web Dashboard

Web Dashboard Real-time monitoring of neural signals, emotional states, and world generation

VR Environment

VR Dreamscape Immersive virtual world generated from neural activity

Neural Signal Processing

EEG Processing Real-time EEG signal analysis and feature extraction

World Generation

World Generation Procedural biome generation based on emotional state

Architecture

┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│   EEG Hardware  │ -> │  Signal Pipeline │ -> │  Neural Decoder │
│  (OpenBCI/LSL)  │    │   (MNE-Python)   │    │   (PyTorch)     │
└─────────────────┘    └──────────────────┘    └─────────────────┘
                                                        │
┌─────────────────┐    ┌──────────────────┐           │
│   Unity VR      │ <- │  WebSocket API   │    ┌─────────────────┐
│   World Gen     │    │  (FastAPI)       │    │  CLIP Latent    │
└─────────────────┘    └──────────────────┘    │  Embeddings     │
         │                       │              └─────────────────┘
         │                       │                       │
┌─────────────────┐    ┌──────────────────┐           │
│  Texture Gen    │    │   Narrative      │    ┌─────────────────┐
│ (Stable Diff)   │    │     Layer        │    │  World State    │
└─────────────────┘    │    (LLM)         │    │  Controller     │
                       └──────────────────┘    └─────────────────┘

Quick Start

Prerequisites

  • Docker & Docker Compose
  • Python 3.10+
  • Unity 2023.3+ (for VR builds)
  • NVIDIA GPU (recommended for texture generation)

1. Clone and Setup

git clone <your-repo>
cd dreamwalk
cp .env.example .env
docker-compose up -d

2. Run Mock Demo

# Start all services
docker-compose up

# In another terminal, start mock EEG data
python scripts/mock_eeg_stream.py

# Open Unity project and press Play
# Navigate to http://localhost:8000 for dashboard

3. Real EEG Setup (Optional)

# Install OpenBCI drivers
pip install pyOpenBCI

# Connect your EEG device
python scripts/real_eeg_stream.py --device /dev/ttyUSB0

Project Structure

dreamwalk/
├── services/
│   ├── signal-processor/     # EEG/fMRI preprocessing
│   ├── neural-decoder/       # Brain-to-latent mapping
│   ├── realtime-server/      # WebSocket API & state management
│   ├── texture-generator/    # Stable Diffusion service
│   ├── narrative-layer/      # LLM narration service
│   └── web-dashboard/        # Monitoring UI
├── unity/
│   └── DreamWalkVR/          # Unity VR project
├── datasets/
│   ├── synthetic/            # Generated training data
│   └── real/                 # Real EEG recordings
├── models/
│   ├── checkpoints/          # Trained model weights
│   └── exports/              # ONNX exports
├── scripts/
│   ├── mock_eeg_stream.py    # Demo data generator
│   ├── real_eeg_stream.py    # Hardware integration
│   └── train_decoder.py      # Model training
├── docker-compose.yml        # Service orchestration
├── .env.example              # Environment configuration
└── README.md

Core Components

1. Signal Processing Pipeline

  • Real-time EEG ingestion via LabStreamingLayer (LSL)
  • Artifact removal using Independent Component Analysis (ICA)
  • Feature extraction (band powers, Hjorth parameters, connectivity)
  • fMRI support for offline analysis of brain volumes

2. Neural Decoder

  • EEG → CLIP embedding mapping using transformer architecture
  • Emotion estimation (valence, arousal, dominance)
  • Motif detection for recurring neural patterns
  • Real-time inference with <100ms latency

3. World Generation Engine

  • Procedural terrain generation based on neural state
  • Dynamic lighting and weather systems
  • Particle effects and atmospheric rendering
  • Audio spatialization matching emotional states

4. VR Interface

  • OpenXR integration for cross-platform VR support
  • Smooth locomotion and teleportation
  • Real-time world morphing as neural state changes
  • Performance optimization for 72+ FPS

Research Applications

  • Sleep Research: Visualize REM activity as dreamscapes
  • Therapeutic Tools: Emotional state visualization for therapy
  • Artistic Creation: Generate surreal worlds from meditation states
  • Personal Reflection: Daily neural diary as explorable worlds

Development

Adding New Features

  1. Create feature branch: git checkout -b feature/your-feature
  2. Implement in appropriate service
  3. Add tests and documentation
  4. Submit pull request

Testing

Test Coverage: 70%+ (target: 95%)

The project includes comprehensive unit and integration tests with pytest. Code quality is enforced through flake8, pylint, black, and mypy.

# Run all tests with coverage
pytest tests/ -v --cov=services --cov-report=term-missing --cov-report=html

# Run specific service tests
cd services/signal-processor && python -m pytest

# Run only unit tests
pytest tests/ -m unit

# Run integration tests
pytest tests/ -m integration

# Run code quality checks
flake8 services/
pylint services/
black --check services/
isort --check-only services/
mypy services/

Test Structure:

  • tests/services/ - Unit tests for each service
  • tests/integration/ - End-to-end integration tests
  • tests/conftest.py - Shared pytest fixtures

CI/CD:

  • Automated testing on push and pull requests
  • Code quality checks (flake8, pylint, black, isort)
  • Security scanning (bandit, safety)
  • Coverage reporting (target: 95% coverage)

Training Models

# Train neural decoder
python scripts/train_decoder.py --config configs/decoder_config.yaml

# Generate synthetic data
python scripts/generate_synthetic_data.py --samples 10000

Performance Metrics

  • Latency: <100ms from EEG to world update
  • Throughput: 10Hz real-time processing
  • VR Performance: 72+ FPS on Quest 3
  • Accuracy: 85%+ emotion classification on test data
  • Test Coverage: 70%+ (target: 95%)

Security

Security is a top priority for DreamWalk, especially when handling sensitive neural data.

Security Features

  • Input Validation: All API endpoints validate and sanitize inputs using Pydantic models
  • Secret Management: Environment variables and secrets are never committed to version control
  • Dependency Scanning: Automated security scanning of dependencies in CI/CD pipeline
  • Code Quality: Static analysis tools (flake8, pylint, bandit) check for security issues
  • Data Privacy: Neural data is handled with privacy and compliance in mind

Security Considerations

Development Environment:

  • API endpoints do not require authentication (development only)
  • Services communicate over HTTP (use HTTPS in production)
  • CORS is open for development (restrict in production)

Production Recommendations:

  • Enable authentication on all API endpoints
  • Use HTTPS/TLS for all service communication
  • Configure proper CORS settings
  • Store secrets in secure secret management systems
  • Enable Redis AUTH and Kafka security
  • Implement rate limiting and input validation
  • Enable data encryption at rest and in transit

For detailed security information, see SECURITY.md.

About

DreamWalk represents a breakthrough in neural interface technology, combining cutting-edge neuroscience research with advanced machine learning and immersive computing. The system bridges the gap between human consciousness and digital environments, enabling users to explore their own neural patterns as living, breathing virtual worlds.

Research Foundation

Built on established neuroscience principles and state-of-the-art signal processing techniques, DreamWalk leverages:

  • Real-time EEG analysis with artifact removal and feature extraction
  • Neural decoding algorithms that map brain activity to semantic embeddings
  • Emotional state estimation using valence-arousal-dominance models
  • Procedural generation that responds to neural patterns in real-time

Technical Innovation

The system introduces several novel approaches:

  • EEG-to-CLIP mapping for semantic neural decoding
  • Real-time world morphing based on neural state changes
  • Multi-modal integration combining EEG, emotion estimation, and procedural generation
  • Scalable microservices architecture for production deployment

Applications

DreamWalk has potential applications across multiple domains:

  • Neuroscience Research: Visualizing brain activity patterns as explorable environments
  • Therapeutic Interventions: Creating healing environments based on emotional states
  • Creative Expression: Transforming meditation and creative states into artistic worlds
  • Education: Making neural processes tangible and interactive

Releases

Version 1.0.0 (Current)

Initial Release - January 2024

Features:

  • Complete microservices architecture with 6 core services
  • Real-time EEG signal processing with artifact removal
  • Neural decoder for EEG-to-CLIP embedding mapping
  • Emotion classification (valence, arousal, dominance)
  • Procedural world generation with 9 biome types
  • Unity VR integration with smooth world morphing
  • Web dashboard for real-time monitoring
  • Docker orchestration with Prometheus/Grafana monitoring
  • Mock data generation for immediate demo capabilities

Technical Stack:

  • Python 3.10+ with FastAPI and WebSocket support
  • PyTorch for neural network models
  • Stable Diffusion for texture generation
  • Unity 2023.3+ with OpenXR VR support
  • Docker Compose for service orchestration
  • Redis for caching and pub/sub messaging

System Requirements:

  • Docker & Docker Compose
  • Python 3.10+
  • Unity 2023.3+ (for VR components)
  • NVIDIA GPU (recommended for texture generation)
  • 8GB+ RAM
  • 10GB+ disk space

Upcoming Releases

Version 1.1.0 - Planned for Q2 2024

  • Real EEG hardware integration (OpenBCI)
  • fMRI support for offline analysis
  • Additional biome types and generation algorithms
  • Mobile app companion
  • Multiplayer dreamscape sharing

Version 1.2.0 - Planned for Q3 2024

  • Advanced neural motif detection
  • AI agent integration in virtual worlds
  • Haptic feedback support
  • Cloud deployment options
  • Enterprise authentication and security

Version 2.0.0 - Planned for Q4 2024

  • Real-time fMRI integration
  • Multi-modal neural data fusion
  • Advanced procedural narrative generation
  • Cross-platform mobile support
  • Research collaboration tools

Release Notes

For detailed release notes and changelog, see RELEASES.md.

Download

  • Latest Release: v1.0.0
  • Source Code: Clone the repository or download as ZIP
  • Docker Images: Available on Docker Hub (coming soon)

Code Quality

The project maintains high code quality standards:

  • Linting: flake8 and pylint for code style and quality
  • Formatting: black for code formatting, isort for import sorting
  • Type Checking: mypy for static type checking
  • Testing: pytest with 70%+ test coverage (target: 95%)
  • Security: bandit for security scanning, safety for dependency checks
  • CI/CD: Automated testing and quality checks on every commit

Running Quality Checks:

# Format code
black services/
isort services/

# Run linters
flake8 services/
pylint services/

# Type checking
mypy services/

# Security scanning
bandit -r services/
safety check

Contributing

Contributions are welcomed. Please see CONTRIBUTING.md for guidelines.

Before submitting a pull request:

  1. Ensure all tests pass: pytest tests/
  2. Run code quality checks: flake8 services/ && black --check services/
  3. Update tests if adding new features
  4. Update documentation as needed
  5. Ensure test coverage remains above 70%

License

MIT License - see LICENSE for details.

Acknowledgments

  • OpenBCI for EEG hardware support
  • MNE-Python for signal processing
  • Unity Technologies for VR platform
  • Stability AI for generative models

Built for the future of neural interfaces

About

Neural interface system that uses machine learning to decode brain signals and generate immersive virtual worlds. Combines EEG/fMRI processing, deep learning models, and real-time procedural generation to transform neural activity into explorable dreamscapes.

Topics

Resources

License

Contributing

Security policy

Stars

Watchers

Forks

Packages

No packages published