Resonant-Intelligence-Lab/retrocausality-game

PREDICTIVE

Adaptive Resonance Simulator

Don't react. Anticipate.



Overview

PREDICTIVE is an experimental adaptive resonance simulator that transforms traditional reactive gameplay into a deeply personal exploration of anticipatory consciousness. By learning how players perceive and anticipate time, the system creates a dynamic, emotionally responsive experience that adapts to individual cognitive patterns and emotional states.

This is not a game you play—it's a system that learns you.


Core Philosophy

Traditional games reward reaction speed. PREDICTIVE rewards anticipation—the ability to synchronize with events before they occur. Through continuous biofeedback loops and adaptive algorithms, the system evolves alongside the player, creating a unique temporal signature for each individual.

Key Principles

  • Anticipatory Activity over Reactive Response: Players must predict events before sensory confirmation
  • Emotional Resonance: The system responds to player emotional states through timing patterns and input rhythms
  • Adaptive Learning: Difficulty, visual cues, and audio layers adjust based on individual anticipation patterns
  • Minimalist Aesthetic: Clean, neural-inspired visuals that emphasize symbolic meaning over decoration
  • Temporal Perception Training: Develops heightened awareness of time perception and predictive cognition

Technical Architecture

System Components

```
┌─────────────────────────────────────────────────────────┐
│                     PREDICTIVE CORE                     │
├─────────────────────────────────────────────────────────┤
│                                                         │
│  ┌──────────────┐    ┌──────────────┐    ┌───────────┐  │
│  │   Temporal   │───▶│   Adaptive   │───▶│  Emotion  │  │
│  │    Engine    │    │  Resonance   │    │  Analyzer │  │
│  └──────────────┘    └──────────────┘    └───────────┘  │
│         │                   │                  │        │
│         ▼                   ▼                  ▼        │
│  ┌───────────────────────────────────────────────────┐  │
│  │            Predictive Field Generator             │  │
│  └───────────────────────────────────────────────────┘  │
│         │                   │                  │        │
│         ▼                   ▼                  ▼        │
│  ┌──────────┐    ┌──────────────┐    ┌──────────────┐   │
│  │  Visual  │    │    Audio     │    │    Haptic    │   │
│  │  System  │    │  Synthesis   │    │   Feedback   │   │
│  └──────────┘    └──────────────┘    └──────────────┘   │
│                                                         │
└─────────────────────────────────────────────────────────┘
```

1. Temporal Engine

The Temporal Engine manages event generation and timing prediction.

Features:

  • Dynamic event scheduling based on player performance
  • Temporal window analysis (anticipation vs. reaction)
  • Rhythm pattern detection and adaptation
  • Predictive timing model per player

Algorithm:

```typescript
interface TemporalProfile {
  averageAnticipationTime: number // Player's typical anticipation window
  rhythmVariance: number          // Consistency of timing
  learningRate: number            // How quickly the player adapts
  temporalSignature: number[]     // Unique timing fingerprint
}

function adaptEventTiming(profile: TemporalProfile, currentLevel: number): number {
  const baseInterval = 1500 - currentLevel * 100
  const personalizedInterval = baseInterval * (1 + profile.rhythmVariance)
  const anticipationAdjustment = profile.averageAnticipationTime * 0.3

  return Math.max(
    personalizedInterval - anticipationAdjustment,
    800 // Minimum interval
  )
}
```

2. Adaptive Resonance System

The core learning mechanism that creates personalized experiences.

Adaptive Mechanisms:

A. Visual Cue Adaptation

  • Early Stages: Prominent visual hints (expanding rings, geometric patterns)
  • Intermediate: Subtle wave distortions and 4D shape morphing
  • Advanced: Minimal cues, relying on internalized rhythm
  • Mastery: Pure anticipation with no external hints

```typescript
function calculateVisualHintStrength(
  accuracy: number,
  level: number,
  consecutiveSuccesses: number
): number {
  const baseStrength = 1.0
  const accuracyFactor = Math.max(0.2, 1 - accuracy)
  const levelFactor = Math.max(0.3, 1 - level * 0.05)
  const masteryFactor = Math.max(0.1, 1 - consecutiveSuccesses * 0.02)

  return baseStrength * accuracyFactor * levelFactor * masteryFactor
}
```

B. Audio Layer Adaptation

Dynamic soundscape that becomes more harmonious with better anticipation.

Audio Layers:

  1. Base Layer: Ambient drone (always present)
  2. Rhythm Layer: Adds with 60%+ accuracy
  3. Melody Layer: Adds with 75%+ accuracy
  4. Harmony Layer: Adds with 85%+ accuracy
  5. Resonance Layer: Adds with 95%+ accuracy (pure flow state)

```typescript
interface AudioProfile {
  baseFrequency: number
  harmonicRatio: number
  dissonanceLevel: number
  layerActivation: boolean[]
}

function synthesizeAdaptiveAudio(
  accuracy: number,
  anticipationTiming: number,
  emotionalState: EmotionalState
): AudioProfile {
  const harmonicRatio = 1 + accuracy * 0.5 // More accurate = more harmonious
  const dissonance = Math.max(0, 1 - accuracy) * emotionalState.tension

  return {
    baseFrequency: 220 + accuracy * 220, // A3 to A4 range
    harmonicRatio,
    dissonanceLevel: dissonance,
    layerActivation: [
      true,            // Base always on
      accuracy > 0.6,  // Rhythm
      accuracy > 0.75, // Melody
      accuracy > 0.85, // Harmony
      accuracy > 0.95  // Resonance
    ]
  }
}
```

3. Emotion Analyzer

Analyzes player emotional state through behavioral patterns.

Emotional Indicators:

  • Input Rhythm: Consistent vs. erratic timing patterns
  • Anticipation Confidence: Early vs. late anticipations
  • Recovery Pattern: How quickly player adapts after mistakes
  • Engagement Level: Sustained attention vs. distraction

```typescript
interface EmotionalState {
  tension: number     // 0-1: Relaxed to tense
  confidence: number  // 0-1: Uncertain to confident
  flow: number        // 0-1: Distracted to flow state
  frustration: number // 0-1: Calm to frustrated
}

function analyzeEmotionalState(
  recentAnticipations: number[],
  timingVariance: number,
  recoverySpeed: number
): EmotionalState {
  const avgAccuracy =
    recentAnticipations.reduce((a, b) => a + b, 0) / recentAnticipations.length

  return {
    tension: Math.min(1, timingVariance * 2),
    confidence: avgAccuracy,
    flow: Math.max(0, 1 - timingVariance) * avgAccuracy,
    frustration: Math.max(0, (1 - avgAccuracy) * (1 / recoverySpeed))
  }
}
```

Adaptive Responses to Emotional States:

| Emotional State | System Response |
|-----------------|-----------------|
| High Tension | Slower event timing, stronger visual cues, calming audio frequencies |
| Low Confidence | More forgiving anticipation windows, encouraging feedback particles |
| Flow State | Minimal cues, faster progression, harmonious audio layers |
| Frustration | Temporary difficulty reduction, visual breathing exercises, reset suggestion |
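The table above can be sketched as a single dispatch function. The response fields (`timingScale`, `hintScale`, `audioBaseHz`, `suggestReset`) and the thresholds are illustrative assumptions, not the project's actual tuning:

```typescript
// EmotionalState is repeated from the Emotion Analyzer section for self-containment.
interface EmotionalState {
  tension: number
  confidence: number
  flow: number
  frustration: number
}

// Hypothetical response parameters; names are illustrative.
interface AdaptiveResponse {
  timingScale: number   // Multiplier on event interval (>1 = slower events)
  hintScale: number     // Multiplier on visual hint strength
  audioBaseHz: number   // Base frequency for the ambient layer
  suggestReset: boolean // Whether to surface a gentle reset prompt
}

function selectAdaptiveResponse(state: EmotionalState): AdaptiveResponse {
  const response: AdaptiveResponse = {
    timingScale: 1.0,
    hintScale: 1.0,
    audioBaseHz: 330,
    suggestReset: false
  }
  if (state.tension > 0.7) {
    response.timingScale = 1.2 // Slower event timing
    response.hintScale = 1.5   // Stronger visual cues
    response.audioBaseHz = 220 // Calming lower frequency
  }
  if (state.confidence < 0.3) {
    response.hintScale = Math.max(response.hintScale, 1.3) // More forgiving cues
  }
  if (state.flow > 0.8) {
    response.timingScale = 0.9 // Faster progression to match the player's rhythm
    response.hintScale = 0.2   // Minimal cues
  }
  if (state.frustration > 0.7) {
    response.timingScale = Math.max(response.timingScale, 1.3) // Ease off
    response.suggestReset = true
  }
  return response
}
```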

4. Predictive Field Generator

The visual manifestation of player mastery—a growing aura that represents anticipatory power.

Field Properties:

  • Size: Grows with correct anticipations (max 100 units)
  • Opacity: Pulses with event synchronization
  • Color: Shifts from cyan (learning) → purple (mastery)
  • Particle Density: Increases with consecutive successes

```typescript
interface PredictiveField {
  radius: number          // 60-260 pixels
  opacity: number         // 0.3-0.8
  colorHue: number        // 180 (cyan) to 280 (purple)
  particleDensity: number // 0-100
  resonanceLevel: number  // 0-10 (visual intensity)
}

function updatePredictiveField(
  currentField: PredictiveField,
  anticipationAccuracy: number,
  consecutiveSuccesses: number
): PredictiveField {
  return {
    radius: 60 + anticipationAccuracy * 200,
    opacity: 0.3 + anticipationAccuracy * 0.5,
    colorHue: 180 + anticipationAccuracy * 100, // Cyan to purple
    particleDensity: Math.min(100, consecutiveSuccesses * 5),
    resonanceLevel: Math.floor(anticipationAccuracy * 10)
  }
}
```


Adaptive Learning Algorithms

Temporal Signature Learning

Each player develops a unique temporal signature—a fingerprint of their anticipatory patterns.

```typescript
class TemporalSignatureLearner {
  private history: number[] = []
  private readonly maxHistory = 100

  learn(anticipationTime: number): void {
    this.history.push(anticipationTime)
    if (this.history.length > this.maxHistory) {
      this.history.shift()
    }
  }

  getSignature(): TemporalSignature {
    const mean = this.calculateMean()
    const variance = this.calculateVariance()
    const distribution = this.calculateDistribution()

    return {
      preferredAnticipationWindow: mean,
      consistency: 1 - variance,
      distributionPattern: distribution,
      adaptationRate: this.calculateAdaptationRate()
    }
  }

  private calculateAdaptationRate(): number {
    // Measures how quickly the player improves over time
    const recentPerformance = this.history.slice(-20)
    const earlyPerformance = this.history.slice(0, 20)

    const recentAvg =
      recentPerformance.reduce((a, b) => a + b, 0) / recentPerformance.length
    const earlyAvg =
      earlyPerformance.reduce((a, b) => a + b, 0) / earlyPerformance.length

    return (recentAvg - earlyAvg) / this.history.length
  }
}
```

Difficulty Scaling Algorithm

Dynamic difficulty adjustment based on player performance and emotional state.

```typescript
function calculateAdaptiveDifficulty(
  currentLevel: number,
  accuracy: number,
  emotionalState: EmotionalState,
  temporalSignature: TemporalSignature
): DifficultySettings {
  // Base difficulty from level
  let difficulty = currentLevel

  // Adjust for accuracy (too easy or too hard)
  if (accuracy > 0.9) {
    difficulty += 0.5 // Increase challenge
  } else if (accuracy < 0.5) {
    difficulty -= 0.3 // Reduce challenge
  }

  // Emotional state modulation
  if (emotionalState.frustration > 0.7) {
    difficulty -= 0.5 // Reduce difficulty when frustrated
  }
  if (emotionalState.flow > 0.8) {
    difficulty += 0.3 // Increase difficulty in flow state
  }

  // Temporal signature adaptation
  const consistencyBonus = temporalSignature.consistency * 0.2
  difficulty += consistencyBonus

  return {
    eventInterval: calculateEventInterval(difficulty),
    visualHintStrength: calculateVisualHintStrength(accuracy, difficulty, 0),
    anticipationWindow: temporalSignature.preferredAnticipationWindow * 1.2,
    audioComplexity: Math.min(5, Math.floor(difficulty / 2))
  }
}
```


Visual Design System

Color Palette

The minimalist color system emphasizes neural and dimensional aesthetics:

| Color | Hex | Usage | Symbolic Meaning |
|-------|-----|-------|------------------|
| Deep Void | `#0F0F19` | Background | The unknown, potential |
| Cyan Pulse | `#00DCFF` | Primary accent | Anticipation, clarity |
| Electric Blue | `#64C8FF` | Secondary | Neural activity |
| Violet Resonance | `#9664FF` | Mastery indicator | Higher consciousness |
| Quantum Purple | `#C864FF` | Flow state | Transcendence |

Typography

  • Display Font: Orbitron (futuristic, geometric)
  • Monospace: JetBrains Mono (technical precision)
  • Usage: Minimal text, maximum symbolic communication

Visual Elements

4D Visualizations

  • Tesseract (Hypercube): Represents higher-dimensional thinking
  • 4D Simplex: Morphing geometric consciousness
  • Dimensional Waves: Multi-layered reality perception

Particle Systems

  • Anticipation Particles: Burst from the center on correct predictions
  • Field Particles: Orbit the predictive field
  • Event Particles: Spiral patterns for wave events

Predictive Field

  • Core: Bright cyan-to-purple gradient
  • Aura: Multi-layered glow with depth
  • Rings: Rotating dashed circles indicating synchronization

Emotional Responsiveness Strategies

1. Tension Detection & Response

Detection:

  • Rapid, inconsistent input timing
  • Decreased accuracy after mistakes
  • Shortened anticipation windows

Response:

  • Slow down event generation by 20%
  • Increase visual hint prominence
  • Shift audio to lower, calming frequencies (220-330 Hz)
  • Add breathing rhythm to visual waves

2. Flow State Recognition & Enhancement

Detection:

  • Consistent anticipation timing (variance < 50ms)
  • High accuracy (>85%) sustained over 10+ events
  • Optimal anticipation window usage

Response:

  • Reduce visual hints to minimum
  • Add all audio harmony layers
  • Increase particle density and field glow
  • Accelerate event timing to match player rhythm
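The detection criteria above (timing variance under 50 ms, accuracy above 85% sustained over 10+ events) can be sketched as a single predicate. Function and parameter names are illustrative assumptions:

```typescript
// Flow-state detection sketch: anticipation times in ms, accuracies in 0-1.
function isFlowState(anticipationTimesMs: number[], accuracies: number[]): boolean {
  if (accuracies.length < 10) return false // Need 10+ events

  // Sustained accuracy above 85%
  const avgAccuracy =
    accuracies.reduce((a, b) => a + b, 0) / accuracies.length
  if (avgAccuracy <= 0.85) return false

  // Timing variance below 50 ms (standard deviation of anticipation times)
  const mean =
    anticipationTimesMs.reduce((a, b) => a + b, 0) / anticipationTimesMs.length
  const variance =
    anticipationTimesMs.reduce((a, b) => a + (b - mean) ** 2, 0) /
    anticipationTimesMs.length

  return Math.sqrt(variance) < 50
}
```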

3. Frustration Management

Detection:

  • Declining accuracy over time
  • Increased timing variance
  • Longer gaps between inputs (disengagement)

Response:

  • Offer a gentle reset suggestion
  • Temporarily reduce difficulty
  • Provide encouraging visual feedback (softer colors)
  • Introduce longer pauses between events
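The declining-accuracy signal can be sketched as a least-squares trend check; the slope threshold and minimum sample count are assumptions for illustration:

```typescript
// Least-squares slope of accuracy over event index (negative = declining).
function accuracyTrend(accuracies: number[]): number {
  const n = accuracies.length
  const meanX = (n - 1) / 2
  const meanY = accuracies.reduce((a, b) => a + b, 0) / n
  let num = 0
  let den = 0
  for (let i = 0; i < n; i++) {
    num += (i - meanX) * (accuracies[i] - meanY)
    den += (i - meanX) ** 2
  }
  return den === 0 ? 0 : num / den
}

// Flag frustration when accuracy declines steadily over enough events.
function isFrustrated(accuracies: number[], slopeThreshold = -0.02): boolean {
  return accuracies.length >= 5 && accuracyTrend(accuracies) < slopeThreshold
}
```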

4. Confidence Building

Detection:

  • An improving accuracy trend
  • Decreasing timing variance
  • Earlier anticipations (growing confidence)

Response:

  • Gradually remove visual cues
  • Introduce more complex event patterns
  • Increase audio richness
  • Expand the predictive field more rapidly

Technical Stack

  • Framework: Next.js 16 (App Router)
  • Rendering: HTML5 Canvas API
  • Audio: Web Audio API
  • State Management: React hooks + refs
  • Styling: Tailwind CSS v4
  • Typography: Orbitron, JetBrains Mono
  • Deployment: Vercel

Playing PREDICTIVE

  1. Begin Synchronization: Start the experience
  2. Observe: Watch for subtle visual hints—ripples, waves, geometric distortions
  3. Anticipate: Press SPACE or CLICK before events occur
  4. Synchronize: Find your rhythm and enter flow state
  5. Transcend: Watch your predictive field grow as you master anticipation
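Step 3 implies a timing window that separates anticipation from reaction. A minimal sketch, assuming illustrative window sizes (a 400 ms anticipation window before the event and a 150 ms grace period after it):

```typescript
// Classify an input against a scheduled event; window sizes are assumptions.
type InputResult = "anticipation" | "reaction" | "miss"

function classifyInput(
  inputTimeMs: number,
  eventTimeMs: number,
  anticipationWindowMs = 400 // How early an input still counts as anticipation
): InputResult {
  const delta = inputTimeMs - eventTimeMs
  if (delta >= -anticipationWindowMs && delta < 0) return "anticipation" // Before the event
  if (delta >= 0 && delta <= 150) return "reaction" // Sensory-confirmed response
  return "miss" // Too early or too late
}
```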

Research & Inspiration

Neuroscience

  • Predictive Processing Theory: The brain as a prediction machine
  • Temporal Perception: How we experience and anticipate time
  • Flow State Research: Optimal experience and peak performance

Game Design

  • Rez (Tetsuya Mizuguchi): Synesthesia and rhythm
  • Sound Shapes: Audio-visual synchronization
  • Inside (Playdead): Minimalist environmental storytelling

Philosophy

  • Anticipatory Systems (Robert Rosen): Systems that contain predictive models
  • Temporal Consciousness: The phenomenology of time perception
  • Embodied Cognition: Mind-body integration in perception

Contributing

PREDICTIVE is an experimental research project exploring anticipatory consciousness and adaptive systems. Contributions are welcome in the following areas:

  • Temporal learning algorithms
  • Emotional state detection methods
  • Audio synthesis techniques
  • Visual effect systems
  • Accessibility improvements
  • Performance optimization

License

MIT License - See LICENSE file for details


Contact

For research inquiries, collaboration, or feedback:


PREDICTIVE — An Adaptive Resonance Simulator
Built with anticipation
