- `glq_analytics.py` - Main entry point for all operations
- `requirements.txt` - Python dependencies
- `.env.template` / `.env.example` - Environment configuration templates
- `.gitignore` - Git ignore patterns
Contains the main application logic organized by functionality.
- `config.py` - Configuration management and validation
- `blockchain_client.py` - GLQ Chain RPC client with connection pooling
- `influxdb_client.py` - InfluxDB client optimized for blockchain data
- `historical_processor.py` - Historical blockchain data sync
- `historical_clean.py` - Clean version of the historical processor
- `realtime_monitor.py` - Real-time blockchain monitoring
- `monitoring_service.py` - Web service for the monitoring dashboard
- `__init__.py` - Analytics module initialization
- `token_analytics.py` - ERC-20/721/1155 token transfer analytics
- `dex_analytics.py` - DEX swap and liquidity analytics
- `defi_analytics.py` - DeFi protocol interaction analytics
- `advanced_analytics.py` - Coordinated analytics processing
Entry point scripts for various operations.
- `full_sync_with_analytics.py` - Complete blockchain synchronization with analytics
- `start_realtime_monitor.py` - Start real-time command-line monitoring
- `start_monitor_service.py` - Start the web dashboard monitoring service
Comprehensive testing and validation scripts.
- `test_sync_setup.py` - Complete system connectivity and setup validation
- `test_setup.py` - Basic setup and connection testing
- `test_monitor_service.py` - Monitoring service testing
Project documentation and guides.
- `ANALYTICS.md` - Analytics capabilities and module documentation
- `FIX_SUMMARY.md` - Technical fix documentation
- `SYNC_SUMMARY.md` - Synchronization completion guide
- `CHANGELOG.md` - Version history and changes
- `CONTRIBUTING.md` - Development contribution guidelines
- `PROJECT_STRUCTURE.md` - This file
System configuration and schemas.
- `config.yaml` - Main application configuration
- `influxdb_schema.md` - InfluxDB database schema documentation
Example scripts and usage patterns (to be populated).
Runtime logs and debugging information (created at runtime).
```bash
# Test system setup
python glq_analytics.py test

# Run full blockchain sync
python glq_analytics.py sync

# Start real-time monitoring
python glq_analytics.py monitor

# Start web dashboard
python glq_analytics.py service
```

```bash
# Run tests directly
python tests/test_sync_setup.py

# Run sync directly
python scripts/full_sync_with_analytics.py

# Start monitoring directly
python scripts/start_realtime_monitor.py
```

All scripts automatically add the correct paths:

- The `src/` directory is added to the Python path for core modules
- Relative imports work correctly from any script location
```python
# From any script in scripts/ or tests/
import sys
from pathlib import Path

# Add src to path
sys.path.insert(0, str(Path(__file__).parent.parent / "src"))

# Now can import core modules
from core.config import Config
from core.blockchain_client import BlockchainClient
```

- Configuration Management: Centralized config loading and validation
- Blockchain Connectivity: High-performance RPC client with pooling
- Database Integration: Optimized InfluxDB client for time-series data
- Historical Processing: Batch processing for blockchain history
- Real-time Processing: Live blockchain monitoring and processing
- Service Management: Web services and health monitoring
- Token Analysis: ERC-20/721/1155 transfer tracking and metrics
- DEX Analysis: Decentralized exchange activity analysis
- DeFi Analysis: DeFi protocol interaction tracking
- Advanced Coordination: Orchestrated multi-module analytics
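The coordinated analytics described above can be sketched as a coordinator fanning out each block to independent modules. The class names below mirror the modules listed earlier, but the `process()` interface and block shape are assumptions for illustration, not the project's actual API:

```python
# Hypothetical sketch of coordinated multi-module analytics.
# The process() method and the block dict layout are illustrative only.

class TokenAnalytics:
    def process(self, block):
        # e.g. count token transfer events found in the block
        return {"transfers": len(block.get("transfers", []))}

class DexAnalytics:
    def process(self, block):
        # e.g. count DEX swap events found in the block
        return {"swaps": len(block.get("swaps", []))}

class AdvancedAnalytics:
    """Coordinator: runs every enabled module over a block and merges results."""

    def __init__(self, modules):
        self.modules = modules

    def process_block(self, block):
        return {type(m).__name__: m.process(block) for m in self.modules}

coordinator = AdvancedAnalytics([TokenAnalytics(), DexAnalytics()])
summary = coordinator.process_block({"transfers": [{}, {}], "swaps": [{}]})
# summary == {"TokenAnalytics": {"transfers": 2}, "DexAnalytics": {"swaps": 1}}
```

Because each module only sees the block and returns its own result dict, modules stay independent and can be added or removed without touching the coordinator.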
- Place core functionality in the appropriate `src/` subdirectory
- Add tests in `tests/` with corresponding names
- Create scripts in `scripts/` if new entry points are needed
- Update documentation in `docs/`
- Always use `Path(__file__).parent.parent` to reference the project root
- Use relative imports within `src/` modules
- Add proper path setup in standalone scripts
- Add new config options to `config/config.yaml`
- Document schema changes in `config/influxdb_schema.md`
- Update environment template files for new variables
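As a rough illustration of the kind of options that might live in `config/config.yaml` (the keys below are hypothetical examples, not the project's actual schema):

```yaml
# Illustrative shape only; the real keys live in config/config.yaml.
blockchain:
  rpc_url: "http://localhost:8545"   # hypothetical endpoint
  batch_size: 100
analytics:
  token_analytics: true
  dex_analytics: true
  defi_analytics: false
```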
- Each analytics module is independent and can be enabled/disabled
- Processing modules can be run separately or together
- Configuration-driven feature flags for easy management
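Configuration-driven feature flags could be read along these lines; the `analytics` section layout and boolean-per-module convention are assumed for illustration, not taken from the real config schema:

```python
# Hypothetical feature-flag lookup; the "analytics" config layout is an
# assumption for this sketch, not the project's actual configuration.

def enabled_modules(config):
    """Return the names of analytics modules switched on in the config."""
    flags = config.get("analytics", {})
    return sorted(name for name, on in flags.items() if on)

cfg = {"analytics": {"token_analytics": True,
                     "dex_analytics": True,
                     "defi_analytics": False}}
print(enabled_modules(cfg))  # prints ['dex_analytics', 'token_analytics']
```

A dispatcher can then instantiate only the modules whose flags are on, which is what makes enabling or disabling a module a one-line config change.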
- Core clients support connection pooling and batching
- Analytics modules process in parallel where possible
- Database writes are optimized with batch operations
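Batched database writes along the lines described above could be sketched like this; the `write_points` client method is a stand-in placeholder, not the actual API of the project's InfluxDB client:

```python
# Sketch of write batching. The client interface (write_points) is assumed
# for illustration; the real InfluxDB client in src/core/ may differ.

class BatchWriter:
    def __init__(self, client, batch_size=500):
        self.client = client
        self.batch_size = batch_size
        self._buffer = []

    def add(self, point):
        """Buffer a point; flush automatically once the batch is full."""
        self._buffer.append(point)
        if len(self._buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        """Write all buffered points in one call, then clear the buffer."""
        if self._buffer:
            self.client.write_points(self._buffer)
            self._buffer = []
```

One write per batch rather than per point keeps database round-trips proportional to `batch_size`, which is what makes high-throughput historical sync practical.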
- Comprehensive logging throughout all modules
- Health check endpoints in monitoring services
- Progress tracking and performance metrics
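Progress and throughput tracking of the kind listed above might look like the following sketch (class and method names are illustrative, not the project's actual implementation):

```python
# Illustrative progress/throughput tracker for a long-running sync.
import time

class ProgressTracker:
    def __init__(self, total_blocks):
        self.total = total_blocks
        self.done = 0
        self._start = time.monotonic()

    def update(self, n=1):
        """Record n more processed blocks."""
        self.done += n

    def percent(self):
        return 100.0 * self.done / self.total

    def blocks_per_second(self):
        elapsed = max(time.monotonic() - self._start, 1e-9)
        return self.done / elapsed

tracker = ProgressTracker(total_blocks=200)
tracker.update(50)
print(f"{tracker.percent():.1f}% complete")  # prints 25.0% complete
```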