I bridge the gap between raw data infrastructure and strategic business insights. Currently completing my BSc in Data Science (top 10% of my class, 9.0 GPA) while building automated, scalable data solutions.
I define myself as a Data & Analytics Specialist because I don't just move data; I make it useful. My approach combines the rigor of Data Engineering (robust pipelines, data quality, CI/CD) with the exploratory nature of Data Science.
- 🔭 Focus: Designing automated ETL/ELT pipelines, Data Warehousing, and decision-ready dashboards.
- 💼 Experience: In my latest project, diagnosed a 23.2% server error rate across 1,000 requests and identified 9 critical endpoints with 3,700ms+ latency.
- 🌱 Learning: Deepening my knowledge of Apache Airflow, GCP, and DuckDB for modern data stacks (a minimal DAG sketch follows).
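To make that learning list concrete, here is a minimal sketch of the kind of pipeline I practice with. It assumes Airflow 2.4+ with the TaskFlow API; the DAG name `daily_sales_etl` and the task bodies are illustrative placeholders, not a production pipeline.

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def daily_sales_etl():
    @task
    def extract() -> list[dict]:
        # Stand-in for pulling raw records from an API or a GCS bucket.
        return [{"order_id": 1, "amount": 120.0}]

    @task
    def load(rows: list[dict]) -> None:
        # Stand-in for a warehouse write (e.g., BigQuery on GCP).
        print(f"loaded {len(rows)} rows")

    load(extract())


daily_sales_etl()
```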
Focusing on modern Data Engineering and Analytics architecture.
| Domain | Tools |
|---|---|
| Languages | |
| Engineering & Cloud | |
| DevOps & CI/CD | |
| Analytics & BI | |
Turning raw access logs into actionable infrastructure insights.
- The Challenge: Production API showing degraded performance with no visibility into root causes.
- The Solution: Built a SQL-based diagnostic pipeline on DuckDB plus an interactive Looker Studio dashboard (sketched after this list).
- Impact: 📉 Identified 9 of 11 endpoints with >20% error rate, pinpointing 3 critical services causing 35.78% of all 5xx errors.
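The repo holds the real pipeline; below is a hedged, minimal version of the core diagnostic query, run through DuckDB's Python API. The file name `access_logs.csv` and the columns `endpoint`, `status_code`, and `latency_ms` are assumptions standing in for the actual log schema.

```python
import duckdb  # pandas is also assumed, for the .df() call

con = duckdb.connect()
report = con.execute("""
    SELECT
        endpoint,
        count(*) AS requests,
        avg(CASE WHEN status_code >= 500 THEN 1 ELSE 0 END) AS error_rate,
        avg(latency_ms) AS avg_latency_ms
    FROM read_csv_auto('access_logs.csv')  -- assumed raw log export
    GROUP BY endpoint
    -- keep only endpoints above the 20% error-rate threshold
    HAVING avg(CASE WHEN status_code >= 500 THEN 1 ELSE 0 END) > 0.20
    ORDER BY error_rate DESC
""").df()
print(report)
```

DuckDB fits this job because the whole analysis runs in-process against the raw file, with no warehouse to provision before asking the first question.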
Solving the "stale data" problem for business stakeholders.
- The Challenge: Sales team spent 2 hours/day manually merging CSVs, leading to errors and delays.
- The Solution: Built an end-to-end Python ETL pipeline with Parquet optimization and data quality checks (see the sketch below).
- Impact: 📉 Reduced reporting latency by 97% by replacing the manual merge with an automated daily run.
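As a rough illustration of the merge-validate-publish flow (not the production code), here is a sketch assuming pandas with pyarrow installed; the paths and the `order_id` column are hypothetical.

```python
from pathlib import Path

import pandas as pd


def run_pipeline(raw_dir: Path, out_path: Path) -> None:
    # Extract: merge the per-team CSV drops that were previously combined by hand.
    frames = [pd.read_csv(p) for p in sorted(raw_dir.glob("*.csv"))]
    sales = pd.concat(frames, ignore_index=True)

    # Data quality gates: fail fast rather than publish bad numbers.
    if sales["order_id"].isna().any():
        raise ValueError("null order_id values in input")
    if sales.duplicated("order_id").any():
        raise ValueError("duplicate order_id values in input")

    # Load: columnar Parquet cuts file size and downstream read time.
    sales.to_parquet(out_path, index=False)


run_pipeline(Path("data/raw"), Path("data/sales.parquet"))
```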
Collaborating with MIT researchers to analyze economic survival in Argentina.
- The Tech: NLP for survey processing, geographic segmentation, and statistical analysis (a small segmentation sketch follows).
- Impact: Surfaced trends that informed recommendations projecting a potential 18% sales improvement for local businesses.
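By way of illustration only, the geographic segmentation step looked roughly like a group-by over regions; the column names `region` and `revenue_change` are hypothetical stand-ins for the actual survey fields.

```python
import pandas as pd

# Toy survey frame; the real data came from processed survey responses.
survey = pd.DataFrame({
    "region": ["CABA", "CABA", "Córdoba", "Mendoza"],
    "revenue_change": [-0.12, 0.05, -0.30, 0.10],
})

# Segment by geography and summarize: mean change and sample size per region.
by_region = survey.groupby("region")["revenue_change"].agg(["mean", "count"])
print(by_region.sort_values("mean"))
```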