Data Quality Monitoring powered by DuckDB
Koality is a Python library for data quality monitoring (DQM) using DuckDB. It provides configurable checks that validate data in tables and can persist results to database tables for monitoring and alerting.
We would like to thank Norbert Maager, the original inventor of Koality.
Warning
This library is a work in progress!
Breaking changes should be expected until a 1.0 release, so version pinning is recommended.
For comprehensive documentation, visit the Koality Documentation.
- Configurable Checks: Define data quality checks via simple YAML configuration files
- DuckDB-Powered: Fast, in-process analytics with DuckDB's in-memory engine
- External Database Support: Currently supports Google Cloud BigQuery via DuckDB extensions
- Multiple Check Types: Null ratios, regex matching, value sets, duplicates, counts, match rates, outlier detection, and more
- Flexible Filtering: Dynamic filtering system with column/value pairs for targeted checks
- Result Persistence: Store check results in database tables for historical tracking
- CLI Tool: Easy-to-use command-line interface for running checks
- Threshold Validation: Compare check results against configurable lower/upper bounds
| Database | Status |
|---|---|
| DuckDB (in-memory) | ✅ Fully supported |
| Google Cloud BigQuery | ✅ Fully supported |
Koality uses DuckDB as its query engine. External databases are accessed through DuckDB extensions (e.g., the BigQuery extension for Google Cloud).
External databases may need custom handling in `execute_query`!
| Check Type | Description |
|---|---|
| `NullRatioCheck` | Share of NULL values in a column |
| `RegexMatchCheck` | Share of values matching a regex pattern |
| `ValuesInSetCheck` | Share of values matching a predefined set |
| `RollingValuesInSetCheck` | Values in set over a rolling time window |
| `DuplicateCheck` | Number of duplicate values in a column |
| `CountCheck` | Row count or distinct value count |
| `AverageCheck` | Average of a column |
| `MaxCheck` | Maximum of a column |
| `MinCheck` | Minimum of a column |
| `MatchRateCheck` | Match rate between two tables after joining |
| `RelCountChangeCheck` | Relative count change vs. historical average |
| `IqrOutlierCheck` | Detect outliers using the interquartile range |
| `OccurrenceCheck` | Check value occurrence frequency |
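To make one of these concrete: the interquartile-range rule behind a check like `IqrOutlierCheck` can be sketched in a few lines of plain Python. This is the standard Tukey IQR method, not Koality's actual implementation:

```python
import statistics

def iqr_outliers(values, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's rule)."""
    q1, _, q3 = statistics.quantiles(values, n=4)  # quartile cut points
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]

print(iqr_outliers([10, 12, 11, 13, 12, 95]))  # → [95]
```

A check of this shape would then compare the number of flagged values against the configured thresholds.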
Install via pip:

```shell
pip install koality
```

Or add to your `pyproject.toml`:

```toml
[project]
dependencies = [
    "koality>=0.1.0",
]
```

```yaml
# koality_config.yaml
name: My Data Quality Checks

# Database connection setup - executed before running checks
database_setup: |
  INSTALL bigquery;
  LOAD bigquery;
  ATTACH 'project=${PROJECT_ID}' AS bq (TYPE bigquery, READ_ONLY);

# Prefix for table references (use attached database name)
database_accessor: bq

defaults:
  result_table: bq.dqm.results
  log_path: dqm_failures.txt
  filters:
    partition_date:
      column: date
      value: yesterday
      type: date

check_bundles:
  - name: null_ratio_checks
    defaults:
      check_type: NullRatioCheck
      table: bq.dataset.orders
      lower_threshold: 0
      upper_threshold: 0.05
    checks:
      - check_column: customer_id
      - check_column: order_date
      - check_column: total_amount
```

For in-memory DuckDB (local testing or CSV/Parquet files):
```yaml
database_setup: |
  CREATE TABLE orders AS SELECT * FROM 'data/orders.parquet';
  CREATE TABLE results (check_name VARCHAR, result DOUBLE, timestamp TIMESTAMP);

database_accessor: ""
```

```shell
# Pass database setup variables via CLI
koality run --config_path koality_config.yaml -dsv PROJECT_ID=my-gcp-project

# Or via environment variable
DATABASE_SETUP_VARIABLES="PROJECT_ID=my-gcp-project" koality run --config_path koality_config.yaml
```

Results are persisted to your configured result table, and failures are logged to the specified log path.
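Conceptually, filling the `${PROJECT_ID}` placeholder in `database_setup` is plain string substitution over `KEY=VALUE` pairs. A minimal sketch of that idea using Python's `string.Template` (an assumption about the mechanism, not Koality's actual code):

```python
from string import Template

# Hypothetical sketch: parse "KEY=VALUE" pairs and substitute them
# into the database_setup SQL. Not Koality's implementation.
raw_pairs = ["PROJECT_ID=my-gcp-project"]
variables = dict(pair.split("=", 1) for pair in raw_pairs)

database_setup = Template(
    "ATTACH 'project=${PROJECT_ID}' AS bq (TYPE bigquery, READ_ONLY);"
)
print(database_setup.substitute(variables))
# ATTACH 'project=my-gcp-project' AS bq (TYPE bigquery, READ_ONLY);
```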
Koality uses a hierarchical configuration system where more specific settings override general ones:
1. `defaults`: Base settings for all checks (result table, persistence, filters)
2. `check_bundles.defaults`: Bundle-level defaults (check type, table, thresholds)
3. `checks`: Individual check configurations (specific columns, custom thresholds)
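The precedence can be pictured as successive dict merges with the most specific level applied last. A conceptual sketch (keys borrowed from the example config above; the merge itself is an assumption about the behavior, not Koality's code):

```python
# Conceptual sketch of the override order.
defaults = {"result_table": "bq.dqm.results", "upper_threshold": 0.1}
bundle_defaults = {"check_type": "NullRatioCheck", "upper_threshold": 0.05}
check = {"check_column": "customer_id"}

# Later dicts win: global defaults < bundle defaults < individual check.
effective = {**defaults, **bundle_defaults, **check}
print(effective["upper_threshold"])  # 0.05 — the bundle overrides the global
```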
Apply dynamic filters to check specific data subsets using the structured filters syntax:
```yaml
defaults:
  filters:
    partition_date:
      column: created_at
      value: yesterday
      type: date  # Required for rolling checks; auto-parses date values
    shop_id:
      column: shop_id
      value: SHOP01
      type: identifier  # Marks this as the identifier filter for result grouping
    revenue:
      column: total_revenue
      value: 1000
      operator: ">="  # Supports =, !=, >, >=, <, <=, IN, NOT IN, LIKE, NOT LIKE
```

| Property | Description |
|---|---|
| `column` | Database column name to filter on (optional in defaults, required after merge) |
| `value` | Filter value (optional in defaults, required after merge) |
| `type` | `date`, `identifier`, or `other` (default). Only one filter of each type is allowed |
| `operator` | SQL operator: `=`, `!=`, `>`, `>=`, `<`, `<=`, `IN`, `NOT IN`, `LIKE`, `NOT LIKE` |
| `parse_as_date` | If `true`, parse the value as a date (for non-date-type filters) |
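To make the filter semantics concrete, here is a hedged sketch of how such column/operator/value triples could be rendered into a SQL `WHERE` clause. This is illustrative only; Koality's actual SQL generation may differ:

```python
def render_filter(column, value, operator="="):
    """Render one filter as a SQL predicate (illustrative sketch only)."""
    if operator in ("IN", "NOT IN"):
        rendered = "(" + ", ".join(repr(v) for v in value) + ")"
    elif isinstance(value, str):
        rendered = repr(value)  # naive quoting, acceptable for a sketch
    else:
        rendered = str(value)
    return f"{column} {operator} {rendered}"

clauses = [
    render_filter("created_at", "2024-01-15"),
    render_filter("total_revenue", 1000, ">="),
]
print("WHERE " + " AND ".join(clauses))
# WHERE created_at = '2024-01-15' AND total_revenue >= 1000
```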
Koality automatically parses date values when `type: date` is set:

- Relative dates: `today`, `yesterday`, `tomorrow`
- ISO dates: `2024-01-15`, `20240115`
- With inline offset: `yesterday-2` (2 days before yesterday), `today+1` (tomorrow)
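The relative tokens with inline offsets can be parsed with a short regular expression. A hedged sketch of that parsing, assuming the semantics listed above (this is not Koality's actual parser):

```python
import re
from datetime import date, timedelta

# Sketch of relative-date parsing as described above; not Koality's code.
_BASE = {"today": 0, "yesterday": -1, "tomorrow": 1}

def parse_relative(token, ref=None):
    """Parse 'yesterday-2' / 'today+1' style tokens into a date."""
    ref = ref or date.today()
    m = re.fullmatch(r"(today|yesterday|tomorrow)([+-]\d+)?", token)
    if not m:
        raise ValueError(f"not a relative date: {token!r}")
    offset = _BASE[m.group(1)] + int(m.group(2) or 0)
    return ref + timedelta(days=offset)

print(parse_relative("yesterday-2", ref=date(2024, 1, 15)))  # 2024-01-12
```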
Contributions are welcome! Please feel free to submit issues and pull requests on GitHub.
This project is licensed under the MIT License - see the LICENSE.md file for details.
