This is a sample project for Databricks, generated via cookiecutter.
To use this project, you need Python 3.X and either `pip` or `conda` for package management.
Install the project dependencies:

```bash
pip install -r unit-requirements.txt
pip install -e .
```

For local unit testing, please use pytest:

```bash
pytest tests/unit
```
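As a sketch, a unit test under `tests/unit` might look like the following. The `add_prefix` helper and its behavior are hypothetical stand-ins, not part of the generated project; replace them with functions from your own package (installed above via `pip install -e .`):

```python
# tests/unit/test_sample.py -- hypothetical example test.
# The function under test is a stand-in for your project code.

def add_prefix(name: str, prefix: str = "scored_") -> str:
    """Prepend a prefix to a column or table name."""
    return f"{prefix}{name}"


def test_add_prefix_default():
    assert add_prefix("loans") == "scored_loans"


def test_add_prefix_custom():
    assert add_prefix("loans", prefix="raw_") == "raw_loans"
```

Running `pytest tests/unit` discovers and executes every `test_*` function in files matching `test_*.py`.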
For an integration test on an interactive cluster, use the following command:

```bash
dbx execute --cluster-name=<name of interactive cluster> --job=lendingclub_scoring_dbx-sample-integration-test
```
For a test on an automated job cluster, use `launch` instead of `execute`:

```bash
dbx launch --job=lendingclub_scoring_dbx-sample-integration-test
```
`dbx` expects that the cluster used for interactive execution supports the `%pip` and `%conda` magic commands.

- Please configure your job in the `conf/deployment.json` file.
- To execute the code interactively, provide either `--cluster-id` or `--cluster-name`:

```bash
dbx execute \
    --cluster-name="<some-cluster-name>" \
    --job=job-name
```

Multiple users can also use the same cluster for development; libraries are isolated per execution context.
The next step is to configure your deployment objects. To keep this process easy and flexible, configuration is expressed in JSON. By default, the deployment configuration is stored in `conf/deployment.json`.
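As an illustration, a minimal `conf/deployment.json` might look like the sketch below. The job name matches the commands in this README, while the cluster settings and the entrypoint path are placeholder assumptions you should adapt to your workspace and package layout:

```json
{
  "default": {
    "jobs": [
      {
        "name": "lendingclub_scoring_dbx-sample",
        "new_cluster": {
          "spark_version": "7.3.x-cpu-ml-scala2.12",
          "node_type_id": "Standard_F4s",
          "num_workers": 1
        },
        "max_retries": 0,
        "spark_python_task": {
          "python_file": "path/to/your/entrypoint.py"
        }
      }
    ]
  }
}
```

Each top-level key (here `default`) names an environment, and each entry in `jobs` becomes a Databricks job definition on deploy.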
To start a new deployment, launch the following command:

```bash
dbx deploy
```

You can optionally provide a `requirements.txt` via the `--requirements` option; all listed requirements will be automatically added to the job definition.
After the deploy, launch the job via the following command:

```bash
dbx launch --job=lendingclub_scoring_dbx-sample
```
Please set the following secrets or environment variables. Follow the documentation for GitHub Actions or for Azure DevOps Pipelines:

- `DATABRICKS_HOST`
- `DATABRICKS_TOKEN`
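For local development, the same variables can be exported in your shell before invoking `dbx`. The workspace URL and token below are placeholder values, not real credentials:

```shell
# Placeholder values -- substitute your own workspace URL and
# a personal access token generated in your Databricks workspace.
export DATABRICKS_HOST="https://<your-workspace-url>"
export DATABRICKS_TOKEN="<your-personal-access-token>"
```

In CI, these values should come from the pipeline's secret store rather than being committed to the repository.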