
Migrating to Rust & Best Practices

nedvedba edited this page May 30, 2025 · 1 revision

This page will serve as the go-to resource for the migration to Rust for the metadata mode (and beyond), along with best practices, developer environment configuration, etc.


Installing Rust & Setting up environment

Installing Rust is very simple: either download the Rustup installer from this page or copy the command that will install it for you on Unix. This installs Rustup, which is the Rust equivalent of nvm for Node. It installs and manages all of your Rust toolchain versions and their target architectures. By default it installs the stable toolchain, which is sufficient for our needs.

Setting up your environment is a bit more involved. To begin, I recommend an editor with LSP (Language Server Protocol) support, as compiler hints are a must when working in Rust. Editors such as RustRover, VS Code, and even vim (if configured properly) are sufficient; personally, I use VS Code.

Next, you will need to install rust-analyzer (the best language server for Rust). Usually this is done automatically when you install the rust-analyzer extension for your editor, e.g. this extension for VS Code.

After this is done, change the command rust-analyzer uses to lint your application from cargo check to cargo clippy. Clippy is built on top of Cargo Check and adds hints about more idiomatic ("Rusty") ways to do things, such as converting a for loop to an Iterator (which is both more performant and cleaner code). How to set this up depends on your editor, so I cannot give instructions for everything: in VS Code, add "rust-analyzer.check.command": "clippy" to your JSON user settings; in RustRover, these instructions should suffice.

Finally, set your editor to format on save so that you always adhere to Rust's strict formatting (not following it produces warnings).
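To illustrate the kind of rewrite Clippy nudges you toward, here is a hypothetical snippet (not from the DataFed codebase) showing the same computation as an imperative loop and as an iterator chain:

```rust
// The imperative version: mutable state, and Clippy will
// suggest rewriting patterns like this with iterators.
fn sum_even_squares_loop(nums: &[i64]) -> i64 {
    let mut total = 0;
    for n in nums {
        if n % 2 == 0 {
            total += n * n;
        }
    }
    total
}

// The iterator version: no mutable state, and the intent
// (filter, transform, reduce) reads left to right.
fn sum_even_squares_iter(nums: &[i64]) -> i64 {
    nums.iter()
        .filter(|n| *n % 2 == 0)
        .map(|n| n * n)
        .sum()
}

fn main() {
    let nums = [1, 2, 3, 4, 5, 6];
    assert_eq!(sum_even_squares_loop(&nums), sum_even_squares_iter(&nums));
    println!("{}", sum_even_squares_iter(&nums)); // 4 + 16 + 36 = 56
}
```

Beyond readability, iterator chains let the compiler fuse the steps into a single pass, which is why Clippy considers them the more performant form.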

Tooling Changes

For this change to the core service (and DataFed as a whole) I have included a new Docker Compose file that adds some external tooling we have wanted to bring into DataFed for a while: Postgres as the database, and a new observability stack consisting of Prometheus + Grafana + Loki. These choices are not set in stone, but they are a good starting point. The only thing you will need to do prior to running the application is run:

docker compose up -d

This will start all the necessary containers (or just docker compose up -d postgres if you do not want the observability stack).
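For orientation, a compose file along these lines might look like the following minimal sketch; the actual service names, images, ports, and settings in the repository's compose file may differ:

```yaml
# Hypothetical sketch; see the repository's compose file for
# the real definitions.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: postgres   # must match the app's configured credentials
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data

  prometheus:
    image: prom/prometheus

  loki:
    image: grafana/loki

  grafana:
    image: grafana/grafana
    ports:
      - "8080:3000"   # Grafana's internal port 3000, exposed on 8080

volumes:
  pgdata:
```

Mapping Grafana to 8080 on the host keeps its default internal port 3000 from colliding with the core API, which also listens on 3000.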

The Grafana frontend can be accessed at http://localhost:8080, with the default username and password both being admin.

App Configuration

I have included a new configuration file, AppSettings.toml. This file is parsed by all Rust targets, so that things like the migrator and the API can all rely on the same configuration file. The included file is not intended for production use (production settings should only be mounted into the containers at runtime); rather, it serves as a sensible default for development. NOTE: Docker Compose does not read this configuration file, so if you change something like the database password for development purposes, you will have to change it in both the compose file and the app settings file.
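As a rough illustration only (the keys below are hypothetical, not the actual schema of AppSettings.toml), such a file might look like:

```toml
# Hypothetical example; check AppSettings.toml in the repository
# for the real key names and structure.

[server]
# Address the core API binds to (the document's default).
address = "localhost:3000"

[database]
# Must agree with the Postgres credentials in the compose file.
url = "postgres://postgres:postgres@localhost:5432/datafed"

[observability]
# Endpoint used when the `loki` feature is enabled.
loki_url = "http://localhost:3100"
```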

Migrations

Since we are now using a schemaful database, we can use migrations to keep our database running smoothly and to make tests and environment setups easily reproducible. These migrations are defined in Rust and can be run at any time from code; however, I have created a separate migrate binary target to run them. Running the migrations is as easy as:

cargo run --bin migrate --features migrations

NOTE: You will need to do this each time you add a new migration, or if the Postgres data volume is removed.

Adding a Migration

To add a migration, create a new file in this directory. The naming scheme is mYYYYMMDD_{6 digit time stamp}_{relevant added features}.rs, so that the migrations always sort into the order in which they were created and are run. After writing your migration, add it to the mod.rs file in the same directory; that is what causes the migrator to actually run it.
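The naming scheme works because the date and time components are zero-padded and most-significant-first, so sorting the file names lexicographically is the same as sorting them chronologically. A small standalone sketch (the migration names below are made up):

```rust
fn main() {
    // Hypothetical migration file names following the
    // mYYYYMMDD_{6 digit time stamp}_{feature} scheme.
    let mut migrations = vec![
        "m20250530_141500_add_records_table",
        "m20250102_090000_create_users",
        "m20250102_093000_add_user_index",
    ];

    // A plain lexicographic sort...
    migrations.sort();

    // ...yields chronological order, which is the order
    // the migrator should run them in.
    assert_eq!(
        migrations,
        vec![
            "m20250102_090000_create_users",
            "m20250102_093000_add_user_index",
            "m20250530_141500_add_records_table",
        ]
    );
    println!("{:?}", migrations);
}
```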

Running the New Core API

To run the API, run the following command in your terminal:

cargo run

To run the application with the optional metrics and log aggregation features, include them in the --features flag to Cargo as a comma-separated list, e.g.

cargo run --features metrics,loki
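Behind the scenes, these are ordinary Cargo feature flags; a manifest along these lines would declare them (a hypothetical sketch only; the real Cargo.toml's feature definitions and any optional dependencies they enable may differ):

```toml
# Hypothetical sketch of the [features] section; see the
# repository's Cargo.toml for the actual definitions.
[features]
default = []
metrics = []     # expose Prometheus metrics
loki = []        # ship logs to Loki
migrations = []  # machinery used by the `migrate` binary
```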

At this point, you should have a running application that outputs its logs to standard output, with the API listening on the address specified in AppSettings.toml (default http://localhost:3000).

Using the Services

If you completed the previous step, you should have the API running on http://localhost:3000. It also serves a Swagger UI front end at http://localhost:3000/swagger-ui, which provides API documentation and a playground, and an OpenAPI JSON spec file at http://localhost:3000/api-docs/open-api.json.

If you are running the observability stack with Docker Compose and have enabled the metrics and loki features, there will also be a Grafana frontend hosted at http://localhost:8080/ with pre-configured datasources.

Happy Coding!
