Conversation

@altendorfme
Contributor

I added MySQL, MariaDB, PostgreSQL, and SQLite as volume options, removed the files tab, and adjusted the listings.

I also had to add the clients to the Dockerfile to support dumping.

I think this structure can still be improved; I don't know whether volumes are the best place for this or whether I should create a new Sources environment

Owner

@nicotsx nicotsx left a comment

Thank you @altendorfme this is a good first stab! I left you a few comments to review

Comment on lines 88 to 95
if (
formValues.backend === "nfs" ||
formValues.backend === "smb" ||
formValues.backend === "webdav" ||
formValues.backend === "mariadb" ||
formValues.backend === "mysql" ||
formValues.backend === "postgres"
) {
Owner

Maybe we could make this a constant `["nfs", "smb", ...]` and refactor here to `if (SUPPORTS_CONNECTION.includes(formValues.backend)) {}`, as this condition is becoming too big
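A minimal sketch of the suggested refactor (`SUPPORTS_CONNECTION` is the constant name proposed above; the backend names come from the original condition, and `needsConnectionFields` is a hypothetical wrapper for illustration):

```typescript
// Backends that require connection settings, extracted from the long if-condition
const SUPPORTS_CONNECTION: readonly string[] = [
  "nfs",
  "smb",
  "webdav",
  "mariadb",
  "mysql",
  "postgres",
];

// The call site shrinks to a single membership check,
// and adding a new backend only touches the array above.
function needsConnectionFields(backend: string): boolean {
  return SUPPORTS_CONNECTION.includes(backend);
}
```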

Owner

As SQLite is just a file, I'm wondering if it makes any sense to add it as a "volume". Users can simply use the local directory feature. I would remove it

Comment on lines 209 to 232
const volumePath = getVolumePath(volume);
let backupPath: string;
let dumpFilePath: string | null = null;
const isDatabase = isDatabaseVolume(volume);

if (isDatabase) {
logger.info(`Creating database dump for volume ${volume.name}`);

const timestamp = Date.now();
dumpFilePath = getDumpFilePath(volume, timestamp);

try {
await executeDatabaseDump(volume.config as DatabaseConfig, dumpFilePath);
logger.info(`Database dump created at: ${dumpFilePath}`);
} catch (error) {
logger.error(`Failed to create database dump: ${toMessage(error)}`);
throw error;
}

backupPath = dumpFilePath;
} else {
backupPath = getVolumePath(volume);
}
Owner

I see this becoming hard to maintain if we ever add more types. Instead, I suggest adding a new abstract method for getVolumePath() in the xxx-backend.ts files; then here you can simply call it the same way for all possible backends.

By doing this, you can get rid of all four functions you added in helpers.ts
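A sketch of the shape this suggestion describes. The class and method names here are illustrative, not the project's actual API: each backend resolves its own backup path, so the backup routine above no longer needs the `isDatabase` branch:

```typescript
// Hypothetical base class: every backend knows how to produce a path for restic.
abstract class VolumeBackend {
  abstract getBackupPath(volumeName: string): Promise<string>;
}

// Filesystem-style backends just return where the data already lives.
class LocalBackend extends VolumeBackend {
  private rootDir: string;
  constructor(rootDir: string) {
    super();
    this.rootDir = rootDir;
  }
  async getBackupPath(volumeName: string): Promise<string> {
    return `${this.rootDir}/${volumeName}`;
  }
}

// Database backends would create a dump first, then return the dump's path.
class PostgresBackend extends VolumeBackend {
  async getBackupPath(volumeName: string): Promise<string> {
    const dumpPath = `/tmp/${volumeName}-${Date.now()}.sql`;
    // await runDump(dumpPath)  // dump step elided in this sketch
    return dumpPath;
  }
}

// The call site is now uniform for every backend type:
//   const backupPath = await backend.getBackupPath(volume.name);
```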

Comment on lines 57 to 75
resolve({
exitCode: -1,
stdout: stdoutData,
stderr: stderrData,
});
reject(error);
Owner

Why change this? The whole purpose of safeSpawn is to not throw, so callers can avoid needless try/catch blocks when using it
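The non-throwing contract described here can be sketched as follows (a minimal standalone version, not the project's actual safeSpawn; the result shape is assumed from the snippet above):

```typescript
import { spawn } from "node:child_process";

interface SpawnResult {
  exitCode: number;
  stdout: string;
  stderr: string;
}

// Every outcome resolves — including a failed spawn — so callers can
// inspect exitCode instead of wrapping each call in try/catch.
function safeSpawn(command: string, args: string[] = []): Promise<SpawnResult> {
  return new Promise((resolve) => {
    const child = spawn(command, args);
    let stdout = "";
    let stderr = "";
    child.stdout?.on("data", (chunk) => (stdout += chunk));
    child.stderr?.on("data", (chunk) => (stderr += chunk));
    child.on("error", (error) => {
      // Resolve with a sentinel exit code rather than rejecting
      resolve({ exitCode: -1, stdout, stderr: stderr || String(error) });
    });
    child.on("close", (code) => resolve({ exitCode: code ?? -1, stdout, stderr }));
  });
}
```

Resolving in the "error" handler (instead of the rejected-then-resolved mix in the diff) keeps the promise single-outcome: whichever of "error" or "close" fires first wins, and later calls to resolve are no-ops.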

Contributor Author

Sorry, this is the kind of change we make when some things break and we don't know why, haha!

Owner

This should probably live in each respective backend file instead of one big catch-all helper

Comment on lines 44 to 48
// Write stdin if provided
if (stdin && child.stdin) {
child.stdin.write(stdin);
child.stdin.end();
}
Owner

You don't seem to be using this at the moment. Could be useful but no need to clutter the code with it if we don't need it now

Dockerfile Outdated
Comment on lines 7 to 9
mariadb-client \
mysql-client \
postgresql-client \
Owner

Bun ships with built-in drivers for all 3 of these databases (https://bun.com/docs/runtime/sql). Could you explore whether it's possible to replace these clients with a direct connection from JavaScript, running raw queries for the dump?
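A raw-query dump along these lines would read rows over a driver connection (Bun's built-in sql in this proposal) and serialize them back into SQL statements. The serialization half can be sketched independently of any driver; the helper below is hypothetical and only covers the data portion, ignoring the schema, views, triggers, and procedures that the native dump clients handle (a gap noted later in this thread):

```typescript
// Hypothetical helper: turn rows fetched from a driver into INSERT statements.
// A real dump would also need schema DDL, constraints, triggers, etc.
function rowsToInsertStatements(
  table: string,
  rows: Record<string, unknown>[],
): string[] {
  return rows.map((row) => {
    const columns = Object.keys(row);
    const values = columns.map((column) => sqlLiteral(row[column]));
    return `INSERT INTO ${table} (${columns.join(", ")}) VALUES (${values.join(", ")});`;
  });
}

// Render a JavaScript value as a SQL literal.
function sqlLiteral(value: unknown): string {
  if (value === null || value === undefined) return "NULL";
  if (typeof value === "number" || typeof value === "bigint") return String(value);
  if (typeof value === "boolean") return value ? "TRUE" : "FALSE";
  // Escape single quotes by doubling them, per standard SQL
  return `'${String(value).replace(/'/g, "''")}'`;
}
```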

Contributor Author

Hum! I'll try it, I didn't know about this route!

Contributor Author

I was reading the comparisons, and the native clients have full support for tables, views, triggers, procedures, and functions; the backup process is also safer and more stable for long-running dumps.

@altendorfme
Contributor Author

I removed SQLite support, reverted some tests, and created a new method for getVolumePath

@CLAassistant

CLAassistant commented Nov 18, 2025

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.

@nicotsx
Owner

nicotsx commented Nov 18, 2025

Will merge this asap! Looks great

Repository owner deleted a comment from CLAassistant Nov 18, 2025
@nicotsx
Owner

nicotsx commented Nov 20, 2025

Hello @altendorfme

The more I tested the implementation, the more I started thinking that maybe we are not the right tool to integrate database backups, mainly for a few reasons:

  • Databases are not real “volumes”, and we are adding weird code branches to make up for this. We have to run dump executions or “hooks” before each backup and clean up afterwards
  • Since we create a new dump and back it up each time, this goes a bit against the restic philosophy of incremental, deduplicated backups
  • Frontend functionality like volume size, the file explorer, “mounted” status, and file inclusions/exclusions makes little sense here, and we are also writing code branches to hide these parts from the user
  • We increase the image size significantly with multiple clients that the user might not need

Overall I think this overcomplicates the code, while the user could simply run the dumps from a script on the host and back that folder up with Zerobyte.

What do you think? I'm sorry, you've probably spent some time here

@altendorfme
Contributor Author

To avoid this conflict with volumes, could I handle them separately in a new environment called Source or Database?

@nicotsx
Owner

nicotsx commented Nov 20, 2025

I think we should maybe start a discussion and check with the community how this feature would be used. That way we can make sure we get the implementation right from the get-go.

I'll start a thread tomorrow (it's late here) and ping you

@sebwieser

> The more I tested the implementation the more I started thinking that maybe we are not the right tool to integrate Database backups. […]
>
> What do you think? I'm sorry you've probably spent some time here

Hi @nicotsx, excuse me for jumping in. Speaking from a PostgreSQL perspective at least, your instincts are correct.
You will run into multiple issues trying to backup databases in the same way you do regular files.
Restic deduplicates file contents at the chunk level, but a busy database scatters page changes across the whole file, so it still has to re-read the entire database file on every run and the changed chunks could quickly start looking like a full backup.

Furthermore, it's common to restore databases to an exact point in time, which means you should back up WAL files as well in a consistent way; otherwise you risk data loss or corruption.
Postgres has its own ecosystem of tools (pgBackRest, Barman) for creating consistent backups. pgBackRest is capable of creating proper incremental database backups at the block level.

If you'd like to cover database backups with Zerobyte, I'd always suggest going native.

I haven't been using SQLite much, but before backing it up, I'd make sure there are no writers to the file.

4 participants