Upload from Supervisor's list of backups; use HA's name #7

ryansch wants to merge 2 commits into thomasfr:main
Conversation
thomasfr
left a comment
First of all: LOVE IT! 🚀 I was already trying around to get the name and replace it, so happy to see your PR. Thank you.
See my comments, happy to discuss and hear your opinions
```shell
readarray -t backups < <(bashio::api.supervisor GET /backups/info false .backups | jq -c '.[]')
for backup in "${backups[@]}"; do
  bashio::log.debug $(echo "${backup}" | jq -C .)
```
Could you add a more descriptive debug message please? Just a string of what to expect to see, to make it obvious if an empty string is echoed.
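One possible wording for that debug line (an assumption, not the merged code) is a fixed prefix in front of the JSON, so an empty payload still leaves a visible trace. `bashio::log.debug` comes from the bashio base image; it is stubbed here so the sketch runs standalone:

```shell
# Stub for bashio's logger so this sketch is self-contained
# (the real add-on gets it from the bashio base image).
bashio::log.debug() { echo "[DEBUG] $*"; }

# Illustrative backup entry as returned by /backups/info.
backup='{"slug":"skuwe823","name":"Awesome backup"}'

# A fixed prefix makes an empty ${backup} obvious in the log output.
bashio::log.debug "Supervisor backup entry: ${backup}"
```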
```shell
monitor_path="/backup"
jq_filter=".backups|=sort_by(.date)|.backups|reverse|.[$local_backups_to_keep:]|.[].slug"
```
Remove additional empty line please
```shell
bashio::log.debug $(echo "${backup}" | jq -C .)

slug=$(echo "${backup}" | jq -r .slug)
name=$(echo "${backup}" | jq -r .name)
```
I think we should also handle special characters. See limitations for object keys: https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-keys.html#:~:text=Characters%20that%20might%20require%20special%20handling
I suggest using `$name` as-is in the metadata and creating a separate variable (for instance `$object_key_name`) for the object key, with the characters that need special handling and should be avoided (mentioned in the link above) replaced or removed.
Logic could be something like:
- Replace spaces/multiple whitespace characters with a single `_`
- Replace all characters not mentioned here with `-`, and with a single `-` if there are multiple consecutive unsafe characters.
What do you think?
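The rules above could be sketched with two `sed` passes (a rough sketch; how adjacent `_` and `-` separators should collapse is a design detail still to settle):

```shell
# Example name containing characters AWS flags as needing special handling.
name="Awesome backup with : some@weird=//characters"

# 1) Collapse runs of whitespace into a single underscore.
object_key_name=$(echo "$name" | sed -E 's/[[:space:]]+/_/g')

# 2) Collapse runs of any character outside the S3 "safe" set
#    (alphanumerics plus ! - _ . * ' ( )) into a single hyphen.
object_key_name=$(echo "$object_key_name" | sed -E "s/[^A-Za-z0-9_.!*'()-]+/-/g")

echo "$object_key_name"   # Awesome_backup_with_-_some-weird-characters
```

`$name` stays untouched for the metadata while the sanitized `$object_key_name` is used for the key.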
```shell
slug=$(echo "${backup}" | jq -r .slug)
name=$(echo "${backup}" | jq -r .name)

if ! aws s3 ls "s3://${bucket_name}/${name}.tar" >/dev/null; then
```
I think we could/should extract the date information we get from the /backups/info response (`date`) and use year, month, day, and hour as the object key prefix.
For example:

```json
{
  "backups": [
    {
      "slug": "skuwe823",
      "date": "2020-09-30T20:25:34.273Z",
      "name": "Awesome backup with : some@weird=//characters",
      "type": "partial",
      "size": 44,
      "protected": true,
      "compressed": true,
      "content": {
        "homeassistant": true,
        "addons": ["awesome_addon"],
        "folders": ["ssl", "media"]
      }
    }
  ],
  "days_until_stale": 30
}
```

Together with my previous comment, this would result in:

```shell
object_key_name="Awesome_backup_with_some-weird-characters"
object_key="$year/$month/$day/$hour/$object_key_name"
# Would result in: '2020/09/30/20/Awesome_backup_with_some-weird-characters.tar'
```
This would also make it easier to manually delete older files via the console or create rules based on the key, and we would not need the additional `aws s3 ls` (although theoretically there could still be an object with the same key, that is very, very unlikely).
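Extracting that prefix from the ISO-8601 `date` field needs no external tools; bash substring expansion is enough (a sketch, with `object_key_name` assumed to be the sanitized name from the earlier comment):

```shell
# .date value from the /backups/info response (ISO-8601, UTC).
backup_date="2020-09-30T20:25:34.273Z"

# The timestamp's fields are fixed-width, so they can be sliced directly.
year=${backup_date:0:4}
month=${backup_date:5:2}
day=${backup_date:8:2}
hour=${backup_date:11:2}

object_key_name="Awesome_backup_with_some-weird-characters"
object_key="$year/$month/$day/$hour/$object_key_name"
echo "${object_key}.tar"   # 2020/09/30/20/Awesome_backup_with_some-weird-characters.tar
```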
This is still on my radar.

I swear I'm not dead.
Instead of using `aws s3 sync`, I loop through the backups from Supervisor's backup endpoint. The backup API gives us the friendly name from the UI along with metadata we can send along to S3.
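The per-backup upload could look roughly like this (a sketch with illustrative values; the real add-on takes `bucket_name` from its config, and the `aws s3 cp` call is left as a comment so the snippet runs without AWS credentials):

```shell
# Illustrative values; in the add-on these come from the Supervisor API and config.
bucket_name="my-backup-bucket"
slug="skuwe823"
name="Awesome backup"

# Supervisor stores backups by slug; the S3 key uses the friendly name.
src="/backup/${slug}.tar"
dest="s3://${bucket_name}/${name}.tar"

# The actual upload would attach the Supervisor metadata, e.g.:
#   aws s3 cp "$src" "$dest" --metadata "slug=${slug}"
echo "would upload ${src} -> ${dest}"
```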
Example:

I'm using this alongside https://github.com/jcwillox/hass-auto-backup to automate taking the actual backups.