AWS_S3: Support non-AWS endpoints

I was excited to see the S3 backup integration (aws_s3) added to HA Core 2025.5.0.

However, I use Wasabi instead of AWS for my S3-compatible storage, and the integration currently only supports AWS domains.
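
The refusal appears to come from a hard endpoint check, something like this simplified illustration (not the integration's verbatim code; the constant name and error key are mine):

```python
from urllib.parse import urlparse

# Illustrative only: the real constant and error key live in the
# integration's config flow and may differ.
AWS_DOMAIN = "amazonaws.com"

def validate_endpoint(endpoint_url: str) -> dict[str, str]:
    """Reject any endpoint whose host is not under the AWS domain (sketch)."""
    errors: dict[str, str] = {}
    host = urlparse(endpoint_url).hostname or ""
    if host != AWS_DOMAIN and not host.endswith("." + AWS_DOMAIN):
        errors["endpoint_url"] = "invalid_endpoint_url"
    return errors
```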

I would love to see this expanded to other endpoints. For example, Wasabi’s endpoints are here:
https://docs.wasabi.com/v1/docs/what-are-the-service-urls-for-wasabi-s-different-storage-regions

But there are countless others, such as DigitalOcean Spaces, Bunny.net, etc. An allow list may be difficult to maintain, so I would argue the feature should simply accept any valid URL and then check the permissions, as sketched below.
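
Something along these lines would do for the accept-then-verify approach (a minimal boto3 sketch; the function name, endpoint, bucket, and credentials are placeholders of mine):

```python
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

def check_s3_access(endpoint_url: str, bucket: str, key_id: str, secret: str) -> bool:
    """Return True if the endpoint is reachable and the credentials may access the bucket."""
    client = boto3.client(
        "s3",
        endpoint_url=endpoint_url,  # e.g. https://s3.eu-central-1.wasabisys.com
        aws_access_key_id=key_id,
        aws_secret_access_key=secret,
    )
    try:
        # head_bucket is a cheap probe for both reachability and permissions.
        client.head_bucket(Bucket=bucket)
        return True
    except (ClientError, EndpointConnectionError):
        return False
```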

cc @tomasbedrich: let me know if you would prefer this as a bug report instead of a feature request, or if you need additional details.

@tomasbedrich Thanks a lot for implementing this very useful feature!

How about replacing the hard check with a softer one? When a non-matching endpoint URL is entered, just display a warning that it may or may not work instead of flatly refusing it.
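
For instance, taking the sketch from earlier in the thread, the hard failure could become a logged warning (again illustrative, not the integration's real code):

```python
import logging
from urllib.parse import urlparse

_LOGGER = logging.getLogger(__name__)
AWS_DOMAIN = "amazonaws.com"  # illustrative, as above

def validate_endpoint(endpoint_url: str) -> dict[str, str]:
    """Accept any endpoint; merely warn when it is not an AWS domain (sketch)."""
    host = urlparse(endpoint_url).hostname or ""
    if host != AWS_DOMAIN and not host.endswith("." + AWS_DOMAIN):
        _LOGGER.warning(
            "Endpoint %s is not an AWS domain; the integration may or may not work",
            endpoint_url,
        )
    return {}  # no errors, so the config flow proceeds either way
```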


I am interested in a solution for Hetzner endpoints.


Meanwhile, I’ve tried the following:

  1. Clone the aws_s3 component to custom_components
  2. Edit the config flow so it doesn't bail on non-Amazon URLs
  3. Reboot the Yellow

I was expecting my patched version to take precedence, but this isn't the case. Does anybody have an idea how to force this?

Hello, the integration previously targeted all S3-compatible providers, but the core team decided to limit it to AWS. Any pull requests and/or issues you open will therefore likely be closed, as a few others already have been.

If you do want to use your patched version, it should work. It sounds like your issue is a missing version key in manifest.json; see the example below.
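
For reference, recent Home Assistant versions refuse to load a custom integration whose manifest.json lacks a version key. Keep everything from the cloned manifest as-is and just add that key; the values below are placeholders, not the real manifest:

```json
{
  "domain": "aws_s3",
  "name": "AWS S3 (patched)",
  "version": "1.0.0",
  "documentation": "https://www.home-assistant.io/integrations/aws_s3",
  "codeowners": []
}
```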

There is an ongoing discussion about how to proceed with this, so maybe there will be a solution soon™.

Until then, I've forked the component as generic_s3 and applied the changes necessary for it to work. (No brand icon though; I hope there will be a proper solution soonish.)


Thank you @svoop! I managed to get it working with Scaleway's S3. The one thing to know is that the URL Scaleway gives you is a per-bucket subdomain (https://bucket.s3.region.scw.cloud), which should not be used; it is the regional S3 endpoint (https://s3.region.scw.cloud) that should be used.
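
In plain boto3 terms, the distinction looks roughly like this (a sketch; fr-par and my-bucket are example values, and credentials are assumed to come from the environment):

```python
import boto3
from botocore.client import Config

# Point at the regional endpoint, not the per-bucket subdomain; the bucket
# name is passed separately. Path-style addressing avoids the subdomain form.
client = boto3.client(
    "s3",
    endpoint_url="https://s3.fr-par.scw.cloud",  # not https://my-bucket.s3.fr-par.scw.cloud
    config=Config(s3={"addressing_style": "path"}),
)
print(client.list_objects_v2(Bucket="my-bucket", MaxKeys=5))
```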

I knew that this release would introduce an S3-compatible backup, but I discovered upon installing it that the PR had been restricted to AWS only… I honestly facepalmed.

I work daily with S3-compatible providers: S3 is a living standard.

In my opinion, the argument given here by @frenck (Add support for s3 compatible storage providers by patrickvorgers · Pull Request #144474 · home-assistant/core · GitHub) is invalid: AWS holds 33% of the market share, and the rest of the providers offer S3-compatible object storage. You can even use MinIO to host your own S3-compatible object storage, which I have done for my company.

This is the one cloud feature where you can swap one provider for another nearly blindly. Of course there can be quirks, but for those who fear maintenance hell: the API is well established, and no breaking changes will be introduced any time soon.

Please do NOT implement an integration per cloud provider; it would make no sense. Only a generic S3 integration makes sense.


Even the AWS S3 CLI allows third-party providers! We don't see a custom CLI for each provider. This is the first time I've seen this kind of lock-in to AWS S3, and to me it makes no sense. Allowing third parties does not imply more maintenance work; on the contrary, it reduces the work for the community. Ensuring compatibility is the providers' job, and they already have to do it for the thousands of S3 tools out there.
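
For example (the endpoint and bucket are placeholders; credentials are assumed to be configured already):

```sh
aws s3 ls s3://my-bucket --endpoint-url https://s3.eu-central-1.wasabisys.com
```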