If you’d like to store Synapse’s content repository (media_store) files on Amazon S3 (or other S3-compatible service),
you can use the synapse-s3-storage-provider media provider module for Synapse.
synapse-s3-storage-provider support is very new and still relatively untested. Using it may cause data loss.
An alternative (which has worse performance) is to use Goofys to mount the S3 store to the local filesystem.
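For reference, the Goofys approach boils down to FUSE-mounting the bucket at Synapse's media store path, roughly like this (an illustrative sketch only; the bucket name, region and mount path are placeholders):

```sh
# Sketch: mount an S3 bucket at Synapse's media store path using Goofys (placeholders)
goofys --region eu-central-1 your-bucket-name /matrix/synapse/storage/media-store
```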
The summary here is inspired by this article.
The way media storage providers in Synapse work has a caveat: files are uploaded to the S3 store, but copies are also kept on the local filesystem.

You may be thinking: if all files are stored locally as well, what's the point?

You can run some scripts to delete the local files once in a while, thus freeing up local disk space. If these files are needed in the future (for serving them to users, etc.), Synapse will pull them from the media storage provider on demand.

While you will still need some local disk space around, it only needs to accommodate ongoing usage and won't grow as large as your S3 store.
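For reference, under the hood the module is wired into Synapse's homeserver.yaml roughly like this (a sketch based on the synapse-s3-storage-provider README; you don't write this yourself, as the playbook generates it from the variables shown below):

```yaml
# Sketch of the Synapse-side configuration generated by the playbook (values are placeholders)
media_storage_providers:
  - module: s3_storage_provider.S3StorageProviderBackend
    store_local: true
    store_remote: true
    store_synchronous: true
    config:
      bucket: your-bucket-name
      region_name: eu-central-1
      access_key_id: access-key-goes-here
      secret_access_key: secret-key-goes-here
```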
After creating the S3 bucket and configuring it, you can proceed to configure the `synapse-s3-storage-provider` module in your configuration file (`inventory/host_vars/matrix.<your-domain>/vars.yml`):

```yaml
matrix_synapse_ext_synapse_s3_storage_provider_enabled: true
matrix_synapse_ext_synapse_s3_storage_provider_config_bucket: your-bucket-name
matrix_synapse_ext_synapse_s3_storage_provider_config_region_name: some-region-name # e.g. eu-central-1
matrix_synapse_ext_synapse_s3_storage_provider_config_endpoint_url: https://.. # delete this whole line for Amazon S3
matrix_synapse_ext_synapse_s3_storage_provider_config_access_key_id: access-key-goes-here
matrix_synapse_ext_synapse_s3_storage_provider_config_secret_access_key: secret-key-goes-here
matrix_synapse_ext_synapse_s3_storage_provider_config_storage_class: STANDARD # or STANDARD_IA, etc.
# For additional advanced settings, take a look at `roles/matrix-synapse/defaults/main.yml`
```
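After saving the configuration, re-run the playbook to apply it, using the playbook's usual command:

```sh
# Re-run the playbook so the new S3 storage provider configuration takes effect
ansible-playbook -i inventory/hosts setup.yml --tags=setup-all,start
```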
If you have existing files in Synapse's media repository (`/matrix/synapse/media-store/..`), those files remain on the local filesystem only until you migrate them to the S3 store.

Migrating your existing data can happen in multiple ways:
- using the `s3_media_upload` script from `synapse-s3-storage-provider` (very slow when dealing with lots of data)
- using another tool in combination with `s3_media_upload` (quicker when dealing with lots of data)

Instead of using `s3_media_upload` directly, which is very slow and painful for an initial data migration, we recommend using another tool in combination with `s3_media_upload` (described further below). Either way, the `s3_media_upload` script is what finishes the migration, so here is how it works.
To copy your existing files, SSH into the server and run /usr/local/bin/matrix-synapse-s3-storage-provider-shell.
This launches a Synapse container, which has access to the local media store, Postgres database, S3 store and has some convenient environment variables configured for you to use (MEDIA_PATH, BUCKET, ENDPOINT, UPDATE_DB_DAYS, etc).
Then use the following commands ($ values come from environment variables - they’re not placeholders that you need to substitute):
- `s3_media_upload update-db $UPDATE_DB_DURATION` - create a local SQLite database (`cache.db`) with a list of media repository files (from the `synapse` Postgres database) eligible for operating on
  - `$UPDATE_DB_DURATION` is influenced by the `matrix_synapse_ext_synapse_s3_storage_provider_update_db_day_count` variable (defaults to `0`)
  - `$UPDATE_DB_DURATION` defaults to `0d` (0 days), which means "include files which haven't been accessed for more than 0 days" (that is, all files will be included)
- `s3_media_upload check-deleted $MEDIA_PATH` - check whether files in the local cache still exist in the local media repository directory
- `s3_media_upload upload $MEDIA_PATH $BUCKET --delete --endpoint-url $ENDPOINT` - uploads locally-stored files to S3 and deletes them from the local media repository directory

The `upload` command may take a lot of time to complete.
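Put together, a full run inside the provider shell might look roughly like this (a sketch; the environment variables are already set for you by the shell):

```sh
# Run these inside the shell started by /usr/local/bin/matrix-synapse-s3-storage-provider-shell

# 1. Build the local cache.db of media files eligible for migration
s3_media_upload update-db $UPDATE_DB_DURATION

# 2. Check which cached entries still exist in the local media repository
s3_media_upload check-deleted $MEDIA_PATH

# 3. Upload local files to S3 and delete the local copies (can take a long time)
s3_media_upload upload $MEDIA_PATH $BUCKET --delete --endpoint-url $ENDPOINT
```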
When using another tool in combination with `s3_media_upload`, we recommend to:
- first use another tool (`aws s3` or `b2 sync`, etc.) to copy the local files to the S3 bucket
- only then use the `s3_media_upload` tool to finish the migration (this checks to ensure all files are uploaded and then deletes the local files)
To copy data to Amazon S3, you generally need to use the `aws s3` tool.
This documentation section could use an improvement. Ideally, we’d come up with a guide like the one used in Copying data to Backblaze B2 - running aws s3 in a container, etc.
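Until such a guide materializes, something along these lines should work - a sketch modeled on the Backblaze B2 example below, using the `amazon/aws-cli` Docker image (the image tag, region and credential placeholders are assumptions you will need to adjust):

```sh
# Sketch: copy the local media store to an S3 bucket using the AWS CLI in a container
docker run -it --rm \
--env='AWS_ACCESS_KEY_ID=YOUR_KEY_GOES_HERE' \
--env='AWS_SECRET_ACCESS_KEY=YOUR_SECRET_GOES_HERE' \
--env='AWS_DEFAULT_REGION=YOUR_REGION_GOES_HERE' \
--mount type=bind,src=/matrix/synapse/storage/media-store,dst=/work,ro \
amazon/aws-cli:latest \
s3 sync /work s3://YOUR_BUCKET_NAME_GOES_HERE/
```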
To copy to Backblaze B2, start a container like this:
```sh
docker run -it --rm \
-w /work \
--env='B2_KEY_ID=YOUR_KEY_GOES_HERE' \
--env='B2_KEY_SECRET=YOUR_SECRET_GOES_HERE' \
--env='B2_BUCKET_NAME=YOUR_BUCKET_NAME_GOES_HERE' \
--mount type=bind,src=/matrix/synapse/storage/media-store,dst=/work,ro \
--entrypoint=/bin/sh \
tianon/backblaze-b2:3.6.0 \
-c 'b2 authorize-account $B2_KEY_ID $B2_KEY_SECRET > /dev/null && b2 sync /work b2://$B2_BUCKET_NAME --skipNewer'
```