
Improve "Copying data to Amazon S3" documentation for synapse-s3-storage-provider

pull/2413/head
Slavi Pantaleev, 3 years ago
Parent commit 910c99d03d
1 file changed, 12 additions and 4 deletions

docs/configuring-playbook-synapse-s3-storage-provider.md

@@ -93,13 +93,21 @@ To migrate your existing local data to S3, we recommend to:
 
 #### Copying data to Amazon S3
 
-Generally, you need to use the `aws s3` tool.
+To copy to AWS S3, start a container on the Matrix server like this:
 
-This documentation section could use an improvement. Ideally, we'd come up with a guide like the one used in [Copying data to Backblaze B2](#copying-data-to-backblaze-b2) - running `aws s3` in a container, etc.
+```sh
+docker run -it --rm \
+-w /work \
+--env-file=/matrix/synapse/ext/s3-storage-provider/env \
+--mount type=bind,src=/matrix/synapse/storage/media-store,dst=/work,ro \
+--entrypoint=/bin/sh \
+docker.io/amazon/aws-cli:2.9.16 \
+-c 'aws s3 sync /work/. s3://$BUCKET/'
+```
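The `--env-file` flag above points the container at the env file the playbook generates at `/matrix/synapse/ext/s3-storage-provider/env`. As a rough sketch of what that file would need to contain for the sync command to work — the `BUCKET` name is referenced by the command itself, while the `AWS_*` names are the standard variables the `aws` CLI reads; whether the playbook writes exactly these names is an assumption:

```sh
# Hypothetical sketch of /matrix/synapse/ext/s3-storage-provider/env.
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY are the standard aws CLI
# credential variables; BUCKET is expanded by the `aws s3 sync` command.
AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_GOES_HERE
AWS_SECRET_ACCESS_KEY=YOUR_SECRET_KEY_GOES_HERE
BUCKET=YOUR_BUCKET_NAME_GOES_HERE
```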


 #### Copying data to Backblaze B2
 
-To copy to Backblaze B2, start a container like this:
+To copy to Backblaze B2, start a container on the Matrix server like this:
 
 ```sh
 docker run -it --rm \
@@ -109,7 +117,7 @@ docker run -it --rm \
 --env='B2_BUCKET_NAME=YOUR_BUCKET_NAME_GOES_HERE' \
 --mount type=bind,src=/matrix/synapse/storage/media-store,dst=/work,ro \
 --entrypoint=/bin/sh \
-tianon/backblaze-b2:3.6.0 \
+docker.io/tianon/backblaze-b2:3.6.0 \
 -c 'b2 authorize-account $B2_KEY_ID $B2_KEY_SECRET && b2 sync /work b2://$B2_BUCKET_NAME --skipNewer'
 ```
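After the sync finishes, you may want to confirm that the files actually landed in the bucket. A minimal sketch under the same assumptions as above (same image, same `B2_*` variables; `b2 ls` is the CLI's bucket-listing command) — this requires working credentials and network access, so treat it as illustrative:

```sh
# Hypothetical verification step: authorize and list the bucket's
# top-level contents to spot-check the synced media store.
docker run -it --rm \
--env='B2_KEY_ID=YOUR_KEY_GOES_HERE' \
--env='B2_KEY_SECRET=YOUR_SECRET_GOES_HERE' \
--env='B2_BUCKET_NAME=YOUR_BUCKET_NAME_GOES_HERE' \
--entrypoint=/bin/sh \
docker.io/tianon/backblaze-b2:3.6.0 \
-c 'b2 authorize-account $B2_KEY_ID $B2_KEY_SECRET && b2 ls $B2_BUCKET_NAME'
```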



