@@ -0,0 +1,3 @@
/inventory/*
!/inventory/.gitkeep
!/inventory/host_vars/.gitkeep
@@ -0,0 +1,166 @@
# Matrix (An open network for secure, decentralized communication) server setup using Ansible and Docker

## Purpose

This Ansible playbook makes it easy to run your own [Matrix](http://matrix.org/) homeserver.
That is, it lets you join the Matrix network with your own `@<username>:<your-domain>` identifier, all hosted on your own server.

Using this playbook, you can get the following services configured on your server:

- a [Matrix Synapse](https://github.com/matrix-org/synapse) homeserver - storing your data and managing your presence in the [Matrix](http://matrix.org/) network
- a [PostgreSQL](https://www.postgresql.org/) database for Matrix Synapse - providing better performance than the default [SQLite](https://sqlite.org/) database
- a [STUN server](https://github.com/coturn/coturn) for WebRTC audio/video calls
- a [Riot](https://riot.im/) web UI
- a free [Let's Encrypt](https://letsencrypt.org/) SSL certificate, which secures the connection to the Synapse server and the Riot web UI

In short, this playbook aims to get you up and running with all the basic necessities around Matrix, without you having to do anything else manually.
## What's different about this Ansible playbook?

This is similar to the [EMnify/matrix-synapse-auto-deploy](https://github.com/EMnify/matrix-synapse-auto-deploy) Ansible deployment, but:

- this one is a complete Ansible playbook (instead of just a role), so it should be **easier to run** - especially for people not familiar with Ansible
- this one **can be re-run many times** without causing trouble
- this one **runs everything in Docker containers** (like [silviof/docker-matrix](https://hub.docker.com/r/silviof/docker-matrix/) and [silviof/matrix-riot-docker](https://hub.docker.com/r/silviof/matrix-riot-docker/)), so it's likely more predictable
- this one retrieves and automatically renews free [Let's Encrypt](https://letsencrypt.org/) **SSL certificates** for you

Special thanks go to:

- [EMnify/matrix-synapse-auto-deploy](https://github.com/EMnify/matrix-synapse-auto-deploy) - for the inspiration
- [silviof/docker-matrix](https://hub.docker.com/r/silviof/docker-matrix/) - for packaging Matrix Synapse as a Docker image
- [silviof/matrix-riot-docker](https://hub.docker.com/r/silviof/matrix-riot-docker/) - for packaging Riot as a Docker image
## Prerequisites

- a **CentOS server** with no services running on ports 80/443 (support for non-CentOS servers may be added in the future)
- the [Ansible](http://ansible.com/) program, which runs this playbook and configures everything for you
- a properly configured DNS SRV record for `<your-domain>` (details in [Configuring DNS](#Configuring-DNS) below)
- the `matrix.<your-domain>` domain name pointing to your new server - this is where the Matrix Synapse server will live (details in [Configuring DNS](#Configuring-DNS) below)
- the `riot.<your-domain>` domain name pointing to your new server - this is where the Riot web UI will live (details in [Configuring DNS](#Configuring-DNS) below)
## Configuring DNS

In order to use an identifier like `@<username>:<your-domain>`, you don't actually need
to install anything on the actual `<your-domain>` server.
All services created by this playbook are meant to be installed on their own server (such as `matrix.<your-domain>`).

To make this work, you must first tell the Matrix network about the delegation by setting up a DNS SRV record (think of it as a "redirect").
The SRV record should look like this:

- Name: `_matrix._tcp` (use this text as-is)
- Content: `10 0 8448 matrix.<your-domain>` (replace `<your-domain>` with your own)
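For illustration only (this helper is not part of the playbook, and `example.com` is a placeholder), the SRV record content above breaks down into four fields:

```python
# Illustrative sketch: split the SRV record content described above
# into its fields. The field order is: priority, weight, port, target.
def parse_srv_content(content):
    priority, weight, port, target = content.split()
    return {
        "priority": int(priority),  # lower value = higher priority
        "weight": int(weight),
        "port": int(port),          # 8448 is the standard Matrix federation port
        "target": target,           # the host that actually runs Synapse
    }

record = parse_srv_content("10 0 8448 matrix.example.com")
```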
Once you've set up this DNS SRV record, you should create 2 other domain names (`matrix.<your-domain>` and `riot.<your-domain>`) and point both of them to your new server's IP address (a DNS `A` record or `CNAME` is fine).

This playbook can then install all the services on that new server, and you'll be able to join the Matrix network as `@<username>:<your-domain>`, even though everything is installed elsewhere (not on `<your-domain>`).
## Configuration

Once you have your server and have [configured your DNS records](#Configuring-DNS), you can proceed with configuring this playbook, so that it knows what to install and where.

Follow these steps:

- create a directory to hold your configuration (`mkdir inventory/matrix.<your-domain>`)
- copy the sample configuration file (`cp examples/host-vars.yml inventory/matrix.<your-domain>/vars.yml`)
- edit the configuration file (`inventory/matrix.<your-domain>/vars.yml`) to your liking
- copy the sample inventory hosts file (`cp examples/hosts inventory/hosts`)
- edit the inventory hosts file (`inventory/hosts`) to your liking
## Installing

Once you have your server and have [configured your DNS records](#Configuring-DNS), you can proceed with installing.

To make use of this playbook, you should invoke the `setup.yml` playbook multiple times, with different tags.

### Configuring a server

Run this as-is to set up a server.
This doesn't start any services just yet (another step does this later - see below).
Feel free to re-run this any time you think something is off with the server configuration.

    ansible-playbook -i inventory/hosts setup.yml --tags=setup-main
### Restoring an existing SQLite database (from another installation)

Run this if you'd like to import your database from a previous default installation of Matrix Synapse
(don't forget to import your `media_store` files as well - see below).

While this playbook always sets up PostgreSQL, a default Matrix Synapse installation runs
on an SQLite database.
If you have such a Matrix Synapse setup and wish to migrate it here (and over to PostgreSQL), this command is for you.

Run this command (make sure to replace `<local-path-to-homeserver.db>` with a file path on your local machine):

    ansible-playbook -i inventory/hosts setup.yml --extra-vars='local_path_homeserver_db=<local-path-to-homeserver.db>' --tags=import-sqlite-db

**Note**: `<local-path-to-homeserver.db>` must be a file path to a `homeserver.db` file on your local machine (not on the server!). This file is copied to the server and imported.
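For the curious: the import relies on the port script bundled with this playbook, which (among other things) converts SQLite's 0/1 integer booleans into real PostgreSQL booleans for specific columns. A minimal sketch of that idea, where `convert_row` is a simplified stand-in for the script's `_convert_rows` helper:

```python
# Simplified sketch of the SQLite -> PostgreSQL boolean conversion done
# by the bundled port script: SQLite stores booleans as 0/1 integers,
# while PostgreSQL expects true booleans for columns like events.processed.
BOOLEAN_COLUMNS = {"events": ["processed", "outlier", "contains_url"]}

def convert_row(table, headers, row):
    bool_cols = set(BOOLEAN_COLUMNS.get(table, []))
    return tuple(
        bool(value) if header in bool_cols else value
        for header, value in zip(headers, row)
    )

row = convert_row("events", ["processed", "event_id"], (1, "$abc:example.com"))
```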
### Restoring `media_store` data files from an existing installation

Run this if you'd like to import your `media_store` files from a previous installation of Matrix Synapse.

Run this command (make sure to replace `<local-path-to-media_store>` with a path on your local machine):

    ansible-playbook -i inventory/hosts setup.yml --extra-vars='local_path_media_store=<local-path-to-media_store>' --tags=import-media-store

**Note**: `<local-path-to-media_store>` must be a file path to a `media_store` directory on your local machine (not on the server!). This directory's contents are then copied to the server.
### Starting the services

Run this as-is to start all the services and to ensure they'll run on system startup later on.

    ansible-playbook -i inventory/hosts setup.yml --tags=start
### Registering a user

Run this to create a new user account on your Matrix server.

You can do it via this Ansible playbook (make sure to edit the `<your-username>` and `<your-password>` parts below):

    ansible-playbook -i inventory/hosts setup.yml --extra-vars='username=<your-username> password=<your-password>' --tags=register-user

**or** using the command line after **SSH**-ing to your server (requires that [all services have been started](#Starting-the-services)):

    matrix-synapse-register-user <your-username> <your-password>

**Note**: `<your-username>` is just a plain username (like `john`), not your full `@<username>:<your-domain>` identifier.
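As a tiny illustration of that note (`matrix_id` is a hypothetical helper, not something the playbook installs), the full identifier is simply composed from the plain username and your base domain:

```python
# Hypothetical helper for illustration: the full Matrix ID combines the
# plain username with the base domain (not matrix.<your-domain>).
def matrix_id(username, domain):
    return "@%s:%s" % (username, domain)

full_id = matrix_id("john", "example.com")
```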
## Deficiencies

This Ansible playbook can be improved in the following ways:

- setting up automatic backups to one or more storage providers
- enabling TURN support for the Coturn server - see https://github.com/silvio/docker-matrix#coturn-server
- [importing an old SQLite database](#Restoring-an-existing-SQLite-database-from-another-installation) likely works because of a patch, but may be fragile until [this issue](https://github.com/matrix-org/synapse/issues/2287) is fixed
@@ -0,0 +1,2 @@
[defaults]
retry_files_enabled = False
@@ -0,0 +1,19 @@
# This is the email address provided to Let's Encrypt
# when retrieving the SSL certificates for `<your-domain>`.
#
# In case SSL renewal fails at some point, you'll also get
# an email notification there.
#
# Example value: someone@example.com
host_specific_ssl_support_email: YOUR_EMAIL_ADDRESS_HERE

# This is your bare domain name (`<your-domain>`).
#
# Note: the server specified here is not touched.
#
# This playbook only installs to `matrix.<your-domain>`,
# but it nevertheless needs to know the bare domain name
# (for configuration purposes).
#
# Example value: example.com
host_specific_hostname_identity: YOUR_BARE_DOMAIN_NAME_HERE
@@ -0,0 +1,2 @@
[matrix-servers]
matrix.<your-domain> ansible_host=<your-server's IP address> ansible_ssh_user=root
@@ -0,0 +1,941 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from twisted.internet import defer, reactor
from twisted.enterprise import adbapi

from synapse.storage._base import LoggingTransaction, SQLBaseStore
from synapse.storage.engines import create_engine
from synapse.storage.prepare_database import prepare_database

import argparse
import curses
import logging
import sys
import time
import traceback
import yaml


logger = logging.getLogger("synapse_port_db")
BOOLEAN_COLUMNS = {
    "events": ["processed", "outlier", "contains_url"],
    "rooms": ["is_public"],
    "event_edges": ["is_state"],
    "presence_list": ["accepted"],
    "presence_stream": ["currently_active"],
    "public_room_list_stream": ["visibility"],
    "device_lists_outbound_pokes": ["sent"],
    "users_who_share_rooms": ["share_private"],
}


APPEND_ONLY_TABLES = [
    "event_content_hashes",
    "event_reference_hashes",
    "event_signatures",
    "event_edge_hashes",
    "events",
    "event_json",
    "state_events",
    "room_memberships",
    "feedback",
    "topics",
    "room_names",
    "rooms",
    "local_media_repository",
    "local_media_repository_thumbnails",
    "remote_media_cache",
    "remote_media_cache_thumbnails",
    "redactions",
    "event_edges",
    "event_auth",
    "received_transactions",
    "sent_transactions",
    "transaction_id_to_pdu",
    "users",
    "state_groups",
    "state_groups_state",
    "event_to_state_groups",
    "rejections",
    "event_search",
    "presence_stream",
    "push_rules_stream",
    "current_state_resets",
    "ex_outlier_stream",
    "cache_invalidation_stream",
    "public_room_list_stream",
    "state_group_edges",
    "stream_ordering_to_exterm",
]


end_error_exec_info = None
class Store(object):
    """This object is used to pull out some of the convenience API from the
    Storage layer.

    *All* database interactions should go through this object.
    """
    def __init__(self, db_pool, engine):
        self.db_pool = db_pool
        self.database_engine = engine

    _simple_insert_txn = SQLBaseStore.__dict__["_simple_insert_txn"]
    _simple_insert = SQLBaseStore.__dict__["_simple_insert"]

    _simple_select_onecol_txn = SQLBaseStore.__dict__["_simple_select_onecol_txn"]
    _simple_select_onecol = SQLBaseStore.__dict__["_simple_select_onecol"]
    _simple_select_one = SQLBaseStore.__dict__["_simple_select_one"]
    _simple_select_one_txn = SQLBaseStore.__dict__["_simple_select_one_txn"]
    _simple_select_one_onecol = SQLBaseStore.__dict__["_simple_select_one_onecol"]
    _simple_select_one_onecol_txn = SQLBaseStore.__dict__[
        "_simple_select_one_onecol_txn"
    ]

    _simple_update_one = SQLBaseStore.__dict__["_simple_update_one"]
    _simple_update_one_txn = SQLBaseStore.__dict__["_simple_update_one_txn"]

    def runInteraction(self, desc, func, *args, **kwargs):
        def r(conn):
            try:
                i = 0
                N = 5
                while True:
                    try:
                        txn = conn.cursor()
                        return func(
                            LoggingTransaction(txn, desc, self.database_engine, [], []),
                            *args, **kwargs
                        )
                    except self.database_engine.module.DatabaseError as e:
                        if self.database_engine.is_deadlock(e):
                            logger.warn("[TXN DEADLOCK] {%s} %d/%d", desc, i, N)
                            if i < N:
                                i += 1
                                conn.rollback()
                                continue
                        raise
            except Exception as e:
                logger.debug("[TXN FAIL] {%s} %s", desc, e)
                raise

        return self.db_pool.runWithConnection(r)

    def execute(self, f, *args, **kwargs):
        return self.runInteraction(f.__name__, f, *args, **kwargs)

    def execute_sql(self, sql, *args):
        def r(txn):
            txn.execute(sql, args)
            return txn.fetchall()
        return self.runInteraction("execute_sql", r)

    def insert_many_txn(self, txn, table, headers, rows):
        sql = "INSERT INTO %s (%s) VALUES (%s)" % (
            table,
            ", ".join(k for k in headers),
            ", ".join("%s" for _ in headers)
        )

        try:
            txn.executemany(sql, rows)
        except:
            logger.exception(
                "Failed to insert: %s",
                table,
            )
            raise
class Porter(object):
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

    @defer.inlineCallbacks
    def setup_table(self, table):
        if table in APPEND_ONLY_TABLES:
            # It's safe to just carry on inserting.
            row = yield self.postgres_store._simple_select_one(
                table="port_from_sqlite3",
                keyvalues={"table_name": table},
                retcols=("forward_rowid", "backward_rowid"),
                allow_none=True,
            )

            total_to_port = None
            if row is None:
                if table == "sent_transactions":
                    forward_chunk, already_ported, total_to_port = (
                        yield self._setup_sent_transactions()
                    )
                    backward_chunk = 0
                else:
                    yield self.postgres_store._simple_insert(
                        table="port_from_sqlite3",
                        values={
                            "table_name": table,
                            "forward_rowid": 1,
                            "backward_rowid": 0,
                        }
                    )

                    forward_chunk = 1
                    backward_chunk = 0
                    already_ported = 0
            else:
                forward_chunk = row["forward_rowid"]
                backward_chunk = row["backward_rowid"]

            if total_to_port is None:
                already_ported, total_to_port = yield self._get_total_count_to_port(
                    table, forward_chunk, backward_chunk
                )
        else:
            def delete_all(txn):
                txn.execute(
                    "DELETE FROM port_from_sqlite3 WHERE table_name = %s",
                    (table,)
                )
                txn.execute("TRUNCATE %s CASCADE" % (table,))

            yield self.postgres_store.execute(delete_all)

            yield self.postgres_store._simple_insert(
                table="port_from_sqlite3",
                values={
                    "table_name": table,
                    "forward_rowid": 1,
                    "backward_rowid": 0,
                }
            )

            forward_chunk = 1
            backward_chunk = 0

            already_ported, total_to_port = yield self._get_total_count_to_port(
                table, forward_chunk, backward_chunk
            )

        defer.returnValue(
            (table, already_ported, total_to_port, forward_chunk, backward_chunk)
        )
    @defer.inlineCallbacks
    def handle_table(self, table, postgres_size, table_size, forward_chunk,
                     backward_chunk):
        if not table_size:
            return

        self.progress.add_table(table, postgres_size, table_size)

        # Patch from: https://github.com/matrix-org/synapse/issues/2287
        if table == "user_directory_search":
            # FIXME: actually port it, but for now we can leave it blank
            # and have the server regenerate it.
            # you will need to set the values of user_directory_stream_pos
            # to be ('X', null) to force a regen
            return

        if table == "event_search":
            yield self.handle_search_table(
                postgres_size, table_size, forward_chunk, backward_chunk
            )
            return

        forward_select = (
            "SELECT rowid, * FROM %s WHERE rowid >= ? ORDER BY rowid LIMIT ?"
            % (table,)
        )

        backward_select = (
            "SELECT rowid, * FROM %s WHERE rowid <= ? ORDER BY rowid LIMIT ?"
            % (table,)
        )

        do_forward = [True]
        do_backward = [True]

        while True:
            def r(txn):
                forward_rows = []
                backward_rows = []
                if do_forward[0]:
                    txn.execute(forward_select, (forward_chunk, self.batch_size,))
                    forward_rows = txn.fetchall()
                    if not forward_rows:
                        do_forward[0] = False

                if do_backward[0]:
                    txn.execute(backward_select, (backward_chunk, self.batch_size,))
                    backward_rows = txn.fetchall()
                    if not backward_rows:
                        do_backward[0] = False

                if forward_rows or backward_rows:
                    headers = [column[0] for column in txn.description]
                else:
                    headers = None

                return headers, forward_rows, backward_rows

            headers, frows, brows = yield self.sqlite_store.runInteraction(
                "select", r
            )

            if frows or brows:
                if frows:
                    forward_chunk = max(row[0] for row in frows) + 1
                if brows:
                    backward_chunk = min(row[0] for row in brows) - 1

                rows = frows + brows
                self._convert_rows(table, headers, rows)

                def insert(txn):
                    self.postgres_store.insert_many_txn(
                        txn, table, headers[1:], rows
                    )

                    self.postgres_store._simple_update_one_txn(
                        txn,
                        table="port_from_sqlite3",
                        keyvalues={"table_name": table},
                        updatevalues={
                            "forward_rowid": forward_chunk,
                            "backward_rowid": backward_chunk,
                        },
                    )

                yield self.postgres_store.execute(insert)

                postgres_size += len(rows)

                self.progress.update(table, postgres_size)
            else:
                return
    @defer.inlineCallbacks
    def handle_search_table(self, postgres_size, table_size, forward_chunk,
                            backward_chunk):
        select = (
            "SELECT es.rowid, es.*, e.origin_server_ts, e.stream_ordering"
            " FROM event_search as es"
            " INNER JOIN events AS e USING (event_id, room_id)"
            " WHERE es.rowid >= ?"
            " ORDER BY es.rowid LIMIT ?"
        )

        while True:
            def r(txn):
                txn.execute(select, (forward_chunk, self.batch_size,))

                rows = txn.fetchall()
                headers = [column[0] for column in txn.description]

                return headers, rows

            headers, rows = yield self.sqlite_store.runInteraction("select", r)

            if rows:
                forward_chunk = rows[-1][0] + 1

                # We have to treat event_search differently since it has a
                # different structure in the two different databases.
                def insert(txn):
                    sql = (
                        "INSERT INTO event_search (event_id, room_id, key,"
                        " sender, vector, origin_server_ts, stream_ordering)"
                        " VALUES (?,?,?,?,to_tsvector('english', ?),?,?)"
                    )

                    rows_dict = [
                        dict(zip(headers, row))
                        for row in rows
                    ]

                    txn.executemany(sql, [
                        (
                            row["event_id"],
                            row["room_id"],
                            row["key"],
                            row["sender"],
                            row["value"],
                            row["origin_server_ts"],
                            row["stream_ordering"],
                        )
                        for row in rows_dict
                    ])

                    self.postgres_store._simple_update_one_txn(
                        txn,
                        table="port_from_sqlite3",
                        keyvalues={"table_name": "event_search"},
                        updatevalues={
                            "forward_rowid": forward_chunk,
                            "backward_rowid": backward_chunk,
                        },
                    )

                yield self.postgres_store.execute(insert)

                postgres_size += len(rows)

                self.progress.update("event_search", postgres_size)
            else:
                return
    def setup_db(self, db_config, database_engine):
        db_conn = database_engine.module.connect(
            **{
                k: v for k, v in db_config.get("args", {}).items()
                if not k.startswith("cp_")
            }
        )

        prepare_database(db_conn, database_engine, config=None)

        db_conn.commit()
    @defer.inlineCallbacks
    def run(self):
        try:
            sqlite_db_pool = adbapi.ConnectionPool(
                self.sqlite_config["name"],
                **self.sqlite_config["args"]
            )

            postgres_db_pool = adbapi.ConnectionPool(
                self.postgres_config["name"],
                **self.postgres_config["args"]
            )

            sqlite_engine = create_engine(sqlite_config)
            postgres_engine = create_engine(postgres_config)

            self.sqlite_store = Store(sqlite_db_pool, sqlite_engine)
            self.postgres_store = Store(postgres_db_pool, postgres_engine)

            yield self.postgres_store.execute(
                postgres_engine.check_database
            )

            # Step 1. Set up databases.
            self.progress.set_state("Preparing SQLite3")
            self.setup_db(sqlite_config, sqlite_engine)

            self.progress.set_state("Preparing PostgreSQL")
            self.setup_db(postgres_config, postgres_engine)

            # Step 2. Get tables.
            self.progress.set_state("Fetching tables")
            sqlite_tables = yield self.sqlite_store._simple_select_onecol(
                table="sqlite_master",
                keyvalues={
                    "type": "table",
                },
                retcol="name",
            )

            postgres_tables = yield self.postgres_store._simple_select_onecol(
                table="information_schema.tables",
                keyvalues={},
                retcol="distinct table_name",
            )

            tables = set(sqlite_tables) & set(postgres_tables)

            self.progress.set_state("Creating tables")

            logger.info("Found %d tables", len(tables))

            def create_port_table(txn):
                txn.execute(
                    "CREATE TABLE port_from_sqlite3 ("
                    " table_name varchar(100) NOT NULL UNIQUE,"
                    " forward_rowid bigint NOT NULL,"
                    " backward_rowid bigint NOT NULL"
                    ")"
                )

            # The old port script created a table with just a "rowid" column.
            # We want people to be able to rerun this script from an old port
            # so that they can pick up any missing events that were not
            # ported across.
            def alter_table(txn):
                txn.execute(
                    "ALTER TABLE IF EXISTS port_from_sqlite3"
                    " RENAME rowid TO forward_rowid"
                )
                txn.execute(
                    "ALTER TABLE IF EXISTS port_from_sqlite3"
                    " ADD backward_rowid bigint NOT NULL DEFAULT 0"
                )

            try:
                yield self.postgres_store.runInteraction(
                    "alter_table", alter_table
                )
            except Exception as e:
                logger.info("Failed to create port table: %s", e)

            try:
                yield self.postgres_store.runInteraction(
                    "create_port_table", create_port_table
                )
            except Exception as e:
                logger.info("Failed to create port table: %s", e)

            self.progress.set_state("Setting up")

            # Set up tables.
            setup_res = yield defer.gatherResults(
                [
                    self.setup_table(table)
                    for table in tables
                    if table not in ["schema_version", "applied_schema_deltas"]
                    and not table.startswith("sqlite_")
                ],
                consumeErrors=True,
            )

            # Process tables.
            yield defer.gatherResults(
                [
                    self.handle_table(*res)
                    for res in setup_res
                ],
                consumeErrors=True,
            )

            self.progress.done()
        except:
            global end_error_exec_info
            end_error_exec_info = sys.exc_info()
            logger.exception("")
        finally:
            reactor.stop()
    def _convert_rows(self, table, headers, rows):
        bool_col_names = BOOLEAN_COLUMNS.get(table, [])

        bool_cols = [
            i for i, h in enumerate(headers) if h in bool_col_names
        ]

        def conv(j, col):
            if j in bool_cols:
                return bool(col)
            return col

        for i, row in enumerate(rows):
            rows[i] = tuple(
                conv(j, col)
                for j, col in enumerate(row)
                if j > 0
            )
    @defer.inlineCallbacks
    def _setup_sent_transactions(self):
        # Only save things from the last day
        yesterday = int(time.time() * 1000) - 86400000

        # And save the max transaction id from each destination
        select = (
            "SELECT rowid, * FROM sent_transactions WHERE rowid IN ("
            "SELECT max(rowid) FROM sent_transactions"
            " GROUP BY destination"
            ")"
        )

        def r(txn):
            txn.execute(select)
            rows = txn.fetchall()
            headers = [column[0] for column in txn.description]

            ts_ind = headers.index('ts')

            return headers, [r for r in rows if r[ts_ind] < yesterday]

        headers, rows = yield self.sqlite_store.runInteraction(
            "select", r,
        )

        self._convert_rows("sent_transactions", headers, rows)

        inserted_rows = len(rows)
        if inserted_rows:
            max_inserted_rowid = max(r[0] for r in rows)

            def insert(txn):
                self.postgres_store.insert_many_txn(
                    txn, "sent_transactions", headers[1:], rows
                )

            yield self.postgres_store.execute(insert)
        else:
            max_inserted_rowid = 0

        def get_start_id(txn):
            txn.execute(
                "SELECT rowid FROM sent_transactions WHERE ts >= ?"
                " ORDER BY rowid ASC LIMIT 1",
                (yesterday,)
            )

            rows = txn.fetchall()
            if rows:
                return rows[0][0]
            else:
                return 1

        next_chunk = yield self.sqlite_store.execute(get_start_id)
        next_chunk = max(max_inserted_rowid + 1, next_chunk)

        yield self.postgres_store._simple_insert(
            table="port_from_sqlite3",
            values={
                "table_name": "sent_transactions",
                "forward_rowid": next_chunk,
                "backward_rowid": 0,
            }
        )

        def get_sent_table_size(txn):
            txn.execute(
                "SELECT count(*) FROM sent_transactions"
                " WHERE ts >= ?",
                (yesterday,)
            )
            size, = txn.fetchone()
            return int(size)

        remaining_count = yield self.sqlite_store.execute(
            get_sent_table_size
        )

        total_count = remaining_count + inserted_rows

        defer.returnValue((next_chunk, inserted_rows, total_count))
    @defer.inlineCallbacks
    def _get_remaining_count_to_port(self, table, forward_chunk, backward_chunk):
        frows = yield self.sqlite_store.execute_sql(
            "SELECT count(*) FROM %s WHERE rowid >= ?" % (table,),
            forward_chunk,
        )

        brows = yield self.sqlite_store.execute_sql(
            "SELECT count(*) FROM %s WHERE rowid <= ?" % (table,),
            backward_chunk,
        )

        defer.returnValue(frows[0][0] + brows[0][0])

    @defer.inlineCallbacks
    def _get_already_ported_count(self, table):
        rows = yield self.postgres_store.execute_sql(
            "SELECT count(*) FROM %s" % (table,),
        )

        defer.returnValue(rows[0][0])

    @defer.inlineCallbacks
    def _get_total_count_to_port(self, table, forward_chunk, backward_chunk):
        remaining, done = yield defer.gatherResults(
            [
                self._get_remaining_count_to_port(table, forward_chunk, backward_chunk),
                self._get_already_ported_count(table),
            ],
            consumeErrors=True,
        )

        remaining = int(remaining) if remaining else 0
        done = int(done) if done else 0

        defer.returnValue((done, remaining + done))
##############################################
###### The following is simply UI stuff ######
##############################################


class Progress(object):
    """Used to report progress of the port
    """
    def __init__(self):
        self.tables = {}

        self.start_time = int(time.time())

    def add_table(self, table, cur, size):
        self.tables[table] = {
            "start": cur,
            "num_done": cur,
            "total": size,
            "perc": int(cur * 100 / size),
        }

    def update(self, table, num_done):
        data = self.tables[table]
        data["num_done"] = num_done
        data["perc"] = int(num_done * 100 / data["total"])

    def done(self):
        pass
| class CursesProgress(Progress): | |||
| """Reports progress to a curses window | |||
| """ | |||
| def __init__(self, stdscr): | |||
| self.stdscr = stdscr | |||
| curses.use_default_colors() | |||
| curses.curs_set(0) | |||
| curses.init_pair(1, curses.COLOR_RED, -1) | |||
| curses.init_pair(2, curses.COLOR_GREEN, -1) | |||
| self.last_update = 0 | |||
| self.finished = False | |||
| self.total_processed = 0 | |||
| self.total_remaining = 0 | |||
| super(CursesProgress, self).__init__() | |||
| def update(self, table, num_done): | |||
| super(CursesProgress, self).update(table, num_done) | |||
| self.total_processed = 0 | |||
| self.total_remaining = 0 | |||
| for table, data in self.tables.items(): | |||
| self.total_processed += data["num_done"] - data["start"] | |||
| self.total_remaining += data["total"] - data["num_done"] | |||
| self.render() | |||
| def render(self, force=False): | |||
| now = time.time() | |||
| if not force and now - self.last_update < 0.2: | |||
| # reactor.callLater(1, self.render) | |||
| return | |||
| self.stdscr.clear() | |||
| rows, cols = self.stdscr.getmaxyx() | |||
| duration = int(now) - int(self.start_time) | |||
| minutes, seconds = divmod(duration, 60) | |||
| duration_str = '%02dm %02ds' % (minutes, seconds,) | |||
| if self.finished: | |||
| status = "Time spent: %s (Done!)" % (duration_str,) | |||
| else: | |||
| if self.total_processed > 0: | |||
| left = float(self.total_remaining) / self.total_processed | |||
| est_remaining = (int(now) - self.start_time) * left | |||
| est_remaining_str = '%02dm %02ds remaining' % divmod(est_remaining, 60) | |||
| else: | |||
| est_remaining_str = "Unknown" | |||
| status = ( | |||
| "Time spent: %s (est. remaining: %s)" | |||
| % (duration_str, est_remaining_str,) | |||
| ) | |||
| self.stdscr.addstr( | |||
| 0, 0, | |||
| status, | |||
| curses.A_BOLD, | |||
| ) | |||
| max_len = max(len(t) for t in self.tables) | |||
| left_margin = 5 | |||
| middle_space = 1 | |||
| items = sorted( | |||
| self.tables.items(), | |||
| key=lambda i: (i[1]["perc"], i[0]), | |||
| ) | |||
| for i, (table, data) in enumerate(items): | |||
| if i + 2 >= rows: | |||
| break | |||
| perc = data["perc"] | |||
| color = curses.color_pair(2) if perc == 100 else curses.color_pair(1) | |||
| self.stdscr.addstr( | |||
| i + 2, left_margin + max_len - len(table), | |||
| table, | |||
| curses.A_BOLD | color, | |||
| ) | |||
| size = 20 | |||
| progress = "[%s%s]" % ( | |||
| "#" * int(perc * size / 100), | |||
| " " * (size - int(perc * size / 100)), | |||
| ) | |||
| self.stdscr.addstr( | |||
| i + 2, left_margin + max_len + middle_space, | |||
| "%s %3d%% (%d/%d)" % (progress, perc, data["num_done"], data["total"]), | |||
| ) | |||
| if self.finished: | |||
| self.stdscr.addstr( | |||
| rows - 1, 0, | |||
| "Press any key to exit...", | |||
| ) | |||
| self.stdscr.refresh() | |||
| self.last_update = time.time() | |||
| def done(self): | |||
| self.finished = True | |||
| self.render(True) | |||
| self.stdscr.getch() | |||
| def set_state(self, state): | |||
| self.stdscr.clear() | |||
| self.stdscr.addstr( | |||
| 0, 0, | |||
| state + "...", | |||
| curses.A_BOLD, | |||
| ) | |||
| self.stdscr.refresh() | |||
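The ASCII progress bar that `render()` draws per table can be sketched on its own, with the same arithmetic and no curses dependency:

```python
def make_bar(perc, size=20):
    # Same arithmetic as render(): filled '#' marks, padded with spaces.
    filled = int(perc * size / 100)
    return "[%s%s]" % ("#" * filled, " " * (size - filled))

print(make_bar(45))   # 9 '#' marks followed by 11 spaces
print(make_bar(100))  # a fully filled bar
```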
| class TerminalProgress(Progress): | |||
| """Just prints progress to the terminal | |||
| """ | |||
| def update(self, table, num_done): | |||
| super(TerminalProgress, self).update(table, num_done) | |||
| data = self.tables[table] | |||
| print("%s: %d%% (%d/%d)" % ( | |||
| table, data["perc"], | |||
| data["num_done"], data["total"], | |||
| )) | |||
| def set_state(self, state): | |||
| print(state + "...") | |||
| ############################################## | |||
| ############################################## | |||
| if __name__ == "__main__": | |||
| parser = argparse.ArgumentParser( | |||
| description="A script to port an existing synapse SQLite database to" | |||
| " a new PostgreSQL database." | |||
| ) | |||
| parser.add_argument("-v", action='store_true') | |||
| parser.add_argument( | |||
| "--sqlite-database", required=True, | |||
| help="The snapshot of the SQLite database file. This must not be" | |||
| " currently used by a running synapse server" | |||
| ) | |||
| parser.add_argument( | |||
| "--postgres-config", type=argparse.FileType('r'), required=True, | |||
| help="The database config file for the PostgreSQL database" | |||
| ) | |||
| parser.add_argument( | |||
| "--curses", action='store_true', | |||
| help="display a curses based progress UI" | |||
| ) | |||
| parser.add_argument( | |||
| "--batch-size", type=int, default=1000, | |||
| help="The number of rows to select from the SQLite table each" | |||
| " iteration [default=1000]", | |||
| ) | |||
| args = parser.parse_args() | |||
| logging_config = { | |||
| "level": logging.DEBUG if args.v else logging.INFO, | |||
| "format": "%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(message)s" | |||
| } | |||
| if args.curses: | |||
| logging_config["filename"] = "port-synapse.log" | |||
| logging.basicConfig(**logging_config) | |||
| sqlite_config = { | |||
| "name": "sqlite3", | |||
| "args": { | |||
| "database": args.sqlite_database, | |||
| "cp_min": 1, | |||
| "cp_max": 1, | |||
| "check_same_thread": False, | |||
| }, | |||
| } | |||
| postgres_config = yaml.safe_load(args.postgres_config) | |||
| if "database" in postgres_config: | |||
| postgres_config = postgres_config["database"] | |||
| if "name" not in postgres_config: | |||
| sys.stderr.write("Malformed database config: no 'name'\n") | |||
| sys.exit(2) | |||
| if postgres_config["name"] != "psycopg2": | |||
| sys.stderr.write("Database must use 'psycopg2' connector.\n") | |||
| sys.exit(3) | |||
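The validation above accepts a database config either at the top level or nested under a `database` key (as in Synapse's `homeserver.yaml`). A standalone sketch of that unwrapping, using plain dicts in place of parsed YAML:

```python
def unwrap_database_config(cfg):
    # Accept either {"name": ...} or {"database": {"name": ...}}.
    if "database" in cfg:
        cfg = cfg["database"]
    if "name" not in cfg:
        raise ValueError("Malformed database config: no 'name'")
    if cfg["name"] != "psycopg2":
        raise ValueError("Database must use 'psycopg2' connector.")
    return cfg

nested = {"database": {"name": "psycopg2", "args": {"user": "synapse"}}}
print(unwrap_database_config(nested)["name"])  # → psycopg2
```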
| def start(stdscr=None): | |||
| if stdscr: | |||
| progress = CursesProgress(stdscr) | |||
| else: | |||
| progress = TerminalProgress() | |||
| porter = Porter( | |||
| sqlite_config=sqlite_config, | |||
| postgres_config=postgres_config, | |||
| progress=progress, | |||
| batch_size=args.batch_size, | |||
| ) | |||
| reactor.callWhenRunning(porter.run) | |||
| reactor.run() | |||
| if args.curses: | |||
| curses.wrapper(start) | |||
| else: | |||
| start() | |||
| if end_error_exec_info: | |||
| exc_type, exc_value, exc_traceback = end_error_exec_info | |||
| traceback.print_exception(exc_type, exc_value, exc_traceback) | |||
| @@ -0,0 +1,62 @@ | |||
| [docker-ce-stable] | |||
| name=Docker CE Stable - $basearch | |||
| baseurl=https://download.docker.com/linux/centos/7/$basearch/stable | |||
| enabled=1 | |||
| gpgcheck=1 | |||
| gpgkey=https://download.docker.com/linux/centos/gpg | |||
| [docker-ce-stable-debuginfo] | |||
| name=Docker CE Stable - Debuginfo $basearch | |||
| baseurl=https://download.docker.com/linux/centos/7/debug-$basearch/stable | |||
| enabled=0 | |||
| gpgcheck=1 | |||
| gpgkey=https://download.docker.com/linux/centos/gpg | |||
| [docker-ce-stable-source] | |||
| name=Docker CE Stable - Sources | |||
| baseurl=https://download.docker.com/linux/centos/7/source/stable | |||
| enabled=0 | |||
| gpgcheck=1 | |||
| gpgkey=https://download.docker.com/linux/centos/gpg | |||
| [docker-ce-edge] | |||
| name=Docker CE Edge - $basearch | |||
| baseurl=https://download.docker.com/linux/centos/7/$basearch/edge | |||
| enabled=0 | |||
| gpgcheck=1 | |||
| gpgkey=https://download.docker.com/linux/centos/gpg | |||
| [docker-ce-edge-debuginfo] | |||
| name=Docker CE Edge - Debuginfo $basearch | |||
| baseurl=https://download.docker.com/linux/centos/7/debug-$basearch/edge | |||
| enabled=0 | |||
| gpgcheck=1 | |||
| gpgkey=https://download.docker.com/linux/centos/gpg | |||
| [docker-ce-edge-source] | |||
| name=Docker CE Edge - Sources | |||
| baseurl=https://download.docker.com/linux/centos/7/source/edge | |||
| enabled=0 | |||
| gpgcheck=1 | |||
| gpgkey=https://download.docker.com/linux/centos/gpg | |||
| [docker-ce-test] | |||
| name=Docker CE Test - $basearch | |||
| baseurl=https://download.docker.com/linux/centos/7/$basearch/test | |||
| enabled=0 | |||
| gpgcheck=1 | |||
| gpgkey=https://download.docker.com/linux/centos/gpg | |||
| [docker-ce-test-debuginfo] | |||
| name=Docker CE Test - Debuginfo $basearch | |||
| baseurl=https://download.docker.com/linux/centos/7/debug-$basearch/test | |||
| enabled=0 | |||
| gpgcheck=1 | |||
| gpgkey=https://download.docker.com/linux/centos/gpg | |||
| [docker-ce-test-source] | |||
| name=Docker CE Test - Sources | |||
| baseurl=https://download.docker.com/linux/centos/7/source/test | |||
| enabled=0 | |||
| gpgcheck=1 | |||
| gpgkey=https://download.docker.com/linux/centos/gpg | |||
| @@ -0,0 +1,32 @@ | |||
| --- | |||
| - name: Fail if playbook called incorrectly | |||
| fail: msg="The `local_path_media_store` variable needs to be provided to this playbook, via --extra-vars" | |||
| when: "local_path_media_store is not defined or local_path_media_store.startswith('<')" | |||
| - name: Check if the provided media store directory exists | |||
| stat: path="{{ local_path_media_store }}" | |||
| delegate_to: 127.0.0.1 | |||
| become: false | |||
| register: local_path_media_store_stat | |||
| - name: Fail if provided media_store directory doesn't exist on the local machine | |||
| fail: msg="File cannot be found on the local machine at {{ local_path_media_store }}" | |||
| when: "not local_path_media_store_stat.stat.exists or not local_path_media_store_stat.stat.isdir" | |||
| - name: Ensure matrix-synapse is stopped | |||
| service: name=matrix-synapse state=stopped daemon_reload=yes | |||
| register: stopping_result | |||
| - name: Ensure provided media_store directory is copied to the server | |||
| synchronize: | |||
| src: "{{ local_path_media_store }}/" | |||
| dest: "{{ matrix_synapse_data_path }}/media_store" | |||
| delete: yes | |||
| - name: Ensure Matrix Synapse is started (if it previously was) | |||
| service: name="{{ item }}" state=started daemon_reload=yes | |||
| when: stopping_result.changed | |||
| with_items: | |||
| - matrix-synapse | |||
| - matrix-nginx-proxy | |||
| @@ -0,0 +1,78 @@ | |||
| --- | |||
| - name: Fail if playbook called incorrectly | |||
| fail: msg="The `local_path_homeserver_db` variable needs to be provided to this playbook, via --extra-vars" | |||
| when: "local_path_homeserver_db is not defined or local_path_homeserver_db.startswith('<')" | |||
| - name: Check if the provided SQLite homeserver.db file exists | |||
| stat: path="{{ local_path_homeserver_db }}" | |||
| delegate_to: 127.0.0.1 | |||
| become: false | |||
| register: local_path_homeserver_db_stat | |||
| - name: Fail if provided SQLite homeserver.db file doesn't exist | |||
| fail: msg="File cannot be found on the local machine at {{ local_path_homeserver_db }}" | |||
| when: not local_path_homeserver_db_stat.stat.exists | |||
| - name: Ensure scratchpad directory exists | |||
| file: | |||
| path: "{{ matrix_scratchpad_dir }}" | |||
| state: directory | |||
| mode: 0755 | |||
| owner: "{{ matrix_user_username }}" | |||
| group: "{{ matrix_user_username }}" | |||
| - name: Ensure provided SQLite homeserver.db file is copied to scratchpad directory on the server | |||
| synchronize: | |||
| src: "{{ local_path_homeserver_db }}" | |||
| dest: "{{ matrix_scratchpad_dir }}/homeserver.db" | |||
| - name: Ensure matrix-postgres is stopped | |||
| service: name=matrix-postgres state=stopped daemon_reload=yes | |||
| - name: Ensure postgres data is wiped out | |||
| file: | |||
| path: "{{ matrix_postgres_data_path }}" | |||
| state: absent | |||
| - name: Ensure postgres data path exists | |||
| file: | |||
| path: "{{ matrix_postgres_data_path }}" | |||
| state: directory | |||
| mode: 0700 | |||
| owner: "{{ matrix_user_username }}" | |||
| group: "{{ matrix_user_username }}" | |||
| - name: Ensure matrix-postgres is started | |||
| service: name=matrix-postgres state=restarted daemon_reload=yes | |||
| - name: Wait a while, so that Postgres can manage to start | |||
| pause: seconds=7 | |||
| # Fixes a problem with porting the `user_directory_search` table. | |||
| # https://github.com/matrix-org/synapse/issues/2287 | |||
| - name: Ensure synapse_port_db_with_patch exists | |||
| copy: | |||
| src: "{{ role_path }}/files/synapse_port_db_with_patch" | |||
| dest: "{{ matrix_scratchpad_dir }}/synapse_port_db_with_patch" | |||
| - name: Importing SQLite database into Postgres | |||
| docker_container: | |||
| name: matrix-synapse-migrate | |||
| image: "{{ docker_matrix_image }}" | |||
| detach: no | |||
| cleanup: yes | |||
| entrypoint: /usr/bin/python | |||
| command: "/usr/local/bin/synapse_port_db_with_patch --sqlite-database /scratchpad/homeserver.db --postgres-config /data/homeserver.yaml" | |||
| user: "{{ matrix_user_uid }}:{{ matrix_user_gid }}" | |||
| volumes: | |||
| - "{{ matrix_synapse_data_path }}:/data" | |||
| - "{{ matrix_scratchpad_dir }}:/scratchpad" | |||
| - "{{ matrix_scratchpad_dir }}/synapse_port_db_with_patch:/usr/local/bin/synapse_port_db_with_patch" | |||
| links: | |||
| - "matrix-postgres:postgres" | |||
| - name: Ensure scratchpad directory is deleted | |||
| file: | |||
| path: "{{ matrix_scratchpad_dir }}" | |||
| state: absent | |||
| @@ -0,0 +1,45 @@ | |||
| --- | |||
| - include: tasks/setup_base.yml | |||
| tags: | |||
| - setup-main | |||
| - include: tasks/setup_main.yml | |||
| tags: | |||
| - setup-main | |||
| - include: tasks/setup_ssl.yml | |||
| tags: | |||
| - setup-main | |||
| - include: tasks/setup_postgres.yml | |||
| tags: | |||
| - setup-main | |||
| - include: tasks/setup_synapse.yml | |||
| tags: | |||
| - setup-main | |||
| - include: tasks/setup_riot_web.yml | |||
| tags: | |||
| - setup-main | |||
| - include: tasks/setup_nginx_proxy.yml | |||
| tags: | |||
| - setup-main | |||
| - include: tasks/start.yml | |||
| tags: | |||
| - start | |||
| - include: tasks/register_user.yml | |||
| tags: | |||
| - register-user | |||
| - include: tasks/import_sqlite_db.yml | |||
| tags: | |||
| - import-sqlite-db | |||
| - include: tasks/import_media_store.yml | |||
| tags: | |||
| - import-media-store | |||
| @@ -0,0 +1,20 @@ | |||
| --- | |||
| - name: Fail if playbook called incorrectly | |||
| fail: msg="The `username` variable needs to be provided to this playbook, via --extra-vars" | |||
| when: "username is not defined or username == '<your-username>'" | |||
| - name: Fail if playbook called incorrectly | |||
| fail: msg="The `password` variable needs to be provided to this playbook, via --extra-vars" | |||
| when: "password is not defined or password == '<your-password>'" | |||
| - name: Ensure matrix-synapse is started | |||
| service: name=matrix-synapse state=started daemon_reload=yes | |||
| register: start_result | |||
| - name: Wait a while, so that Matrix Synapse can manage to start | |||
| pause: seconds=7 | |||
| when: start_result.changed | |||
| - name: Register user | |||
| shell: "matrix-synapse-register-user {{ username }} {{ password }}" | |||
| @@ -0,0 +1,46 @@ | |||
| --- | |||
| - name: Ensure Docker repository is enabled (CentOS) | |||
| template: | |||
| src: "{{ role_path }}/files/yum.repos.d/{{ item }}" | |||
| dest: "/etc/yum.repos.d/{{ item }}" | |||
| owner: "root" | |||
| group: "root" | |||
| mode: 0644 | |||
| with_items: | |||
| - docker-ce.repo | |||
| when: ansible_distribution == 'CentOS' | |||
| - name: Ensure Docker's RPM key is trusted | |||
| rpm_key: | |||
| state: present | |||
| key: https://download.docker.com/linux/centos/gpg | |||
| when: ansible_distribution == 'CentOS' | |||
| - name: Ensure yum packages are installed (base) | |||
| yum: name="{{ item }}" state=latest update_cache=yes | |||
| with_items: | |||
| - bash-completion | |||
| - docker-ce | |||
| - docker-python | |||
| - ntp | |||
| when: ansible_distribution == 'CentOS' | |||
| - name: Ensure Docker is started and autoruns | |||
| service: name=docker state=started enabled=yes | |||
| - name: Ensure firewalld is started and autoruns | |||
| service: name=firewalld state=started enabled=yes | |||
| - name: Ensure ntpd is started and autoruns | |||
| service: name=ntpd state=started enabled=yes | |||
| - name: Ensure SELinux disabled | |||
| selinux: state=disabled | |||
| - name: Ensure correct hostname set | |||
| hostname: name="{{ hostname_matrix }}" | |||
| - name: Ensure timezone is UTC | |||
| timezone: | |||
| name: UTC | |||
| @@ -0,0 +1,20 @@ | |||
| --- | |||
| - name: Ensure Matrix group is created | |||
| group: | |||
| name: "{{ matrix_user_username }}" | |||
| gid: "{{ matrix_user_gid }}" | |||
| state: present | |||
| - name: Ensure Matrix user is created | |||
| user: | |||
| name: "{{ matrix_user_username }}" | |||
| uid: "{{ matrix_user_uid }}" | |||
| state: present | |||
| group: "{{ matrix_user_username }}" | |||
| - name: Ensure environment variables data path exists | |||
| file: | |||
| path: "{{ matrix_environment_variables_data_path }}" | |||
| state: directory | |||
| mode: 0700 | |||
| @@ -0,0 +1,41 @@ | |||
| --- | |||
| - name: Ensure Matrix nginx-proxy paths exists | |||
| file: | |||
| path: "{{ item }}" | |||
| state: directory | |||
| mode: 0750 | |||
| owner: root | |||
| group: root | |||
| with_items: | |||
| - "{{ matrix_nginx_proxy_data_path }}" | |||
| - "{{ matrix_nginx_proxy_confd_path }}" | |||
| - name: Ensure nginx Docker image is pulled | |||
| docker_image: | |||
| name: "{{ docker_nginx_image }}" | |||
| - name: Ensure Matrix Synapse proxy vhost configured | |||
| template: | |||
| src: "{{ role_path }}/templates/nginx-conf.d/{{ item }}.j2" | |||
| dest: "{{ matrix_nginx_proxy_confd_path }}/{{ item }}" | |||
| mode: 0644 | |||
| with_items: | |||
| - "matrix-synapse.conf" | |||
| - "matrix-riot-web.conf" | |||
| - name: Allow access to nginx proxy ports in firewalld | |||
| firewalld: | |||
| service: "{{ item }}" | |||
| state: enabled | |||
| immediate: yes | |||
| permanent: yes | |||
| with_items: | |||
| - "http" | |||
| - "https" | |||
| - name: Ensure matrix-nginx-proxy.service installed | |||
| template: | |||
| src: "{{ role_path }}/templates/systemd/matrix-nginx-proxy.service.j2" | |||
| dest: "/etc/systemd/system/matrix-nginx-proxy.service" | |||
| mode: 0644 | |||
| @@ -0,0 +1,34 @@ | |||
| --- | |||
| - name: Ensure postgres data path exists | |||
| file: | |||
| path: "{{ matrix_postgres_data_path }}" | |||
| state: directory | |||
| mode: 0700 | |||
| owner: "{{ matrix_user_username }}" | |||
| group: "{{ matrix_user_username }}" | |||
| - name: Ensure postgres Docker image is pulled | |||
| docker_image: | |||
| name: "{{ docker_postgres_image }}" | |||
| - name: Ensure Postgres environment variables file created | |||
| template: | |||
| src: "{{ role_path }}/templates/env/{{ item }}.j2" | |||
| dest: "{{ matrix_environment_variables_data_path }}/{{ item }}" | |||
| mode: 0640 | |||
| with_items: | |||
| - "env-postgres-pgsql-docker" | |||
| - "env-postgres-server-docker" | |||
| - name: Ensure matrix-postgres-cli script created | |||
| template: | |||
| src: "{{ role_path }}/templates/usr-local-bin/matrix-postgres-cli.j2" | |||
| dest: "/usr/local/bin/matrix-postgres-cli" | |||
| mode: 0750 | |||
| - name: Ensure matrix-postgres.service installed | |||
| template: | |||
| src: "{{ role_path }}/templates/systemd/matrix-postgres.service.j2" | |||
| dest: "/etc/systemd/system/matrix-postgres.service" | |||
| mode: 0644 | |||
| @@ -0,0 +1,30 @@ | |||
| --- | |||
| - name: Ensure Matrix riot-web paths exists | |||
| file: | |||
| path: "{{ matrix_nginx_riot_web_data_path }}" | |||
| state: directory | |||
| mode: 0750 | |||
| owner: "{{ matrix_user_username }}" | |||
| group: "{{ matrix_user_username }}" | |||
| - name: Ensure riot-web Docker image is pulled | |||
| docker_image: | |||
| name: "{{ docker_riot_image }}" | |||
| - name: Ensure Matrix riot-web configured | |||
| template: | |||
| src: "{{ role_path }}/templates/riot-web/{{ item }}.j2" | |||
| dest: "{{ matrix_nginx_riot_web_data_path }}/{{ item }}" | |||
| mode: 0644 | |||
| owner: "{{ matrix_user_username }}" | |||
| group: "{{ matrix_user_username }}" | |||
| with_items: | |||
| - "riot.im.conf" | |||
| - "config.json" | |||
| - name: Ensure matrix-riot-web.service installed | |||
| template: | |||
| src: "{{ role_path }}/templates/systemd/matrix-riot-web.service.j2" | |||
| dest: "/etc/systemd/system/matrix-riot-web.service" | |||
| mode: 0644 | |||
| @@ -0,0 +1,37 @@ | |||
| --- | |||
| - name: Allow access to HTTP/HTTPS in firewalld | |||
| firewalld: | |||
| service: "{{ item }}" | |||
| state: enabled | |||
| immediate: yes | |||
| permanent: yes | |||
| with_items: | |||
| - http | |||
| - https | |||
| - name: Ensure acmetool Docker image is pulled | |||
| docker_image: | |||
| name: willwill/acme-docker | |||
| - name: Ensure SSL certificates path exists | |||
| file: | |||
| path: "{{ ssl_certs_path }}" | |||
| state: directory | |||
| mode: 0770 | |||
| owner: "{{ matrix_user_username }}" | |||
| group: "{{ matrix_user_username }}" | |||
| - name: Ensure SSL certificates are marked as wanted in acmetool | |||
| shell: >- | |||
| /usr/bin/docker run --rm --name acmetool-host-grab -p 80:80 | |||
| -v {{ ssl_certs_path }}:/certs | |||
| -e ACME_EMAIL={{ ssl_support_email }} | |||
| willwill/acme-docker | |||
| acmetool want {{ hostname_matrix }} {{ hostname_riot }} --xlog.severity=debug | |||
| - name: Ensure periodic SSL renewal cronjob configured | |||
| template: | |||
| src: "{{ role_path }}/templates/cron.d/ssl-certificate-renewal.j2" | |||
| dest: "/etc/cron.d/ssl-certificate-renewal" | |||
| mode: 0600 | |||
| @@ -0,0 +1,87 @@ | |||
| --- | |||
| - name: Ensure Matrix Synapse data path exists | |||
| file: | |||
| path: "{{ matrix_synapse_data_path }}" | |||
| state: directory | |||
| mode: 0750 | |||
| owner: "{{ matrix_user_username }}" | |||
| group: "{{ matrix_user_username }}" | |||
| - name: Ensure Matrix Docker image is pulled | |||
| docker_image: | |||
| name: "{{ docker_matrix_image }}" | |||
| - name: Generate initial Matrix config | |||
| docker_container: | |||
| name: matrix-config | |||
| image: "{{ docker_matrix_image }}" | |||
| detach: no | |||
| cleanup: yes | |||
| command: generate | |||
| env: | |||
| SERVER_NAME: "{{ hostname_matrix }}" | |||
| REPORT_STATS: "no" | |||
| user: "{{ matrix_user_uid }}:{{ matrix_user_gid }}" | |||
| volumes: | |||
| - "{{ matrix_synapse_data_path }}:/data" | |||
| - name: Augment Matrix config (configure SSL fullchain location) | |||
| lineinfile: "dest={{ matrix_synapse_data_path }}/homeserver.yaml" | |||
| args: | |||
| regexp: "^tls_certificate_path:" | |||
| line: 'tls_certificate_path: "/acmetool-certs/live/{{ hostname_matrix }}/fullchain"' | |||
| - name: Augment Matrix config (configure SSL private key location) | |||
| lineinfile: "dest={{ matrix_synapse_data_path }}/homeserver.yaml" | |||
| args: | |||
| regexp: "^tls_private_key_path:" | |||
| line: 'tls_private_key_path: "/acmetool-certs/live/{{ hostname_matrix }}/privkey"' | |||
| - name: Augment Matrix config (configure server name) | |||
| lineinfile: "dest={{ matrix_synapse_data_path }}/homeserver.yaml" | |||
| args: | |||
| regexp: "^server_name:" | |||
| line: 'server_name: "{{ hostname_identity }}"' | |||
| - name: Augment Matrix config (change database from SQLite to Postgres) | |||
| lineinfile: | |||
| dest: "{{ matrix_synapse_data_path }}/homeserver.yaml" | |||
| regexp: '(.*)name: "sqlite3"' | |||
| line: '\1name: "psycopg2"' | |||
| backrefs: yes | |||
| - name: Augment Matrix config (add the Postgres connection parameters) | |||
| lineinfile: | |||
| dest: "{{ matrix_synapse_data_path }}/homeserver.yaml" | |||
| regexp: '(.*)database: "(.*)homeserver.db"' | |||
| line: '\1user: "{{ matrix_postgres_connection_username }}"\n\1password: "{{ matrix_postgres_connection_password }}"\n\1database: "homeserver"\n\1host: "postgres"\n\1cp_min: 5\n\1cp_max: 10' | |||
| backrefs: yes | |||
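The `backrefs` trick above splices the Postgres connection parameters into the generated config by rewriting the `database:` line in place, reusing the captured indentation for each inserted line. Ansible's `lineinfile` uses Python regexes, so the substitution can be sketched like this (the input line and values are illustrative, not taken from a real config):

```python
import re

# Hypothetical line from a generated homeserver.yaml.
line = '        database: "/data/homeserver.db"'
pattern = r'(.*)database: "(.*)homeserver.db"'
# \1 re-inserts the captured indentation before every new line.
replacement = r'\1user: "synapse"\n\1database: "homeserver"\n\1host: "postgres"'
new_text = re.sub(pattern, replacement, line)
print(new_text)
```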
| - name: Allow access to Matrix ports in firewalld | |||
| firewalld: | |||
| port: "{{ item }}/tcp" | |||
| state: enabled | |||
| immediate: yes | |||
| permanent: yes | |||
| with_items: | |||
| - 3478 # Coturn | |||
| - 8448 # Matrix federation | |||
| - name: Ensure matrix-synapse.service installed | |||
| template: | |||
| src: "{{ role_path }}/templates/systemd/matrix-synapse.service.j2" | |||
| dest: "/etc/systemd/system/matrix-synapse.service" | |||
| mode: 0644 | |||
| - name: Ensure matrix-synapse-register-user script created | |||
| template: | |||
| src: "{{ role_path }}/templates/usr-local-bin/matrix-synapse-register-user.j2" | |||
| dest: "/usr/local/bin/matrix-synapse-register-user" | |||
| mode: 0750 | |||
| - name: Ensure periodic restarting of Matrix is configured (for SSL renewal) | |||
| template: | |||
| src: "{{ role_path }}/templates/cron.d/matrix-periodic-restarter.j2" | |||
| dest: "/etc/cron.d/matrix-periodic-restarter" | |||
| mode: 0600 | |||
| @@ -0,0 +1,13 @@ | |||
| --- | |||
| - name: Ensure matrix-postgres autoruns and is restarted | |||
| service: name=matrix-postgres enabled=yes state=restarted daemon_reload=yes | |||
| - name: Ensure matrix-synapse autoruns and is restarted | |||
| service: name=matrix-synapse enabled=yes state=restarted daemon_reload=yes | |||
| - name: Ensure matrix-riot-web autoruns and is restarted | |||
| service: name=matrix-riot-web enabled=yes state=restarted daemon_reload=yes | |||
| - name: Ensure matrix-nginx-proxy autoruns and is restarted | |||
| service: name=matrix-nginx-proxy enabled=yes state=restarted daemon_reload=yes | |||
| @@ -0,0 +1,11 @@ | |||
| MAILTO="{{ ssl_support_email }}" | |||
| # This periodically restarts the Matrix services | |||
| # to ensure they're using the latest SSL certificate | |||
| # in case it got renewed by the `ssl-certificate-renewal` cronjob | |||
| # (which happens once every ~2-3 months). | |||
| # | |||
| # Because `matrix-nginx-proxy.service` depends on `matrix-synapse.service`, | |||
| # both would be restarted. | |||
| {{ matrix_services_restart_cron_time_definition }} root /usr/bin/systemctl restart matrix-synapse.service | |||
| @@ -0,0 +1,14 @@ | |||
| MAILTO="{{ ssl_support_email }}" | |||
| # The goal of this cronjob is to ask acmetool to check | |||
| # the current SSL certificates and to see if some need renewal. | |||
| # If so, it attempts to renew them. | |||
| # | |||
| # Various services depend on these certificates and would need to be restarted. | |||
| # This is not our concern here. We simply make sure the certificates are up to date. | |||
| # Restarting of services happens on its own different schedule (other cronjobs). | |||
| # | |||
| # acmetool is supposed to bind to port :80 (forwarded to the host) and solve the challenge directly. | |||
| # We can afford to do that, because all our services run on other ports. | |||
| 15 4 */5 * * root /usr/bin/docker run --rm --name acmetool-once -p 80:80 -v {{ ssl_certs_path }}:/certs -e ACME_EMAIL={{ ssl_support_email }} willwill/acme-docker acmetool --batch reconcile # --xlog.severity=debug | |||
| @@ -0,0 +1,3 @@ | |||
| PGUSER={{ matrix_postgres_connection_username }} | |||
| PGPASSWORD={{ matrix_postgres_connection_password }} | |||
| PGDATABASE={{ matrix_postgres_db_name }} | |||
| @@ -0,0 +1,3 @@ | |||
| POSTGRES_USER={{ matrix_postgres_connection_username }} | |||
| POSTGRES_PASSWORD={{ matrix_postgres_connection_password }} | |||
| POSTGRES_DB={{ matrix_postgres_db_name }} | |||
| @@ -0,0 +1,21 @@ | |||
| server { | |||
| listen 443 ssl http2; | |||
| listen [::]:443 ssl http2; | |||
| server_name {{ hostname_riot }}; | |||
| server_tokens off; | |||
| root /dev/null; | |||
| ssl_certificate /acmetool-certs/live/{{ hostname_riot }}/fullchain; | |||
| ssl_certificate_key /acmetool-certs/live/{{ hostname_riot }}/privkey; | |||
| ssl_protocols TLSv1 TLSv1.1 TLSv1.2; | |||
| ssl_prefer_server_ciphers on; | |||
| ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH"; | |||
| location / { | |||
| proxy_pass http://riot:8765; | |||
| proxy_set_header X-Forwarded-For $remote_addr; | |||
| } | |||
| } | |||
| @@ -0,0 +1,21 @@ | |||
| server { | |||
| listen 443 ssl http2; | |||
| listen [::]:443 ssl http2; | |||
| server_name {{ hostname_matrix }}; | |||
| server_tokens off; | |||
| root /dev/null; | |||
| ssl_certificate /acmetool-certs/live/{{ hostname_matrix }}/fullchain; | |||
| ssl_certificate_key /acmetool-certs/live/{{ hostname_matrix }}/privkey; | |||
| ssl_protocols TLSv1 TLSv1.1 TLSv1.2; | |||
| ssl_prefer_server_ciphers on; | |||
| ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH"; | |||
| location /_matrix { | |||
| proxy_pass http://synapse:8008; | |||
| proxy_set_header X-Forwarded-For $remote_addr; | |||
| } | |||
| } | |||
| @@ -0,0 +1,15 @@ | |||
| { | |||
| "default_hs_url": "https://{{ hostname_matrix }}", | |||
| "default_is_url": "https://vector.im", | |||
| "brand": "Riot", | |||
| "integrations_ui_url": "https://scalar.vector.im/", | |||
| "integrations_rest_url": "https://scalar.vector.im/api", | |||
| "bug_report_endpoint_url": "https://riot.im/bugreports/submit", | |||
| "enableLabs": true, | |||
| "roomDirectory": { | |||
| "servers": [ | |||
| "matrix.org" | |||
| ] | |||
| }, | |||
| "welcomeUserId": "@riot-bot:matrix.org" | |||
| } | |||
| @@ -0,0 +1,3 @@ | |||
| -p 8765 | |||
| -A 0.0.0.0 | |||
| -c 3500 | |||
| @@ -0,0 +1,27 @@ | |||
| [Unit] | |||
| Description=Matrix nginx proxy server | |||
| After=docker.service | |||
| Requires=docker.service | |||
| Requires=matrix-synapse.service | |||
| After=matrix-synapse.service | |||
| Requires=matrix-riot-web.service | |||
| After=matrix-riot-web.service | |||
| [Service] | |||
| Type=simple | |||
| ExecStartPre=-/usr/bin/docker kill matrix-nginx-proxy | |||
| ExecStartPre=-/usr/bin/docker rm matrix-nginx-proxy | |||
| ExecStart=/usr/bin/docker run --rm --name matrix-nginx-proxy \ | |||
| -p 443:443 \ | |||
| --link matrix-synapse:synapse \ | |||
| --link matrix-riot-web:riot \ | |||
| -v {{ matrix_nginx_proxy_confd_path }}:/etc/nginx/conf.d \ | |||
| -v {{ ssl_certs_path }}:/acmetool-certs \ | |||
| {{ docker_nginx_image }} | |||
| ExecStop=-/usr/bin/docker kill matrix-nginx-proxy | |||
| ExecStop=-/usr/bin/docker rm matrix-nginx-proxy | |||
| Restart=always | |||
| RestartSec=30 | |||
| [Install] | |||
| WantedBy=multi-user.target | |||
| @@ -0,0 +1,24 @@ | |||
| [Unit] | |||
| Description=Matrix Postgres server | |||
| After=docker.service | |||
| Requires=docker.service | |||
| [Service] | |||
| Type=simple | |||
| ExecStartPre=-/usr/bin/docker stop matrix-postgres | |||
| ExecStartPre=-/usr/bin/docker rm matrix-postgres | |||
| ExecStartPre=-/usr/bin/mkdir {{ matrix_postgres_data_path }} | |||
| ExecStartPre=-/usr/bin/chown {{ matrix_user_uid }}:{{ matrix_user_gid }} {{ matrix_postgres_data_path }} | |||
| ExecStart=/usr/bin/docker run --rm --name matrix-postgres \ | |||
| --user={{ matrix_user_uid }}:{{ matrix_user_gid }} \ | |||
| --env-file={{ matrix_environment_variables_data_path }}/env-postgres-server-docker \ | |||
| -v {{ matrix_postgres_data_path }}:/var/lib/postgresql/data \ | |||
| -v /etc/passwd:/etc/passwd:ro \ | |||
| {{ docker_postgres_image }} | |||
| ExecStop=-/usr/bin/docker stop matrix-postgres | |||
| ExecStop=-/usr/bin/docker rm matrix-postgres | |||
| Restart=always | |||
| RestartSec=30 | |||
| [Install] | |||
| WantedBy=multi-user.target | |||
| @@ -0,0 +1,19 @@ | |||
| [Unit] | |||
| Description=Matrix Riot web server | |||
| After=docker.service | |||
| Requires=docker.service | |||
| [Service] | |||
| Type=simple | |||
| ExecStartPre=-/usr/bin/docker kill matrix-riot-web | |||
| ExecStartPre=-/usr/bin/docker rm matrix-riot-web | |||
| ExecStart=/usr/bin/docker run --rm --name matrix-riot-web \ | |||
| -v {{ matrix_nginx_riot_web_data_path }}:/data \ | |||
| {{ docker_riot_image }} | |||
| ExecStop=-/usr/bin/docker kill matrix-riot-web | |||
| ExecStop=-/usr/bin/docker rm matrix-riot-web | |||
| Restart=always | |||
| RestartSec=30 | |||
| [Install] | |||
| WantedBy=multi-user.target | |||
| @@ -0,0 +1,26 @@ | |||
| [Unit] | |||
| Description=Matrix Synapse server | |||
| After=docker.service | |||
| Requires=docker.service | |||
| Requires=matrix-postgres.service | |||
| After=matrix-postgres.service | |||
| [Service] | |||
| Type=simple | |||
| ExecStartPre=-/usr/bin/docker kill matrix-synapse | |||
| ExecStartPre=-/usr/bin/docker rm matrix-synapse | |||
| ExecStartPre=-/usr/bin/chown {{ matrix_user_username }}:{{ matrix_user_username }} {{ ssl_certs_path }} -R | |||
| ExecStart=/usr/bin/docker run --rm --name matrix-synapse \ | |||
| --link matrix-postgres:postgres \ | |||
| -p 8448:8448 \ | |||
| -p 3478:3478 \ | |||
| -v {{ matrix_synapse_data_path }}:/data \ | |||
| -v {{ ssl_certs_path }}:/acmetool-certs \ | |||
| {{ docker_matrix_image }} | |||
| ExecStop=-/usr/bin/docker kill matrix-synapse | |||
| ExecStop=-/usr/bin/docker rm matrix-synapse | |||
| Restart=always | |||
| RestartSec=30 | |||
| [Install] | |||
| WantedBy=multi-user.target | |||
| @@ -0,0 +1,3 @@ | |||
| #!/bin/bash | |||
| docker run --env-file={{ matrix_environment_variables_data_path }}/env-postgres-pgsql-docker -it --link=matrix-postgres:postgres {{ docker_postgres_image }} psql -h postgres | |||
| @@ -0,0 +1,11 @@ | |||
| #!/bin/bash | |||
| if [ $# -ne 2 ]; then | |||
| echo "Usage: $0 <username> <password>" | |||
| exit 1 | |||
| fi | |||
| user=$1 | |||
| password=$2 | |||
| docker exec matrix-synapse register_new_matrix_user -u "$user" -p "$password" -a -c /data/homeserver.yaml https://localhost:8448 | |||
| @@ -0,0 +1,10 @@ | |||
| --- | |||
| - name: "Set up a Matrix server" | |||
| hosts: "{{ target if target is defined else 'matrix-servers' }}" | |||
| become: true | |||
| vars_files: | |||
| - vars/vars.yml | |||
| roles: | |||
| - matrix-server | |||
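The play above can then be run with an invocation along these lines (the inventory and playbook file names are assumptions). The `target` extra-var overrides the default `matrix-servers` host group used in the `hosts:` expression:

```shell
# Run against the whole matrix-servers inventory group.
ansible-playbook -i inventory/hosts setup.yml

# Or limit the run to a single inventory host.
ansible-playbook -i inventory/hosts setup.yml --extra-vars "target=matrix.example.com"
```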
| @@ -0,0 +1,41 @@ | |||
| # The bare hostname which represents your identity. | |||
| # This is something like "example.com". | |||
| # Note: this playbook does not touch the server referenced here. | |||
| hostname_identity: "{{ host_specific_hostname_identity }}" | |||
| # This is where your data lives and what we set up here. | |||
| # This and the Riot hostname (see below) are expected to be on the same server. | |||
| hostname_matrix: "matrix.{{ hostname_identity }}" | |||
| # This is where you access the web UI from and what we set up here. | |||
| # This and the Matrix hostname (see above) are expected to be on the same server. | |||
| hostname_riot: "riot.{{ hostname_identity }}" | |||
| ssl_certs_path: /etc/pki/acmetool-certs | |||
| ssl_support_email: "{{ host_specific_ssl_support_email }}" | |||
| matrix_user_username: "matrix" | |||
| matrix_user_uid: 991 | |||
| matrix_user_gid: 991 | |||
| matrix_postgres_connection_username: "synapse" | |||
| matrix_postgres_connection_password: "synapse-password" | |||
| matrix_postgres_db_name: "homeserver" | |||
| matrix_base_data_path: "/matrix" | |||
| matrix_environment_variables_data_path: "{{ matrix_base_data_path }}/environment-variables" | |||
| matrix_synapse_data_path: "{{ matrix_base_data_path }}/synapse" | |||
| matrix_postgres_data_path: "{{ matrix_base_data_path }}/postgres" | |||
| matrix_nginx_proxy_data_path: "{{ matrix_base_data_path }}/nginx-proxy" | |||
| matrix_nginx_proxy_confd_path: "{{ matrix_nginx_proxy_data_path }}/conf.d" | |||
| matrix_nginx_riot_web_data_path: "{{ matrix_base_data_path }}/riot-web" | |||
| matrix_scratchpad_dir: "{{ matrix_base_data_path }}/scratchpad" | |||
| docker_postgres_image: "postgres:9.6.3-alpine" | |||
| docker_matrix_image: "silviof/docker-matrix" | |||
| docker_nginx_image: "nginx:1.13.3-alpine" | |||
| docker_riot_image: "silviof/matrix-riot-docker" | |||
| # Specifies when to restart the Matrix services so that | |||
| # a renewed SSL certificate can take effect (UTC time). | |||
| matrix_services_restart_cron_time_definition: "15 4 3 * *" | |||
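As a sanity check on that schedule: the five cron fields of `15 4 3 * *` are minute, hour, day-of-month, month, and day-of-week, so the restart fires at 04:15 UTC on the 3rd day of every month. A minimal shell sketch splitting the fields:

```shell
definition="15 4 3 * *"
set -f             # disable globbing so the literal "*" fields survive
set -- $definition # word-split the definition into $1..$5
echo "minute=$1 hour=$2 day-of-month=$3 month=$4 day-of-week=$5"
# Prints: minute=15 hour=4 day-of-month=3 month=* day-of-week=*
```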