The playbook can install the Jitsi video-conferencing platform and integrate it with Element clients (Element Web/Desktop, Android and iOS).
See the project’s documentation to learn what it does and why it might be useful to you.
Note: the configuration applied by the playbook is similar to the one of docker-jitsi-meet. You can refer to the official documentation for the Docker deployment here.
You may need to open the following ports on your server:

- 4443/tcp - RTP media fallback over TCP
- 10000/udp - RTP media over UDP. Depending on your firewall/NAT setup, incoming RTP packets on port 10000 may have the external IP of your firewall as the destination address, due to the usage of STUN in JVB (see jitsi_jvb_stun_servers).

To enable Jitsi, add the following configuration to your inventory/host_vars/matrix.example.com/vars.yml file:
jitsi_enabled: true
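If a host firewall is in use, the ports mentioned above can be opened with, for example, ufw (an illustrative sketch; adapt the commands to your firewall tooling):

```sh
# RTP media fallback over TCP
ufw allow 4443/tcp
# RTP media over UDP
ufw allow 10000/udp
```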
By default, this playbook installs Jitsi on the jitsi. subdomain (jitsi.example.com) and requires you to adjust your DNS records.
By tweaking the jitsi_hostname variable, you can easily make the service available at a different hostname than the default one.
Example additional configuration for your vars.yml file:
# Change the default hostname
jitsi_hostname: call.example.com
Once you’ve decided on the domain, you may need to adjust your DNS records to point the Jitsi domain to the Matrix server.
By default, you will need to create a CNAME record for jitsi. See Configuring DNS for details about DNS changes.
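For example, assuming the default jitsi. subdomain, such a record could look like this in zone-file syntax (illustrative; the TTL value of 1800 is an arbitrary example):

```
jitsi.example.com.  1800  IN  CNAME  matrix.example.com.
```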
By default, the Jitsi instance does not require anyone to log in and is open to use without an account. To control who is allowed to start meetings on your Jitsi instance, you’d need to enable Jitsi’s authentication and optionally guest mode.
Currently, there are three supported authentication methods: internal (default), matrix and ldap.
Note: authentication is not tested by the playbook’s self-checks. We therefore recommend that you verify yourself that authentication is configured properly. To test it, start a meeting at jitsi.example.com in your browser.
Authentication method: internal (default)

The default authentication mechanism is internal auth, which requires a Jitsi account to have been configured. This is the recommended method, as it also works in federated rooms.
With authentication enabled, all meetings have to be started by a registered user. After the meeting is started by that user, then guests are free to join. If the registered user is not yet present, the guests are put on hold in individual waiting rooms.
To enable authentication with a Jitsi account, add the following configuration to your vars.yml file. Make sure to replace USERNAME_… and PASSWORD_… with your own values.
jitsi_enable_auth: true
jitsi_enable_guests: true
jitsi_prosody_auth_internal_accounts:
  - username: "USERNAME_FOR_THE_FIRST_USER_HERE"
    password: "PASSWORD_FOR_THE_FIRST_USER_HERE"
  - username: "USERNAME_FOR_THE_SECOND_USER_HERE"
    password: "PASSWORD_FOR_THE_SECOND_USER_HERE"
Note: since a function for removing Jitsi accounts is not integrated into the playbook, these accounts cannot be removed from the Prosody server automatically, even if you subsequently remove them from your vars.yml file.
Note: if you get an error like Error: Account creation/modification not supported., it’s likely that you had previously installed Jitsi without auth/guest support. In such a case, you should look into Rebuilding your Jitsi installation.
Authentication method: matrix

⚠️ Warning: this probably breaks the Jitsi instance in federated rooms and does not allow sharing conference links with guests.
This authentication method requires Matrix User Verification Service, which can be installed using this playbook. By default, the playbook creates and configures a user-verification-service so that it runs locally.
To enable authentication with Matrix OpenID, add the following configuration to your vars.yml file:
jitsi_enable_auth: true
jitsi_auth_type: matrix
matrix_user_verification_service_enabled: true
For more information see also https://github.com/matrix-org/prosody-mod-auth-matrix-user-verification.
Authentication method: ldap

To enable authentication with LDAP, add the following configuration to your vars.yml file (adapt to your needs):
jitsi_enable_auth: true
jitsi_auth_type: ldap
jitsi_ldap_url: "ldap://ldap.example.com"
jitsi_ldap_base: "OU=People,DC=example.com"
#jitsi_ldap_binddn: ""
#jitsi_ldap_bindpw: ""
jitsi_ldap_filter: "uid=%u"
jitsi_ldap_auth_method: "bind"
jitsi_ldap_version: "3"
jitsi_ldap_use_tls: true
jitsi_ldap_tls_ciphers: ""
jitsi_ldap_tls_check_peer: true
jitsi_ldap_tls_cacert_file: "/etc/ssl/certs/ca-certificates.crt"
jitsi_ldap_tls_cacert_dir: "/etc/ssl/certs"
jitsi_ldap_start_tls: false
For more information refer to the docker-jitsi-meet and the saslauthd LDAP_SASLAUTHD documentation.
By default, the Jitsi Meet instance does not work for clients on the same LAN (Local Area Network) as the server, even when other participants connect from a WAN: the LAN clients get no video and audio. WAN-to-WAN calls work fine.

The reason is that the Jitsi VideoBridge advertises to LAN clients the IP address of the Docker container instead of that of the host. The docker-jitsi-meet documentation suggests setting the JVB_ADVERTISE_IPS environment variable to make it work.
To enable it, add the following configuration to your vars.yml file:
jitsi_jvb_container_extra_arguments:
  - '--env "JVB_ADVERTISE_IPS=<Local IP address of the host>"'
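If the host is reachable on several addresses, the docker-jitsi-meet documentation describes JVB_ADVERTISE_IPS as accepting a comma-separated list, so a configuration like the following may work (an illustrative sketch; 192.168.0.10 and 1.2.3.4 are placeholder values for your local and public IP):

```yaml
jitsi_jvb_container_extra_arguments:
  - '--env "JVB_ADVERTISE_IPS=192.168.0.10,1.2.3.4"'
```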
The playbook allows you to set a maximum number of participants allowed to join a Jitsi conference. By default, the number is not limited.
To set the max number of participants, add the following configuration to your vars.yml file (adapt to your needs):
jitsi_prosody_max_participants: 4 # example value
In the default Jitsi Meet configuration, gravatar.com is enabled as an avatar service. This results in third-party requests leaking data to Gravatar. Since Element clients already send the URL of configured Matrix avatars to Jitsi, the playbook disables Gravatar.
To enable Gravatar, add the following configuration to your vars.yml file:
jitsi_disable_gravatar: false
⚠️ Warning: This leaks information to a third party, namely the Gravatar-Service (unless configured otherwise: gravatar.com). Besides metadata, this includes the Matrix user_id and possibly the room identifier (via referrer header).
If you’d like to have Jitsi save up resources, add the following configuration to your vars.yml file (adapt to your needs):
jitsi_web_custom_config_extension: |
  config.enableLayerSuspension = true;
  config.disableAudioLevels = true;
  // Limit the number of video feeds forwarded to each client
  config.channelLastN = 4;

jitsi_web_config_resolution_width_ideal_and_max: 480
jitsi_web_config_resolution_height_ideal_and_max: 240
These settings suspend video streams of participants who are not currently visible (layer suspension), disable audio level indicators, limit the number of video feeds forwarded to each client, and cap the video resolution.
After configuring the playbook and potentially adjusting your DNS records, run the playbook with playbook tags as below:
ansible-playbook -i inventory/hosts setup.yml --tags=setup-all,start
The shortcut commands with the just program are also available: just install-all or just setup-all
just install-all is useful for maintaining your setup quickly (2x-5x faster than just setup-all) when its components remain unchanged. If you adjust your vars.yml to remove other components, you’d need to run just setup-all, or these components will still remain installed. Note these shortcuts run the ensure-matrix-users-created tag too.
You can use the self-hosted Jitsi server in multiple ways:
- by adding a widget to a room via Element Web (the one configured by the playbook at https://element.example.com). Just start a voice or video call in a room containing more than 2 members, which will create a Jitsi widget that utilizes your self-hosted Jitsi server.
- by adding a widget to a room via the Dimension integration manager. You’ll have to point the widget to your own Jitsi server manually. See our Dimension integration manager documentation page for more details. Naturally, Dimension would need to be installed first (the playbook doesn’t install it by default).
- directly (without any Matrix integration). Just go to https://jitsi.example.com
By default, a single JVB (Jitsi VideoBridge) is deployed on the same host as the Matrix server. To allow more video-conferences to happen at the same time, you’d need to provision additional JVB services on other hosts.
The settings below will allow you to provision those extra JVB instances. The instances will register themselves with the Prosody service and be available for Jicofo to route conferences to.
The jitsi_jvb_servers section in the hosts file

For additional JVBs, you’d need to add a section titled jitsi_jvb_servers to the Ansible hosts file with the details of the JVB hosts, as below:
[jitsi_jvb_servers]
jvb-2.example.com ansible_host=192.168.0.2
Make sure to replace jvb-2.example.com with your hostname for the JVB and 192.168.0.2 with your JVB’s external IP address, respectively.
You can add as many JVB hosts as you would like, by adding more lines with their details.
Each JVB requires a server ID to be set, so that it will be uniquely identified. The server ID allows Jitsi to keep track of which conferences are on which JVB.
The server ID can be set with the variable jitsi_jvb_server_id. It will end up as the JVB_WS_SERVER_ID environment variable in the JVB docker container.
To set the server ID to jvb-2, add the following configuration to either the vars.yml or hosts file (adapt to your needs). If you set the value in the hosts file, add jitsi_jvb_server_id=jvb-2 after your JVB’s external IP address, as below.
On vars.yml:
jitsi_jvb_server_id: 'jvb-2'
On hosts:
[jitsi_jvb_servers]
jvb-2.example.com ansible_host=192.168.0.2 jitsi_jvb_server_id=jvb-2
jvb-3.example.com ansible_host=192.168.0.3 jitsi_jvb_server_id=jvb-3
Alternatively, you can specify the variable as a parameter to the ansible command.
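For example, the server ID could be passed via Ansible’s standard --extra-vars flag, along the lines of the following sketch (adjust the invocation to match the one you actually use; the --limit value is a placeholder):

```sh
ansible-playbook -i inventory/hosts jitsi_jvb.yml \
  --limit jvb-2.example.com \
  --extra-vars="jitsi_jvb_server_id=jvb-2" \
  --tags=common,setup-additional-jitsi-jvb,start
```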
Note: the server ID jvb-1 is reserved for the JVB instance running on the Matrix host, therefore should not be used as the ID of an additional JVB host.
The additional JVBs will need to expose the colibri WebSocket port.
To expose the port, add the following configuration to your vars.yml file:
jitsi_jvb_container_colibri_ws_host_bind_port: 9090
The JVB will also need to know the location of the Prosody XMPP server.
Similar to the server ID (jitsi_jvb_server_id), this can be set for the JVB by using the variable jitsi_xmpp_server.
The Jitsi Prosody container is deployed on the Matrix server by default, so the value can be set to the Matrix domain. To set the value, add the following configuration to your vars.yml file:
jitsi_xmpp_server: "{{ matrix_domain }}"
Alternatively, the IP address of the Matrix server can be set. This can be useful if you would like to use a private IP address.
To set the IP address of the Matrix server, add the following configuration to your vars.yml file:
jitsi_xmpp_server: "192.168.0.1"
By default, the Matrix server does not expose the XMPP port (5222); only the XMPP container exposes it internally inside the host. This means that the first JVB (which runs on the Matrix server) can reach it but the additional JVBs cannot. Therefore, the XMPP server needs to expose the port, so that the additional JVBs can connect to it.
To expose the port and have Docker forward the port, add the following configuration to your vars.yml file:
jitsi_prosody_container_jvb_host_bind_port: 5222
To make Traefik reverse-proxy to these additional JVBs (living on other hosts), add the following configuration to your vars.yml file:
# Traefik proxying for additional JVBs. These can't be configured using Docker
# labels, like the first JVB is, because they run on different hosts, so we add
# the necessary configuration to the file provider.
traefik_provider_configuration_extension_yaml: |
  http:
    routers:
      {% for host in groups['jitsi_jvb_servers'] %}
      additional-{{ hostvars[host]['jitsi_jvb_server_id'] }}-router:
        entryPoints:
          - "{{ traefik_entrypoint_primary }}"
        rule: "Host(`{{ jitsi_hostname }}`) && PathPrefix(`/colibri-ws/{{ hostvars[host]['jitsi_jvb_server_id'] }}/`)"
        service: additional-{{ hostvars[host]['jitsi_jvb_server_id'] }}-service
        {% if traefik_entrypoint_primary != 'web' %}
        tls:
          certResolver: "{{ traefik_certResolver_primary }}"
        {% endif %}
      {% endfor %}
    services:
      {% for host in groups['jitsi_jvb_servers'] %}
      additional-{{ hostvars[host]['jitsi_jvb_server_id'] }}-service:
        loadBalancer:
          servers:
            - url: "http://{{ host }}:9090/"
      {% endfor %}
After configuring vars.yml and hosts files, run the playbook with playbook tags as below:
ansible-playbook -i inventory/hosts --limit jitsi_jvb_servers jitsi_jvb.yml --tags=common,setup-additional-jitsi-jvb,start
If you ever run into any trouble or if you change configuration (jitsi_* variables) too much, we urge you to rebuild your Jitsi setup.
We normally don’t require such manual intervention for other services, but Jitsi services generate a lot of configuration files on their own.
These files are not all managed by Ansible (at least not yet), so you may sometimes need to delete them all and start fresh.
To rebuild your Jitsi configuration:
1. Stop all Jitsi services: just run-tags stop-group --extra-vars=group=jitsi
2. Remove all Jitsi data: rm -rf /matrix/jitsi
3. Reinstall Jitsi: just install-service jitsi