Large-scale multi-tenancy hosting for TI-Messenger and beyond

November 05, 2025

At The Matrix Conference 2025, I had the opportunity to present how we’re building large-scale, multi-tenancy hosting for TI-Messenger - the standard for secure communications across the German healthcare system. 

Specified by gematik, the German national agency for the digitalisation of the healthcare system, TI-Messenger is built on the Matrix open standard to provide sovereign, secure and interoperable communications. Germany’s public health insurance organisations are already using TI-Messenger to support real-time messaging for 74M publicly insured citizens. These are huge individual deployments (typically needing to be compliant with TI-M ePA) that support millions of end-users, such as T-Systems’ solution for Barmer, which uses Element Server Suite for TI-Messenger (ESS for TI-Messenger).

The need for multi-tenancy hosting

The presentation focused mainly on our work to help address one of the next phases of TI-Messenger: rolling the standard out to hundreds of thousands of small organisations, such as local clinics and pharmacies (typically needing to be compliant with TI-M Pro).

The challenge here, of course, is almost the opposite of the one facing the large public health insurers. Instead of a huge individual deployment, the requirement is for a service provider to deliver and manage a TI-Messenger solution for multiple separate organisations, each with just a handful of end-users. Cost efficiency is of prime importance, hence the need for multi-tenancy to ensure economies of scale.

It’s also important to acknowledge that local pharmacies cannot be expected to run their own TI-Messenger deployment, so TI-M Pro solutions will be delivered by service providers. Those providers will need a powerful server-side solution that can deliver economies of scale while maintaining the ability to manage each customer’s service.

Delivering multi-tenancy

The solution starts with Synapse Pro, the commercial implementation of Synapse, Element’s community homeserver. Community Synapse has a high resource footprint at scale, so the resource efficiency of Synapse Pro is crucial for TI-Messenger deployments, both for large individual deployments (TI-M ePA) and for thousands of small hosts (TI-M Pro).

Synapse Pro addresses the limitations of community Synapse so that it can:

  • Scale to massive size effortlessly and automatically
  • Save resources and reduce operational cost
  • Ease operations and improve operational stability

Let’s look at the TI-M Pro use case, where a provider is delivering a service to thousands of separate customers such as local clinics or pharmacies. 

With the community version of Synapse, a small host with just five end-users has a memory footprint of around 150MB. That adds up fast for a service provider wanting to host 50,000 such small hosts (for, say, 50,000 local pharmacies): running each on dedicated infrastructure quickly becomes expensive and operationally complex. Multi-tenancy allows the pooling of resources, keeping costs predictable and performance consistent - while preserving the isolation each tenant requires.

The small host solution enables a service provider to run multiple distinct tenants within one Python process, which we call a shard. Each shard (you can run as many as you want) can support up to 50 tenants. Every tenant within a shard is still a fully featured Matrix server, and tenant data is segregated at the database schema level. The tenant management API enables easy provisioning for each tenant. The solution is delivered with a Kubernetes controller to manage the shards, and enables integration with continuous deployment tooling and GitOps processes for automation. And of course, the whole solution meets TI-Messenger specifications.
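
To make the provisioning flow concrete, here is a minimal sketch of driving a REST-style tenant management API from Python. The endpoint path, field names and response shape are assumptions for illustration, not the actual API surface:

```python
import requests

# Hypothetical endpoint and payload: the real tenant management API will
# differ - this only illustrates the one-call-per-tenant provisioning flow.
SHARD_ADMIN_URL = "https://shard-01.provider.example/_admin"

def provision_tenant(server_name: str, admin_token: str) -> dict:
    """Create a new tenant (a fully featured Matrix server) on a shard."""
    resp = requests.post(
        f"{SHARD_ADMIN_URL}/tenants",
        headers={"Authorization": f"Bearer {admin_token}"},
        json={
            # e.g. "pharmacy-123.example.de"; each tenant's data lives in
            # its own database schema, giving isolation without a
            # dedicated process per tenant.
            "server_name": server_name,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    tenant = provision_tenant("pharmacy-123.example.de", "ADMIN_TOKEN")
    print("Provisioned tenant:", tenant)
```

In practice this call would typically come from the Kubernetes controller or GitOps pipeline rather than being run by hand, so that adding a pharmacy to the service is a declarative change.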

Reduced total cost of ownership

To demonstrate the cost efficiencies, we ran comparison tests between community Synapse and Synapse Pro to show the difference in memory consumption when operating a series of small hosts.

Our load tests show that with the community version of Synapse, memory consumption stays at around 150 megabytes per tenant. With the sharded Synapse Pro application, per-tenant memory consumption falls as the number of tenants grows: with 50 tenants in one shard, it drops to just 19 megabytes per tenant. That equates to almost 90% resource savings.
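
As a quick sanity check, that headline saving follows directly from the two per-tenant measurements:

```python
# Per-tenant memory footprints from the load test above.
community_mb = 150  # community Synapse
pro_mb = 19         # Synapse Pro, 50 tenants sharing one shard

saving = 1 - pro_mb / community_mb
print(f"Per-tenant resource saving: {saving:.1%}")  # 87.3% - "almost 90%"
```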

Per-tenant memory consumption

Remembering that this is really about a service provider delivering a TI-M Pro compliant offering, we looked at what this might mean over five years as the service provider adds more and more end-user organisations (as increasing numbers of, let’s say, local pharmacies sign up to the service).

In year five, with 50,000 tenants, community Synapse would consume more than seven terabytes of memory. With Synapse Pro, it would be under one terabyte - a substantial saving.
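
Those totals are simply the per-tenant figures scaled up to the full fleet, assuming the measured footprints hold at 50,000 tenants:

```python
# Fleet-wide memory at year five, using the per-tenant measurements above.
tenants = 50_000

community_total_tb = tenants * 150 / 1_000_000  # 150MB per tenant, MB -> TB
pro_total_tb = tenants * 19 / 1_000_000         # 19MB per tenant, MB -> TB

print(f"Community Synapse: {community_total_tb:.1f} TB")  # 7.5 TB
print(f"Synapse Pro:       {pro_total_tb:.2f} TB")        # 0.95 TB
```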

Average yearly memory consumption

What this means for service providers

For service providers, Synapse Pro makes it practical to deliver a TI-Messenger service to hundreds of thousands of small customers. At the same time, it can cost-efficiently support huge individual deployments serving millions of end-users.

Synapse Pro sits at the core of Element Server Suite Pro (ESS Pro) - our professional backend distribution for Matrix-based deployments. It also comes in a TI-Messenger edition (ESS Pro for TI-M) designed specifically to host and manage a TI-Messenger compliant backend deployment. Compliant with gematik’s TI-M ePA and TI-M Pro specifications, it is a server-side solution that enables service providers and software vendors to build TI-M compliant communications into their own offerings.

Our goal is to make Matrix not just the open standard for interoperable communication, but also a platform that’s truly production-ready at any scale. The work on multi-tenant hosting is a major step toward that, enabling both cost-efficient small hosts and huge individual hosts in the same interoperable ecosystem.
