The following information is mostly taken from a discussion we had on the mailing list. You can see this discussion here:
Clustering or not clustering?¶
When deploying RabbitMQ, you have two possibilities:
Deploy rabbit in a cluster
Deploy only one rabbit node
Deploying only one rabbit node can be seen as dangerous: if that node goes down, your service goes down with it.
On the other hand, clustering rabbit has some downsides that make it harder to configure and manage.
So, if your cluster ends up less reliable than a single node, the single-node solution is better.
Moreover, as many OpenStack services use RabbitMQ for internal communication (a.k.a. RPC), having a highly available rabbit solution is a must-have.
If you choose the clustering mode, you should always keep an odd number of servers in the cluster (3, 5, 7, etc.) to avoid split-brain issues.
One rabbit to rule them all?¶
You can also consider deploying rabbit in two ways:
one rabbit (cluster or not) for each OpenStack service
one big rabbit (cluster or not) for all OpenStack services
The recommendation is to split your rabbit into separate clusters, for multiple reasons:
Reduce the impact when a rabbit cluster is down
Allow interventions on a smaller part of the infrastructure
Note: you can consider splitting at least neutron and nova from the other services, so you would end up with 3 clusters.
Note 2: neutron is the heaviest rabbit user among the projects, so keep in mind that the rabbit cluster for neutron should be your biggest.
Which version of rabbit should I run?¶
You should always consider running the latest version of rabbit.
We also know that rabbit releases before 3.8 may have some issues on the clustering side, so you should consider running at least RabbitMQ 3.8.x.
Rabbit config recommendation¶
Most of the configuration we explain in the following parts applies only to RabbitMQ in clustering mode.
When running the rabbit software on a node, you can configure some parameters for it in:
/etc/rabbitmq/rabbitmq.config # (ubuntu)
The most important configuration options are the following.
Erlang scheduler configuration¶
If you run multiple RabbitMQ clusters, it is common to run them on the same 3 hosts. If you do this, you must ensure that the rabbitmq/erlang processes are not all pinned to particular cores, so that the Erlang VMs do not compete for the same CPU cores. To do this, start Erlang (the BEAM VM) with -stbt u or -sbt u (you might need to replace - with + depending on how you start Erlang).
For details see https://www.erlang.org/doc/man/erl.html#+sbt
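With the Debian/Ubuntu packaging, one way to pass these flags is through RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS in rabbitmq-env.conf. This is a sketch only; the file location and existing contents may differ on your distribution:

```shell
# /etc/rabbitmq/rabbitmq-env.conf
# Unbind schedulers ("u" = unbound bind type) so several Erlang VMs
# on the same host do not all pin themselves to the same cores.
RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+sbt u"
```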
net_ticktime and heartbeat¶
A node is considered down after this amount of time without contact.
These settings mostly depend on the network you have between the nodes.
We consider that the default values are good.
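For reference, net_ticktime is a kernel setting of the Erlang VM, and 60 seconds is its default. A sketch of what tuning it would look like in the classic Erlang-term config format (value shown only as illustration, since we recommend keeping the default):

```erlang
%% /etc/rabbitmq/rabbitmq.config (classic Erlang-term format)
[
  {kernel, [
    %% time base (in seconds) for inter-node liveness ticks;
    %% a peer that stops responding is detected as down after
    %% roughly one net_ticktime period
    {net_ticktime, 60}
  ]}
].
```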
disk / ram mode¶
“In the vast majority of cases you want all your nodes to be disk nodes; RAM nodes are a special case that can be used to improve the performance clusters with high queue, exchange, or binding churn. RAM nodes do not provide higher message rates. When in doubt, use disk nodes only.”
So we recommend staying on disk nodes.
RabbitMQ can apply policies to queues and exchanges. See here: https://www.rabbitmq.com/parameters.html
If you plan to deploy a cluster of RabbitMQ, you will have to add a policy.
Remember that Rabbit can apply only one policy to a queue or an exchange. So you should avoid having multiple policies in your deployment, or, if you do, try to avoid overlapping policies, because you won't be able to predict which one is effective on a given queue.
Policies are applied based on a regex pattern. The pattern we agreed on (from the mailing list discussion) will set HA on all queues, except the ones that:
start with amq.
start with reply_
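The exact regex from the mailing list is not reproduced here, but a pattern matching that description (match every queue name except those starting with amq. or reply_) could look like the following sketch, checked with Python's re module. The pattern and the sample queue names are illustrative assumptions, not values from the discussion:

```python
import re

# Illustrative pattern only: apply the policy to every queue name
# EXCEPT those starting with "amq." or "reply_".
PATTERN = re.compile(r"^(?!amq\.|reply_).*")

for name in ["compute.host1", "notifications.info", "amq.gen-Xa2b", "reply_3f9c"]:
    matched = PATTERN.match(name) is not None
    print(f"{name}: {'policy applies' if matched else 'excluded'}")
```

The negative lookahead `(?!...)` rejects a name as soon as it begins with one of the excluded prefixes, while matching everything else.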
A policy will apply some parameters to queues / exchanges.
Here are the parameters we recommend when running rabbit in cluster mode (time is in milliseconds as per https://www.rabbitmq.com/ttl.html):
"alternate-exchange": "unroutable",
"expires": 3600000,
"ha-mode": "all",
"ha-promote-on-failure": "always",
"ha-promote-on-shutdown": "always",
"ha-sync-mode": "manual",
"message-ttl": 600000,
"queue-master-locator": "client-local"
This is not mandatory, but a nice-to-have feature to collect "lost" messages from rabbit (the messages that could not be routed).
Queue expiration period in milliseconds. By default there is no expiration.
So a queue without any consumer for 1 hour (3600000 ms) will be automatically deleted.
Can be one of:
all: queues are mirrored across all nodes
exactly: also needs the ha-params "count". The queue will be replicated on "count" nodes.
nodes: also needs the ha-params "node-names". The queue will be replicated on all nodes listed in "node-names".
We recommend mirroring all queues across nodes, so a queue created on one node will also be created on the other nodes.
always: (default) will force moving the queue master to another node if the master dies unexpectedly
when-synced: will only allow moving the queue master to a synced node. If there is no synced node, the queue would need to be deleted.
We keep the default here, to make sure that on failure a new queue master is elected and the queue keeps working.
always: will force moving the queue master to another node if the master is shut down
We prefer to have the queue master moved to an unsynchronised mirror in all circumstances (i.e. we choose availability of the queue over avoiding message loss due to unsynchronised mirror promotion).
automatic: will always replicate the queue, but can block I/O while doing it
manual: (default) a new queue mirror will only receive new messages (messages already in the queue won't be mirrored)
Using manual is not a big issue for us, as most of the time OpenStack queues are empty.
Message TTL in queues.
By default, no TTL.
We recommend setting it to 600000 (10 minutes).
This is supposed to be safe because most OpenStack components time out after 300 seconds.
So a message not consumed within 10 minutes will be dropped from the queues.
Determines which node is elected master when a queue is created.
client-local: (default) picks the node that the client declaring the queue is connected to
min-masters: Pick the node hosting the minimum number of bound masters
We recommend keeping the client-local (default value).
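Putting the recommended parameters together, applying such a policy with rabbitmqctl could look like the following sketch. The policy name, vhost and queue-name pattern are illustrative assumptions, not values from the discussion:

```shell
# Illustrative: policy name "ha-openstack", vhost "/" and the pattern
# are assumptions; the definition carries the recommended parameters.
rabbitmqctl set_policy -p / ha-openstack '^(?!amq\.|reply_).*' \
  '{"alternate-exchange":"unroutable","expires":3600000,"ha-mode":"all","ha-promote-on-failure":"always","ha-promote-on-shutdown":"always","ha-sync-mode":"manual","message-ttl":600000,"queue-master-locator":"client-local"}' \
  --apply-to all
```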
On OpenStack services¶
You may see these parameters in some of the configuration files.
But they are useless now, because the policy already sets this.
“In most other cases, durable queues are the recommended option. For replicated queues, the only reasonable option is to use durable queues.”
So, because we enabled HA in our policy, we MUST enable durable queues:
amqp_durable_queues = True
in every OpenStack config file
Note that the durability of a queue (or an exchange) cannot be changed AFTER the queue has been created. So if you forgot to set this at the beginning, you will have to delete your queues so that OpenStack can recreate them with the correct durability.
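In oslo.messaging-based services, this option lives in the [oslo_messaging_rabbit] section. A minimal sketch (the file name is just an example):

```ini
# e.g. /etc/nova/nova.conf (same section in the other services)
[oslo_messaging_rabbit]
amqp_durable_queues = True
```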
The default TTL for transient queues is 30 minutes, but this is too long for neutron (both server and agents).
For example, when restarting an agent, its transient queues are kept in the rabbit cluster for 30 minutes. Most of those queues are fanout queues, so messages will stack up in them.
On a large cluster, you can end up with millions of messages very quickly (well before the 30 minutes are over).
So the recommendation is to drastically lower this value, to get rid of transient queues much quicker (e.g. 60 seconds):
rabbit_transient_queues_ttl = 60
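As above, this is an oslo.messaging option, so a sketch of where it would go (the file name is just an example):

```ini
# e.g. /etc/neutron/neutron.conf
[oslo_messaging_rabbit]
rabbit_transient_queues_ttl = 60
```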