
Adding a new RabbitMQ worker node to an existing ring

Synopsis:

Adding a new RabbitMQ worker node to an existing ring.

 

Problem/Question:

Kinetica recommends maintaining an odd number (1, 3, 5, etc.) of RabbitMQ worker nodes in an HA deployment to ensure that a quorum remains possible in the event of a network partition.
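The odd-number recommendation follows from simple majority arithmetic: a quorum requires a strict majority of nodes, so an even-sized ring tolerates no more failures than the next smaller odd-sized one. A minimal sketch (illustrative only, not part of any Kinetica tooling):

```python
# Why odd node counts are recommended: a quorum requires a strict
# majority, so going from 3 to 4 nodes does not increase the number
# of node failures the ring can survive.

def tolerated_failures(n):
    """Nodes that can fail while a strict majority still survives."""
    majority = n // 2 + 1
    return n - majority

for n in (1, 3, 4, 5):
    print(f"{n} nodes: tolerates {tolerated_failures(n)} failure(s)")
```

With 3 nodes the ring tolerates 1 failure; with 4 nodes it still tolerates only 1, while 5 nodes tolerate 2.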

How do we add an additional RabbitMQ worker node to an existing ring?

 

Environment:

Kinetica On-prem 7.0

 

Solution/Answer:

The following instructions provide straightforward guidance through the process of adding a RabbitMQ worker node to the cluster.

1. Install the gpudb-ha RPM on the new RabbitMQ worker node.

2. Copy the configuration file from an existing RabbitMQ worker node. The file is located at:

$ ls /opt/gpudb/ha/rabbitmq-server/conf/rabbitmq.config

 

3. Add the new RabbitMQ node's hostname to the cluster_nodes list in the file, and uncomment cluster_partition_handling. Here is an example:

[
        {rabbit,
                [
                        {default_user,        <<"gpudb">>},
                        {default_pass,        <<"gpudb123">>},
                        {cluster_nodes,
                                {[
                                'rabbit@ha1','rabbit@ha2','rabbit@<new_hostname>'
                                ], disc}
                        },
                        {loopback_users, []}
%%                      ,{collect_statistics, fine}
                        ,{cluster_partition_handling, pause_minority}
%%                      ,{delegate_count, 64}
%%                      ,{hipe_compile, true}
                ]
        }
]
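Before restarting anything, it can help to sanity-check that a node's config lists the full intended membership. A minimal sketch that parses the cluster_nodes entry, assuming the rabbitmq.config format shown above (the hostnames are illustrative):

```python
import re

def cluster_nodes(config_text):
    """Extract node names from the cluster_nodes entry of a
    rabbitmq.config in the format shown above."""
    match = re.search(r"\{cluster_nodes,\s*\{\[(.*?)\]", config_text, re.DOTALL)
    if not match:
        return set()
    return set(re.findall(r"'([^']+)'", match.group(1)))

# Illustrative excerpt of a rabbitmq.config after adding a third node.
sample = """
{cluster_nodes,
        {[
        'rabbit@ha1','rabbit@ha2','rabbit@ha3'
        ], disc}
},
"""
assert cluster_nodes(sample) == {"rabbit@ha1", "rabbit@ha2", "rabbit@ha3"}
```

Reading the file from /opt/gpudb/ha/rabbitmq-server/conf/rabbitmq.config on each node and comparing the resulting sets is a quick way to catch a node whose config was not updated.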

 

4. Start the new RabbitMQ worker.

/etc/init.d/gpudb-ha mq-start

 

4a. Validate that the worker has started correctly by visiting its admin page, and confirm that it sees the other workers.

5. On each existing node, one by one, do the following:

5a. Add the new server to the rabbitmq.config file as in step 3.

$ vi /opt/gpudb/ha/rabbitmq-server/conf/rabbitmq.config 


5b. Restart the RabbitMQ worker.

/etc/init.d/gpudb-ha mq-restart


6. Verify that each RabbitMQ worker sees all the others by visiting each console.
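Besides checking each console by hand, the same verification can be scripted by comparing the running_nodes reported by `rabbitmqctl cluster_status` on each host against the expected membership. A minimal sketch that parses the Erlang-term output of RabbitMQ 3.x (the sample output and hostnames are illustrative):

```python
import re

def running_nodes(status_output):
    """Parse the running_nodes list from `rabbitmqctl cluster_status`
    output (RabbitMQ 3.x Erlang-term format)."""
    match = re.search(r"\{running_nodes,\[(.*?)\]\}", status_output, re.DOTALL)
    if not match:
        return set()
    return set(re.findall(r"'?([\w.@-]+)'?", match.group(1)))

# Illustrative output; in practice, capture `rabbitmqctl cluster_status`
# on each node and confirm every worker reports the full membership.
sample = ("[{nodes,[{disc,[rabbit@ha1,rabbit@ha2,rabbit@ha3]}]},\n"
          " {running_nodes,[rabbit@ha3,rabbit@ha2,rabbit@ha1]}]")
expected = {"rabbit@ha1", "rabbit@ha2", "rabbit@ha3"}
assert running_nodes(sample) == expected
```

If any node reports fewer running nodes than expected, revisit steps 5a and 5b on that node before putting the ring back into service.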

 

References:

https://www.kinetica.com/docs/7.0/ha/ha_configuration.html

https://support.kinetica.com/hc/en-us/articles/360048619634-HA-0001-HA-Basic-Operators-Guide

https://www.rabbitmq.com/partitions.html#automatic-handling
