I needed to change the IPs on a network that already had my Hadoop cluster set up on it. After some googling I came across this link. It is a great write-up on how to do it in Cloudera Manager 4, but I noticed one major difference and I also wanted to document the entire process. Caveat: this was done on a 10 node cluster; anything larger will be a major pain in the ass (let's hope you aren't here for that).

The first step is to stop the entire cluster. Once that is done, stop the Cloudera Management Services. Both of these are done via Cloudera Manager. Next, ssh to the server running Cloudera Manager and run "service cloudera-scm-server stop". Once Cloudera Manager is stopped, ssh to each node and run "service cloudera-scm-agent stop". This stops the agents from sending heartbeats and also lets you update the agent config with the new IP if needed.
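
If you can still reach the nodes over the old network, a loop saves some typing. A minimal sketch, assuming passwordless ssh as root and the three hosts from my cluster (swap in your own):

# stop the agent on every node so it quits heartbeating
for node in hadoop1 hadoop2 hadoop3; do
    ssh root@${node}.patrickpierson.us "service cloudera-scm-agent stop"
done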

Back on the Cloudera Manager server, run the following command to retrieve your PostgreSQL database password:

grep password /etc/cloudera-scm-server/db.properties

It will return something like:

com.cloudera.cmf.db.password=password

Use the password returned to connect to the database with the following command:

psql -h localhost -p 7432 -U scm
Password for user scm: (enter the PostgreSQL password here)
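
If you would rather not paste the password in by hand, psql reads it from the PGPASSWORD environment variable. A quick sketch combining the two steps, assuming db.properties has a single password line in the format shown above:

# pull the password out of db.properties and hand it to psql
export PGPASSWORD=$(grep password /etc/cloudera-scm-server/db.properties | cut -d= -f2)
psql -h localhost -p 7432 -U scm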

In the linked write-up from bigdata-helpline you are asked to run:

scm=>select host_id,host_identifier,name,ip_address from hosts;

However, in Cloudera Manager 5 the host_identifier is a unique hash, which leads me to believe there is an easier way to do this than modifying the database. That being said, I ran the following:

scm=> select host_id,name,ip_address from hosts;

And was returned:

 host_id |           name            |  ip_address
---------+---------------------------+--------------
       2 | hadoop3.patrickpierson.us | 192.168.2.17
       1 | hadoop2.patrickpierson.us | 192.168.2.16
       3 | hadoop1.patrickpierson.us | 192.168.2.15

I needed to change both the name and ip_address to look like the following:

 host_id |              name              |  ip_address
---------+--------------------------------+--------------
       2 | hadoop3.test.patrickpierson.us | 192.168.3.17
       1 | hadoop2.test.patrickpierson.us | 192.168.3.16
       3 | hadoop1.test.patrickpierson.us | 192.168.3.15

To do so, run the following for each node. Be sure to match each host_id with the correct host entry when you change it.

update hosts set (name,ip_address) = ('hadoop1.test.patrickpierson.us','192.168.3.15') where host_id=3;

Press enter and you should see:

UPDATE 1
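
If you don't want to type each UPDATE at the prompt, you can feed them all to psql in one shot (with PGPASSWORD set as above, or enter the password when prompted). This is just my three hosts batched together; swap in your own host_ids, hostnames, and IPs:

psql -h localhost -p 7432 -U scm <<'EOF'
update hosts set (name,ip_address) = ('hadoop1.test.patrickpierson.us','192.168.3.15') where host_id=3;
update hosts set (name,ip_address) = ('hadoop2.test.patrickpierson.us','192.168.3.16') where host_id=1;
update hosts set (name,ip_address) = ('hadoop3.test.patrickpierson.us','192.168.3.17') where host_id=2;
EOF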

Once all the nodes are updated, run:

scm=> select host_id,name,ip_address from hosts;

again to verify each IP and hostname is correct. Exit with:

\q

Log into each node and change its IP and hostname. Be sure to update the hostname in /etc/sysconfig/network or cloudera-scm-agent will not start. You will also need to update /etc/cloudera-scm-agent/config.ini if the Cloudera Manager server's IP or hostname has changed (depending on the server_host entry). The first entry should be:

server_host=clouderamanager.patrickpierson.us

I have it set to a hostname that did not change, but if yours points at an IP address or hostname that is now incorrect, change it now.
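
On each node the changes boil down to a few files. A rough sketch of what that looks like on a CentOS/RHEL 6 style box, using hadoop1 as the example; eth0 is an assumption, and file locations will differ on other distros:

# set the new hostname, both live and persistently
hostname hadoop1.test.patrickpierson.us
sed -i 's/^HOSTNAME=.*/HOSTNAME=hadoop1.test.patrickpierson.us/' /etc/sysconfig/network

# set the new IP (adjust NETMASK/GATEWAY too if they changed)
sed -i 's/^IPADDR=.*/IPADDR=192.168.3.15/' /etc/sysconfig/network-scripts/ifcfg-eth0
service network restart

# fix server_host in the agent config only if it changed
vi /etc/cloudera-scm-agent/config.ini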

Once every node's agent config is correct, start Cloudera Manager (on the Cloudera Manager server) with:

service cloudera-scm-server start

then on each node run:

service cloudera-scm-agent start
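
Before heading back to the UI you can sanity check the agents from the shell. Again a sketch assuming passwordless ssh and the new hostnames:

for node in hadoop1 hadoop2 hadoop3; do
    ssh root@${node}.test.patrickpierson.us "service cloudera-scm-agent status"
done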

Log in to Cloudera Manager with your credentials and navigate to the "Hosts" page. Verify each host is sending a heartbeat. Then navigate to the "Home" page and start the Cloudera Management Services. Once all management services have started, deploy all client configs to the nodes. This will update information like the NameNode address, ResourceManager address, etc. Once all configs are deployed, start the cluster back up.
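
If you prefer the command line for the heartbeat check, the Cloudera Manager REST API exposes the same host list, including each host's lastHeartbeat and ipAddress. A sketch assuming the default port 7180, admin/admin credentials, and an API version that matches your CM 5 release:

curl -s -u admin:admin http://clouderamanager.patrickpierson.us:7180/api/v10/hosts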

Update: I did my testing wrong and found there were some other things that need to be updated in Cloudera Manager. You also need to update the Hive Metastore Server address in the configuration section of Cloudera Manager.