Details
Type: Bug
Status: Resolved
Priority: Normal
Resolution: Not A Problem
Environment: CentOS 5.5, 64-bit
Description
Hi,
I have two Cassandra clusters, each with two nodes. Initially both clusters were running fine; nodetool status showed both nodes up in each cluster.
Then I took a backup of the first two-node cluster and restored it onto the second two-node cluster. After that, nodetool status got mixed up: on every node of both clusters it now shows all four nodes as members of one cluster.
I tried to debug this issue but could not find a solution. Please note that both Cassandra clusters are running on Terremark eCloud. Could this be a SAN issue causing the nodes to merge?
Then I found that both clusters had the same name in cassandra.yaml, so I changed the cluster name of the second cluster. After that, the restore stopped working; both Cassandra nodes fail to come up and throw the following error:
ERROR [main] 2011-10-31 12:37:02,552 AbstractCassandraDaemon.java (line 131) Fatal exception during initialization
org.apache.cassandra.config.ConfigurationException: Saved cluster name Test Cluster != configured name grijes64 Cluster
    at org.apache.cassandra.db.SystemTable.checkHealth(SystemTable.java:259)
    at org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:127)
    at org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:314)
    at org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:79)
Please help me solve this issue. Thanks in advance.
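For context, the exception occurs because each node also persists its cluster name in the local system keyspace on disk, so editing cassandra.yaml alone is not enough after a restore. Below is a minimal sketch of one common workaround for Cassandra of this era: clearing the locally saved ring metadata so the node re-saves the configured name on startup. The paths (/var/lib/cassandra, /etc/cassandra) and the service name are assumptions based on a default package install; adjust them to your layout.

```shell
# 1. Stop Cassandra on the node (service name is an assumption).
sudo service cassandra stop

# 2. The saved cluster name lives in the system keyspace's LocationInfo
#    column family; remove those SSTables so the check passes on restart.
#    (Path assumes the default data directory.)
sudo rm -rf /var/lib/cassandra/data/system/LocationInfo*

# 3. Make sure cassandra.yaml carries the intended new name.
sudo sed -i "s/^cluster_name:.*/cluster_name: 'grijes64 Cluster'/" \
    /etc/cassandra/cassandra.yaml

# 4. Restart; the node persists the configured cluster name anew.
sudo service cassandra start
```

Note that this only resolves the name-mismatch error; restoring one cluster's data onto another while both share gossip state can still cause the rings to merge, so the restored nodes should not be able to gossip with the original cluster's seeds.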