Ah, the fact that the system keyspace is local-only had escaped me; that makes your original comments (Brandon) make a lot more sense to me.
As Jonathan points out, it then makes no sense to save this information in the system keyspace: the point of this storage is to ease disaster recovery, and if the node goes down, the yaml stored in that keyspace is no more likely to survive than the yaml file in the conf directory. If this feature is to have any value, the data must be replicated, so a secondary, replicated quasi-system keyspace would be needed, something along the lines of the sketch below.
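Just to make that concrete, here is a rough sketch of what I have in mind. The keyspace and table names, the replication factor, and the use of plain CQL strings are all placeholders of my own invention, not a commitment to any particular write path or client API:

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Rough sketch only: archive a node's cassandra.yaml into a replicated
// (non-local) keyspace so the contents survive the loss of the node itself.
public class ConfigArchiveSketch
{
    // Hypothetical keyspace/table; names and replication factor are placeholders.
    static final String CREATE_KEYSPACE =
        "CREATE KEYSPACE IF NOT EXISTS config_archive " +
        "WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};";

    static final String CREATE_TABLE =
        "CREATE TABLE IF NOT EXISTS config_archive.node_config (" +
        "  node_key text, file_name text, contents text," +
        "  PRIMARY KEY (node_key, file_name));";

    // Builds the insert for one config file; how the statement is actually
    // executed (internal write path vs. a client) is deliberately left open.
    static String archiveStatement(String nodeKey, String path) throws IOException
    {
        String contents = new String(Files.readAllBytes(Paths.get(path)), "UTF-8");
        return String.format(
            "INSERT INTO config_archive.node_config (node_key, file_name, contents) " +
            "VALUES ('%s', '%s', '%s');",
            nodeKey, Paths.get(path).getFileName(), contents.replace("'", "''"));
    }
}
{code}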
I planned to add support for the cassandra-env and log4j files as well, but wanted to get feedback before proceeding with those, since they are not first-class files in Cassandra the way the yaml file is and are thus marginally trickier to handle. Glad I did.
As for an IP being a poor choice of key, I agree it's sub-optimal, but it was used to ensure uniqueness and to keep some tie back to the original machine (albeit not an absolute one). The only other choice I could think of is an added field in the yaml file, such as cluster-node-id, that would have to be set by the administrator; but then we would be relying on the administrator to pick a unique value, which is far less likely. To enforce that uniqueness, the id would need to be added to the gossip protocol, and nodes with conflicting ids would need to be rejected. All of that seems like overkill. While IPs seem problematic, I don't know how problematic they really are in practice. Even though these cluster nodes use gossip for communication, I wonder whether a standard deployment would actually use DHCP for these machines, since they still need to be administered remotely; with static IPs, the changing-IP problem wouldn't arise. But perhaps I'm talking out of my ear.
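For the sake of discussion, here is a small sketch of how the row key could be chosen, assuming SnakeYAML (which Cassandra already uses to parse the config) and a hypothetical cluster_node_id field; the field name and the fallback behaviour are just illustrations of the trade-off described above, not part of cassandra.yaml today:

{code:java}
import java.io.FileInputStream;
import java.io.InputStream;
import java.net.InetAddress;
import java.util.Map;

import org.yaml.snakeyaml.Yaml;

// Sketch of how the row key for the archived config might be chosen:
// prefer an administrator-supplied id, fall back to the node's address.
public class NodeKeySketch
{
    @SuppressWarnings("unchecked")
    static String nodeKey(String yamlPath) throws Exception
    {
        try (InputStream in = new FileInputStream(yamlPath))
        {
            Map<String, Object> conf = (Map<String, Object>) new Yaml().load(in);

            // "cluster_node_id" is a hypothetical field the administrator would set.
            Object id = conf.get("cluster_node_id");
            if (id != null)
                return id.toString();

            // Fallback: the node's IP, with the DHCP/static-IP caveats discussed above.
            Object listen = conf.get("listen_address");
            return listen != null ? listen.toString()
                                  : InetAddress.getLocalHost().getHostAddress();
        }
    }
}
{code}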
Let me know what you want to do, and I'll be glad to go forward or cancel, whichever is appropriate.