CASSANDRA-44: It is difficult to modify the set of ColumnFamilies in an existing cluster

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Fix Version/s: 0.7 beta 1
    • Component/s: Core
    • Labels:
      None

      Description

      ColumnFamilies may be added when cassandra is not running by editing the configuration file.

      If you need to delete or re-order CFs, you must

      1) kill cassandra
      2) start it again and wait for log replay to finish
      3) kill cassandra AGAIN

      Alternatively on Cassandra 0.4.2 or later:
      1) run nodeprobe flush and wait for it to finish
      2) kill cassandra

      Then:
      4) make your edits (now there is no data in the commitlog)
      5) manually remove the sstable files (-Data.db, -Index.db, and -Filter.db) for the CFs you removed, and rename files for CFs you renamed
      6) start cassandra and your edits should take effect


          Activity

          Jonathan Ellis added a comment -

          Maintaining CF definitions looks like a good use case for Zookeeper to me. We could do it like this:

          • on startup, a cassandra node must contact zookeeper and read the column family data. this is the only time it will abort if ZK is not available.
          • when an operation is requested for a columnfamily that does not exist, the node checks zookeeper to see if that columnfamily has been added
          • additionally, we can check every hour or so for new columns and removed columns. so removed CFs could accept ops for a while after officially being "removed."
          • adding and removing CFs would be done with a web interface. (I'm strongly in favor of moving the web UI to Jython; it's much better suited for this than raw Java.)

          At the znode level, we would have /columnfamilies/[tablename]/[columnfamily1|columnfamily2|...] where the columnfamily znodes contain the sort information and any other attributes that were previously being stored in TableMetadata.
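
          A minimal sketch of reading that layout with the standard ZooKeeper Java client follows; the attribute format stored in each znode (plain UTF-8 strings here) is only illustrative:

          // Sketch: read CF definitions from /columnfamilies/[tablename]/...
          // Uses the standard ZooKeeper client API; the stored data format is illustrative.
          import org.apache.zookeeper.KeeperException;
          import org.apache.zookeeper.ZooKeeper;
          import java.util.HashMap;
          import java.util.List;
          import java.util.Map;

          public class CFDefinitionReader
          {
              public static Map<String, String> readDefinitions(ZooKeeper zk, String table)
                      throws KeeperException, InterruptedException
              {
                  Map<String, String> cfs = new HashMap<String, String>();
                  String base = "/columnfamilies/" + table;
                  for (String cf : zk.getChildren(base, false)) // no watch in this sketch
                  {
                      byte[] raw = zk.getData(base + "/" + cf, false, null);
                      cfs.put(cf, new String(raw, java.nio.charset.StandardCharsets.UTF_8));
                  }
                  return cfs;
              }
          }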

          Notes to keep adminning a ZK ensemble relatively painless:

          • use the Cassandra seed nodes as the ZK ensemble members. (Both seed nodes and ZK require a relatively small number of machines in the cluster to participate.) We can ship a config file so that cassandra will continue to Just Work on localhost.

          Thoughts?

          Jonathan Ellis added a comment -

          Brett pointed out on IRC that we'd want to start the namespace with /cassandra/[clustername] to avoid collisions with other ZK users and cassandra clusters.

          He also said that he will try to find time this week or next to work on this.

          Jun Rao added a comment -

          Changing CFs has implications on the logs. Today, the log header has an entry for each CF. Those entries are preallocated at startup and may not be easy to adjust.

          Jonathan Ellis added a comment -

          Can you point out the relevant code?

          Jun Rao added a comment -

          You can trace from db.CommitLogHeader.turnon/turnoff.

          Jonathan Ellis added a comment -

          I think that if you were to wipe the system/ and commitlog/ directories (after shutdown, restart, shutdown to make sure everything from the commitlog gets replayed into sstables) then restart w/ a new config, it would work fine.

          Do test this before trying it on important data though!

          Jonathan Ellis added a comment -

          and by shutdown of course I mean "kill."

          Jonathan Ellis added a comment -

          (Note that per CASSANDRA-211 we shouldn't jam this into the existing per-node web ui.)

          Jonathan Ellis added a comment -

          So what we'll want to do is add a preamble to the commitlog stating "these are the columnfamilies and the indexes i am mapping them to in this commitlog." so if we add or delete CFs we won't screw up that ordering.
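
          For illustration, a simple length-prefixed encoding of such a preamble might look like the following; this is not the actual CommitLogHeader format:

          // Sketch: persist a "CF name -> commitlog index" map at the head of a
          // segment so replay is not confused by CFs added or removed later.
          import java.io.DataInputStream;
          import java.io.DataOutputStream;
          import java.io.IOException;
          import java.util.LinkedHashMap;
          import java.util.Map;

          public class CommitLogPreamble
          {
              public static void write(DataOutputStream out, Map<String, Integer> cfIndexes) throws IOException
              {
                  out.writeInt(cfIndexes.size());
                  for (Map.Entry<String, Integer> e : cfIndexes.entrySet())
                  {
                      out.writeUTF(e.getKey());    // columnfamily name
                      out.writeInt(e.getValue());  // index this segment uses for it
                  }
              }

              public static Map<String, Integer> read(DataInputStream in) throws IOException
              {
                  Map<String, Integer> cfIndexes = new LinkedHashMap<String, Integer>();
                  int count = in.readInt();
                  for (int i = 0; i < count; i++)
                      cfIndexes.put(in.readUTF(), in.readInt());
                  return cfIndexes;
              }
          }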

          Jonathan Ellis added a comment -

          Or if we are using ZK (for instance) to hold CF definitions such that we can keep the index/CF map there and not re-use indexes for deleted CFs then we can rely on the commitlog ordering always being valid (and just check to make sure we don't replay into a decommissioned CF).

          Evan Weaver added a comment -

          Can Zookeeper be bundled in Cassandra itself? Would prefer not to have to manage a separate component.

          Jonathan Ellis added a comment -

          after CASSANDRA-79 the config file is the only source of truth for CF definitions which makes editing them easier. Updated the issue description to reflect this.

          Obviously we want to make this less crappy but maybe it's not worth making ZK a dependency anymore.

          Jonathan Ellis added a comment -

          I no longer think we need to involve ZK for this. Let's keep it simple.

          Chris Goffinet added a comment -

          Jonathan, could you explain in the ticket your thoughts on this (no need for Zookeeper)? You told me on IRC, but I forgot, and having it on record is best.

          Jonathan Ellis added a comment -

          split out the CF definitions into a separate config file. poll that file for changes periodically and reload.
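
          A minimal sketch of that poll-and-reload loop; the file name and the reload hook are illustrative, not actual Cassandra API:

          // Sketch: watch a separate CF-definition file and re-parse it on change.
          import java.io.File;
          import java.util.concurrent.Executors;
          import java.util.concurrent.ScheduledExecutorService;
          import java.util.concurrent.TimeUnit;

          public class CFDefinitionWatcher
          {
              private final File defs = new File("conf/columnfamilies.xml"); // illustrative path
              private volatile long lastSeen = defs.lastModified();

              public void start(final Runnable reload)
              {
                  ScheduledExecutorService poller = Executors.newSingleThreadScheduledExecutor();
                  poller.scheduleWithFixedDelay(new Runnable()
                  {
                      public void run()
                      {
                          long current = defs.lastModified();
                          if (current != lastSeen)
                          {
                              lastSeen = current;
                              reload.run(); // re-parse definitions and apply add/drop/rename
                          }
                      }
                  }, 60, 60, TimeUnit.SECONDS);
              }
          }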

          this does push the problem of keeping the config files in sync across nodes onto ops, but ops would probably prefer that to requiring a completely new service piece.

          Note that either way we need to automate the transition from an old set of definitions to the new, at the commitlog level. ZK doesn't make that go away.

          Michael Greene added a comment -

          I don't get how one would do renames in this separate-config-file solution without either specifying something like a migration in the file, or having separate unique ids in addition to names that are shared on the cluster.

          Eric Evans added a comment -

          > this does push the problem of keeping the config files in sync across nodes onto ops, but ops would
          > probably prefer that to requiring a completely new service piece.

          That problem already exists. If you are making additions, removals, or renames, then you are going to have to sync those changes with the entire cluster. Whether it's a single file or many doesn't make much of a difference IMO.

          Also, if this is implemented with an additional config, then we should probably expose it through get_string_property() in the same way we do the current configuration (which incidentally could be useful to ops in syncing up the cluster).

          Jonathan Ellis added a comment -

          Yes, describing renames is a minor problem (with ZK too).

          We could add JMX commands for managing CFs but then you get back to the Bad Old Days of having this xml around that is overridden by "invisible" information in the system table.

          I would just add an attribute e.g. MigrateFrom to the CF definition that says "here's what the old name was, rename the data files when you move to the new definition."

          <ColumnFamily name="Standard82" migratefrom="Standard1" ...>

          (Note that I only suggest splitting this into a separate file for clarity, not because it's somehow inherent to manage-by-re-parse.)

          Jon Mischo added a comment -

          Adding JMX commands to manage CFs doesn't have to be evil, provided we distribute the information to all nodes and they serialize a new config out to disk. If the config change has a serial and a checksum that is sent to every node, the xml config can be stamped with them, and any node with an old or corrupt config could pull the latest config during bootstrapping on restart or after failed validation against checksum.

          Just an idea, but it's one that solves for the "invisible" configuration issue and adds manageability without sacrificing uptime. Since adding and removing CFs should be a rare event, I don't think we need to designate a single point of failure to be the "authoritative" node for the config, though the first node to get the config change could be the one responsible for gaining consensus on the "current" configuration and then also be responsible for generating the new serial and checksum.

          The real question in my mind is whether this is something that we require a management tool to contact every node via JMX for, or whether a success message from a single node means it's already successfully distributed the configuration change to N nodes/quorum/all nodes.
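
          For illustration, the serial-plus-checksum stamp might look something like this; the class and field names are hypothetical, only the digest API is real:

          // Sketch: stamp a serialized config with a change serial and a checksum
          // so nodes can detect stale or corrupt copies and pull the latest one.
          import java.nio.charset.StandardCharsets;
          import java.security.MessageDigest;
          import java.security.NoSuchAlgorithmException;

          public class ConfigStamp
          {
              public final long serial;     // monotonically increasing change number
              public final String checksum; // hex SHA-1 of the serialized config

              public ConfigStamp(long serial, String configXml) throws NoSuchAlgorithmException
              {
                  this.serial = serial;
                  byte[] digest = MessageDigest.getInstance("SHA-1").digest(configXml.getBytes(StandardCharsets.UTF_8));
                  StringBuilder hex = new StringBuilder();
                  for (byte b : digest)
                      hex.append(String.format("%02x", b));
                  this.checksum = hex.toString();
              }

              // A node with an older serial, or a mismatched checksum, pulls the latest config.
              public boolean isStale(long latestSerial, String latestChecksum)
              {
                  return serial < latestSerial || !checksum.equals(latestChecksum);
              }
          }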

          Jonathan Ellis added a comment -

          If the goal is "write out a new config file" then contacting each machine via JMX and programmatically re-writing the config file is a lot more complicated than just pushing out a new file via an existing puppet / dsh / etc. infrastructure.

          Ryan Daum added a comment -

          What about using a special default 'meta' Cassandra keyspace for the description of the keyspaces/column families?

          This would handle the issue of propagation and availability.

          Jonathan Ellis added a comment -

          Latest thoughts:

          Other things being equal, I would prefer to provide a thrift interface to add/rename/remove keyspaces and CFs through a single coordinator node (vs having to update each node via JMX, or push out a new config file). Keeping things config-file based has a few drawbacks:

          • it requires filesystem access for whoever is doing the update, which is problematic in some environments
          • it makes life difficult for systems building on top of cassandra that want to automate this (easy for a human to dsh scp from somewhere; possible, but painful, to integrate this into an automated system that is more than a one-off)
          • it requires either all nodes being up for the upgrade, which is simple but unrealistic, or ops manually re-pushing the update to nodes that are down, which is a pita

          So if we can instead move to a system where KS/CF definitions are stored in a system CF and updated programmatically, I think that would be best.

          Possible evolution of the code might look like
          (1) move KS/CF definitions into the system table
          (2) add schema change methods internally and tests (possibly expose via JMX for manual testing, but not nodeprobe)
          (3) add thrift interface to send schema changes out to other nodes
          (4) add gossip of MetadataVersion (a user-provided? automatically generated? identifier string): gossip automatically handles updating nodes that were down on what happened while they were out. Full schema will not fit in gossip but a version id will. A node whose internal MV is lower than one it sees in gossip should ask the node w/ the higher version to send it the new version. (Remember we cannot rely on HH for this since the FD may not have recognized that the node was down when the update was happening).
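
          A rough sketch of the version check in (4); Migration and MigrationSource are hypothetical stand-ins for the idea, not existing classes:

          // Sketch: on seeing a different schema version in gossip, a node pulls the
          // migrations it missed from the more up-to-date peer and applies them in order.
          import java.util.List;
          import java.util.UUID;

          public class SchemaVersionCheck
          {
              interface Migration // hypothetical serialized schema change
              {
                  UUID version();
                  void apply();
              }

              interface MigrationSource // hypothetical: "ask the node w/ the higher version"
              {
                  List<Migration> migrationsSince(UUID localVersion);
              }

              public static UUID onGossip(UUID localVersion, UUID gossipedVersion, MigrationSource peer)
              {
                  if (gossipedVersion.equals(localVersion))
                      return localVersion; // already up to date
                  for (Migration m : peer.migrationsSince(localVersion))
                  {
                      m.apply();
                      localVersion = m.version();
                  }
                  return localVersion;
              }
          }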

          We punt completely on two clients requesting conflicting changes from different coordinator nodes. "Don't do that." (Just as copying out two conflicting config files is Bad.)

          One possible layout for the metadata CFs:

          migrations: hardcoded key of "migrations": each column w/ name of MetadataVersion contains the op performed

          schema: key of MV, supercolumns of KS, columns of serialized CF definitions

          so on startup, we read latest MV from migrations row, then the associated schema.

          (Looked at this way it seems like we should just have MV be a TimeUUID and not make the client deal w/ that.)
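
          For illustration, the startup read against that layout might look like the following; the SystemTable accessors are hypothetical stand-ins, and the supercolumn-per-keyspace detail is flattened to column-name -> blob:

          // Sketch: read the newest MetadataVersion from the "migrations" row,
          // then the serialized KS/CF definitions stored under that version.
          import java.util.SortedMap;
          import java.util.UUID;

          public class SchemaBootstrap
          {
              interface SystemTable // hypothetical accessor over the system keyspace
              {
                  // "migrations" row: MetadataVersion (TimeUUID, assumed ordered by timestamp) -> op performed
                  SortedMap<UUID, String> migrationsRow();

                  // schema row for one MV: keyspace name -> serialized CF definitions
                  SortedMap<String, byte[]> schemaRow(UUID metadataVersion);
              }

              public static SortedMap<String, byte[]> loadCurrentSchema(SystemTable system)
              {
                  SortedMap<UUID, String> migrations = system.migrationsRow();
                  if (migrations.isEmpty())
                      return null; // fresh node: fall back to the config file
                  UUID currentVersion = migrations.lastKey(); // latest MetadataVersion
                  return system.schemaRow(currentVersion);    // KS -> serialized CF definitions
              }
          }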

          Jonathan Ellis added a comment -

          Instead of manually propagating & reconciling schema changes in (3) and (4), we could add per-keyspace replication factor as suggested in CASSANDRA-620. We'd still need to gossip version id so schema changes don't rely on a weekly (for instance) repair operation to be consistent, but we could still use the repair code that is already done rather than rolling it manually.

          Gary Dusbabek added a comment -

          I like the idea of 1) making this automatic 2) not exposing via thrift. I'll take a look at 620 to see how much work it will be.

          Jonathan Ellis added a comment -

          We do want to have a thrift interface to the functionality so that ordinary clients can create new columnfamilies, though. But having a special-purpose backend to handle that is what 620 might let us avoid.

          Gary Dusbabek added a comment -

          Having this with 620 would be a lot cleaner, but implementing 620 would be a lot more work than just creating a system to propagate system changes. Is putting the work in for 620 worth it or not?

          Jonathan Ellis added a comment -

          People (including Rackspace) want 620 independently, so if it's feasible then yes. (I'm 60% sure it's feasible.)

          Gary Dusbabek added a comment -

          Having CASSANDRA-620 will make this easier.

          Jonathan Ellis added a comment -

          Was going to put this on a comment to CASSANDRA-826 but maybe here is better.

          How do we deal with the rename partition problem?

          that is, say we rename a CF A -> B, then create a new A (A'). Some node N is down for the rename.

          Now N comes back up while clients are inserting into A'. N will happily insert them to A, then rename A -> B when it gets news of the update.

          AFAICS this is an issue under either push or pull schema propagation.

          I see a couple options:

          1. only allow schema changes when 100% of nodes are up. This is both limiting (when you get to 100s of nodes it could be pretty inconvenient) and buggy (a node could be up when you start the update, but go down before it's complete; FD is not instant).
          2. disallow renames, only allow add and drop
          3. allow renames, but only allow reusing an old CF name once the entire cluster has completed the first rename (so kind of 1., but only apply the 100% rule in this one corner case)

          Are there other options? I'm trying to think of a way that having a ZK ensemble would give us a magic wand here, as originally envisioned, but I don't see one (short of having nodes ask ZK for schema definitions on every op, which is unacceptable for performance). So I think my inclination would be to start w/ (2) and then add (3).

          Gary Dusbabek added a comment -

          It won't be hard to handle as long as the downed node doesn't participate in writes until its schema is stable. rename(A,B) is migration0, add(A) is migration1. migration1 won't be applied to N because it realizes it isn't migrating from migration0 (migration1 specifies that it is a migration from migration0). N quickly realizes it is using an older version of the schema; it requests all the versions it is missing and then applies them in order (migration0, migration1).

          Gary Dusbabek added a comment -

          Nevermind. I see the problem with that approach. I'll have to think on this.

          Gary Dusbabek added a comment -

          CF ids do not change during a rename. So if we encode the cfid in the row mutation it can be used to make sure the changes are applied properly. (At this point keeping the cfname as part of the row mutation would just become baggage, so if it makes sense and can be done, we should take it out.)
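
          A minimal sketch of keying the serialized mutation by cfid rather than name; the layout shown is illustrative, not the actual RowMutation wire format:

          // Sketch: a rename keeps the cf id, so replayed or forwarded writes keyed
          // by id cannot be misrouted to a re-created CF that reuses the old name.
          import java.io.DataInputStream;
          import java.io.DataOutputStream;
          import java.io.IOException;

          public class MutationWithCfId
          {
              public static void serialize(DataOutputStream out, int cfId, String key, byte[] columns)
                      throws IOException
              {
                  out.writeInt(cfId);           // stable id: survives CF renames
                  out.writeUTF(key);            // row key
                  out.writeInt(columns.length); // serialized column data
                  out.write(columns);
              }

              public static int readCfId(DataInputStream in) throws IOException
              {
                  return in.readInt(); // receiver resolves id -> current CF definition
              }
          }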

          Jonathan Ellis added a comment -

          That would work, as long as IDs are global. It should actually make things faster, too.

          Gary Dusbabek added a comment -

          w00t!

          Ryan King added a comment -

          Gary - This is seriously awesome. I owe you a beer when we see each other.

          Pablo Cuadrado added a comment -

          Great news!!! Congratz.

          Hudson added a comment -

          Integrated in Cassandra #402 (See http://hudson.zones.apache.org/hudson/job/Cassandra/402/)
          RingCache fixups in the wake of CASSANDRA-44

          • invoke DatabaseDescriptor.loadSchemas from RingCache ctor (needed
            now to populate DD's table list).
          • improved error handling in DD.getReplicaPlacementStrategyClass so that
            any future failures to call loadSchemas are easier to spot.
          • updated TestRingCache. This was never runnable as a unit test (it
            requires a running instance), and doing static initialization is
            problematic now that RingCache's ctor throws IOExceptions.

          Patch by eevans

          Ted Zlatanov added a comment -

          Can you please update the Thrift VERSION (minor rev I suppose)?

          Gary Dusbabek added a comment -

          Major version is getting bumped to 4 in 0.7 anyway as a result of switching to byte[] keys.

          Hudson added a comment -

          Integrated in Cassandra #407 (See http://hudson.zones.apache.org/hudson/job/Cassandra/407/)
          added notes about changes for CASSANDRA-44.


            People

            • Assignee: Gary Dusbabek
            • Reporter: Eric Evans
            • Votes: 2
            • Watchers: 17
