Details
- Type: Improvement
- Status: Resolved
- Priority: Normal
- Resolution: Fixed
Description
In 2.1, we switched cqlsh to the python-driver.
In 3.0, we got rid of cassandra-cli.
Yet there is still code that's using the legacy Thrift API. We want to convert it all to use the java-driver instead.
1. BulkLoader uses Thrift to query the schema tables. It should be using the java-driver Metadata API directly instead (see the sketch below).
2. o.a.c.hadoop.cql3.CqlRecordWriter is using Thrift
3. o.a.c.hadoop.ColumnFamilyRecordReader is using Thrift
4. o.a.c.hadoop.AbstractCassandraStorage is using Thrift
5. o.a.c.hadoop.pig.CqlStorage is using Thrift
Some of the things listed above use Thrift to get the list of partition key columns or clustering columns. Those should be converted to use the Metadata API of the java-driver.
Somewhat related to that, we also have code badly ported from Thrift in o.a.c.hadoop.cql3.CqlRecordReader (see fetchKeys()), which manually fetches columns from the schema tables instead of properly using the driver's Metadata API.
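For reference, a minimal sketch of the driver-side lookup all of these conversions boil down to (the contact point, keyspace, and table names here are hypothetical placeholders):
{code:java}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ColumnMetadata;
import com.datastax.driver.core.TableMetadata;

public class SchemaKeysExample
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build())
        {
            // The driver maintains the schema in client-side metadata; no manual
            // SELECT against the schema tables (and no Thrift call) is needed.
            TableMetadata table = cluster.getMetadata()
                                         .getKeyspace("my_keyspace")
                                         .getTable("my_table");

            for (ColumnMetadata column : table.getPartitionKey())
                System.out.println("partition key column: " + column.getName());

            for (ColumnMetadata column : table.getClusteringColumns())
                System.out.println("clustering column: " + column.getName());
        }
    }
}
{code}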
We need all of it fixed. The one exception, for now, is o.a.c.hadoop.AbstractColumnFamilyInputFormat: it uses Thrift for its describe_splits_ex() call, which currently cannot be replaced by any java-driver call.
Once this is done, we can stop starting the Thrift RPC server by default in cassandra.yaml.
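For illustration, assuming the existing start_rpc setting in cassandra.yaml remains the switch for the Thrift server, the eventual default would simply be:
{code}
# cassandra.yaml: once no bundled tool depends on Thrift,
# the RPC server no longer needs to be started by default
start_rpc: false
{code}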
Issue Links
- depends upon: CASSANDRA-7688 Add data sizing to a system table (Resolved)
- is related to: CASSANDRA-9311 Hadoop and pig are inadequately tested (Open)
- relates to: CASSANDRA-7688 Add data sizing to a system table (Resolved)
- requires: CASSANDRA-8725 CassandraStorage erroring due to system keyspace schema changes (Resolved)