Details

    • Type: New Feature
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 4.8.0
    • Labels:
      None

      Description

      HBase (HBASE-8015) has the concept of namespaces, in the form of myNamespace:myTable. It would be great if Phoenix leveraged this feature to give a database-like feature on top of the table.
      Maybe, to stay close to HBase, it could also be a create DB:Table...
      or DB.Table, which is the more standard notation?

      1. PHOENIX-1311_v1.patch
        435 kB
        Ankit Singhal
      2. PHOENIX-1311_v2.patch
        619 kB
        Ankit Singhal
      3. PHOENIX-1311_v3_rebased_0.98.patch
        717 kB
        Ankit Singhal
      4. PHOENIX-1311_v3_rebased_1.0.patch
        718 kB
        Ankit Singhal
      5. PHOENIX-1311_v3_rebased.patch
        717 kB
        Ankit Singhal
      6. PHOENIX-1311_wip_2.patch
        118 kB
        Ankit Singhal
      7. PHOENIX-1311_wip.patch
        23 kB
        Ankit Singhal
      8. PHOENIX-1311.docx
        19 kB
        Ankit Singhal

        Issue Links

          Activity

          jamestaylor James Taylor added a comment -

          Good idea, nicolas maillard. Phoenix already has the concept of a "schema", so it'd be good to tie the concept of a Phoenix schema to an HBase namespace. For example, you can define a table like this:

              CREATE TABLE my_schema.my_table (k VARCHAR PRIMARY KEY);
          

          In this case, my_table could be defined in the my_schema namespace.

          nmaillard nicolas maillard added a comment -

          Great, James. That's pretty much how I envisioned it; I was just wondering whether to keep the HBase ":" or the more natural ".".
          Would you be so kind as to assign this to me? I would like to take a stab at it.

          apurtell Andrew Purtell added a comment -

          Assigned!

          roc_chu_nk roc chu added a comment -

          I want to use HBase namespaces for multi-tenancy, so that every user can access only one namespace. But when I use Phoenix, I must grant users 'RWC' on the default namespace, because Phoenix creates its system tables in the default namespace.
          Would it be possible to just add a namespace to the connection string, so that all the system tables this user needs are created in that namespace?
          That would make different namespaces look like different databases. Of course, the default namespace would be the default database.
          I think that would be very helpful for using the HBase namespace feature.

          sergey.b Serhiy Bilousov added a comment - - edited

          I have to admit that was my thinking too. Considering the separation HBase provides with namespaces, it would be very useful to be able to tell Phoenix that you want a tenant to map to an HBase namespace.

          Is this something that makes sense, guys (especially making it configurable)?

          It feels like a namespace would be better mapped to the CATALOG than the SCHEMA in database terms.

          nagab Naga Vijayapuram added a comment -

          I would like to provide a fix/patch. Can someone familiar with the codebase please guide me on which classes to look into?

          ankit.singhal Ankit Singhal added a comment -

          Rajeshbabu Chintaguntla / James Taylor, please find attached a WIP patch for the same.

          ankit.singhal Ankit Singhal added a comment -

          James Taylor, can you please review the WIP patch and confirm whether this is the right way to do it?
          I also need your help with supporting backward compatibility.

          Regards,
          Ankit Singhal

          jamestaylor James Taylor added a comment -

          I think we should push this work to post 4.7.0 and do PHOENIX-2571 in the same release cycle to make sure they cooperate. Too much good stuff already in (or committed to be in) 4.7.0 to wait. There may be b/w compat issues depending on how this is implemented as well. I would like to get PHOENIX-2143 and PHOENIX-2417 into 4.7.0 if possible, though. Maybe you could take PHOENIX-2417 off of the hands of Samarth Jain. I think these two round out the stats functionality improvements nicely.

          ankit.singhal Ankit Singhal added a comment -

          OK James Taylor, I'll work on these tickets (PHOENIX-2143 and PHOENIX-2417).

          mathias.kluba mathias kluba added a comment -

          The advantage of mapping to CATALOG is that users with permissions on their namespace will be able to create/alter tables that require modification of the SYSTEM.CATALOG table.
          With multiple namespaces, one can have multiple catalogs, like "CATALOG_01.SYSTEM.CATALOG".
          It's especially useful with dynamic columns...

          ankit.singhal Ankit Singhal added a comment - - edited

          James Taylor, can you please review the approach (wip_2 patch):

          • We can take one flag (isUsingDefaultNamespace or something) for tables in the meta table, which will differentiate tables created in the default namespace (even if they have a schema associated) from namespace-mapped tables. This will help with b/w compat.
          • And for the SYSTEM.CATALOG table, we can either upgrade it to always use a namespace, or check whether the table is present in the SYSTEM namespace and fall back to default if not.
          • Need to see how we can handle local indexes and views, as they have a prefix appended to the parent table (prefix_schema.table). Should we map this to schema.prefix_table? Not sure if it can impact existing customers.
          • We can use the same approach in the bulk-load tools as well.
          • Need to work out conflicts if we need to support PHOENIX-2571 as well.
          jamestaylor James Taylor added a comment -

          Sounds messy, Ankit Singhal. I was thinking that for b/w compat we could introduce a version column in SYSTEM.CATALOG. New tables would uniformly map the schema name to a namespace and old tables wouldn't. We could then have a MR-based conversion tool to migrate the existing data of a table into a namespace aware Phoenix table. We can handle the upgrade of the SYSTEM tables automatically as they're not that big (probably piggybacking on the MR-based conversion code).

          The default namespace would tie into the default schema in SQL (see Postgres and other RDBMSs for how this works). We'd need a way of creating and dropping schemas and a way for the user to set the default schema for their session. The default schema is used to qualify a table when there's no reference to a schema (i.e. SELECT * FROM T when the default namespace is S would look for the table named T in the HBase namespace of S).
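
          The default-schema resolution described above could be sketched as follows (a toy illustration in Python, not Phoenix code; the helper name, and the assumption that namespace mapping is on, are mine):

```python
def resolve_hbase_name(table_ref, default_schema=None):
    """Qualify a SQL table reference into an HBase table name.

    An unqualified reference picks up the session's default schema,
    and the schema maps to an HBase namespace (SCHEMA:TABLE).
    """
    if "." in table_ref:
        schema, table = table_ref.split(".", 1)
    else:
        schema, table = default_schema, table_ref
    return schema + ":" + table if schema else table

# SELECT * FROM T with default schema S looks for HBase table "S:T"
```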

          Rather than starting with a patch, I'd recommend writing up a design doc for how this feature will work, the new SQL commands we'll support, the b/w compat story, how default schemas will be supported, etc. After that's agreed upon, I think a JIRA comment outlining the implementation would be next, and only after that's captured would we want to start writing code.

          enis Enis Soztutar added a comment -

          We could then have a MR-based conversion tool to migrate the existing data of a table into a namespace aware Phoenix table

          If there is no data layout change, but only hbase table name change, the best way would be to take snapshot then restore snapshot as a table in the desired namespace.

          We may also need to find a way to auto-create the namespaces in HBase for existing schemas in Phoenix. Agreed that a short design doc talking about what the end state would look like would be good.

          ankit.singhal Ankit Singhal added a comment -

          James Taylor / Enis Soztutar, please find attached a high-level design doc for the same.

          jamestaylor James Taylor added a comment -

          Thanks for the design doc, Ankit Singhal. A couple of questions/comments:

          • I don't think we need to support SELECT SCHEMA() or SHOW SCHEMAS as there's a standard JDBC method in DatabaseMetaData that would return the list of schemas. Plus we don't have that kind of thing for tables. We could start introducing stuff like that (or we could leave it to the tooling), but if we are going to introduce that, let's do it in a separate JIRA.
          • Would you mind providing a couple of examples in SELECT queries for how the schema would be used and resolved? You're not proposing using a different SELECT * FROM my_schema:my_table syntax are you?
          • For b/w compat, I'm not sure a version flag on PTable is enough. We need something outside of this, as this will change the way we find the PTable in the first place. How do we know how to look for it, as we currently look for an HTable with a name of "MY_SCHEMA.MY_TABLE". Perhaps a global config on whether this feature is on or off, plus a requirement that the upgrade is done if it's turned on?
          ankit.singhal Ankit Singhal added a comment - - edited

          I don't think we need to support SELECT SCHEMA() or SHOW SCHEMAS as there's a standard JDBC method in DatabaseMetaData that would return the list of schemas. Plus we don't have that kind of thing for tables. We could start introducing stuff like that (or we could leave it to the tooling), but if we are going to introduce that, let's do it in a separate JIRA.

          Agreed, James Taylor. We will modify the APIs in DatabaseMetaData if necessary; I just wanted to know whether we should store the schema entity in SYSTEM.SCHEMA or in SYSTEM.CATALOG with an empty table name.

          Would you mind providing a couple of examples in SELECT queries for how the schema would be used and resolved? You're not proposing using a different SELECT * FROM my_schema:my_table syntax are you?

          > USE test_schema;
          > SELECT * FROM T;            // schema resolves to 'test_schema'; HBase table "test_schema:T" is referenced
          > SELECT * FROM new_schema.T; // schema resolves to 'new_schema'; HBase table "new_schema:T" is referenced
          

          No, I'm not proposing my_schema:my_table syntax.

          For b/w compat, I'm not sure a version flag on PTable is enough. We need something outside of this, as this will change the way we find the PTable in the first place. How do we know how to look for it, as we currently look for an HTable with a name of "MY_SCHEMA.MY_TABLE". Perhaps a global config on whether this feature is on or off, plus a requirement that the upgrade is done if it's turned on?

          OK, so I started modifying one flow to understand whether any extra config is required. For system tables, I can see the need for a global config. Can you please point me to any code where we need to resolve the table before forming the PTable?

          Below is the test case I'll check against:

          @Test
          	public void testBackWardCompatibility() throws Exception {
          		String namespace="TEST_SCHEMA";
          		String schemaName = namespace;
          		String tableName="TEST";
          		
          		String phoenixFullTableName=schemaName+"."+tableName;
          		String hbaseFullTableName=schemaName+":"+tableName;
          		HBaseAdmin admin = driver.getConnectionQueryServices(getUrl(), TestUtil.TEST_PROPERTIES).getAdmin();
          		admin.createNamespace(NamespaceDescriptor.create(namespace).build());
          		admin.createTable(new HTableDescriptor(TableName.valueOf(namespace, tableName))
          				.addFamily(new HColumnDescriptor(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES)));
          		admin.createTable(new HTableDescriptor(TableName.valueOf(phoenixFullTableName))
          				.addFamily(new HColumnDescriptor(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES)));
          		
          		Put put=new Put(PVarchar.INSTANCE.toBytes(phoenixFullTableName));
          		put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, QueryConstants.EMPTY_COLUMN_BYTES,
          				QueryConstants.EMPTY_COLUMN_VALUE_BYTES);
          		HTable phoenixSchematable=new HTable(admin.getConfiguration(), phoenixFullTableName);
          		phoenixSchematable.put(put);
          		
          		put=new Put(PVarchar.INSTANCE.toBytes(hbaseFullTableName));
          		put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, QueryConstants.EMPTY_COLUMN_BYTES,
          				QueryConstants.EMPTY_COLUMN_VALUE_BYTES);
          		
          		HTable namespaceMappedtable=new HTable(admin.getConfiguration(), hbaseFullTableName);
          		namespaceMappedtable.put(put);
          		
          		Properties props = new Properties();
          		Connection conn = DriverManager.getConnection(getUrl(), props);
          		String ddl = "create table "+phoenixFullTableName+"(tableName varchar primary key)";
          		conn.createStatement().execute(ddl);
          		String query = "select tableName from "+phoenixFullTableName;
          		ResultSet rs = conn.createStatement().executeQuery(query);
          		assertTrue(rs.next());
          		assertEquals(phoenixFullTableName, rs.getString(1));
          		
          		put=new Put(SchemaUtil.getTableKey(null,schemaName,tableName));
          		put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, PhoenixDatabaseMetaData.VERSION_BYTES,
          				PVarchar.INSTANCE.toBytes("4.8.0"));
          		admin.disableTable(phoenixFullTableName);
          		admin.deleteTable(phoenixFullTableName);
          		HTable metatable=new HTable(admin.getConfiguration(), TableName.valueOf(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES));
          		metatable.put(put);
          		
          		driver.getConnectionQueryServices(getUrl(), TestUtil.TEST_PROPERTIES).clearCache();
          		rs = conn.createStatement().executeQuery(query);
          		assertTrue(rs.next());
          		assertEquals(hbaseFullTableName, rs.getString(1));
          	}
          
          jamestaylor James Taylor added a comment -

          The backward-compatibility case: let's assume you have a table FOO.BAR and the query:

          SELECT * FROM FOO.BAR;
          

          The table would be in the empty HBase namespace today. We attempt to resolve this to an HTable with the name "FOO.BAR" on the client side (ConnectionQueryServices.getTable()), but instead now we'd resolve it as "FOO:BAR". What will tell the client which way we should resolve it?

          ankit.singhal Ankit Singhal added a comment -

          Old clients will continue to resolve FOO.BAR as FOO.BAR even if the server jars are upgraded. From the server, we will send the flag in the PTable proto, not the changed physical name, and we will convert from FOO.BAR to FOO:BAR at the client depending on the flag. As old clients will not see this flag, they will keep resolving table names in the old way only.
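
          That flag-driven conversion could be sketched like this (an illustrative toy in Python, not the actual client code; the function and parameter names are hypothetical):

```python
def physical_table_name(schema, table, is_namespace_mapped=False):
    """Build the HBase table name a client should open.

    A new server sends a namespace-mapping flag in the PTable proto;
    an old client never sees the flag, so it keeps the dotted form.
    """
    if not schema:
        return table
    separator = ":" if is_namespace_mapped else "."
    return schema + separator + table

# old client (flag unseen) -> "FOO.BAR"; new client with flag set -> "FOO:BAR"
```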

          jamestaylor James Taylor added a comment -

          I like the idea of having this just impact the physical name we store, but I think there may be corner cases. Also, the CREATE TABLE case may be tricky, as we create the metadata before we really have a PTable. I think it'd be ok to have a global client-side config that enables/disables this functionality (or alternatively, it's always on, but we force an upgrade prior to usage).

          One more minor nit: rather than a version flag on the SYSTEM.CATALOG table, we typically have a boolean indicator specific to the feature on whether it's enabled or not.

          ankit.singhal Ankit Singhal added a comment - - edited

          Thanks James Taylor for the input.

          Please find attached a patch for review.

          Brief summary of the current changes:

          • CREATE SCHEMA [IF NOT EXISTS] construct to store the schema in the SYSTEM.CATALOG table (no tenant and a blank table name). (Let me know if you think we should move schema storage to SYSTEM.SCHEMA or something similar.)
          • USE <schema> construct and a JDBC URL property to set the schema on a connection.
          • CREATE TABLE will create the table in a namespace if the global config (phoenix.query.isNamespaceMappingEnabled) is enabled. In the case of an index/view, it will inherit the namespace from the parent table.
          • SchemaResolver to resolve the schema during creation of a table (not index/view) and the USE <schema> construct. We will not resolve the schema for queries; we will throw a TableNotFoundException instead.
          • is_Namespace_Mapped column to identify whether a table is mapped to a namespace.

          There are some pending tasks I am currently working on:

          • DROP SCHEMA construct
          • How to migrate system tables to a SYSTEM namespace
          • Moving the local index and view index prefix to tables
          • Adding test cases to handle corner cases
          • SQL exception codes (currently they are dummies)
          • UpgradeUtil for migration of tables to their respective namespaces
          jamestaylor James Taylor added a comment -

          Samarth Jain - would you mind taking a look?

          samarthjain Samarth Jain added a comment - - edited

          Thanks for the patch, Ankit Singhal. Here is some initial feedback (more to come):

          1) The new test classes, NamespaceSchemaMappingIT and UseSchemaIT, that you have added don't need to extend BaseClientManagedTimeIT since I don't see your tests dependent on the connection SCN timestamp. It is always better for tests to extend BaseHBaseManagedTimeIT instead.

          2) Remove commented code in NamespaceSchemaMappingIT.

          3) Modify the catch block in UseSchemaIT#testUseSchema() and get rid of printing stacktrace.

          4) Is this change needed in IndexIT.java?

          props.put(QueryServices.IS_NAMESPACE_MAPPING_ENABLED, Boolean.toString(true));
          

          5) In CreateSchemaCompiler#compile(), get rid of connectionToBe variable as I don't think you need it.

          public MutationPlan compile(final CreateSchemaStatement create) throws SQLException {
          +        final PhoenixConnection connection = statement.getConnection();
          +        PhoenixConnection connectionToBe = connection; 
          

          6) Make sure your code is formatted and coding guidelines followed. I see wrong indentation, missing spaces, etc. in several places.

          ankit.singhal Ankit Singhal added a comment -

          Thanks Samarth Jain for the review comments. I'll incorporate them and look forward to more comments.

          Is this change needed in IndexIT.java

          props.put(QueryServices.IS_NAMESPACE_MAPPING_ENABLED, Boolean.toString(true));
          

          No, it is not needed. I'll enable it for some tests to check whether namespaces and indexes are created properly.

          Make sure your code is formatted and coding guidelines followed. I see wrong indentation, missing spaces, etc. in several places.

          Please ignore formatting for now, I'll do it during rebasing of the patch.

          ankit.singhal Ankit Singhal added a comment -

          Updated with

          • review comments
          • DROP SCHEMA construct (currently DROP SCHEMA is not allowed if any table is present in the schema); let me know if we need to support deleting all tables when a schema is dropped (giving the user control via a client-side property)
          • Local index and view index backward compatibility; the prefixes are moved from the schema to the table name
          • some more test cases
          githubbot ASF GitHub Bot added a comment -

          GitHub user ankitsinghal opened a pull request:

          https://github.com/apache/phoenix/pull/153

          PHOENIX-1311 HBase namespaces surfaced in phoenix

          You can merge this pull request into a Git repository by running:

          $ git pull https://github.com/ankitsinghal/phoenix master

          Alternatively you can review and apply these changes as the patch at:

          https://github.com/apache/phoenix/pull/153.patch

          To close this pull request, make a commit to your master/trunk branch
          with (at least) the following in the commit message:

          This closes #153


          commit ef0a9d0a7e76afae4d12f44185244066a33f67e2
          Author: Ankit Singhal <ankitsinghal59@gmail.com>
          Date: 2016-03-22T08:04:48Z

          PHOENIX-1311 HBase namespaces surfaced in phoenix


          githubbot ASF GitHub Bot added a comment -

          Github user JamesRTaylor commented on the pull request:

          https://github.com/apache/phoenix/pull/153#issuecomment-200194873

          Thanks for spinning up a pull request, @ankitsinghal. One functional question: how are you handling the case of a VIEW having a different schema name than its base/physical table? Since the rows of a view are in the same physical table as the base table, they cannot be in different namespaces, correct?

          Please review, @samarthjain.

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on the pull request:

          https://github.com/apache/phoenix/pull/153#issuecomment-200264320

          Currently, a VIEW can be created with a different schema name than its base table. Are you recommending that we require the same schema?

          I checked MySQL and Postgres, and they also allow creating a VIEW in a different namespace/schema than its physical table (maybe because a VIEW is logical and can be created over joined tables residing in different namespaces [though Phoenix doesn't currently support join queries in a VIEW]).

          http://dev.mysql.com/doc/refman/5.7/en/create-view.html
          http://www.postgresql.org/docs/9.2/static/sql-createview.html
          "If a schema name is given (for example, CREATE VIEW myschema.myview ...) then the view is created in the specified schema. Otherwise it is created in the current schema"

          What do you think about view indexes created with different schemas? They all share a single physical table for storage, and currently with the above changes we keep the index table in the same HBase namespace as its base/physical table.

          githubbot ASF GitHub Bot added a comment -

          Github user JamesRTaylor commented on the pull request:

          https://github.com/apache/phoenix/pull/153#issuecomment-200406903

          I just wonder how that'll work, @ankitsinghal. If it works, I suppose it's ok, but it muddies things a bit as views are all supposed to be contained in the same physical table. Do we change the view's physical table name property? Do you have test cases around this?

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on the pull request:

          https://github.com/apache/phoenix/pull/153#issuecomment-200819479

          Even so, all views will point to the same physical table even if they have different schemas. We update the view's physical table property during creation to match the parent's physical table.

          I have updated one test, ViewIT#testViewAndTableInDifferentSchemas, in the latest commit. Is this what you are expecting?

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57340354

          — Diff: phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateSchemaIT.java —
          @@ -0,0 +1,67 @@
          +/*
          + * Licensed to the Apache Software Foundation (ASF) under one
          + * or more contributor license agreements. See the NOTICE file
          + * distributed with this work for additional information
          + * regarding copyright ownership. The ASF licenses this file
          + * to you under the Apache License, Version 2.0 (the
          + * "License"); you may not use this file except in compliance
          + * with the License. You may obtain a copy of the License at
          + *
          + * http://www.apache.org/licenses/LICENSE-2.0
          + *
          + * Unless required by applicable law or agreed to in writing, software
          + * distributed under the License is distributed on an "AS IS" BASIS,
          + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
          + * See the License for the specific language governing permissions and
          + * limitations under the License.
          + */
          +package org.apache.phoenix.end2end;
          +
          +import static org.junit.Assert.assertNotEquals;
          +import static org.junit.Assert.fail;
          +
          +import java.sql.Connection;
          +import java.sql.DriverManager;
          +import java.util.Properties;
          +
          +import org.apache.hadoop.hbase.client.HBaseAdmin;
          +import org.apache.phoenix.schema.NewerSchemaAlreadyExistsException;
          +import org.apache.phoenix.schema.SchemaAlreadyExistsException;
          +import org.apache.phoenix.util.PhoenixRuntime;
          +import org.apache.phoenix.util.TestUtil;
          +import org.junit.Test;
          +
          +public class CreateSchemaIT extends BaseClientManagedTimeIT {
          +
          + @Test
          + public void testCreateSchema() throws Exception {
          + long ts = nextTimestamp();
          + Properties props = new Properties();
          + props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts));
          + Connection conn = DriverManager.getConnection(getUrl(), props);
          + HBaseAdmin admin = driver.getConnectionQueryServices(getUrl(), TestUtil.TEST_PROPERTIES).getAdmin();
          — End diff –

          You can also alternatively directly go through PhoenixConnection to get hold of the admin. Like this:

          conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin().

          Minor nit: Even though this is test only, it is always better to close connections and HBaseAdmin using a try-with-resources construct.
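The try-with-resources pattern recommended here can be illustrated with stand-in resources. The sketch below uses placeholder AutoCloseable classes rather than a real PhoenixConnection or HBaseAdmin; it only demonstrates that both resources are closed automatically, in reverse order of acquisition, even when the body exits early.

```java
// Sketch of the try-with-resources pattern recommended above, using
// stand-in AutoCloseable classes instead of a real Phoenix Connection
// and HBaseAdmin. Resources declared in the try header are closed
// automatically in reverse order of acquisition.
public class TryWithResourcesSketch {
    static final StringBuilder LOG = new StringBuilder();

    // Placeholder for java.sql.Connection
    static class FakeConnection implements AutoCloseable {
        @Override public void close() { LOG.append("conn-closed;"); }
    }

    // Placeholder for org.apache.hadoop.hbase.client.HBaseAdmin
    static class FakeAdmin implements AutoCloseable {
        @Override public void close() { LOG.append("admin-closed;"); }
    }

    public static String run() {
        try (FakeConnection conn = new FakeConnection();
             FakeAdmin admin = new FakeAdmin()) {
            LOG.append("body;"); // test body would use conn and admin here
        }
        return LOG.toString();
    }
}
```

In the real test, the same shape closes both the JDBC connection and the HBaseAdmin without any finally block, even if an assertion inside the body throws.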

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57341237

          — Diff: phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateTableIT.java —
          @@ -94,6 +99,8 @@ public void testCreateTable() throws Exception {
          " id INTEGER not null primary key desc\n" +
          " ) ";
          conn.createStatement().execute(ddl);
          + HBaseAdmin admin = driver.getConnectionQueryServices(getUrl(), props).getAdmin();
          + assertNotEquals(null, admin.getTableDescriptor(Bytes.toBytes(tableName)));
          — End diff –

          Minor nits: assertNotNull reads better here. Same recommendations as above regarding getting and closing HBaseAdmin.

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57343095

          — Diff: phoenix-core/src/it/java/org/apache/phoenix/end2end/NamespaceSchemaMappingIT.java —
          @@ -0,0 +1,105 @@
          +/*
          + * Licensed to the Apache Software Foundation (ASF) under one
          + * or more contributor license agreements. See the NOTICE file
          + * distributed with this work for additional information
          + * regarding copyright ownership. The ASF licenses this file
          + * to you under the Apache License, Version 2.0 (the
          + * "License"); you may not use this file except in compliance
          + * with the License. You may obtain a copy of the License at
          + *
          + * http://www.apache.org/licenses/LICENSE-2.0
          + *
          + * Unless required by applicable law or agreed to in writing, software
          + * distributed under the License is distributed on an "AS IS" BASIS,
          + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
          + * See the License for the specific language governing permissions and
          + * limitations under the License.
          + */
          +package org.apache.phoenix.end2end;
          +
          +import static org.junit.Assert.assertEquals;
          +import static org.junit.Assert.assertTrue;
          +
          +import java.sql.Connection;
          +import java.sql.DriverManager;
          +import java.sql.ResultSet;
          +import java.util.Properties;
          +
          +import org.apache.hadoop.hbase.HColumnDescriptor;
          +import org.apache.hadoop.hbase.HTableDescriptor;
          +import org.apache.hadoop.hbase.NamespaceDescriptor;
          +import org.apache.hadoop.hbase.TableName;
          +import org.apache.hadoop.hbase.client.HBaseAdmin;
          +import org.apache.hadoop.hbase.client.HTable;
          +import org.apache.hadoop.hbase.client.Put;
          +import org.apache.phoenix.jdbc.PhoenixConnection;
          +import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
          +import org.apache.phoenix.query.QueryConstants;
          +import org.apache.phoenix.schema.types.PBoolean;
          +import org.apache.phoenix.schema.types.PVarchar;
          +import org.apache.phoenix.util.SchemaUtil;
          +import org.apache.phoenix.util.TestUtil;
          +import org.junit.Test;
          +
          +public class NamespaceSchemaMappingIT extends BaseHBaseManagedTimeIT {
          +
          + @Test
          + @SuppressWarnings("deprecation")
          + public void testBackWardCompatibility() throws Exception {
          +
          + String namespace = "TEST_SCHEMA";
          — End diff –

          It's great that you have added tests to check backward compatibility. Can you add some comments to the test, though? It is difficult to follow right now exactly what this test is doing.

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57363423

          — Diff: phoenix-protocol/src/main/MetaDataService.proto —
          @@ -72,6 +79,12 @@ message GetFunctionsRequest

          { optional int32 clientVersion = 5; }

          +message GetSchemaRequest {
          + required string schemaName = 1;
          + required int64 clientTimestamp = 2;
          + optional int32 clientVersion = 3;
          — End diff –

          Should the client version be optional or required? Considering this is a new metadata entity, I would think most of the things here should have "required" unless of course functionally they don't have to be present always. This applies to all the other schema related requests.

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57364562

          — Diff: phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java —
          @@ -825,9 +825,27 @@ protected static void ensureTableCreated(String url, String tableName, Long ts)

          protected static void ensureTableCreated(String url, String tableName, byte[][] splits, Long ts) throws SQLException {
              String ddl = tableDDLMap.get(tableName);
          +   createSchema(url, tableName, ts);
              createTestTable(url, ddl, splits, ts);
          }

          + public static void createSchema(String url, String tableName, Long ts) throws SQLException {
          +     if (tableName.contains(".")) {
          +         String schema = tableName.substring(0, tableName.indexOf("."));
          +         if (!schema.equals("")) {
          +             Properties props = new Properties();
          +             if (ts != null) {
          +                 props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts));
          +             }
          +             try (Connection conn = DriverManager.getConnection(url, props) {
          +                 conn.createStatement().executeUpdate("CREATE SCHEMA IF NOT EXISTS " + schema);
          +             } catch (TableAlreadyExistsException e) {
          — End diff –

          Remove the catch() block here as it doesn't apply.
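For reference, the schema-extraction logic of the quoted patch can be reproduced standalone. The JDBC/DDL portion is omitted, and the class and method names below are hypothetical; this only mirrors the string handling: take the part of a qualified table name before the first dot, and treat a missing or empty prefix as "no schema".

```java
// Standalone reproduction of the schema-name extraction in the quoted
// BaseTest#createSchema patch. Hypothetical names; the JDBC part that
// issues "CREATE SCHEMA IF NOT EXISTS <schema>" is omitted.
public class SchemaExtractionSketch {
    /** Returns the schema prefix of a qualified table name, or null if there is none. */
    public static String schemaOf(String tableName) {
        if (!tableName.contains(".")) {
            return null; // unqualified name: no schema to create
        }
        String schema = tableName.substring(0, tableName.indexOf('.'));
        return schema.isEmpty() ? null : schema; // ".T" has an empty, unusable prefix
    }
}
```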

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57364814

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java —
          @@ -341,4 +345,13 @@ public static LinkType fromSerializedValue(byte serializedValue) {
          */
          int getRowTimestampColPos();
          long getUpdateCacheFrequency();
          +
          + boolean isNamespaceMapped();
          +
          + /**
          + * For a view, return the name of table in Phoenix that physically stores data.
          + *
          + * @return the name of the Phoenix table storing the data.
          + */
          + PName getPhoenixPhysicalName();
          — End diff –

          Can you tell me more about this new method? What is its purpose? Looking at the comments, it looks exactly the same as getPhysicalName().

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57371542

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/parse/PSchema.java —
          @@ -0,0 +1,86 @@
          +/*
          + * Licensed to the Apache Software Foundation (ASF) under one
          + * or more contributor license agreements. See the NOTICE file
          + * distributed with this work for additional information
          + * regarding copyright ownership. The ASF licenses this file
          + * to you under the Apache License, Version 2.0 (the
          + * "License"); you may not use this file except in compliance
          + * with the License. You may obtain a copy of the License at
          + *
          + * http://www.apache.org/licenses/LICENSE-2.0
          + *
          + * Unless required by applicable law or agreed to in writing, software
          + * distributed under the License is distributed on an "AS IS" BASIS,
          + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
          + * See the License for the specific language governing permissions and
          + * limitations under the License.
          + */
          +package org.apache.phoenix.parse;
          +
          +import org.apache.hadoop.hbase.HConstants;
          +import org.apache.phoenix.coprocessor.generated.PSchemaProtos;
          +import org.apache.phoenix.schema.PMetaDataEntity;
          +import org.apache.phoenix.schema.PName;
          +import org.apache.phoenix.schema.PNameFactory;
          +import org.apache.phoenix.schema.PTableKey;
          +import org.apache.phoenix.util.SchemaUtil;
          +import org.apache.phoenix.util.SizedUtil;
          +
          +public class PSchema implements PMetaDataEntity {
          +
          + private final PName schemaName;
          + private PTableKey schemaKey;
          + private long timeStamp;
          + private int estimatedSize;
          +
+ public PSchema(long timeStamp) { // For index delete marker
+ this.timeStamp = timeStamp;
+ this.schemaName = null;
+ }
+
+ public PSchema(String schemaName) {
+ this(schemaName, HConstants.LATEST_TIMESTAMP);
+ }

          +
          + public PSchema(String schemaName, long timeStamp) {
          + this.schemaName = PNameFactory.newName(SchemaUtil.normalizeIdentifier(schemaName));
          + this.schemaKey = new PTableKey(null, this.schemaName.getString());
          — End diff –

Thinking about this a little bit more, is it possible for tenant views on the same multi-tenant table to have different schemas/namespaces? If yes, how would you differentiate these schemas, considering the tenantId is being set to null here? At a minimum, it would be ideal to have some testing around this scenario.

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57371653

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java —
          @@ -544,7 +556,28 @@ private MetaDataMutationResult updateCache(PName tenantId, String schemaName, St

          return result;
          }

+
+ public MetaDataMutationResult updateCache(String schemaName) throws SQLException {
+ return updateCache(schemaName, false);
+ }
+
+ public MetaDataMutationResult updateCache(String schemaName, boolean alwaysHitServer) throws SQLException {
+ long clientTimeStamp = getClientTimeStamp();
+ PSchema schema = null;
+ try {
+ schema = connection.getMetaDataCache().getSchema(new PTableKey(null, schemaName));

— End diff –

Is tenantId a factor here? Is it always OK to have tenantId as null?

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57371949

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java —
          @@ -1115,6 +1212,24 @@ private PFunction loadFunction(RegionCoprocessorEnvironment env, byte[] key,
          return null;
          }

          + private PSchema loadSchema(RegionCoprocessorEnvironment env, byte[] key, ImmutableBytesPtr cacheKey,
          + long clientTimeStamp, long asOfTimeStamp) throws IOException, SQLException {
          + Region region = env.getRegion();
          + Cache<ImmutableBytesPtr, PMetaDataEntity> metaDataCache = GlobalCache.getInstance(this.env).getMetaDataCache();
          + PSchema schema = (PSchema)metaDataCache.getIfPresent(cacheKey);
          + // We always cache the latest version - fault in if not in cache
+ if (schema != null) { return schema; }

          + ArrayList<byte[]> arrayList = new ArrayList<byte[]>(1);
          + arrayList.add(key);
          + List<PSchema> schemas = buildSchemas(arrayList, region, asOfTimeStamp, cacheKey);
          + if (schemas != null) return schemas.get(0);
          + // if not found then check if newer table already exists and add delete marker for timestamp
          — End diff –

Change the comment to mention schema and not table.
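The loadSchema method under review follows a get-or-load pattern: consult the server-side metadata cache first, and only on a miss rebuild the entity from SYSTEM.CATALOG rows and cache it. A minimal sketch of that idiom, using a plain ConcurrentHashMap and string values as stand-ins for the Guava cache and PSchema objects (SchemaCacheSketch and its methods are hypothetical names, not Phoenix API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified stand-in for the coprocessor's metadata cache: look up the
// schema by key, and on a miss fall back to a loader that rebuilds it
// (in Phoenix, from SYSTEM.CATALOG rows) and caches the result.
public class SchemaCacheSketch {
    // Stands in for GlobalCache's Cache<ImmutableBytesPtr, PMetaDataEntity>.
    private final Map<String, String> metaDataCache = new ConcurrentHashMap<>();
    private int loads = 0; // counts cache misses, for illustration only

    // Stands in for buildSchemas(...) reading catalog rows.
    private String buildSchema(String key) {
        loads++;
        return "PSchema(" + key + ")";
    }

    // Mirrors loadSchema: return the cached latest version, faulting in on a miss.
    public String loadSchema(String key) {
        return metaDataCache.computeIfAbsent(key, this::buildSchema);
    }

    public int loadCount() { return loads; }

    public static void main(String[] args) {
        SchemaCacheSketch cache = new SchemaCacheSketch();
        cache.loadSchema("MY_SCHEMA"); // miss: builds and caches
        cache.loadSchema("MY_SCHEMA"); // hit: served from cache
        System.out.println(cache.loadCount()); // prints 1
    }
}
```

The real implementation additionally handles the "not found" case by checking for a newer entry and inserting a delete marker, which this sketch omits.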

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57372367

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java —
@@ -3301,4 +3356,86 @@ private PTable getParentOfView(PTable view) throws SQLException {
String parentName = SchemaUtil.normalizeFullTableName(select.getFrom().toString().trim());
return connection.getTable(new PTableKey(view.getTenantId(), parentName));
}

          +
          + public MutationState createSchema(CreateSchemaStatement create) throws SQLException {
          + boolean wasAutoCommit = connection.getAutoCommit();
          + connection.rollback();
          + try {
          + boolean isIfNotExists = create.isIfNotExists();
          + PSchema schema = new PSchema(create.getSchemaName());
          + connection.setAutoCommit(false);
          + List<Mutation> schemaMutations;
          +
+ try (PreparedStatement schemaUpsert = connection.prepareStatement(CREATE_SCHEMA)) {
+ schemaUpsert.setString(1, schema.getSchemaName());
+ schemaUpsert.setString(2, MetaDataClient.EMPTY_TABLE);
+ schemaUpsert.execute();
+ schemaMutations = connection.getMutationState().toMutations(null).next().getSecond();
+ connection.rollback();
+ }

          + MetaDataMutationResult result = connection.getQueryServices().createSchema(schemaMutations,
          + schema.getSchemaName());
          + MutationCode code = result.getMutationCode();
          + switch (code) {
          + case SCHEMA_ALREADY_EXISTS:
          + if (result.getTable() != null) { // Can happen for transactional table that already exists as HBase
          — End diff –

result.getSchema()? Also, please remove the comment.

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57372836

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/PSchemaKey.java —
          @@ -0,0 +1,67 @@
          +/*
          — End diff –

          Where is this used? I don't see any mention of PSchemaKey outside this class itself.

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57373091

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java —
          @@ -634,15 +642,19 @@ private PMetaData metaDataMutated(PName tenantId, String tableName, long tableSe
          }
          }

- @Override
- public PMetaData addColumn(final PName tenantId, final String tableName, final List<PColumn> columns, final long tableTimeStamp,
- final long tableSeqNum, final boolean isImmutableRows, final boolean isWalDisabled, final boolean isMultitenant,
- final boolean storeNulls, final boolean isTransactional, final long updateCacheFrequency, final long resolvedTime) throws SQLException {
- return metaDataMutated(tenantId, tableName, tableSeqNum, new Mutator() {
+ @Override

— End diff –

Is this just a whitespace diff here? If yes, please revert the change.

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57373401

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java —
@@ -958,6 +973,32 @@ private boolean allowOnlineTableSchemaUpdate() {
QueryServicesOptions.DEFAULT_ALLOW_ONLINE_TABLE_SCHEMA_UPDATE); }

          + private NamespaceDescriptor ensureNamespaceCreated(String schemaName) throws SQLException {
          + SQLException sqlE = null;
          + try (HBaseAdmin admin = getAdmin()) {
          + final String quorum = ZKConfig.getZKQuorumServersString(config);
          — End diff –

Remove these two variables and the following logger statement.

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57373598

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java —
@@ -958,6 +973,32 @@ private boolean allowOnlineTableSchemaUpdate() {
QueryServicesOptions.DEFAULT_ALLOW_ONLINE_TABLE_SCHEMA_UPDATE); }

          + private NamespaceDescriptor ensureNamespaceCreated(String schemaName) throws SQLException {
          + SQLException sqlE = null;
          + try (HBaseAdmin admin = getAdmin()) {
          + final String quorum = ZKConfig.getZKQuorumServersString(config);
          + final String znode = this.props.get(HConstants.ZOOKEEPER_ZNODE_PARENT);
          + logger.debug("Found quorum: " + quorum + ":" + znode);
          + boolean nameSpaceExists = true;
          + NamespaceDescriptor namespaceDescriptor = null;
          + try {
          + namespaceDescriptor = admin.getNamespaceDescriptor(schemaName);
          — End diff –

How about trying to create the namespace and catching the expected NamespaceExistException?
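The suggestion above replaces a check-then-create sequence (getNamespaceDescriptor, then createNamespace on a miss) with a single create attempt that treats "already exists" as success, which also closes the race between the check and the create. A minimal sketch of that idiom with an in-memory stand-in for HBaseAdmin (EnsureNamespaceSketch, its registry, and its return value are all hypothetical; real code would call admin.createNamespace(...) and catch HBase's NamespaceExistException):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the "create first, catch already-exists" idiom, using an
// in-memory namespace registry in place of a live HBase cluster.
public class EnsureNamespaceSketch {
    static class NamespaceExistException extends Exception {}

    // Stand-in for the cluster's namespace registry.
    private final Set<String> namespaces = ConcurrentHashMap.newKeySet();

    private void createNamespace(String name) throws NamespaceExistException {
        // Set.add returns false if the element was already present.
        if (!namespaces.add(name)) throw new NamespaceExistException();
    }

    // Idempotent: attempt the create and treat "already exists" as success,
    // avoiding the check-then-create race of getNamespaceDescriptor + create.
    public boolean ensureNamespaceCreated(String name) {
        try {
            createNamespace(name);
            return true;  // we created it
        } catch (NamespaceExistException e) {
            return false; // it already existed, which is fine
        }
    }

    public static void main(String[] args) {
        EnsureNamespaceSketch admin = new EnsureNamespaceSketch();
        System.out.println(admin.ensureNamespaceCreated("MY_SCHEMA")); // true
        System.out.println(admin.ensureNamespaceCreated("MY_SCHEMA")); // false
    }
}
```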

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57558393

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/PSchemaKey.java —
          @@ -0,0 +1,67 @@
          +/*
          — End diff –

It is not needed anymore, as we are storing the schema in SYSTEM.CATALOG only, so I have removed it in the latest commit.

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57558494

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java —
          @@ -544,7 +556,28 @@ private MetaDataMutationResult updateCache(PName tenantId, String schemaName, St

          return result;
          }

+
+ public MetaDataMutationResult updateCache(String schemaName) throws SQLException {
+ return updateCache(schemaName, false);
+ }
+
+ public MetaDataMutationResult updateCache(String schemaName, boolean alwaysHitServer) throws SQLException {
+ long clientTimeStamp = getClientTimeStamp();
+ PSchema schema = null;
+ try {
+ schema = connection.getMetaDataCache().getSchema(new PTableKey(null, schemaName));

— End diff –

I'm not sure whether we should keep a schema per tenantId, so currently it is kept global. If you or @JamesRTaylor think we should, I can make the necessary changes to support a schema per tenantId.

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57558752

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java —
          @@ -634,15 +642,19 @@ private PMetaData metaDataMutated(PName tenantId, String tableName, long tableSe
          }
          }

- @Override
- public PMetaData addColumn(final PName tenantId, final String tableName, final List<PColumn> columns, final long tableTimeStamp,
- final long tableSeqNum, final boolean isImmutableRows, final boolean isWalDisabled, final boolean isMultitenant,
- final boolean storeNulls, final boolean isTransactional, final long updateCacheFrequency, final long resolvedTime) throws SQLException {
- return metaDataMutated(tenantId, tableName, tableSeqNum, new Mutator() {
+ @Override

— End diff –

No, an isNamespaceMapped parameter is added.

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on the pull request:

          https://github.com/apache/phoenix/pull/153#issuecomment-202332916

Thanks @samarthjain for the reviews. I have incorporated the changes you suggested except one: supporting a schema per tenantId, for which I need your confirmation again.

          githubbot ASF GitHub Bot added a comment -

          Github user JamesRTaylor commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57581391

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java —
          @@ -544,7 +556,28 @@ private MetaDataMutationResult updateCache(PName tenantId, String schemaName, St

          return result;
          }

+
+ public MetaDataMutationResult updateCache(String schemaName) throws SQLException {
+ return updateCache(schemaName, false);
+ }
+
+ public MetaDataMutationResult updateCache(String schemaName, boolean alwaysHitServer) throws SQLException {
+ long clientTimeStamp = getClientTimeStamp();
+ PSchema schema = null;
+ try {
+ schema = connection.getMetaDataCache().getSchema(new PTableKey(null, schemaName));

— End diff –

          No, we wouldn't need a schema per tenantId. With namespace support, is all data still stored in the same physical table for all views, even if the views have different schemas? For example, given the following:

          CREATE VIEW s1.a AS SELECT * FROM t WHERE k=1;
          CREATE VIEW s2.b AS SELECT * FROM t WHERE k=2;
          CREATE VIEW s3.c AS SELECT * FROM t WHERE k=3;

          Would the following query still return all rows across all three views?

          SELECT * FROM t;

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57591528

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java —
          @@ -341,4 +345,13 @@ public static LinkType fromSerializedValue(byte serializedValue) {
          */
          int getRowTimestampColPos();
          long getUpdateCacheFrequency();
          +
          + boolean isNamespaceMapped();
          +
          + /**
          + * For a view, return the name of table in Phoenix that physically stores data.
          + *
          + * @return the name of the Phoenix table storing the data.
          + */
          + PName getPhoenixPhysicalName();
          — End diff –

getPhoenixPhysicalName(): needed only in the case of views, when we want the Phoenix representation (X.Y) of the data table.

getPhysicalName(): gets the physical/HBase representation (X:Y) of any data table/index.
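The distinction above is just the two spellings of the same fully qualified name: Phoenix separates schema and table with a dot (X.Y), while a namespace-mapped HBase table uses a colon (X:Y). An illustrative sketch of the mapping (class and method names here are hypothetical; the real conversion logic lives in Phoenix's SchemaUtil):

```java
// Converts between the Phoenix representation SCHEMA.TABLE and the
// namespace-mapped HBase representation SCHEMA:TABLE.
public class NameMappingSketch {
    // Phoenix form -> HBase namespace-mapped form (X.Y -> X:Y).
    public static String toHBasePhysicalName(String phoenixName) {
        int idx = phoenixName.indexOf('.');
        return idx < 0
                ? phoenixName // no schema: table lives in the default namespace
                : phoenixName.substring(0, idx) + ":" + phoenixName.substring(idx + 1);
    }

    // HBase namespace-mapped form -> Phoenix form (X:Y -> X.Y).
    public static String toPhoenixName(String hbaseName) {
        return hbaseName.replace(':', '.');
    }

    public static void main(String[] args) {
        System.out.println(toHBasePhysicalName("MY_SCHEMA.MY_TABLE")); // MY_SCHEMA:MY_TABLE
        System.out.println(toPhoenixName("MY_SCHEMA:MY_TABLE"));       // MY_SCHEMA.MY_TABLE
    }
}
```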

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57592706

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java —
          @@ -544,7 +556,28 @@ private MetaDataMutationResult updateCache(PName tenantId, String schemaName, St

          return result;
          }

+
+ public MetaDataMutationResult updateCache(String schemaName) throws SQLException {
+ return updateCache(schemaName, false);
+ }
+
+ public MetaDataMutationResult updateCache(String schemaName, boolean alwaysHitServer) throws SQLException {
+ long clientTimeStamp = getClientTimeStamp();
+ PSchema schema = null;
+ try {
+ schema = connection.getMetaDataCache().getSchema(new PTableKey(null, schemaName));

— End diff –

Yes, the union of all the above views will be equal to `select * from t`,
as s1.a, s2.b and s3.c all point to the same physical table "t".

          githubbot ASF GitHub Bot added a comment -

          Github user JamesRTaylor commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57596801

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java —
          @@ -544,7 +556,28 @@ private MetaDataMutationResult updateCache(PName tenantId, String schemaName, St

          return result;
          }

+
+ public MetaDataMutationResult updateCache(String schemaName) throws SQLException {
+ return updateCache(schemaName, false);
+ }
+
+ public MetaDataMutationResult updateCache(String schemaName, boolean alwaysHitServer) throws SQLException {
+ long clientTimeStamp = getClientTimeStamp();
+ PSchema schema = null;
+ try {
+ schema = connection.getMetaDataCache().getSchema(new PTableKey(null, schemaName));

— End diff –

          But each view will be in a different namespace? I thought namespace was a physical property.

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57598863

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java —
@@ -544,7 +556,28 @@ private MetaDataMutationResult updateCache(PName tenantId, String schemaName, St
         return result;
     }
+
+    public MetaDataMutationResult updateCache(String schemaName) throws SQLException {
+        return updateCache(schemaName, false);
+    }
+
+    public MetaDataMutationResult updateCache(String schemaName, boolean alwaysHitServer) throws SQLException {
+        long clientTimeStamp = getClientTimeStamp();
+        PSchema schema = null;
+        try {
+            schema = connection.getMetaDataCache().getSchema(new PTableKey(null, schemaName));

End diff –

Views with a schema may not need to be mapped to any namespace, since a view has no physical presence; a view can reside in one schema while referencing a table in any other namespace/schema. It seems that other databases like MySQL and PostgreSQL also support views across schemas like this.
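A contrived illustration of that point (hypothetical names, not Phoenix code): views are purely logical, so views created in different schemas can all resolve to the same physical table.

```java
import java.util.Map;

// Hypothetical sketch: each "SCHEMA.VIEW" entry records the physical table
// the view reads from. Three views in three different schemas all resolve
// to the single physical table "T", so their union covers exactly T's rows.
public class ViewResolutionDemo {
    static final Map<String, String> VIEW_TO_PHYSICAL = Map.of(
            "S1.A", "T",
            "S2.B", "T",
            "S3.C", "T");

    static String physicalTableOf(String viewName) {
        return VIEW_TO_PHYSICAL.get(viewName);
    }

    public static void main(String[] args) {
        // All three views, though in different schemas, scan the same table "T".
        System.out.println(physicalTableOf("S1.A")); // T
        System.out.println(physicalTableOf("S2.B")); // T
        System.out.println(physicalTableOf("S3.C")); // T
    }
}
```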

          githubbot ASF GitHub Bot added a comment -

          Github user JamesRTaylor commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57606218

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java —
@@ -1562,7 +1678,7 @@ private MetaDataMutationResult doDropTable(byte[] key, byte[] tenantId, byte[] s
         }
         if (tableType != PTableType.VIEW) { // Add to list of HTables to delete, unless it's a view
-            tableNamesToDelete.add(table.getName().getBytes());
+            tableNamesToDelete.add(table.getPhysicalName().getBytes());

End diff –

There seem to be a lot of these changes from getName to getPhysicalName, which I don't believe are correct. getName() returns the logical name, while getPhysicalName() returns the physical table name. Why was this change made?

          githubbot ASF GitHub Bot added a comment -

          Github user JamesRTaylor commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57606351

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/iterate/ChunkedResultIterator.java —
@@ -74,7 +74,7 @@ public ChunkedResultIteratorFactory(ParallelIteratorFactory
     @Override
     public PeekingResultIterator newIterator(StatementContext context, ResultIterator scanner, Scan scan, String tableName) throws SQLException {
-        if (logger.isDebugEnabled()) logger.debug(LogUtil.addCustomAnnotations("ChunkedResultIteratorFactory.newIterator over " + tableRef.getTable().getName().getString() + " with " + scan, ScanUtil.getCustomAnnotations(scan)));
+        if (logger.isDebugEnabled()) logger.debug(LogUtil.addCustomAnnotations("ChunkedResultIteratorFactory.newIterator over " + tableRef.getTable().getPhysicalName().getString() + " with " + scan, ScanUtil.getCustomAnnotations(scan)));

End diff –

          Another getName -> getPhysicalName. Why?

          githubbot ASF GitHub Bot added a comment -

          Github user JamesRTaylor commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57606577

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/iterate/ExplainTable.java —
@@ -120,7 +120,7 @@ protected void explain(String prefix, List<String> planSteps) {
         } else {
             explainSkipScan(buf);
         }
-        buf.append("OVER ").append(tableRef.getTable().getPhysicalName().getString());
+        buf.append("OVER ").append(tableRef.getTable().getPhoenixPhysicalName().getString());

End diff –

          Now here getPhysicalName -> getPhoenixPhysicalName. Why are we introducing a new name method?

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57608015

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java —
@@ -1562,7 +1678,7 @@ private MetaDataMutationResult doDropTable(byte[] key, byte[] tenantId, byte[] s
         }
         if (tableType != PTableType.VIEW) { // Add to list of HTables to delete, unless it's a view
-            tableNamesToDelete.add(table.getName().getBytes());
+            tableNamesToDelete.add(table.getPhysicalName().getBytes());

End diff –

Yes! It was not correct earlier, but it didn't matter because the logical and physical representations were the same. Now this change is required to get the correct physical name so that we can delete the descriptors on the client.

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57608384

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/iterate/ChunkedResultIterator.java —
@@ -74,7 +74,7 @@ public ChunkedResultIteratorFactory(ParallelIteratorFactory
     @Override
     public PeekingResultIterator newIterator(StatementContext context, ResultIterator scanner, Scan scan, String tableName) throws SQLException {
-        if (logger.isDebugEnabled()) logger.debug(LogUtil.addCustomAnnotations("ChunkedResultIteratorFactory.newIterator over " + tableRef.getTable().getName().getString() + " with " + scan, ScanUtil.getCustomAnnotations(scan)));
+        if (logger.isDebugEnabled()) logger.debug(LogUtil.addCustomAnnotations("ChunkedResultIteratorFactory.newIterator over " + tableRef.getTable().getPhysicalName().getString() + " with " + scan, ScanUtil.getCustomAnnotations(scan)));

End diff –

I changed it because there is a reference to the scan, so I thought physical names in the logs would make debugging easier. Let me know if you think we should keep Phoenix names in *logging* as they are.

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57609323

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/iterate/ExplainTable.java —
@@ -120,7 +120,7 @@ protected void explain(String prefix, List<String> planSteps) {
         } else {
             explainSkipScan(buf);
         }
-        buf.append("OVER ").append(tableRef.getTable().getPhysicalName().getString());
+        buf.append("OVER ").append(tableRef.getTable().getPhoenixPhysicalName().getString());

End diff –

getPhoenixPhysicalName() is required to get the Phoenix table name (X.Y). It is different from getName(): in the case of a view, it returns the Phoenix name of the data table, which is sometimes required for the "Explain" plan or to get the tableRef for the data table. I think we could give this method a different name to avoid confusion.

getPhysicalName() gives the HBase representation (X:Y) of the table/view, depending on the isNamespaceMapped flag.
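The X.Y vs. X:Y distinction can be sketched with plain string handling (a hypothetical helper for illustration, not Phoenix's actual SchemaUtil API): the Phoenix logical name uses a dot separator, while the HBase physical name uses a colon when namespace mapping is enabled.

```java
// Hypothetical sketch of the naming scheme discussed above.
public class NamespaceNameDemo {
    // Phoenix logical name: "SCHEMA.TABLE". HBase physical name when
    // isNamespaceMapped is true: "SCHEMA:TABLE"; otherwise the full
    // "SCHEMA.TABLE" string is itself the HBase table name (legacy behavior).
    static String toPhysicalName(String phoenixName, boolean isNamespaceMapped) {
        if (!isNamespaceMapped) {
            return phoenixName;
        }
        int dot = phoenixName.indexOf('.');
        return dot < 0 ? phoenixName
                : phoenixName.substring(0, dot) + ":" + phoenixName.substring(dot + 1);
    }

    public static void main(String[] args) {
        System.out.println(toPhysicalName("MY_SCHEMA.MY_TABLE", true));  // MY_SCHEMA:MY_TABLE
        System.out.println(toPhysicalName("MY_SCHEMA.MY_TABLE", false)); // MY_SCHEMA.MY_TABLE
        System.out.println(toPhysicalName("MY_TABLE", true));            // MY_TABLE (default namespace)
    }
}
```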

          githubbot ASF GitHub Bot added a comment -

          Github user JamesRTaylor commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57615307

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/iterate/ExplainTable.java —
@@ -120,7 +120,7 @@ protected void explain(String prefix, List<String> planSteps) {
         } else {
             explainSkipScan(buf);
         }
-        buf.append("OVER ").append(tableRef.getTable().getPhysicalName().getString());
+        buf.append("OVER ").append(tableRef.getTable().getPhoenixPhysicalName().getString());

End diff –

          I'd rather just have one getPhysicalName that returns X:Y if namespace enabled. Can we get away with that?

          githubbot ASF GitHub Bot added a comment -

          Github user JamesRTaylor commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57615360

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java —
@@ -1562,7 +1678,7 @@ private MetaDataMutationResult doDropTable(byte[] key, byte[] tenantId, byte[] s
         }
         if (tableType != PTableType.VIEW) { // Add to list of HTables to delete, unless it's a view
-            tableNamesToDelete.add(table.getName().getBytes());
+            tableNamesToDelete.add(table.getPhysicalName().getBytes());

End diff –

Ah, ok. Thanks for the bug fix. Please make sure there's a unit test that goes with it.

          githubbot ASF GitHub Bot added a comment -

          Github user JamesRTaylor commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57615550

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/iterate/ChunkedResultIterator.java —
@@ -74,7 +74,7 @@ public ChunkedResultIteratorFactory(ParallelIteratorFactory
     @Override
     public PeekingResultIterator newIterator(StatementContext context, ResultIterator scanner, Scan scan, String tableName) throws SQLException {
-        if (logger.isDebugEnabled()) logger.debug(LogUtil.addCustomAnnotations("ChunkedResultIteratorFactory.newIterator over " + tableRef.getTable().getName().getString() + " with " + scan, ScanUtil.getCustomAnnotations(scan)));
+        if (logger.isDebugEnabled()) logger.debug(LogUtil.addCustomAnnotations("ChunkedResultIteratorFactory.newIterator over " + tableRef.getTable().getPhysicalName().getString() + " with " + scan, ScanUtil.getCustomAnnotations(scan)));

End diff –

          Good question. Not sure. Maybe we can do that in a separate JIRA if we want to change it? I'd probably lean toward using HBase physical name everywhere for explain plan and logging.

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57677898

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java —
@@ -1562,7 +1678,7 @@ private MetaDataMutationResult doDropTable(byte[] key, byte[] tenantId, byte[] s
         }
         if (tableType != PTableType.VIEW) { // Add to list of HTables to delete, unless it's a view
-            tableNamesToDelete.add(table.getName().getBytes());
+            tableNamesToDelete.add(table.getPhysicalName().getBytes());

End diff –

Yeah, LocalIndexIT and ViewIT cover the dropTable tests.

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57678114

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/iterate/ChunkedResultIterator.java —
@@ -74,7 +74,7 @@ public ChunkedResultIteratorFactory(ParallelIteratorFactory
     @Override
     public PeekingResultIterator newIterator(StatementContext context, ResultIterator scanner, Scan scan, String tableName) throws SQLException {
-        if (logger.isDebugEnabled()) logger.debug(LogUtil.addCustomAnnotations("ChunkedResultIteratorFactory.newIterator over " + tableRef.getTable().getName().getString() + " with " + scan, ScanUtil.getCustomAnnotations(scan)));
+        if (logger.isDebugEnabled()) logger.debug(LogUtil.addCustomAnnotations("ChunkedResultIteratorFactory.newIterator over " + tableRef.getTable().getPhysicalName().getString() + " with " + scan, ScanUtil.getCustomAnnotations(scan)));

End diff –

https://issues.apache.org/jira/browse/PHOENIX-2807
Created a JIRA for this for now.

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r57725270

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/iterate/ExplainTable.java —
@@ -120,7 +120,7 @@ protected void explain(String prefix, List<String> planSteps) {
         } else {
             explainSkipScan(buf);
         }
-        buf.append("OVER ").append(tableRef.getTable().getPhysicalName().getString());
+        buf.append("OVER ").append(tableRef.getTable().getPhoenixPhysicalName().getString());

End diff –

Yeah, it is possible. I did it in my last commit.

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on the pull request:

          https://github.com/apache/phoenix/pull/153#issuecomment-204088945

@JamesRTaylor @samarthjain, would you mind reviewing:
– the upgrade utility to map existing tables to a namespace
– the changes in the bulkload tool
– the changes in IndexFailurePolicy
– let me know if you need more tests for coverage.

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58116856

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java —
@@ -3201,4 +3367,155 @@ private MetaDataMutationResult doDropFunction(long clientTimeStamp, List<byte[]>
         return new MetaDataMutationResult(MutationCode.FUNCTION_NOT_FOUND,
                 EnvironmentEdgeManager.currentTimeMillis(), null);
     }
+
+    @Override
+    public void createSchema(RpcController controller, CreateSchemaRequest request,
+            RpcCallback<MetaDataResponse> done) {
+        MetaDataResponse.Builder builder = MetaDataResponse.newBuilder();
+        String schemaName = null;
+        try {
+            List<Mutation> schemaMutations = ProtobufUtil.getMutations(request);
+            schemaName = request.getSchemaName();
+            Mutation m = MetaDataUtil.getPutOnlyTableHeaderRow(schemaMutations);
+
+            byte[] lockKey = m.getRow();
+            Region region = env.getRegion();
+            MetaDataMutationResult result = checkSchemaKeyInRegion(lockKey, region);
+            if (result != null) {
+                done.run(MetaDataMutationResult.toProto(result));
+                return;
+            }
+            List<RowLock> locks = Lists.newArrayList();
+            long clientTimeStamp = MetaDataUtil.getClientTimeStamp(schemaMutations);
+            try {
+                acquireLock(region, lockKey, locks);
+                // Get as of latest timestamp so we can detect if we have a
+                // newer function that already exists without making an
+                // additional query
+                ImmutableBytesPtr cacheKey = new ImmutableBytesPtr(lockKey);
+                PSchema schema = loadSchema(env, lockKey, cacheKey, clientTimeStamp, clientTimeStamp);
+                if (schema != null) {
+                    if (schema.getTimeStamp() < clientTimeStamp) {
+                        builder.setReturnCode(MetaDataProtos.MutationCode.SCHEMA_ALREADY_EXISTS);
+                        builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
+                        builder.setSchema(PSchema.toProto(schema));
+                        done.run(builder.build());
+                        return;
+                    } else {
+                        builder.setReturnCode(MetaDataProtos.MutationCode.NEWER_SCHEMA_FOUND);
+                        builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
+                        builder.setSchema(PSchema.toProto(schema));
+                        done.run(builder.build());
+                        return;
+                    }
+                }
+                region.mutateRowsWithLocks(schemaMutations, Collections.<byte[]> emptySet(), HConstants.NO_NONCE,
+                        HConstants.NO_NONCE);
+
+                // Invalidate the cache - the next getTable call will add it
+                // TODO: consider loading the table that was just created here,
+                // patching up the parent table, and updating the cache
+                Cache<ImmutableBytesPtr, PMetaDataEntity> metaDataCache = GlobalCache.getInstance(this.env)
+                        .getMetaDataCache();
+                if (cacheKey != null) {
+                    metaDataCache.invalidate(cacheKey);
+                }
+
+                // Get timeStamp from mutations - the above method sets it if
+                // it's unset
+                long currentTimeStamp = MetaDataUtil.getClientTimeStamp(schemaMutations);
+                builder.setReturnCode(MetaDataProtos.MutationCode.SCHEMA_NOT_FOUND);
+                builder.setMutationTime(currentTimeStamp);
+                done.run(builder.build());
+                return;
+            } finally {
+                region.releaseRowLocks(locks);
+            }
+        } catch (Throwable t) {
+            logger.error("createFunction failed", t);

End diff –

          Change the message to:
          "Creating the schema " + schemaName + " failed."

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58117457

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java —
          @@ -2910,6 +3026,16 @@ private static MetaDataMutationResult checkTableKeyInRegion(byte[] key, Region r
          EnvironmentEdgeManager.currentTimeMillis(), null);
          }

          + private static MetaDataMutationResult checkSchemaKeyInRegion(byte[] key, Region region) {
          — End diff –

          You should reuse checkTableKeyInRegion and rename it to checkKeyInRegion. You would pass the MutationCode to include in the result as an extra parameter for the case when the key is not in the region. While you are at it, please also remove the method checkFunctionKeyInRegion.
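The refactoring suggested above could look roughly like the following self-contained sketch. The types are simplified stand-ins (the real method takes a Region and returns a MetaDataMutationResult, and the enum values here only mirror Phoenix's MutationCode names for illustration):

```java
import java.util.Arrays;

public class CheckKeyInRegionSketch {
    // Illustrative stand-ins for Phoenix's MutationCode values.
    enum MutationCode { TABLE_NOT_IN_REGION, FUNCTION_NOT_IN_REGION, SCHEMA_NOT_IN_REGION }

    /**
     * Unified replacement for checkTableKeyInRegion / checkFunctionKeyInRegion /
     * checkSchemaKeyInRegion: the caller supplies the MutationCode to report when
     * the key is not served by this region. Returns null when the key falls in
     * [startKey, endKey); an empty endKey means the region is unbounded above.
     */
    static MutationCode checkKeyInRegion(byte[] key, byte[] startKey, byte[] endKey,
                                         MutationCode codeIfNotInRegion) {
        boolean afterStart = Arrays.compare(startKey, key) <= 0;
        boolean beforeEnd = endKey.length == 0 || Arrays.compare(key, endKey) < 0;
        return (afterStart && beforeEnd) ? null : codeIfNotInRegion;
    }

    public static void main(String[] args) {
        byte[] start = {'b'}, end = {'m'};
        // 'c' lies in [b, m): in-region, so no error code is returned.
        System.out.println(checkKeyInRegion(new byte[] {'c'}, start, end,
                MutationCode.SCHEMA_NOT_IN_REGION));
        // 'z' lies past the end key: the caller-supplied code comes back.
        System.out.println(checkKeyInRegion(new byte[] {'z'}, start, end,
                MutationCode.SCHEMA_NOT_IN_REGION));
    }
}
```

Each caller then passes its own not-in-region code (table, function, or schema), which is what lets the three near-identical methods collapse into one.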

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58117686

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java —
          @@ -3201,4 +3367,155 @@ private MetaDataMutationResult doDropFunction(long clientTimeStamp, List<byte[]>
          return new MetaDataMutationResult(MutationCode.FUNCTION_NOT_FOUND,
          EnvironmentEdgeManager.currentTimeMillis(), null);
          }
          +
          + @Override
          + public void createSchema(RpcController controller, CreateSchemaRequest request,
          + RpcCallback<MetaDataResponse> done) {
          + MetaDataResponse.Builder builder = MetaDataResponse.newBuilder();
          + String schemaName = null;
          + try {
          + List<Mutation> schemaMutations = ProtobufUtil.getMutations(request);
          + schemaName = request.getSchemaName();
          + Mutation m = MetaDataUtil.getPutOnlyTableHeaderRow(schemaMutations);
          +
          + byte[] lockKey = m.getRow();
          + Region region = env.getRegion();
          + MetaDataMutationResult result = checkSchemaKeyInRegion(lockKey, region);
          + if (result != null) {
          + done.run(MetaDataMutationResult.toProto(result));
          + return;
          + }

          + List<RowLock> locks = Lists.newArrayList();
          + long clientTimeStamp = MetaDataUtil.getClientTimeStamp(schemaMutations);
          + try {
          + acquireLock(region, lockKey, locks);
          + // Get as of latest timestamp so we can detect if we have a
          + // newer function that already
          — End diff –

          newer schema. Also, please format the comments.

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58117768

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java —
          @@ -3201,4 +3367,155 @@ private MetaDataMutationResult doDropFunction(long clientTimeStamp, List<byte[]>
          return new MetaDataMutationResult(MutationCode.FUNCTION_NOT_FOUND,
          EnvironmentEdgeManager.currentTimeMillis(), null);
          }
          +
          + @Override
          + public void createSchema(RpcController controller, CreateSchemaRequest request,
          + RpcCallback<MetaDataResponse> done) {
          + MetaDataResponse.Builder builder = MetaDataResponse.newBuilder();
          + String schemaName = null;
          + try {
          + List<Mutation> schemaMutations = ProtobufUtil.getMutations(request);
          + schemaName = request.getSchemaName();
          + Mutation m = MetaDataUtil.getPutOnlyTableHeaderRow(schemaMutations);
          +
          + byte[] lockKey = m.getRow();
          + Region region = env.getRegion();
          + MetaDataMutationResult result = checkSchemaKeyInRegion(lockKey, region);
          + if (result != null) {
          + done.run(MetaDataMutationResult.toProto(result));
          + return;
          + }

          + List<RowLock> locks = Lists.newArrayList();
          + long clientTimeStamp = MetaDataUtil.getClientTimeStamp(schemaMutations);
          + try {
          + acquireLock(region, lockKey, locks);
          + // Get as of latest timestamp so we can detect if we have a
          + // newer function that already
          + // exists without making an additional query
          + ImmutableBytesPtr cacheKey = new ImmutableBytesPtr(lockKey);
          + PSchema schema = loadSchema(env, lockKey, cacheKey, clientTimeStamp, clientTimeStamp);
          + if (schema != null) {
          + if (schema.getTimeStamp() < clientTimeStamp) {
          + builder.setReturnCode(MetaDataProtos.MutationCode.SCHEMA_ALREADY_EXISTS);
          + builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
          + builder.setSchema(PSchema.toProto(schema));
          + done.run(builder.build());
          + return;
          + } else {
          + builder.setReturnCode(MetaDataProtos.MutationCode.NEWER_SCHEMA_FOUND);
          + builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
          + builder.setSchema(PSchema.toProto(schema));
          + done.run(builder.build());
          + return;
          + }

          + }
          + region.mutateRowsWithLocks(schemaMutations, Collections.<byte[]> emptySet(), HConstants.NO_NONCE,
          + HConstants.NO_NONCE);
          +
          + // Invalidate the cache - the next getTable call will add it
          — End diff –

          Please update these comments.

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58118513

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java —
          @@ -3201,4 +3367,155 @@ private MetaDataMutationResult doDropFunction(long clientTimeStamp, List<byte[]>
          return new MetaDataMutationResult(MutationCode.FUNCTION_NOT_FOUND,
          EnvironmentEdgeManager.currentTimeMillis(), null);
          }
          +
          + @Override
          + public void createSchema(RpcController controller, CreateSchemaRequest request,
          + RpcCallback<MetaDataResponse> done) {
          + MetaDataResponse.Builder builder = MetaDataResponse.newBuilder();
          + String schemaName = null;
          + try {
          + List<Mutation> schemaMutations = ProtobufUtil.getMutations(request);
          + schemaName = request.getSchemaName();
          + Mutation m = MetaDataUtil.getPutOnlyTableHeaderRow(schemaMutations);
          +
          + byte[] lockKey = m.getRow();
          + Region region = env.getRegion();
          + MetaDataMutationResult result = checkSchemaKeyInRegion(lockKey, region);
          + if (result != null) {
          + done.run(MetaDataMutationResult.toProto(result));
          + return;
          + }
          + List<RowLock> locks = Lists.newArrayList();
          + long clientTimeStamp = MetaDataUtil.getClientTimeStamp(schemaMutations);
          + try {
          + acquireLock(region, lockKey, locks);
          + // Get as of latest timestamp so we can detect if we have a
          + // newer function that already
          + // exists without making an additional query
          + ImmutableBytesPtr cacheKey = new ImmutableBytesPtr(lockKey);
          + PSchema schema = loadSchema(env, lockKey, cacheKey, clientTimeStamp, clientTimeStamp);
          + if (schema != null) {
          + if (schema.getTimeStamp() < clientTimeStamp) {
          + builder.setReturnCode(MetaDataProtos.MutationCode.SCHEMA_ALREADY_EXISTS);
          + builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
          + builder.setSchema(PSchema.toProto(schema));
          + done.run(builder.build());
          + return;
          + } else {
          + builder.setReturnCode(MetaDataProtos.MutationCode.NEWER_SCHEMA_FOUND);
          + builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
          + builder.setSchema(PSchema.toProto(schema));
          + done.run(builder.build());
          + return;
          + }
          + }
          + region.mutateRowsWithLocks(schemaMutations, Collections.<byte[]> emptySet(), HConstants.NO_NONCE,
          + HConstants.NO_NONCE);
          +
          + // Invalidate the cache - the next getTable call will add it
          + // TODO: consider loading the table that was just created here,
          + // patching up the parent table, and updating the cache
          + Cache<ImmutableBytesPtr, PMetaDataEntity> metaDataCache = GlobalCache.getInstance(this.env)
          + .getMetaDataCache();
          + if (cacheKey != null) {
          + metaDataCache.invalidate(cacheKey);
          + }
          +
          + // Get timeStamp from mutations - the above method sets it if
          + // it's unset
          + long currentTimeStamp = MetaDataUtil.getClientTimeStamp(schemaMutations);
          + builder.setReturnCode(MetaDataProtos.MutationCode.SCHEMA_NOT_FOUND);
          + builder.setMutationTime(currentTimeStamp);
          + done.run(builder.build());
          + return;
          + } finally {
          + region.releaseRowLocks(locks);
          + }
          + } catch (Throwable t) {
          + logger.error("createFunction failed", t);
          + ProtobufUtil.setControllerException(controller, ServerUtil.createIOException(schemaName, t));
          + }
          + }
          +
          + @Override
          + public void dropSchema(RpcController controller, DropSchemaRequest request, RpcCallback<MetaDataResponse> done) {
          + String schemaName = null;
          + try {
          + List<Mutation> schemaMetaData = ProtobufUtil.getMutations(request);
          + schemaName = request.getSchemaName();
          + byte[] lockKey = SchemaUtil.getSchemaKey(schemaName);
          + Region region = env.getRegion();
          + MetaDataMutationResult result = checkSchemaKeyInRegion(lockKey, region);
          + if (result != null) {
          + done.run(MetaDataMutationResult.toProto(result));
          + return;
          + }

          + List<RowLock> locks = Lists.newArrayList();
          + long clientTimeStamp = MetaDataUtil.getClientTimeStamp(schemaMetaData);
          + try {
          + acquireLock(region, lockKey, locks);
          + List<ImmutableBytesPtr> invalidateList = new ArrayList<ImmutableBytesPtr>(1);
          + result = doDropSchema(clientTimeStamp, schemaName, lockKey, schemaMetaData, invalidateList);
          + if (result.getMutationCode() != MutationCode.SCHEMA_ALREADY_EXISTS) {
          + done.run(MetaDataMutationResult.toProto(result));
          + return;
          + }

          + region.mutateRowsWithLocks(schemaMetaData, Collections.<byte[]> emptySet(), HConstants.NO_NONCE,
          + HConstants.NO_NONCE);
          + Cache<ImmutableBytesPtr, PMetaDataEntity> metaDataCache = GlobalCache.getInstance(this.env)
          + .getMetaDataCache();
          + long currentTime = MetaDataUtil.getClientTimeStamp(schemaMetaData);
          + for (ImmutableBytesPtr ptr : invalidateList) {
          + metaDataCache.invalidate(ptr);
          + metaDataCache.put(ptr, newDeletedSchemaMarker(currentTime));
          + }

          + done.run(MetaDataMutationResult.toProto(result));
          + return;
          + } finally {
          + region.releaseRowLocks(locks);
          + }

          + } catch (Throwable t) {
          + logger.error("drop schema failed:", t);
          + ProtobufUtil.setControllerException(controller, ServerUtil.createIOException(schemaName, t));
          + }

          + }
          +
          + private MetaDataMutationResult doDropSchema(long clientTimeStamp, String schemaName, byte[] key,
          + List<Mutation> schemaMutations, List<ImmutableBytesPtr> invalidateList) throws IOException, SQLException {
          + PSchema schema = loadSchema(env, key, new ImmutableBytesPtr(key), clientTimeStamp, clientTimeStamp);
          + boolean areTablesExists = false;
          + if (schema == null) {
          + return new MetaDataMutationResult(MutationCode.SCHEMA_NOT_FOUND,
          + EnvironmentEdgeManager.currentTimeMillis(), null);
          + }

          + if (schema.getTimeStamp() < clientTimeStamp) {
          + Region region = env.getRegion();
          + Scan scan = MetaDataUtil.newTableRowsScan(SchemaUtil.getKeyForSchema(null, schemaName), MIN_TABLE_TIMESTAMP,
          + clientTimeStamp);
          + List<Cell> results = Lists.newArrayList();
          + try (RegionScanner scanner = region.getScanner(scan)) {
          + scanner.next(results);
          + if (results.isEmpty()) { // Should not be possible
          + return new MetaDataMutationResult(MutationCode.SCHEMA_NOT_FOUND,
          + EnvironmentEdgeManager.currentTimeMillis(), null);
          + }

          + do {
          + Cell kv = results.get(0);
          + if (Bytes.compareTo(kv.getRowArray(), kv.getRowOffset(), kv.getRowLength(), key, 0,
          + key.length) != 0) {
          + areTablesExists = true;
          + break;
          + }

          + results.clear();
          + scanner.next(results);
          + } while (!results.isEmpty());
          + }
          + if (areTablesExists) {
          + return new MetaDataMutationResult(MutationCode.UNALLOWED_SCHEMA_MUTATION, schema,
          — End diff –

          I think it would be better to throw a more specific mutation code here. Something like TABLES_EXIST_ON_SCHEMA and then have proper handling in MetadataClient.dropSchema to throw the right SQLExceptionCode with appropriate message.
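The suggested split between a dedicated server-side mutation code and client-side exception mapping might look like the sketch below. All names here are illustrative (the real code would extend MetaDataProtos.MutationCode and throw SQLException with a proper SQLExceptionCode in MetaDataClient.dropSchema):

```java
public class DropSchemaResultHandling {
    // Illustrative stand-ins for the server's return codes, including the
    // proposed TABLES_EXIST_ON_SCHEMA.
    enum MutationCode { SCHEMA_ALREADY_EXISTS, SCHEMA_NOT_FOUND, TABLES_EXIST_ON_SCHEMA }

    // Hypothetical unchecked exceptions standing in for SQLException + SQLExceptionCode.
    static class SchemaNotFoundException extends RuntimeException {
        SchemaNotFoundException(String schema) { super("Schema does not exist: " + schema); }
    }
    static class SchemaNotEmptyException extends RuntimeException {
        SchemaNotEmptyException(String schema) {
            super("Cannot drop schema " + schema + " because tables exist in it");
        }
    }

    /** Client-side handling of the server's drop-schema result. */
    static void handleDropSchemaResult(MutationCode code, String schemaName) {
        switch (code) {
            case SCHEMA_ALREADY_EXISTS:
                return; // schema existed and the drop succeeded
            case SCHEMA_NOT_FOUND:
                throw new SchemaNotFoundException(schemaName);
            case TABLES_EXIST_ON_SCHEMA:
                // dedicated code lets the client surface a precise message
                throw new SchemaNotEmptyException(schemaName);
        }
    }
}
```

The benefit of the dedicated code is that the client no longer has to guess why the mutation was disallowed; each server outcome maps to exactly one user-facing error.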

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58118836

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java —
          @@ -3201,4 +3367,155 @@ private MetaDataMutationResult doDropFunction(long clientTimeStamp, List<byte[]>
          return new MetaDataMutationResult(MutationCode.FUNCTION_NOT_FOUND,
          EnvironmentEdgeManager.currentTimeMillis(), null);
          }
          +
          + @Override
          + public void createSchema(RpcController controller, CreateSchemaRequest request,
          + RpcCallback<MetaDataResponse> done) {
          + MetaDataResponse.Builder builder = MetaDataResponse.newBuilder();
          + String schemaName = null;
          + try {
          + List<Mutation> schemaMutations = ProtobufUtil.getMutations(request);
          + schemaName = request.getSchemaName();
          + Mutation m = MetaDataUtil.getPutOnlyTableHeaderRow(schemaMutations);
          +
          + byte[] lockKey = m.getRow();
          + Region region = env.getRegion();
          + MetaDataMutationResult result = checkSchemaKeyInRegion(lockKey, region);
          + if (result != null) {
          + done.run(MetaDataMutationResult.toProto(result));
          + return;
          + }
          + List<RowLock> locks = Lists.newArrayList();
          + long clientTimeStamp = MetaDataUtil.getClientTimeStamp(schemaMutations);
          + try {
          + acquireLock(region, lockKey, locks);
          + // Get as of latest timestamp so we can detect if we have a
          + // newer function that already
          + // exists without making an additional query
          + ImmutableBytesPtr cacheKey = new ImmutableBytesPtr(lockKey);
          + PSchema schema = loadSchema(env, lockKey, cacheKey, clientTimeStamp, clientTimeStamp);
          + if (schema != null) {
          + if (schema.getTimeStamp() < clientTimeStamp) {
          + builder.setReturnCode(MetaDataProtos.MutationCode.SCHEMA_ALREADY_EXISTS);
          + builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
          + builder.setSchema(PSchema.toProto(schema));
          + done.run(builder.build());
          + return;
          + } else {
          + builder.setReturnCode(MetaDataProtos.MutationCode.NEWER_SCHEMA_FOUND);
          + builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
          + builder.setSchema(PSchema.toProto(schema));
          + done.run(builder.build());
          + return;
          + }
          + }
          + region.mutateRowsWithLocks(schemaMutations, Collections.<byte[]> emptySet(), HConstants.NO_NONCE,
          + HConstants.NO_NONCE);
          +
          + // Invalidate the cache - the next getTable call will add it
          + // TODO: consider loading the table that was just created here,
          + // patching up the parent table, and updating the cache
          + Cache<ImmutableBytesPtr, PMetaDataEntity> metaDataCache = GlobalCache.getInstance(this.env)
          + .getMetaDataCache();
          + if (cacheKey != null) {
          + metaDataCache.invalidate(cacheKey);
          + }
          +
          + // Get timeStamp from mutations - the above method sets it if
          + // it's unset
          + long currentTimeStamp = MetaDataUtil.getClientTimeStamp(schemaMutations);
          + builder.setReturnCode(MetaDataProtos.MutationCode.SCHEMA_NOT_FOUND);
          + builder.setMutationTime(currentTimeStamp);
          + done.run(builder.build());
          + return;
          + } finally {
          + region.releaseRowLocks(locks);
          + }
          + } catch (Throwable t) {
          + logger.error("createFunction failed", t);
          + ProtobufUtil.setControllerException(controller, ServerUtil.createIOException(schemaName, t));
          + }
          + }
          +
          + @Override
          + public void dropSchema(RpcController controller, DropSchemaRequest request, RpcCallback<MetaDataResponse> done) {
          + String schemaName = null;
          + try {
          + List<Mutation> schemaMetaData = ProtobufUtil.getMutations(request);
          + schemaName = request.getSchemaName();
          + byte[] lockKey = SchemaUtil.getSchemaKey(schemaName);
          + Region region = env.getRegion();
          + MetaDataMutationResult result = checkSchemaKeyInRegion(lockKey, region);
          + if (result != null) {
          + done.run(MetaDataMutationResult.toProto(result));
          + return;
          + }

          + List<RowLock> locks = Lists.newArrayList();
          + long clientTimeStamp = MetaDataUtil.getClientTimeStamp(schemaMetaData);
          + try {
          + acquireLock(region, lockKey, locks);
          + List<ImmutableBytesPtr> invalidateList = new ArrayList<ImmutableBytesPtr>(1);
          + result = doDropSchema(clientTimeStamp, schemaName, lockKey, schemaMetaData, invalidateList);
          + if (result.getMutationCode() != MutationCode.SCHEMA_ALREADY_EXISTS) {
          + done.run(MetaDataMutationResult.toProto(result));
          + return;
          + }

          + region.mutateRowsWithLocks(schemaMetaData, Collections.<byte[]> emptySet(), HConstants.NO_NONCE,
          + HConstants.NO_NONCE);
          + Cache<ImmutableBytesPtr, PMetaDataEntity> metaDataCache = GlobalCache.getInstance(this.env)
          + .getMetaDataCache();
          + long currentTime = MetaDataUtil.getClientTimeStamp(schemaMetaData);
          + for (ImmutableBytesPtr ptr : invalidateList) {
          + metaDataCache.invalidate(ptr);
          + metaDataCache.put(ptr, newDeletedSchemaMarker(currentTime));
          + }

          + done.run(MetaDataMutationResult.toProto(result));
          + return;
          + } finally {
          + region.releaseRowLocks(locks);
          + }

          + } catch (Throwable t) {
          + logger.error("drop schema failed:", t);
          + ProtobufUtil.setControllerException(controller, ServerUtil.createIOException(schemaName, t));
          + }

          + }
          +
          + private MetaDataMutationResult doDropSchema(long clientTimeStamp, String schemaName, byte[] key,
          + List<Mutation> schemaMutations, List<ImmutableBytesPtr> invalidateList) throws IOException, SQLException {
          + PSchema schema = loadSchema(env, key, new ImmutableBytesPtr(key), clientTimeStamp, clientTimeStamp);
          + boolean areTablesExists = false;
          + if (schema == null)

          { return new MetaDataMutationResult(MutationCode.SCHEMA_NOT_FOUND, + EnvironmentEdgeManager.currentTimeMillis(), null); }

          + if (schema.getTimeStamp() < clientTimeStamp) {
          + Region region = env.getRegion();
          + Scan scan = MetaDataUtil.newTableRowsScan(SchemaUtil.getKeyForSchema(null, schemaName), MIN_TABLE_TIMESTAMP,
          + clientTimeStamp);
          + List<Cell> results = Lists.newArrayList();
          + try (RegionScanner scanner = region.getScanner(scan) {
          + scanner.next(results);
          + if (results.isEmpty())

          { // Should not be possible + return new MetaDataMutationResult(MutationCode.SCHEMA_NOT_FOUND, + EnvironmentEdgeManager.currentTimeMillis(), null); + }

          + do {
          + Cell kv = results.get(0);
          + if (Bytes.compareTo(kv.getRowArray(), kv.getRowOffset(), kv.getRowLength(), key, 0,
          + key.length) != 0)

          { + areTablesExists = true; + break; + }

          + results.clear();
          + scanner.next(results);
          + } while (!results.isEmpty());
          + }
          + if (areTablesExists)

          { return new MetaDataMutationResult(MutationCode.UNALLOWED_SCHEMA_MUTATION, schema, + EnvironmentEdgeManager.currentTimeMillis()); }

          +
          + return new MetaDataMutationResult(MutationCode.SCHEMA_ALREADY_EXISTS, schema,
          + EnvironmentEdgeManager.currentTimeMillis());
          + }
          + return new MetaDataMutationResult(MutationCode.SCHEMA_NOT_FOUND, EnvironmentEdgeManager.currentTimeMillis(),
          — End diff –

          SCHEMA_NOT_FOUND is already returned over here:

              if (schema == null) {
                  return new MetaDataMutationResult(MutationCode.SCHEMA_NOT_FOUND,
                          EnvironmentEdgeManager.currentTimeMillis(), null);
              }

          Maybe move the SCHEMA_ALREADY_EXISTS return out of the if block, or restructure it some other way?
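          The restructuring hinted at above can be sketched in isolation. Everything below is a hypothetical stand-in for the Phoenix types (only the enum constant names come from the diff): the method reduces doDropSchema to its decision logic, with a single SCHEMA_ALREADY_EXISTS exit that sits outside any if block.

          ```java
          // Hypothetical sketch of the suggested doDropSchema control flow.
          // Only the enum constant names come from the diff; the class, method,
          // and parameters are illustrative stand-ins, not the Phoenix API.
          public class DropSchemaFlow {
              enum Code { SCHEMA_NOT_FOUND, UNALLOWED_SCHEMA_MUTATION, SCHEMA_ALREADY_EXISTS }

              // schemaTimeStamp is null when no schema row exists at all.
              static Code resolve(Long schemaTimeStamp, long clientTimeStamp, boolean tablesExist) {
                  if (schemaTimeStamp == null || schemaTimeStamp >= clientTimeStamp) {
                      return Code.SCHEMA_NOT_FOUND;          // absent, or only a newer schema exists
                  }
                  if (tablesExist) {
                      return Code.UNALLOWED_SCHEMA_MUTATION; // schema still contains tables
                  }
                  return Code.SCHEMA_ALREADY_EXISTS;         // single success exit, outside any if block
              }

              public static void main(String[] args) {
                  System.out.println(resolve(null, 10L, false)); // SCHEMA_NOT_FOUND
                  System.out.println(resolve(5L, 10L, true));    // UNALLOWED_SCHEMA_MUTATION
                  System.out.println(resolve(5L, 10L, false));   // SCHEMA_ALREADY_EXISTS
              }
          }
          ```

          With this shape there is exactly one return per outcome, so the duplicated SCHEMA_NOT_FOUND return the comment points at disappears.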

          Show
          githubbot ASF GitHub Bot added a comment - Github user samarthjain commented on a diff in the pull request: https://github.com/apache/phoenix/pull/153#discussion_r58118836
          Hide
          githubbot ASF GitHub Bot added a comment -

          Github user JamesRTaylor commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58131738

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java —
          @@ -3201,4 +3367,155 @@ private MetaDataMutationResult doDropFunction(long clientTimeStamp, List<byte[]>
                return new MetaDataMutationResult(MutationCode.FUNCTION_NOT_FOUND,
                        EnvironmentEdgeManager.currentTimeMillis(), null);
            }
          +
          + @Override
          + public void createSchema(RpcController controller, CreateSchemaRequest request,
          +         RpcCallback<MetaDataResponse> done) {
          +     MetaDataResponse.Builder builder = MetaDataResponse.newBuilder();
          +     String schemaName = null;
          +     try {
          +         List<Mutation> schemaMutations = ProtobufUtil.getMutations(request);
          +         schemaName = request.getSchemaName();
          +         Mutation m = MetaDataUtil.getPutOnlyTableHeaderRow(schemaMutations);
          +
          +         byte[] lockKey = m.getRow();
          +         Region region = env.getRegion();
          +         MetaDataMutationResult result = checkSchemaKeyInRegion(lockKey, region);
          +         if (result != null) {
          +             done.run(MetaDataMutationResult.toProto(result));
          +             return;
          +         }
          +         List<RowLock> locks = Lists.newArrayList();
          +         long clientTimeStamp = MetaDataUtil.getClientTimeStamp(schemaMutations);
          +         try {
          +             acquireLock(region, lockKey, locks);
          +             // Get as of latest timestamp so we can detect if we have a
          +             // newer function that already
          +             // exists without making an additional query
          +             ImmutableBytesPtr cacheKey = new ImmutableBytesPtr(lockKey);
          +             PSchema schema = loadSchema(env, lockKey, cacheKey, clientTimeStamp, clientTimeStamp);
          +             if (schema != null) {
          +                 if (schema.getTimeStamp() < clientTimeStamp) {
          +                     builder.setReturnCode(MetaDataProtos.MutationCode.SCHEMA_ALREADY_EXISTS);
          +                     builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
          +                     builder.setSchema(PSchema.toProto(schema));
          +                     done.run(builder.build());
          +                     return;
          +                 } else {
          +                     builder.setReturnCode(MetaDataProtos.MutationCode.NEWER_SCHEMA_FOUND);
          +                     builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
          +                     builder.setSchema(PSchema.toProto(schema));
          +                     done.run(builder.build());
          +                     return;
          +                 }
          +             }
          +             region.mutateRowsWithLocks(schemaMutations, Collections.<byte[]> emptySet(), HConstants.NO_NONCE,
          +                     HConstants.NO_NONCE);
          +
          +             // Invalidate the cache - the next getTable call will add it
          +             // TODO: consider loading the table that was just created here,
          +             // patching up the parent table, and updating the cache
          +             Cache<ImmutableBytesPtr, PMetaDataEntity> metaDataCache = GlobalCache.getInstance(this.env)
          +                     .getMetaDataCache();
          +             if (cacheKey != null) {
          +                 metaDataCache.invalidate(cacheKey);
          +             }
          +
          +             // Get timeStamp from mutations - the above method sets it if
          +             // it's unset
          +             long currentTimeStamp = MetaDataUtil.getClientTimeStamp(schemaMutations);
          +             builder.setReturnCode(MetaDataProtos.MutationCode.SCHEMA_NOT_FOUND);
          +             builder.setMutationTime(currentTimeStamp);
          +             done.run(builder.build());
          +             return;
          +         } finally {
          +             region.releaseRowLocks(locks);
          +         }
          +     } catch (Throwable t) {
          +         logger.error("createFunction failed", t);
          +         ProtobufUtil.setControllerException(controller, ServerUtil.createIOException(schemaName, t));
          +     }
          + }
          +
          + @Override
          + public void dropSchema(RpcController controller, DropSchemaRequest request, RpcCallback<MetaDataResponse> done) {
          +     String schemaName = null;
          +     try {
          +         List<Mutation> schemaMetaData = ProtobufUtil.getMutations(request);
          +         schemaName = request.getSchemaName();
          +         byte[] lockKey = SchemaUtil.getSchemaKey(schemaName);
          +         Region region = env.getRegion();
          +         MetaDataMutationResult result = checkSchemaKeyInRegion(lockKey, region);
          +         if (result != null) {
          +             done.run(MetaDataMutationResult.toProto(result));
          +             return;
          +         }
          +         List<RowLock> locks = Lists.newArrayList();
          +         long clientTimeStamp = MetaDataUtil.getClientTimeStamp(schemaMetaData);
          +         try {
          +             acquireLock(region, lockKey, locks);
          +             List<ImmutableBytesPtr> invalidateList = new ArrayList<ImmutableBytesPtr>(1);
          +             result = doDropSchema(clientTimeStamp, schemaName, lockKey, schemaMetaData, invalidateList);
          +             if (result.getMutationCode() != MutationCode.SCHEMA_ALREADY_EXISTS) {
          +                 done.run(MetaDataMutationResult.toProto(result));
          +                 return;
          +             }
          +             region.mutateRowsWithLocks(schemaMetaData, Collections.<byte[]> emptySet(), HConstants.NO_NONCE,
          +                     HConstants.NO_NONCE);
          +             Cache<ImmutableBytesPtr, PMetaDataEntity> metaDataCache = GlobalCache.getInstance(this.env)
          +                     .getMetaDataCache();
          +             long currentTime = MetaDataUtil.getClientTimeStamp(schemaMetaData);
          +             for (ImmutableBytesPtr ptr : invalidateList) {
          +                 metaDataCache.invalidate(ptr);
          +                 metaDataCache.put(ptr, newDeletedSchemaMarker(currentTime));
          +             }
          +             done.run(MetaDataMutationResult.toProto(result));
          +             return;
          +         } finally {
          +             region.releaseRowLocks(locks);
          +         }
          +     } catch (Throwable t) {
          +         logger.error("drop schema failed:", t);
          +         ProtobufUtil.setControllerException(controller, ServerUtil.createIOException(schemaName, t));
          +     }
          + }
          +
          + private MetaDataMutationResult doDropSchema(long clientTimeStamp, String schemaName, byte[] key,
          +         List<Mutation> schemaMutations, List<ImmutableBytesPtr> invalidateList) throws IOException, SQLException {
          +     PSchema schema = loadSchema(env, key, new ImmutableBytesPtr(key), clientTimeStamp, clientTimeStamp);
          +     boolean areTablesExists = false;
          +     if (schema == null) {
          +         return new MetaDataMutationResult(MutationCode.SCHEMA_NOT_FOUND,
          +                 EnvironmentEdgeManager.currentTimeMillis(), null);
          +     }
          +     if (schema.getTimeStamp() < clientTimeStamp) {
          +         Region region = env.getRegion();
          +         Scan scan = MetaDataUtil.newTableRowsScan(SchemaUtil.getKeyForSchema(null, schemaName), MIN_TABLE_TIMESTAMP,
          +                 clientTimeStamp);
          +         List<Cell> results = Lists.newArrayList();
          +         try (RegionScanner scanner = region.getScanner(scan)) {
          +             scanner.next(results);
          +             if (results.isEmpty()) { // Should not be possible
          +                 return new MetaDataMutationResult(MutationCode.SCHEMA_NOT_FOUND,
          +                         EnvironmentEdgeManager.currentTimeMillis(), null);
          +             }
          +             do {
          +                 Cell kv = results.get(0);
          +                 if (Bytes.compareTo(kv.getRowArray(), kv.getRowOffset(), kv.getRowLength(), key, 0,
          +                         key.length) != 0) {
          +                     areTablesExists = true;
          +                     break;
          +                 }
          +                 results.clear();
          +                 scanner.next(results);
          +             } while (!results.isEmpty());
          +         }
          +         if (areTablesExists) {
          +             return new MetaDataMutationResult(MutationCode.UNALLOWED_SCHEMA_MUTATION, schema,
          — End diff –

          I think it'd be best if MutationCode carried an int error code, as we can go directly from that to a SQLExceptionCode on the client. We can use negative numbers for non-exception cases.
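          A minimal sketch of that idea follows. It is hypothetical: the numeric values are made-up placeholders, not Phoenix's actual SQLExceptionCode numbers, and the class and method names are stand-ins.

          ```java
          // Hypothetical sketch of MutationCode carrying an int error code that the
          // client can map straight to a SQLExceptionCode; negative values flag
          // non-exception outcomes. The numbers below are illustrative placeholders.
          public class MutationCodeSketch {
              enum Code {
                  SCHEMA_ALREADY_EXISTS(721),
                  SCHEMA_NOT_FOUND(722),
                  UNALLOWED_SCHEMA_MUTATION(723),
                  NEWER_SCHEMA_FOUND(-1); // negative: informational, never raised as an exception

                  private final int errorCode;

                  Code(int errorCode) { this.errorCode = errorCode; }

                  int getErrorCode() { return errorCode; }

                  // Only positive codes translate to a client-side SQLException.
                  boolean isException() { return errorCode > 0; }
              }

              public static void main(String[] args) {
                  for (Code c : Code.values()) {
                      System.out.println(c + " -> " + c.getErrorCode()
                              + " (exception: " + c.isException() + ")");
                  }
              }
          }
          ```

          The client would then switch on getErrorCode() instead of maintaining a separate MutationCode-to-SQLExceptionCode mapping.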

          Show
          githubbot ASF GitHub Bot added a comment - Github user JamesRTaylor commented on a diff in the pull request: https://github.com/apache/phoenix/pull/153#discussion_r58131738
          Hide
          githubbot ASF GitHub Bot added a comment -

          Github user JamesRTaylor commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58166396

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java —
          @@ -3201,4 +3367,155 @@ private MetaDataMutationResult doDropFunction(long clientTimeStamp, List<byte[]>
                return new MetaDataMutationResult(MutationCode.FUNCTION_NOT_FOUND,
                        EnvironmentEdgeManager.currentTimeMillis(), null);
            }
          +
          + @Override
          + public void createSchema(RpcController controller, CreateSchemaRequest request,
          +         RpcCallback<MetaDataResponse> done) {
          +     MetaDataResponse.Builder builder = MetaDataResponse.newBuilder();
          +     String schemaName = null;
          +     try {
          +         List<Mutation> schemaMutations = ProtobufUtil.getMutations(request);
          +         schemaName = request.getSchemaName();
          +         Mutation m = MetaDataUtil.getPutOnlyTableHeaderRow(schemaMutations);
          +
          +         byte[] lockKey = m.getRow();
          +         Region region = env.getRegion();
          +         MetaDataMutationResult result = checkSchemaKeyInRegion(lockKey, region);
          +         if (result != null) {
          +             done.run(MetaDataMutationResult.toProto(result));
          +             return;
          +         }
          +         List<RowLock> locks = Lists.newArrayList();
          +         long clientTimeStamp = MetaDataUtil.getClientTimeStamp(schemaMutations);
          +         try {
          +             acquireLock(region, lockKey, locks);
          +             // Get as of latest timestamp so we can detect if we have a
          +             // newer function that already
          +             // exists without making an additional query
          +             ImmutableBytesPtr cacheKey = new ImmutableBytesPtr(lockKey);
          +             PSchema schema = loadSchema(env, lockKey, cacheKey, clientTimeStamp, clientTimeStamp);
          +             if (schema != null) {
          +                 if (schema.getTimeStamp() < clientTimeStamp) {
          +                     builder.setReturnCode(MetaDataProtos.MutationCode.SCHEMA_ALREADY_EXISTS);
          +                     builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
          +                     builder.setSchema(PSchema.toProto(schema));
          +                     done.run(builder.build());
          +                     return;
          +                 } else {
          +                     builder.setReturnCode(MetaDataProtos.MutationCode.NEWER_SCHEMA_FOUND);
          +                     builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
          +                     builder.setSchema(PSchema.toProto(schema));
          +                     done.run(builder.build());
          +                     return;
          +                 }
          +             }
          +             region.mutateRowsWithLocks(schemaMutations, Collections.<byte[]> emptySet(), HConstants.NO_NONCE,
          +                     HConstants.NO_NONCE);
          +
          +             // Invalidate the cache - the next getTable call will add it
          +             // TODO: consider loading the table that was just created here,
          +             // patching up the parent table, and updating the cache
          +             Cache<ImmutableBytesPtr, PMetaDataEntity> metaDataCache = GlobalCache.getInstance(this.env)
          +                     .getMetaDataCache();
          +             if (cacheKey != null) {
          +                 metaDataCache.invalidate(cacheKey);
          +             }
          +
          +             // Get timeStamp from mutations - the above method sets it if
          +             // it's unset
          +             long currentTimeStamp = MetaDataUtil.getClientTimeStamp(schemaMutations);
          +             builder.setReturnCode(MetaDataProtos.MutationCode.SCHEMA_NOT_FOUND);
          +             builder.setMutationTime(currentTimeStamp);
          +             done.run(builder.build());
          +             return;
          +         } finally {
          +             region.releaseRowLocks(locks);
          +         }
          +     } catch (Throwable t) {
          +         logger.error("createFunction failed", t);
          +         ProtobufUtil.setControllerException(controller, ServerUtil.createIOException(schemaName, t));
          +     }
          + }
          +
          + @Override
          + public void dropSchema(RpcController controller, DropSchemaRequest request, RpcCallback<MetaDataResponse> done) {
          +     String schemaName = null;
          +     try {
          +         List<Mutation> schemaMetaData = ProtobufUtil.getMutations(request);
          +         schemaName = request.getSchemaName();
          +         byte[] lockKey = SchemaUtil.getSchemaKey(schemaName);
          +         Region region = env.getRegion();
          +         MetaDataMutationResult result = checkSchemaKeyInRegion(lockKey, region);
          +         if (result != null) {
          +             done.run(MetaDataMutationResult.toProto(result));
          +             return;
          +         }
          +         List<RowLock> locks = Lists.newArrayList();
          +         long clientTimeStamp = MetaDataUtil.getClientTimeStamp(schemaMetaData);
          +         try {
          +             acquireLock(region, lockKey, locks);
          +             List<ImmutableBytesPtr> invalidateList = new ArrayList<ImmutableBytesPtr>(1);
          +             result = doDropSchema(clientTimeStamp, schemaName, lockKey, schemaMetaData, invalidateList);
          +             if (result.getMutationCode() != MutationCode.SCHEMA_ALREADY_EXISTS) {
          +                 done.run(MetaDataMutationResult.toProto(result));
          +                 return;
          +             }
          +             region.mutateRowsWithLocks(schemaMetaData, Collections.<byte[]> emptySet(), HConstants.NO_NONCE,
          +                     HConstants.NO_NONCE);
          +             Cache<ImmutableBytesPtr, PMetaDataEntity> metaDataCache = GlobalCache.getInstance(this.env)
          +                     .getMetaDataCache();
          +             long currentTime = MetaDataUtil.getClientTimeStamp(schemaMetaData);
          +             for (ImmutableBytesPtr ptr : invalidateList) {
          +                 metaDataCache.invalidate(ptr);
          +                 metaDataCache.put(ptr, newDeletedSchemaMarker(currentTime));
          +             }
          +             done.run(MetaDataMutationResult.toProto(result));
          +             return;
          +         } finally {
          +             region.releaseRowLocks(locks);
          +         }
          +     } catch (Throwable t) {
          +         logger.error("drop schema failed:", t);
          +         ProtobufUtil.setControllerException(controller, ServerUtil.createIOException(schemaName, t));
          +     }
          + }
          +
          + private MetaDataMutationResult doDropSchema(long clientTimeStamp, String schemaName, byte[] key,
          +         List<Mutation> schemaMutations, List<ImmutableBytesPtr> invalidateList) throws IOException, SQLException {
          +     PSchema schema = loadSchema(env, key, new ImmutableBytesPtr(key), clientTimeStamp, clientTimeStamp);
          +     boolean areTablesExists = false;
          +     if (schema == null) {
          +         return new MetaDataMutationResult(MutationCode.SCHEMA_NOT_FOUND,
          +                 EnvironmentEdgeManager.currentTimeMillis(), null);
          +     }
          +     if (schema.getTimeStamp() < clientTimeStamp) {
          +         Region region = env.getRegion();
          +         Scan scan = MetaDataUtil.newTableRowsScan(SchemaUtil.getKeyForSchema(null, schemaName), MIN_TABLE_TIMESTAMP,
          +                 clientTimeStamp);
          +         List<Cell> results = Lists.newArrayList();
          +         try (RegionScanner scanner = region.getScanner(scan)) {
          +             scanner.next(results);
          +             if (results.isEmpty()) { // Should not be possible
          +                 return new MetaDataMutationResult(MutationCode.SCHEMA_NOT_FOUND,
          +                         EnvironmentEdgeManager.currentTimeMillis(), null);
          +             }
          +             do {
          +                 Cell kv = results.get(0);
          +                 if (Bytes.compareTo(kv.getRowArray(), kv.getRowOffset(), kv.getRowLength(), key, 0,
          +                         key.length) != 0) {
          +                     areTablesExists = true;
          +                     break;
          +                 }
          +                 results.clear();
          +                 scanner.next(results);
          +             } while (!results.isEmpty());
          +         }
          +         if (areTablesExists) {
          +             return new MetaDataMutationResult(MutationCode.UNALLOWED_SCHEMA_MUTATION, schema,
          — End diff –

          These changes could be done in a separate pull request, though, IMO.

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58187253

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java —
          @@ -3201,4 +3367,155 @@ private MetaDataMutationResult doDropFunction(long clientTimeStamp, List<byte[]>
          return new MetaDataMutationResult(MutationCode.FUNCTION_NOT_FOUND,
          EnvironmentEdgeManager.currentTimeMillis(), null);
          }
          +
          + @Override
          + public void createSchema(RpcController controller, CreateSchemaRequest request,
          + RpcCallback<MetaDataResponse> done) {
          + MetaDataResponse.Builder builder = MetaDataResponse.newBuilder();
          + String schemaName = null;
          + try {
          + List<Mutation> schemaMutations = ProtobufUtil.getMutations(request);
          + schemaName = request.getSchemaName();
          + Mutation m = MetaDataUtil.getPutOnlyTableHeaderRow(schemaMutations);
          +
          + byte[] lockKey = m.getRow();
          + Region region = env.getRegion();
          + MetaDataMutationResult result = checkSchemaKeyInRegion(lockKey, region);
          + if (result != null) {
          + done.run(MetaDataMutationResult.toProto(result));
          + return;
          + }
          + List<RowLock> locks = Lists.newArrayList();
          + long clientTimeStamp = MetaDataUtil.getClientTimeStamp(schemaMutations);
          + try {
          + acquireLock(region, lockKey, locks);
          + // Get as of latest timestamp so we can detect if we have a
          + // newer function that already
          + // exists without making an additional query
          + ImmutableBytesPtr cacheKey = new ImmutableBytesPtr(lockKey);
          + PSchema schema = loadSchema(env, lockKey, cacheKey, clientTimeStamp, clientTimeStamp);
          + if (schema != null) {
          + if (schema.getTimeStamp() < clientTimeStamp) {
          + builder.setReturnCode(MetaDataProtos.MutationCode.SCHEMA_ALREADY_EXISTS);
          + builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
          + builder.setSchema(PSchema.toProto(schema));
          + done.run(builder.build());
          + return;
          + } else {
          + builder.setReturnCode(MetaDataProtos.MutationCode.NEWER_SCHEMA_FOUND);
          + builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
          + builder.setSchema(PSchema.toProto(schema));
          + done.run(builder.build());
          + return;
          + }
          + }
          + region.mutateRowsWithLocks(schemaMutations, Collections.<byte[]> emptySet(), HConstants.NO_NONCE,
          + HConstants.NO_NONCE);
          +
          + // Invalidate the cache - the next getTable call will add it
          + // TODO: consider loading the table that was just created here,
          + // patching up the parent table, and updating the cache
          + Cache<ImmutableBytesPtr, PMetaDataEntity> metaDataCache = GlobalCache.getInstance(this.env)
          + .getMetaDataCache();
          + if (cacheKey != null) {
          + metaDataCache.invalidate(cacheKey);
          + }
          +
          + // Get timeStamp from mutations - the above method sets it if
          + // it's unset
          + long currentTimeStamp = MetaDataUtil.getClientTimeStamp(schemaMutations);
          + builder.setReturnCode(MetaDataProtos.MutationCode.SCHEMA_NOT_FOUND);
          + builder.setMutationTime(currentTimeStamp);
          + done.run(builder.build());
          + return;
          + } finally {
          + region.releaseRowLocks(locks);
          + }
          + } catch (Throwable t) {
          + logger.error("createFunction failed", t);
          + ProtobufUtil.setControllerException(controller, ServerUtil.createIOException(schemaName, t));
          + }
          + }
          +
          + @Override
          + public void dropSchema(RpcController controller, DropSchemaRequest request, RpcCallback<MetaDataResponse> done) {
          + String schemaName = null;
          + try {
          + List<Mutation> schemaMetaData = ProtobufUtil.getMutations(request);
          + schemaName = request.getSchemaName();
          + byte[] lockKey = SchemaUtil.getSchemaKey(schemaName);
          + Region region = env.getRegion();
          + MetaDataMutationResult result = checkSchemaKeyInRegion(lockKey, region);
          + if (result != null) {
          + done.run(MetaDataMutationResult.toProto(result));
          + return;
          + }

          + List<RowLock> locks = Lists.newArrayList();
          + long clientTimeStamp = MetaDataUtil.getClientTimeStamp(schemaMetaData);
          + try {
          + acquireLock(region, lockKey, locks);
          + List<ImmutableBytesPtr> invalidateList = new ArrayList<ImmutableBytesPtr>(1);
          + result = doDropSchema(clientTimeStamp, schemaName, lockKey, schemaMetaData, invalidateList);
          + if (result.getMutationCode() != MutationCode.SCHEMA_ALREADY_EXISTS) {
          + done.run(MetaDataMutationResult.toProto(result));
          + return;
          + }

          + region.mutateRowsWithLocks(schemaMetaData, Collections.<byte[]> emptySet(), HConstants.NO_NONCE,
          + HConstants.NO_NONCE);
          + Cache<ImmutableBytesPtr, PMetaDataEntity> metaDataCache = GlobalCache.getInstance(this.env)
          + .getMetaDataCache();
          + long currentTime = MetaDataUtil.getClientTimeStamp(schemaMetaData);
          + for (ImmutableBytesPtr ptr : invalidateList) {
          + metaDataCache.invalidate(ptr);
          + metaDataCache.put(ptr, newDeletedSchemaMarker(currentTime));
          + }

          + done.run(MetaDataMutationResult.toProto(result));
          + return;
          + } finally {
          + region.releaseRowLocks(locks);
          + }

          + } catch (Throwable t) {
          + logger.error("drop schema failed:", t);
          + ProtobufUtil.setControllerException(controller, ServerUtil.createIOException(schemaName, t));
          + }

          + }
          +
          + private MetaDataMutationResult doDropSchema(long clientTimeStamp, String schemaName, byte[] key,
          + List<Mutation> schemaMutations, List<ImmutableBytesPtr> invalidateList) throws IOException, SQLException {
          + PSchema schema = loadSchema(env, key, new ImmutableBytesPtr(key), clientTimeStamp, clientTimeStamp);
          + boolean areTablesExists = false;
          + if (schema == null) {
          + return new MetaDataMutationResult(MutationCode.SCHEMA_NOT_FOUND,
          + EnvironmentEdgeManager.currentTimeMillis(), null);
          + }

          + if (schema.getTimeStamp() < clientTimeStamp) {
          + Region region = env.getRegion();
          + Scan scan = MetaDataUtil.newTableRowsScan(SchemaUtil.getKeyForSchema(null, schemaName), MIN_TABLE_TIMESTAMP,
          + clientTimeStamp);
          + List<Cell> results = Lists.newArrayList();
          + try (RegionScanner scanner = region.getScanner(scan)) {
          + scanner.next(results);
          + if (results.isEmpty()) { // Should not be possible
          + return new MetaDataMutationResult(MutationCode.SCHEMA_NOT_FOUND,
          + EnvironmentEdgeManager.currentTimeMillis(), null);
          + }

          + do {
          + Cell kv = results.get(0);
          + if (Bytes.compareTo(kv.getRowArray(), kv.getRowOffset(), kv.getRowLength(), key, 0,
          + key.length) != 0) {
          + areTablesExists = true;
          + break;
          + }

          + results.clear();
          + scanner.next(results);
          + } while (!results.isEmpty());
          + }
          + if (areTablesExists) {
          + return new MetaDataMutationResult(MutationCode.UNALLOWED_SCHEMA_MUTATION, schema,
          + EnvironmentEdgeManager.currentTimeMillis());
          + }

          +
          + return new MetaDataMutationResult(MutationCode.SCHEMA_ALREADY_EXISTS, schema,
          + EnvironmentEdgeManager.currentTimeMillis());
          + }
          + return new MetaDataMutationResult(MutationCode.SCHEMA_NOT_FOUND, EnvironmentEdgeManager.currentTimeMillis(),
          — End diff –

          I think SCHEMA_ALREADY_EXISTS is needed in the if block, because we are checking whether the schema timestamp is less than the client timestamp; otherwise we return MutationCode.SCHEMA_NOT_FOUND.
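          For readers following the timestamp comparison being discussed, here is a minimal, hypothetical sketch (not Phoenix code; the names mirror MutationCode purely for illustration) of the decision createSchema makes after loading any existing schema row: an existing row older than the client's timestamp means the schema already exists, while a newer one means another client created it concurrently.

```java
// Hypothetical stand-alone sketch of the createSchema decision logic.
enum Outcome { SCHEMA_ALREADY_EXISTS, NEWER_SCHEMA_FOUND, PROCEED_WITH_CREATE }

class CreateSchemaDecision {
    // existingSchemaTimeStamp is null when no schema row was found.
    static Outcome decide(Long existingSchemaTimeStamp, long clientTimeStamp) {
        if (existingSchemaTimeStamp == null) {
            return Outcome.PROCEED_WITH_CREATE; // safe to write the schema row
        }
        // An older row was created by an earlier client: report "already exists".
        // A row at or beyond our timestamp was created concurrently: report "newer found".
        return existingSchemaTimeStamp < clientTimeStamp
                ? Outcome.SCHEMA_ALREADY_EXISTS
                : Outcome.NEWER_SCHEMA_FOUND;
    }
}
```

          The real endpoint performs this check while holding the row lock on the schema key, so the comparison and the subsequent mutation are atomic with respect to other clients.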

          githubbot ASF GitHub Bot added a comment -

          Github user JamesRTaylor commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58230276

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java —
          @@ -544,7 +556,28 @@ private MetaDataMutationResult updateCache(PName tenantId, String schemaName, St

          return result;
          }

          +
          + public MetaDataMutationResult updateCache(String schemaName) throws SQLException {
          + return updateCache(schemaName, false);
          + }

            +
            + public MetaDataMutationResult updateCache(String schemaName, boolean alwaysHitServer) throws SQLException {
            + long clientTimeStamp = getClientTimeStamp();
            + PSchema schema = null;
            + try {
            + schema = connection.getMetaDataCache().getSchema(new PTableKey(null, schemaName));

          — End diff –

          I see - so regardless of the schema used for a view, the physical data will still be stored in the one physical table. The schema in this case for the view does not map to an HBase namespace. Is that correct?

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58234962

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java —
          @@ -544,7 +556,28 @@ private MetaDataMutationResult updateCache(PName tenantId, String schemaName, St

          return result;
          }

          +
          + public MetaDataMutationResult updateCache(String schemaName) throws SQLException {
          + return updateCache(schemaName, false);
          + }

            +
            + public MetaDataMutationResult updateCache(String schemaName, boolean alwaysHitServer) throws SQLException {
            + long clientTimeStamp = getClientTimeStamp();
            + PSchema schema = null;
            + try {
            + schema = connection.getMetaDataCache().getSchema(new PTableKey(null, schemaName));

          — End diff –

          Yes @JamesRTaylor .. Absolutely correct !!
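          The exchange above can be summarized with a hypothetical sketch (illustrative only; the names and signature are not Phoenix APIs): only a TABLE's schema is mapped to an HBase namespace, while a VIEW's schema is purely logical metadata and its rows stay in the base table's physical table.

```java
// Hypothetical sketch: resolving a physical HBase table name.
enum EntityType { TABLE, VIEW }

class PhysicalNameSketch {
    static String physicalName(String schema, String name, EntityType type,
                               String baseTablePhysicalName, boolean namespaceMappingEnabled) {
        if (type == EntityType.VIEW) {
            // A view's schema does not map to a namespace; data lives in the base table.
            return baseTablePhysicalName;
        }
        String logical = schema == null ? name : schema + "." + name;
        // With namespace mapping on, SCHEMA.TABLE becomes the HBase name SCHEMA:TABLE.
        return namespaceMappingEnabled ? logical.replace('.', ':') : logical;
    }
}
```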

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on the pull request:

          https://github.com/apache/phoenix/pull/153#issuecomment-205446807

          @samarthjain @JamesRTaylor, any update on further review?

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58465248

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java —
          @@ -265,12 +332,18 @@ public SingleTableColumnResolver(PhoenixConnection connection, NamedTableNode ta
          if (def.getColumnDefName().getFamilyName() != null) {
          families.add(new PColumnFamilyImpl(PNameFactory.newName(def.getColumnDefName().getFamilyName()),Collections.<PColumn>emptyList()));
          }
          - }
          - Long scn = connection.getSCN();
          - PTable theTable = new PTableImpl(connection.getTenantId(), table.getName().getSchemaName(), table.getName().getTableName(), scn == null ? HConstants.LATEST_TIMESTAMP : scn, families);
          + }
          + Long scn = connection.getSCN();
          + String schema = table.getName().getSchemaName();
          + if (connection.getSchema() != null) {
          — End diff –

          Not sure if this is functionally the right thing to do here. I would expect Phoenix to throw an exception when the connection's schema/namespace setting differs from the table's namespace/schema. Or is my understanding not correct here? It would be good to add a test case for this in UseSchemaIT too.

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58465875

          — Diff: phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java —
          @@ -825,9 +825,25 @@ protected static void ensureTableCreated(String url, String tableName, Long ts)

          protected static void ensureTableCreated(String url, String tableName, byte[][] splits, Long ts) throws SQLException {
          String ddl = tableDDLMap.get(tableName);
          + createSchema(url,tableName, ts);
          createTestTable(url, ddl, splits, ts);
          }

          + public static void createSchema(String url, String tableName, Long ts) throws SQLException {
          + if (tableName.contains(".")) {
          — End diff –

          Use SchemaUtil.getSchemaNameFromFullName and related methods here to parse the schema name and table name.
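          To illustrate the suggestion, here is a hedged, simplified sketch of what full-name parsing helpers in the style of SchemaUtil.getSchemaNameFromFullName / getTableNameFromFullName do conceptually; the real SchemaUtil methods also handle tenant prefixes, byte arrays, and separator variants, so this is not their actual implementation.

```java
// Simplified sketch: split "SCHEMA.TABLE" at the first '.' separator.
class FullNameSketch {
    static String schemaNameFromFullName(String fullName) {
        int i = fullName.indexOf('.');
        return i < 0 ? "" : fullName.substring(0, i); // no schema -> empty string
    }

    static String tableNameFromFullName(String fullName) {
        int i = fullName.indexOf('.');
        return i < 0 ? fullName : fullName.substring(i + 1);
    }
}
```

          Using helpers like these instead of ad hoc tableName.contains(".") checks keeps the parsing rules in one place.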

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58467374

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/util/SchemaUtil.java —
          @@ -897,4 +941,86 @@ public static boolean hasRowTimestampColumn(PTable table) {
          PName schemaName = dataTable.getSchemaName();
          return getTableKey(tenantId == null ? ByteUtil.EMPTY_BYTE_ARRAY : tenantId.getBytes(), schemaName == null ? ByteUtil.EMPTY_BYTE_ARRAY : schemaName.getBytes(), dataTable.getTableName().getBytes());
          }
          +
          + public static byte[] getSchemaKey(String schemaName) {
          + return SchemaUtil.getTableKey(null, schemaName, MetaDataClient.EMPTY_TABLE);
          + }
          +
          + public static PName getPhysicalHBaseTableName(PName pName, boolean isNamespaceMapped, PTableType type) {
          + return getPhysicalHBaseTableName(pName.toString(), isNamespaceMapped, type);
          + }
          +
          + public static PName getPhysicalHBaseTableName(byte[] tableName, boolean isNamespaceMapped, PTableType type) {
          + return getPhysicalHBaseTableName(Bytes.toString(tableName), isNamespaceMapped, type);
          + }
          +
          + public static TableName getPhysicalTableName(String fullTableName, ReadOnlyProps readOnlyProps) {
          + return getPhysicalName(Bytes.toBytes(fullTableName), readOnlyProps);
          + }
          +
          + public static TableName getPhysicalTableName(byte[] fullTableName, Configuration conf) {
          + return getPhysicalTableName(fullTableName, isNamespaceMappingEnabled(
          + isSystemTable(fullTableName) ? PTableType.SYSTEM : null, new ReadOnlyProps(conf.iterator())));
          + }
          +
          + public static TableName getPhysicalName(byte[] fullTableName, ReadOnlyProps readOnlyProps) {
          + return getPhysicalTableName(fullTableName,
          + isNamespaceMappingEnabled(isSystemTable(fullTableName) ? PTableType.SYSTEM : null, readOnlyProps));
          + }
          +
          + public static TableName getPhysicalTableName(byte[] fullTableName, boolean isNamespaceMappingEnabled) {
          + if (!isNamespaceMappingEnabled) {
          + return TableName.valueOf(fullTableName);
          + }
          + String tableName = getTableNameFromFullName(fullTableName);
          + String schemaName = getSchemaNameFromFullName(fullTableName);
          + return TableName.valueOf(schemaName, tableName);
          + }
          +
          + public static String getSchemaNameFromHBaseFullName(byte[] tableName, ReadOnlyProps props) {
          + if (tableName == null) {
          + return null;
          + }
          + int index = isNamespaceMappingEnabled(null, props) ? indexOf(tableName, QueryConstants.NAMESPACE_SEPARATOR_BYTE)
          + : indexOf(tableName, QueryConstants.NAME_SEPARATOR_BYTE);
          + if (index < 0) {
          + return StringUtil.EMPTY_STRING;
          + }
          + return Bytes.toString(tableName, 0, index);
          + }
          +
          + public static PName getPhysicalHBaseTableName(String tableName, boolean isNamespaceMapped, PTableType type) {
          + if (!isNamespaceMapped) {
          + return PNameFactory.newName(tableName);
          + }
          + return PNameFactory
          + .newName(tableName.replace(QueryConstants.NAME_SEPARATOR, QueryConstants.NAMESPACE_SEPARATOR));
          + }
          +
          + public static boolean isSchemaCheckRequired(PTableType tableType, ReadOnlyProps props) {
          + if (PTableType.TABLE.equals(tableType) && isNamespaceMappingEnabled(tableType, props)) {
          + return true;
          + }
          + return false;
          + }
          +
          + public static boolean isNamespaceMappingEnabled(PTableType type, ReadOnlyProps readOnlyProps) {
          + return readOnlyProps.getBoolean(QueryServices.IS_NAMESPACE_MAPPING_ENABLED,
          + QueryServicesOptions.DEFAULT_IS_NAMESPACE_MAPPING_ENABLED)
          + && (type == null || !PTableType.SYSTEM.equals(type)
          — End diff –

          Can the type be null here? If not, I would annotate the argument as @Nonnull and add a Preconditions.checkNotNull(type) check at the beginning of the method.
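          For illustration, a minimal sketch of the fail-fast null-check pattern the reviewer suggests, using the JDK's Objects.requireNonNull in place of Guava's Preconditions.checkNotNull. The enum and method below are simplified stand-ins, not Phoenix's actual signatures:

```java
import java.util.Objects;

public class NullCheckSketch {
    public enum TableType { SYSTEM, TABLE, INDEX }

    // Fails fast with a clear message instead of a later NullPointerException
    // deep inside the method body.
    public static boolean isNamespaceMappingEnabled(/* @Nonnull */ TableType type) {
        Objects.requireNonNull(type, "type must not be null");
        return !TableType.SYSTEM.equals(type);
    }

    public static void main(String[] args) {
        System.out.println(isNamespaceMappingEnabled(TableType.TABLE));
        try {
            isNamespaceMappingEnabled(null);
        } catch (NullPointerException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

          The benefit over letting the null propagate is that the stack trace points at the caller's mistake, not at whichever later dereference happened to fail.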

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58467711

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/util/SchemaUtil.java —
          @@ -897,4 +941,86 @@ public static boolean hasRowTimestampColumn(PTable table)
                PName schemaName = dataTable.getSchemaName();
                return getTableKey(tenantId == null ? ByteUtil.EMPTY_BYTE_ARRAY : tenantId.getBytes(),
                        schemaName == null ? ByteUtil.EMPTY_BYTE_ARRAY : schemaName.getBytes(), dataTable.getTableName().getBytes());
            }
          +
          + public static byte[] getSchemaKey(String schemaName) {
          +     return SchemaUtil.getTableKey(null, schemaName, MetaDataClient.EMPTY_TABLE);
          + }
          +
          + public static PName getPhysicalHBaseTableName(PName pName, boolean isNamespaceMapped, PTableType type) {
          +     return getPhysicalHBaseTableName(pName.toString(), isNamespaceMapped, type);
          + }
          +
          + public static PName getPhysicalHBaseTableName(byte[] tableName, boolean isNamespaceMapped, PTableType type) {
          +     return getPhysicalHBaseTableName(Bytes.toString(tableName), isNamespaceMapped, type);
          + }
          +
          + public static TableName getPhysicalTableName(String fullTableName, ReadOnlyProps readOnlyProps) {
          +     return getPhysicalName(Bytes.toBytes(fullTableName), readOnlyProps);
          + }
          +
          + public static TableName getPhysicalTableName(byte[] fullTableName, Configuration conf) {
          +     return getPhysicalTableName(fullTableName, isNamespaceMappingEnabled(
          +             isSystemTable(fullTableName) ? PTableType.SYSTEM : null, new ReadOnlyProps(conf.iterator())));
          + }
          +
          + public static TableName getPhysicalName(byte[] fullTableName, ReadOnlyProps readOnlyProps) {
          +     return getPhysicalTableName(fullTableName,
          +             isNamespaceMappingEnabled(isSystemTable(fullTableName) ? PTableType.SYSTEM : null, readOnlyProps));
          + }
          +
          + public static TableName getPhysicalTableName(byte[] fullTableName, boolean isNamespaceMappingEnabled) {
          +     if (!isNamespaceMappingEnabled) { return TableName.valueOf(fullTableName); }
          +     String tableName = getTableNameFromFullName(fullTableName);
          +     String schemaName = getSchemaNameFromFullName(fullTableName);
          +     return TableName.valueOf(schemaName, tableName);
          + }
          +
          + public static String getSchemaNameFromHBaseFullName(byte[] tableName, ReadOnlyProps props) {
          +     if (tableName == null) { return null; }
          +     int index = isNamespaceMappingEnabled(null, props) ? indexOf(tableName, QueryConstants.NAMESPACE_SEPARATOR_BYTE)
          +             : indexOf(tableName, QueryConstants.NAME_SEPARATOR_BYTE);
          +     if (index < 0) { return StringUtil.EMPTY_STRING; }
          +     return Bytes.toString(tableName, 0, index);
          + }
          +
          + public static PName getPhysicalHBaseTableName(String tableName, boolean isNamespaceMapped, PTableType type) {
          +     if (!isNamespaceMapped) { return PNameFactory.newName(tableName); }
          +     return PNameFactory
          +             .newName(tableName.replace(QueryConstants.NAME_SEPARATOR, QueryConstants.NAMESPACE_SEPARATOR));
          + }
          +
          + public static boolean isSchemaCheckRequired(PTableType tableType, ReadOnlyProps props) {
          +     if (PTableType.TABLE.equals(tableType) && isNamespaceMappingEnabled(tableType, props)) { return true; }
          — End diff –

          nit: how about simply
          return PTableType.TABLE.equals(tableType) && isNamespaceMappingEnabled(tableType, props);
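          The suggested simplification is the general idiom of returning a boolean expression directly instead of branching to return true/false. A trivial stand-alone illustration, with placeholder names rather than Phoenix code:

```java
public class BooleanReturnSketch {
    // Verbose form: an if-statement whose only job is to return true or false.
    public static boolean isEvenVerbose(int n) {
        if (n % 2 == 0) { return true; }
        return false;
    }

    // Preferred form: return the condition itself.
    public static boolean isEven(int n) {
        return n % 2 == 0;
    }

    public static void main(String[] args) {
        // Both forms are equivalent; the second is shorter and harder to get wrong.
        System.out.println(isEven(4) == isEvenVerbose(4));
        System.out.println(isEven(3) == isEvenVerbose(3));
    }
}
```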

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58468008

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java —
          @@ -1279,4 +1304,129 @@ public static boolean truncateStats(HTableInterface metaTable, HTableInterface s
          }
          return false;
          }
          +
          + public static void mapTableToNamespace(HBaseAdmin admin, HTableInterface metatable, String srcTableName,
          + String destTableName, ReadOnlyProps props, Long ts, String phoenixTableName, PTableType pTableType)
          + throws SnapshotCreationException, IllegalArgumentException, IOException, InterruptedException,
          + SQLException {
          + srcTableName = SchemaUtil.normalizeIdentifier(srcTableName);
          + if (!SchemaUtil.isNamespaceMappingEnabled(
          — End diff –

          Can you not just pass the pTableType argument here?

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58468394

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java —
          @@ -1279,4 +1304,129 @@ public static boolean truncateStats(HTableInterface metaTable, HTableInterface s
                }
                return false;
            }
          +
          + public static void mapTableToNamespace(HBaseAdmin admin, HTableInterface metatable, String srcTableName,
          +         String destTableName, ReadOnlyProps props, Long ts, String phoenixTableName, PTableType pTableType)
          +         throws SnapshotCreationException, IllegalArgumentException, IOException, InterruptedException,
          +         SQLException {
          +     srcTableName = SchemaUtil.normalizeIdentifier(srcTableName);
          +     if (!SchemaUtil.isNamespaceMappingEnabled(
          +             SchemaUtil.isSystemTable(srcTableName.getBytes()) ? PTableType.SYSTEM : null,
          +             props)) {
          +         throw new IllegalArgumentException(SchemaUtil.isSystemTable(srcTableName.getBytes())
          +                 ? "For system table " + QueryServices.IS_SYSTEM_TABLE_MAPPED_TO_NAMESPACE
          +                         + " also needs to be enabled along with " + QueryServices.IS_NAMESPACE_MAPPING_ENABLED
          +                 : QueryServices.IS_NAMESPACE_MAPPING_ENABLED + " is not enabled");
          +     }
          +
          +     if (PTableType.TABLE.equals(pTableType) || PTableType.INDEX.equals(pTableType)) {
          +         admin.snapshot(srcTableName, srcTableName);
          +         admin.cloneSnapshot(srcTableName.getBytes(), destTableName.getBytes());
          +         admin.disableTable(srcTableName);
          +         admin.deleteTable(srcTableName);
          +     }
          +     if (phoenixTableName == null) {
          — End diff –

          This looks a bit hacky/unclear to me. When can phoenixTableName be null here? And when is it OK to use the srcTableName? At a minimum, some method comments would help.
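          One way to address this review comment would be to document the null contract on the method itself. Below is a hypothetical sketch of such a comment together with the name-resolution logic it describes; the wording and the helper are illustrative, not taken from the patch:

```java
public class MapTableDocSketch {
    /**
     * Hypothetical contract for the phoenixTableName parameter of
     * mapTableToNamespace: it may be null, in which case srcTableName is
     * assumed to already be the Phoenix logical name (e.g. when invoked
     * from an overload that has no separate Phoenix name to pass).
     */
    public static String resolvePhoenixName(String srcTableName, String phoenixTableName) {
        // Fall back to the source table name when no explicit Phoenix name is given.
        return phoenixTableName == null ? srcTableName : phoenixTableName;
    }

    public static void main(String[] args) {
        System.out.println(resolvePhoenixName("S.T", null));
        System.out.println(resolvePhoenixName("S.T", "S.V"));
    }
}
```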

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58468877

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java —
          @@ -1279,4 +1304,129 @@ public static boolean truncateStats(HTableInterface metaTable, HTableInterface s
                }
                return false;
            }
          +
          + public static void mapTableToNamespace(HBaseAdmin admin, HTableInterface metatable, String srcTableName,
          +         String destTableName, ReadOnlyProps props, Long ts, String phoenixTableName, PTableType pTableType)
          +         throws SnapshotCreationException, IllegalArgumentException, IOException, InterruptedException,
          +         SQLException {
          +     srcTableName = SchemaUtil.normalizeIdentifier(srcTableName);
          +     if (!SchemaUtil.isNamespaceMappingEnabled(
          +             SchemaUtil.isSystemTable(srcTableName.getBytes()) ? PTableType.SYSTEM : null,
          +             props)) {
          +         throw new IllegalArgumentException(SchemaUtil.isSystemTable(srcTableName.getBytes())
          +                 ? "For system table " + QueryServices.IS_SYSTEM_TABLE_MAPPED_TO_NAMESPACE
          +                         + " also needs to be enabled along with " + QueryServices.IS_NAMESPACE_MAPPING_ENABLED
          +                 : QueryServices.IS_NAMESPACE_MAPPING_ENABLED + " is not enabled");
          +     }
          +
          +     if (PTableType.TABLE.equals(pTableType) || PTableType.INDEX.equals(pTableType)) {
          +         admin.snapshot(srcTableName, srcTableName);
          +         admin.cloneSnapshot(srcTableName.getBytes(), destTableName.getBytes());
          +         admin.disableTable(srcTableName);
          +         admin.deleteTable(srcTableName);
          +     }
          +     if (phoenixTableName == null) {
          +         phoenixTableName = srcTableName;
          +     }
          +     Put put = new Put(SchemaUtil.getTableKey(null, SchemaUtil.getSchemaNameFromFullName(phoenixTableName),
          +             SchemaUtil.getTableNameFromFullName(phoenixTableName)), ts);
          +     put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, PhoenixDatabaseMetaData.IS_NAMESPACE_MAPPED_BYTES,
          +             PBoolean.INSTANCE.toBytes(Boolean.TRUE));
          +     metatable.put(put);
          + }
          +
          + public static void mapTableToNamespace(HBaseAdmin admin, HTableInterface metatable, String tableName,
          +         ReadOnlyProps props, Long ts) throws SnapshotCreationException, IllegalArgumentException, IOException,
          +         InterruptedException, SQLException {
          +     String destTablename = SchemaUtil
          +             .normalizeIdentifier(SchemaUtil.getPhysicalTableName(tableName, props).getNameAsString());
          +     mapTableToNamespace(admin, metatable, tableName, destTablename, props, ts, null, PTableType.TABLE);
          + }
          +
          + public static void upgradeTable(PhoenixConnection conn, String srcTable) throws SQLException,
          +         SnapshotCreationException, IllegalArgumentException, IOException, InterruptedException {
          +     ReadOnlyProps readOnlyProps = conn.getQueryServices().getProps();
          +     if (conn.getClientInfo(PhoenixRuntime.TENANT_ID_ATTRIB) != null) { throw new SQLException(
          — End diff –

          What if the connection has the schema property set? Should we report an error? Looks like currently we are ignoring it.

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58469055

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java —
          @@ -1279,4 +1304,129 @@ public static boolean truncateStats(HTableInterface metaTable, HTableInterface s
                }
                return false;
            }
          +
          + public static void mapTableToNamespace(HBaseAdmin admin, HTableInterface metatable, String srcTableName,
          +         String destTableName, ReadOnlyProps props, Long ts, String phoenixTableName, PTableType pTableType)
          +         throws SnapshotCreationException, IllegalArgumentException, IOException, InterruptedException,
          +         SQLException {
          +     srcTableName = SchemaUtil.normalizeIdentifier(srcTableName);
          +     if (!SchemaUtil.isNamespaceMappingEnabled(
          +             SchemaUtil.isSystemTable(srcTableName.getBytes()) ? PTableType.SYSTEM : null,
          +             props)) {
          +         throw new IllegalArgumentException(SchemaUtil.isSystemTable(srcTableName.getBytes())
          +                 ? "For system table " + QueryServices.IS_SYSTEM_TABLE_MAPPED_TO_NAMESPACE
          +                         + " also needs to be enabled along with " + QueryServices.IS_NAMESPACE_MAPPING_ENABLED
          +                 : QueryServices.IS_NAMESPACE_MAPPING_ENABLED + " is not enabled");
          +     }
          +
          +     if (PTableType.TABLE.equals(pTableType) || PTableType.INDEX.equals(pTableType)) {
          +         admin.snapshot(srcTableName, srcTableName);
          +         admin.cloneSnapshot(srcTableName.getBytes(), destTableName.getBytes());
          +         admin.disableTable(srcTableName);
          +         admin.deleteTable(srcTableName);
          +     }
          +     if (phoenixTableName == null) {
          +         phoenixTableName = srcTableName;
          +     }
          +     Put put = new Put(SchemaUtil.getTableKey(null, SchemaUtil.getSchemaNameFromFullName(phoenixTableName),
          +             SchemaUtil.getTableNameFromFullName(phoenixTableName)), ts);
          +     put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, PhoenixDatabaseMetaData.IS_NAMESPACE_MAPPED_BYTES,
          +             PBoolean.INSTANCE.toBytes(Boolean.TRUE));
          +     metatable.put(put);
          + }
          +
          + public static void mapTableToNamespace(HBaseAdmin admin, HTableInterface metatable, String tableName,
          +         ReadOnlyProps props, Long ts) throws SnapshotCreationException, IllegalArgumentException, IOException,
          +         InterruptedException, SQLException {
          +     String destTablename = SchemaUtil
          +             .normalizeIdentifier(SchemaUtil.getPhysicalTableName(tableName, props).getNameAsString());
          +     mapTableToNamespace(admin, metatable, tableName, destTablename, props, ts, null, PTableType.TABLE);
          + }
          +
          + public static void upgradeTable(PhoenixConnection conn, String srcTable) throws SQLException,
          +         SnapshotCreationException, IllegalArgumentException, IOException, InterruptedException {
          +     ReadOnlyProps readOnlyProps = conn.getQueryServices().getProps();
          +     if (conn.getClientInfo(PhoenixRuntime.TENANT_ID_ATTRIB) != null) { throw new SQLException(
          +             "May not specify the TENANT_ID_ATTRIB property when upgrading"); }
          +     try (HBaseAdmin admin = conn.getQueryServices().getAdmin();
          +             HTableInterface metatable = conn.getQueryServices()
          +                     .getTable(SchemaUtil
          +                             .getPhysicalName(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES, readOnlyProps)
          +                             .getName()) {
          +         String tableName = SchemaUtil.normalizeIdentifier(srcTable);
          +         String schemaName = SchemaUtil.getSchemaNameFromFullName(tableName);
          +
          +         // Upgrade is not required if schemaName is not present.
          +         if (schemaName.equals("")) { throw new IllegalArgumentException("Table doesn't have schema name"); }
          +
          +         // Confirm table is not already upgraded
          +         PTable table = PhoenixRuntime.getTable(conn, tableName);
          +         if (table.isNamespaceMapped()) { throw new IllegalArgumentException("Table is already upgraded"); }
          +         conn.createStatement().execute("CREATE SCHEMA IF NOT EXISTS " + schemaName);
          +         String newPhysicalTablename = SchemaUtil
          +                 .normalizeIdentifier(SchemaUtil.getPhysicalTableName(table.getPhysicalName().getString(), readOnlyProps).getNameAsString());
          +
          +         // Upgrade the data or main table
          +         UpgradeUtil.mapTableToNamespace(admin, metatable, tableName, newPhysicalTablename, readOnlyProps,
          +                 PhoenixRuntime.getCurrentScn(readOnlyProps), tableName, table.getType());
          +
          +         // clear the cache and get new table
          +         conn.getQueryServices().clearCache();
          +         MetaDataMutationResult result = new MetaDataClient(conn).updateCache(schemaName,
          +                 SchemaUtil.getTableNameFromFullName(tableName));
          +         if (result.getMutationCode() != MutationCode.TABLE_ALREADY_EXISTS) { throw new TableNotFoundException(
          +                 tableName); }
          +         table = result.getTable();
          +         // check whether table is properly upgraded before upgrading indexes
          +         if (table.isNamespaceMapped()) {
          +             for (PTable index : table.getIndexes()) {
          +                 String srcTableName = index.getPhysicalName().getString();
          +                 if (srcTableName.contains(QueryConstants.NAMESPACE_SEPARATOR)) {
          +                     // this condition occurs in case of multiple views on table
          +                     // skip already migrated tables
          +                     continue;
          +                 }
          +                 String destTableName = null;
          +                 String phoenixTableName = index.getName().getString();
          +                 boolean updateLink = false;
          +                 if (srcTableName.startsWith(MetaDataUtil.LOCAL_INDEX_TABLE_PREFIX)) {
          — End diff –

          It would be good to add a method like isLocalIndex to SchemaUtil.
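          A sketch of what such a helper could look like. The prefix value below mirrors MetaDataUtil.LOCAL_INDEX_TABLE_PREFIX as used at this point in the codebase, but treat it as an assumption to verify rather than a confirmed constant:

```java
public class LocalIndexSketch {
    // Assumed value of MetaDataUtil.LOCAL_INDEX_TABLE_PREFIX; verify against the codebase.
    public static final String LOCAL_INDEX_TABLE_PREFIX = "_LOCAL_IDX_";

    // Proposed SchemaUtil-style helper: centralizes the prefix check so call
    // sites don't each repeat startsWith() against the raw constant.
    public static boolean isLocalIndex(String physicalTableName) {
        return physicalTableName != null && physicalTableName.startsWith(LOCAL_INDEX_TABLE_PREFIX);
    }

    public static void main(String[] args) {
        System.out.println(isLocalIndex("_LOCAL_IDX_MY_TABLE"));
        System.out.println(isLocalIndex("MY_TABLE"));
        System.out.println(isLocalIndex(null));
    }
}
```

          Centralizing the check also gives one place to update if the local-index naming scheme ever changes.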

githubbot ASF GitHub Bot added a comment -

Github user samarthjain commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/153#discussion_r58469125

— Diff: phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java —

    @@ -1279,4 +1304,129 @@ public static boolean truncateStats(HTableInterface metaTable, HTableInterface s
             }
             return false;
         }
    +
    +    public static void mapTableToNamespace(HBaseAdmin admin, HTableInterface metatable, String srcTableName,
    +            String destTableName, ReadOnlyProps props, Long ts, String phoenixTableName, PTableType pTableType)
    +            throws SnapshotCreationException, IllegalArgumentException, IOException, InterruptedException,
    +            SQLException {
    +        srcTableName = SchemaUtil.normalizeIdentifier(srcTableName);
    +        if (!SchemaUtil.isNamespaceMappingEnabled(
    +                SchemaUtil.isSystemTable(srcTableName.getBytes()) ? PTableType.SYSTEM : null,
    +                props)) {
    +            throw new IllegalArgumentException(SchemaUtil.isSystemTable(srcTableName.getBytes())
    +                    ? "For system table " + QueryServices.IS_SYSTEM_TABLE_MAPPED_TO_NAMESPACE
    +                            + " also needs to be enabled along with " + QueryServices.IS_NAMESPACE_MAPPING_ENABLED
    +                    : QueryServices.IS_NAMESPACE_MAPPING_ENABLED + " is not enabled");
    +        }
    +
    +        if (PTableType.TABLE.equals(pTableType) || PTableType.INDEX.equals(pTableType)) {
    +            admin.snapshot(srcTableName, srcTableName);
    +            admin.cloneSnapshot(srcTableName.getBytes(), destTableName.getBytes());
    +            admin.disableTable(srcTableName);
    +            admin.deleteTable(srcTableName);
    +        }
    +        if (phoenixTableName == null) {
    +            phoenixTableName = srcTableName;
    +        }
    +        Put put = new Put(SchemaUtil.getTableKey(null, SchemaUtil.getSchemaNameFromFullName(phoenixTableName),
    +                SchemaUtil.getTableNameFromFullName(phoenixTableName)), ts);
    +        put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, PhoenixDatabaseMetaData.IS_NAMESPACE_MAPPED_BYTES,
    +                PBoolean.INSTANCE.toBytes(Boolean.TRUE));
    +        metatable.put(put);
    +    }
    +
    +    public static void mapTableToNamespace(HBaseAdmin admin, HTableInterface metatable, String tableName,
    +            ReadOnlyProps props, Long ts) throws SnapshotCreationException, IllegalArgumentException, IOException,
    +            InterruptedException, SQLException {
    +        String destTablename = SchemaUtil
    +                .normalizeIdentifier(SchemaUtil.getPhysicalTableName(tableName, props).getNameAsString());
    +        mapTableToNamespace(admin, metatable, tableName, destTablename, props, ts, null, PTableType.TABLE);
    +    }
    +
    +    public static void upgradeTable(PhoenixConnection conn, String srcTable) throws SQLException,
    +            SnapshotCreationException, IllegalArgumentException, IOException, InterruptedException {
    +        ReadOnlyProps readOnlyProps = conn.getQueryServices().getProps();
    +        if (conn.getClientInfo(PhoenixRuntime.TENANT_ID_ATTRIB) != null) {
    +            throw new SQLException("May not specify the TENANT_ID_ATTRIB property when upgrading");
    +        }
    +        try (HBaseAdmin admin = conn.getQueryServices().getAdmin();
    +                HTableInterface metatable = conn.getQueryServices()
    +                        .getTable(SchemaUtil
    +                                .getPhysicalName(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES, readOnlyProps)
    +                                .getName())) {
    +            String tableName = SchemaUtil.normalizeIdentifier(srcTable);
    +            String schemaName = SchemaUtil.getSchemaNameFromFullName(tableName);
    +
    +            // Upgrade is not required if schemaName is not present.
    +            if (schemaName.equals("")) { throw new IllegalArgumentException("Table doesn't have schema name"); }
    +
    +            // Confirm table is not already upgraded
    +            PTable table = PhoenixRuntime.getTable(conn, tableName);
    +            if (table.isNamespaceMapped()) { throw new IllegalArgumentException("Table is already upgraded"); }
    +            conn.createStatement().execute("CREATE SCHEMA IF NOT EXISTS " + schemaName);
    +            String newPhysicalTablename = SchemaUtil.normalizeIdentifier(
    +                    SchemaUtil.getPhysicalTableName(table.getPhysicalName().getString(), readOnlyProps).getNameAsString());
    +
    +            // Upgrade the data or main table
    +            UpgradeUtil.mapTableToNamespace(admin, metatable, tableName, newPhysicalTablename, readOnlyProps,
    +                    PhoenixRuntime.getCurrentScn(readOnlyProps), tableName, table.getType());
    +
    +            // clear the cache and get new table
    +            conn.getQueryServices().clearCache();
    +            MetaDataMutationResult result = new MetaDataClient(conn).updateCache(schemaName,
    +                    SchemaUtil.getTableNameFromFullName(tableName));
    +            if (result.getMutationCode() != MutationCode.TABLE_ALREADY_EXISTS) {
    +                throw new TableNotFoundException(tableName);
    +            }
    +            table = result.getTable();
    +            // check whether table is properly upgraded before upgrading indexes
    +            if (table.isNamespaceMapped()) {
    +                for (PTable index : table.getIndexes()) {
    +                    String srcTableName = index.getPhysicalName().getString();
    +                    if (srcTableName.contains(QueryConstants.NAMESPACE_SEPARATOR)) {
    +                        // this condition occurs in case of multiple views on table
    — End diff –

Not sure I follow the comment here. Can you please elaborate?

githubbot ASF GitHub Bot added a comment -

Github user samarthjain commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/153#discussion_r58469290

— Diff: phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java —

    @@ -1279,4 +1304,129 @@ public static boolean truncateStats(HTableInterface metaTable, HTableInterface s
             }
             return false;
         }
    +
    +    public static void mapTableToNamespace(HBaseAdmin admin, HTableInterface metatable, String srcTableName,
    +            String destTableName, ReadOnlyProps props, Long ts, String phoenixTableName, PTableType pTableType)
    +            throws SnapshotCreationException, IllegalArgumentException, IOException, InterruptedException,
    +            SQLException {
    +        srcTableName = SchemaUtil.normalizeIdentifier(srcTableName);
    +        if (!SchemaUtil.isNamespaceMappingEnabled(
    +                SchemaUtil.isSystemTable(srcTableName.getBytes()) ? PTableType.SYSTEM : null,
    +                props)) {
    +            throw new IllegalArgumentException(SchemaUtil.isSystemTable(srcTableName.getBytes())
    +                    ? "For system table " + QueryServices.IS_SYSTEM_TABLE_MAPPED_TO_NAMESPACE
    +                            + " also needs to be enabled along with " + QueryServices.IS_NAMESPACE_MAPPING_ENABLED
    +                    : QueryServices.IS_NAMESPACE_MAPPING_ENABLED + " is not enabled");
    +        }
    +
    +        if (PTableType.TABLE.equals(pTableType) || PTableType.INDEX.equals(pTableType)) {
    +            admin.snapshot(srcTableName, srcTableName);
    +            admin.cloneSnapshot(srcTableName.getBytes(), destTableName.getBytes());
    +            admin.disableTable(srcTableName);
    +            admin.deleteTable(srcTableName);
    +        }
    +        if (phoenixTableName == null) {
    +            phoenixTableName = srcTableName;
    +        }
    +        Put put = new Put(SchemaUtil.getTableKey(null, SchemaUtil.getSchemaNameFromFullName(phoenixTableName),
    +                SchemaUtil.getTableNameFromFullName(phoenixTableName)), ts);
    +        put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, PhoenixDatabaseMetaData.IS_NAMESPACE_MAPPED_BYTES,
    +                PBoolean.INSTANCE.toBytes(Boolean.TRUE));
    +        metatable.put(put);
    +    }
    +
    +    public static void mapTableToNamespace(HBaseAdmin admin, HTableInterface metatable, String tableName,
    +            ReadOnlyProps props, Long ts) throws SnapshotCreationException, IllegalArgumentException, IOException,
    +            InterruptedException, SQLException {
    +        String destTablename = SchemaUtil
    +                .normalizeIdentifier(SchemaUtil.getPhysicalTableName(tableName, props).getNameAsString());
    +        mapTableToNamespace(admin, metatable, tableName, destTablename, props, ts, null, PTableType.TABLE);
    +    }
    +
    +    public static void upgradeTable(PhoenixConnection conn, String srcTable) throws SQLException,
    +            SnapshotCreationException, IllegalArgumentException, IOException, InterruptedException {
    +        ReadOnlyProps readOnlyProps = conn.getQueryServices().getProps();
    +        if (conn.getClientInfo(PhoenixRuntime.TENANT_ID_ATTRIB) != null) {
    +            throw new SQLException("May not specify the TENANT_ID_ATTRIB property when upgrading");
    +        }
    +        try (HBaseAdmin admin = conn.getQueryServices().getAdmin();
    +                HTableInterface metatable = conn.getQueryServices()
    +                        .getTable(SchemaUtil
    +                                .getPhysicalName(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES, readOnlyProps)
    +                                .getName())) {
    +            String tableName = SchemaUtil.normalizeIdentifier(srcTable);
    +            String schemaName = SchemaUtil.getSchemaNameFromFullName(tableName);
    +
    +            // Upgrade is not required if schemaName is not present.
    +            if (schemaName.equals("")) { throw new IllegalArgumentException("Table doesn't have schema name"); }
    +
    +            // Confirm table is not already upgraded
    +            PTable table = PhoenixRuntime.getTable(conn, tableName);
    +            if (table.isNamespaceMapped()) { throw new IllegalArgumentException("Table is already upgraded"); }
    +            conn.createStatement().execute("CREATE SCHEMA IF NOT EXISTS " + schemaName);
    +            String newPhysicalTablename = SchemaUtil.normalizeIdentifier(
    +                    SchemaUtil.getPhysicalTableName(table.getPhysicalName().getString(), readOnlyProps).getNameAsString());
    +
    +            // Upgrade the data or main table
    +            UpgradeUtil.mapTableToNamespace(admin, metatable, tableName, newPhysicalTablename, readOnlyProps,
    +                    PhoenixRuntime.getCurrentScn(readOnlyProps), tableName, table.getType());
    +
    +            // clear the cache and get new table
    +            conn.getQueryServices().clearCache();
    — End diff –

I think you should be calling clearTableFromCache() here instead.
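To illustrate the distinction the reviewer is drawing — a global flush of the client-side metadata cache versus evicting only the one table being upgraded — here is a toy sketch. This is not Phoenix's actual cache implementation; the method names only loosely mirror the `ConnectionQueryServices` calls discussed above, and the map-based cache is an assumption made for the example:

```java
import java.util.HashMap;
import java.util.Map;

public class MetaCacheSketch {
    // Toy stand-in for the client-side PTable metadata cache.
    public static final Map<String, String> CACHE = new HashMap<>();

    // Analogue of ConnectionQueryServices.clearCache(): drops every cached table.
    public static void clearCache() {
        CACHE.clear();
    }

    // Analogue of clearTableFromCache(...): evicts only the named table.
    public static void clearTableFromCache(String fullTableName) {
        CACHE.remove(fullTableName);
    }

    public static void main(String[] args) {
        CACHE.put("S1.T1", "ptable-1");
        CACHE.put("S2.T2", "ptable-2");
        clearTableFromCache("S1.T1");     // only the upgraded table is evicted
        System.out.println(CACHE.size()); // 1 — unrelated cached tables survive
    }
}
```

The point of the targeted eviction is that other clients of the same connection do not pay the cost of re-fetching metadata for tables untouched by the upgrade.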

githubbot ASF GitHub Bot added a comment -

Github user samarthjain commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/153#discussion_r58469369

— Diff: phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java —

    @@ -1279,4 +1304,129 @@ public static boolean truncateStats(HTableInterface metaTable, HTableInterface s
             }
             return false;
         }
    +
    +    public static void mapTableToNamespace(HBaseAdmin admin, HTableInterface metatable, String srcTableName,
    +            String destTableName, ReadOnlyProps props, Long ts, String phoenixTableName, PTableType pTableType)
    +            throws SnapshotCreationException, IllegalArgumentException, IOException, InterruptedException,
    +            SQLException {
    +        srcTableName = SchemaUtil.normalizeIdentifier(srcTableName);
    +        if (!SchemaUtil.isNamespaceMappingEnabled(
    +                SchemaUtil.isSystemTable(srcTableName.getBytes()) ? PTableType.SYSTEM : null,
    +                props)) {
    +            throw new IllegalArgumentException(SchemaUtil.isSystemTable(srcTableName.getBytes())
    +                    ? "For system table " + QueryServices.IS_SYSTEM_TABLE_MAPPED_TO_NAMESPACE
    +                            + " also needs to be enabled along with " + QueryServices.IS_NAMESPACE_MAPPING_ENABLED
    +                    : QueryServices.IS_NAMESPACE_MAPPING_ENABLED + " is not enabled");
    +        }
    +
    +        if (PTableType.TABLE.equals(pTableType) || PTableType.INDEX.equals(pTableType)) {
    +            admin.snapshot(srcTableName, srcTableName);
    +            admin.cloneSnapshot(srcTableName.getBytes(), destTableName.getBytes());
    +            admin.disableTable(srcTableName);
    +            admin.deleteTable(srcTableName);
    +        }
    +        if (phoenixTableName == null) {
    +            phoenixTableName = srcTableName;
    +        }
    +        Put put = new Put(SchemaUtil.getTableKey(null, SchemaUtil.getSchemaNameFromFullName(phoenixTableName),
    +                SchemaUtil.getTableNameFromFullName(phoenixTableName)), ts);
    +        put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, PhoenixDatabaseMetaData.IS_NAMESPACE_MAPPED_BYTES,
    +                PBoolean.INSTANCE.toBytes(Boolean.TRUE));
    +        metatable.put(put);
    +    }
    +
    +    public static void mapTableToNamespace(HBaseAdmin admin, HTableInterface metatable, String tableName,
    +            ReadOnlyProps props, Long ts) throws SnapshotCreationException, IllegalArgumentException, IOException,
    +            InterruptedException, SQLException {
    +        String destTablename = SchemaUtil
    +                .normalizeIdentifier(SchemaUtil.getPhysicalTableName(tableName, props).getNameAsString());
    +        mapTableToNamespace(admin, metatable, tableName, destTablename, props, ts, null, PTableType.TABLE);
    +    }
    +
    +    public static void upgradeTable(PhoenixConnection conn, String srcTable) throws SQLException,
    +            SnapshotCreationException, IllegalArgumentException, IOException, InterruptedException {
    +        ReadOnlyProps readOnlyProps = conn.getQueryServices().getProps();
    +        if (conn.getClientInfo(PhoenixRuntime.TENANT_ID_ATTRIB) != null) {
    +            throw new SQLException("May not specify the TENANT_ID_ATTRIB property when upgrading");
    +        }
    +        try (HBaseAdmin admin = conn.getQueryServices().getAdmin();
    +                HTableInterface metatable = conn.getQueryServices()
    +                        .getTable(SchemaUtil
    +                                .getPhysicalName(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES, readOnlyProps)
    +                                .getName())) {
    +            String tableName = SchemaUtil.normalizeIdentifier(srcTable);
    +            String schemaName = SchemaUtil.getSchemaNameFromFullName(tableName);
    +
    +            // Upgrade is not required if schemaName is not present.
    +            if (schemaName.equals("")) { throw new IllegalArgumentException("Table doesn't have schema name"); }
    +
    +            // Confirm table is not already upgraded
    +            PTable table = PhoenixRuntime.getTable(conn, tableName);
    +            if (table.isNamespaceMapped()) { throw new IllegalArgumentException("Table is already upgraded"); }
    +            conn.createStatement().execute("CREATE SCHEMA IF NOT EXISTS " + schemaName);
    +            String newPhysicalTablename = SchemaUtil.normalizeIdentifier(
    +                    SchemaUtil.getPhysicalTableName(table.getPhysicalName().getString(), readOnlyProps).getNameAsString());
    +
    +            // Upgrade the data or main table
    +            UpgradeUtil.mapTableToNamespace(admin, metatable, tableName, newPhysicalTablename, readOnlyProps,
    +                    PhoenixRuntime.getCurrentScn(readOnlyProps), tableName, table.getType());
    +
    +            // clear the cache and get new table
    +            conn.getQueryServices().clearCache();
    +            MetaDataMutationResult result = new MetaDataClient(conn).updateCache(schemaName,
    +                    SchemaUtil.getTableNameFromFullName(tableName));
    +            if (result.getMutationCode() != MutationCode.TABLE_ALREADY_EXISTS) {
    +                throw new TableNotFoundException(tableName);
    +            }
    +            table = result.getTable();
    +            // check whether table is properly upgraded before upgrading indexes
    +            if (table.isNamespaceMapped()) {
    +                for (PTable index : table.getIndexes()) {
    +                    String srcTableName = index.getPhysicalName().getString();
    +                    if (srcTableName.contains(QueryConstants.NAMESPACE_SEPARATOR)) {
    +                        // this condition occurs in case of multiple views on table
    +                        // skip already migrated tables
    +                        continue;
    +                    }
    +                    String destTableName = null;
    +                    String phoenixTableName = index.getName().getString();
    +                    boolean updateLink = false;
    +                    if (srcTableName.startsWith(MetaDataUtil.LOCAL_INDEX_TABLE_PREFIX)) {
    +                        destTableName = Bytes
    +                                .toString(MetaDataUtil.getLocalIndexPhysicalName(newPhysicalTablename.getBytes()));
    +                        // update parent_table property in local index table descriptor
    +                        conn.createStatement()
    +                                .execute(String.format("ALTER TABLE %s set " + MetaDataUtil.PARENT_TABLE_KEY + "='%s'",
    +                                        phoenixTableName, table.getPhysicalName()));
    +                        updateLink = true;
    +                    } else if (srcTableName.startsWith(MetaDataUtil.VIEW_INDEX_TABLE_PREFIX)) {
    +                        destTableName = Bytes
    +                                .toString(MetaDataUtil.getViewIndexPhysicalName(newPhysicalTablename.getBytes()));
    +                        updateLink = true;
    +                    } else {
    +                        destTableName = SchemaUtil
    +                                .getPhysicalTableName(index.getPhysicalName().getString(), readOnlyProps)
    +                                .getNameAsString();
    +                    }
    +                    if (updateLink) {
    +                        updateLink(conn, srcTableName, destTableName);
    +                    }
    +                    UpgradeUtil.mapTableToNamespace(admin, metatable, srcTableName, destTableName, readOnlyProps,
    +                            PhoenixRuntime.getCurrentScn(readOnlyProps), phoenixTableName, index.getType());
    +                }
    +                conn.getQueryServices().clearCache();
    — End diff –

Similarly here. Call clearTableFromCache.
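The `startsWith` checks against `MetaDataUtil.LOCAL_INDEX_TABLE_PREFIX` and `MetaDataUtil.VIEW_INDEX_TABLE_PREFIX` in the loop quoted above are the kind of logic the reviewer earlier suggested factoring into a helper such as `SchemaUtil.isLocalIndex`. A minimal, self-contained sketch of that idea follows; the prefix string values and the class name are assumptions made for the example, not taken from the patch:

```java
public class IndexNameSketch {
    // Assumed prefix values; the real constants live in MetaDataUtil.
    public static final String LOCAL_INDEX_TABLE_PREFIX = "_LOCAL_IDX_";
    public static final String VIEW_INDEX_TABLE_PREFIX = "_IDX_";

    // Hypothetical helper of the kind suggested for SchemaUtil.
    public static boolean isLocalIndex(String physicalTableName) {
        return physicalTableName.startsWith(LOCAL_INDEX_TABLE_PREFIX);
    }

    public static boolean isViewIndex(String physicalTableName) {
        return physicalTableName.startsWith(VIEW_INDEX_TABLE_PREFIX);
    }

    public static void main(String[] args) {
        System.out.println(isLocalIndex("_LOCAL_IDX_MY_TABLE")); // true
        System.out.println(isViewIndex("MY_SCHEMA.MY_TABLE"));   // false
    }
}
```

Centralizing the prefix test in one helper keeps the naming convention in a single place, so a later change to the prefix cannot silently break callers that hard-code `startsWith`.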

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58594707

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java —
          @@ -1279,4 +1304,129 @@ public static boolean truncateStats(HTableInterface metaTable, HTableInterface s
          }
          return false;
          }
          +
          +    public static void mapTableToNamespace(HBaseAdmin admin, HTableInterface metatable, String srcTableName,
          +            String destTableName, ReadOnlyProps props, Long ts, String phoenixTableName, PTableType pTableType)
          +            throws SnapshotCreationException, IllegalArgumentException, IOException, InterruptedException,
          +            SQLException {
          +        srcTableName = SchemaUtil.normalizeIdentifier(srcTableName);
          +        if (!SchemaUtil.isNamespaceMappingEnabled(
          +                SchemaUtil.isSystemTable(srcTableName.getBytes()) ? PTableType.SYSTEM : null,
          +                props)) { throw new IllegalArgumentException(SchemaUtil.isSystemTable(srcTableName.getBytes())
          +                        ? "For system table " + QueryServices.IS_SYSTEM_TABLE_MAPPED_TO_NAMESPACE
          +                                + " also needs to be enabled along with " + QueryServices.IS_NAMESPACE_MAPPING_ENABLED
          +                        : QueryServices.IS_NAMESPACE_MAPPING_ENABLED + " is not enabled"); }
          +
          +        if (PTableType.TABLE.equals(pTableType) || PTableType.INDEX.equals(pTableType)) {
          +            admin.snapshot(srcTableName, srcTableName);
          +            admin.cloneSnapshot(srcTableName.getBytes(), destTableName.getBytes());
          +            admin.disableTable(srcTableName);
          +            admin.deleteTable(srcTableName);
          +        }
          +        if (phoenixTableName == null) {
          +            phoenixTableName = srcTableName;
          +        }
          +        Put put = new Put(SchemaUtil.getTableKey(null, SchemaUtil.getSchemaNameFromFullName(phoenixTableName),
          +                SchemaUtil.getTableNameFromFullName(phoenixTableName)), ts);
          +        put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, PhoenixDatabaseMetaData.IS_NAMESPACE_MAPPED_BYTES,
          +                PBoolean.INSTANCE.toBytes(Boolean.TRUE));
          +        metatable.put(put);
          +    }
          +
          +    public static void mapTableToNamespace(HBaseAdmin admin, HTableInterface metatable, String tableName,
          +            ReadOnlyProps props, Long ts) throws SnapshotCreationException, IllegalArgumentException, IOException,
          +            InterruptedException, SQLException {
          +        String destTablename = SchemaUtil
          +                .normalizeIdentifier(SchemaUtil.getPhysicalTableName(tableName, props).getNameAsString());
          +        mapTableToNamespace(admin, metatable, tableName, destTablename, props, ts, null, PTableType.TABLE);
          +    }
          +
          +    public static void upgradeTable(PhoenixConnection conn, String srcTable) throws SQLException,
          +            SnapshotCreationException, IllegalArgumentException, IOException, InterruptedException {
          +        ReadOnlyProps readOnlyProps = conn.getQueryServices().getProps();
          +        if (conn.getClientInfo(PhoenixRuntime.TENANT_ID_ATTRIB) != null) { throw new SQLException(
          +                "May not specify the TENANT_ID_ATTRIB property when upgrading"); }
          +        try (HBaseAdmin admin = conn.getQueryServices().getAdmin();
          +                HTableInterface metatable = conn.getQueryServices()
          +                        .getTable(SchemaUtil
          +                                .getPhysicalName(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES, readOnlyProps)
          +                                .getName())) {
          +            String tableName = SchemaUtil.normalizeIdentifier(srcTable);
          +            String schemaName = SchemaUtil.getSchemaNameFromFullName(tableName);
          +
          +            // Upgrade is not required if schemaName is not present.
          +            if (schemaName.equals("")) { throw new IllegalArgumentException("Table doesn't have schema name"); }
          +
          +            // Confirm table is not already upgraded
          +            PTable table = PhoenixRuntime.getTable(conn, tableName);
          +            if (table.isNamespaceMapped()) { throw new IllegalArgumentException("Table is already upgraded"); }
          +            conn.createStatement().execute("CREATE SCHEMA IF NOT EXISTS " + schemaName);
          +            String newPhysicalTablename = SchemaUtil.normalizeIdentifier(
          +                    SchemaUtil.getPhysicalTableName(table.getPhysicalName().getString(), readOnlyProps)
          +                            .getNameAsString());
          +
          +            // Upgrade the data or main table
          +            UpgradeUtil.mapTableToNamespace(admin, metatable, tableName, newPhysicalTablename, readOnlyProps,
          — End diff –

          minor nit: no need to use UpgradeUtil. here

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58594903

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java —
          @@ -1279,4 +1304,129 @@ public static boolean truncateStats(HTableInterface metaTable, HTableInterface s
          }
          return false;
          }
          +
          + public static void mapTableToNamespace(HBaseAdmin admin, HTableInterface metatable, String srcTableName,
          — End diff –

          Is it possible to add any tests around the upgrade code here?

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58597202

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java —
          @@ -1279,4 +1304,129 @@ public static boolean truncateStats(HTableInterface metaTable, HTableInterface s
          }
          return false;
          }
          +
          + public static void mapTableToNamespace(HBaseAdmin admin, HTableInterface metatable, String srcTableName,
          + String destTableName, ReadOnlyProps props, Long ts, String phoenixTableName, PTableType pTableType)
          + throws SnapshotCreationException, IllegalArgumentException, IOException, InterruptedException,
          + SQLException {
          + srcTableName = SchemaUtil.normalizeIdentifier(srcTableName);
          + if (!SchemaUtil.isNamespaceMappingEnabled(
          + SchemaUtil.isSystemTable(srcTableName.getBytes()) ? PTableType.SYSTEM : null,
          +                props)) { throw new IllegalArgumentException(SchemaUtil.isSystemTable(srcTableName.getBytes())
          +                        ? "For system table " + QueryServices.IS_SYSTEM_TABLE_MAPPED_TO_NAMESPACE
          +                                + " also needs to be enabled along with " + QueryServices.IS_NAMESPACE_MAPPING_ENABLED
          +                        : QueryServices.IS_NAMESPACE_MAPPING_ENABLED + " is not enabled"); }
          +
          + if (PTableType.TABLE.equals(pTableType) || PTableType.INDEX.equals(pTableType)) {
          + admin.snapshot(srcTableName, srcTableName);
          + admin.cloneSnapshot(srcTableName.getBytes(), destTableName.getBytes());
          + admin.disableTable(srcTableName);
          — End diff –

          Does disabling and deleting the srcTable here also delete the snapshot you created above. If not, would it make sense to delete the snapshot? This is also making me think of failure scenarios. What if any of the steps here fails? Do we restore the table using the snapshot?

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58687494

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/util/SchemaUtil.java —
          @@ -897,4 +941,86 @@ public static boolean hasRowTimestampColumn(PTable table)
                   PName schemaName = dataTable.getSchemaName();
                   return getTableKey(tenantId == null ? ByteUtil.EMPTY_BYTE_ARRAY : tenantId.getBytes(),
                           schemaName == null ? ByteUtil.EMPTY_BYTE_ARRAY : schemaName.getBytes(),
                           dataTable.getTableName().getBytes());
               }
          +
          +    public static byte[] getSchemaKey(String schemaName) {
          +        return SchemaUtil.getTableKey(null, schemaName, MetaDataClient.EMPTY_TABLE);
          +    }
          +
          +    public static PName getPhysicalHBaseTableName(PName pName, boolean isNamespaceMapped, PTableType type) {
          +        return getPhysicalHBaseTableName(pName.toString(), isNamespaceMapped, type);
          +    }
          +
          +    public static PName getPhysicalHBaseTableName(byte[] tableName, boolean isNamespaceMapped, PTableType type) {
          +        return getPhysicalHBaseTableName(Bytes.toString(tableName), isNamespaceMapped, type);
          +    }
          +
          +    public static TableName getPhysicalTableName(String fullTableName, ReadOnlyProps readOnlyProps) {
          +        return getPhysicalName(Bytes.toBytes(fullTableName), readOnlyProps);
          +    }
          +
          +    public static TableName getPhysicalTableName(byte[] fullTableName, Configuration conf) {
          +        return getPhysicalTableName(fullTableName, isNamespaceMappingEnabled(
          +                isSystemTable(fullTableName) ? PTableType.SYSTEM : null, new ReadOnlyProps(conf.iterator())));
          +    }
          +
          +    public static TableName getPhysicalName(byte[] fullTableName, ReadOnlyProps readOnlyProps) {
          +        return getPhysicalTableName(fullTableName,
          +                isNamespaceMappingEnabled(isSystemTable(fullTableName) ? PTableType.SYSTEM : null, readOnlyProps));
          +    }
          +
          +    public static TableName getPhysicalTableName(byte[] fullTableName, boolean isNamespaceMappingEnabled) {
          +        if (!isNamespaceMappingEnabled) { return TableName.valueOf(fullTableName); }
          +        String tableName = getTableNameFromFullName(fullTableName);
          +        String schemaName = getSchemaNameFromFullName(fullTableName);
          +        return TableName.valueOf(schemaName, tableName);
          +    }
          +
          +    public static String getSchemaNameFromHBaseFullName(byte[] tableName, ReadOnlyProps props) {
          +        if (tableName == null) { return null; }
          +        int index = isNamespaceMappingEnabled(null, props) ? indexOf(tableName, QueryConstants.NAMESPACE_SEPARATOR_BYTE)
          +                : indexOf(tableName, QueryConstants.NAME_SEPARATOR_BYTE);
          +        if (index < 0) { return StringUtil.EMPTY_STRING; }
          +        return Bytes.toString(tableName, 0, index);
          +    }
          +
          +    public static PName getPhysicalHBaseTableName(String tableName, boolean isNamespaceMapped, PTableType type) {
          +        if (!isNamespaceMapped) { return PNameFactory.newName(tableName); }
          +        return PNameFactory
          +                .newName(tableName.replace(QueryConstants.NAME_SEPARATOR, QueryConstants.NAMESPACE_SEPARATOR));
          +    }
          +
          +    public static boolean isSchemaCheckRequired(PTableType tableType, ReadOnlyProps props) {
          +        if (PTableType.TABLE.equals(tableType) && isNamespaceMappingEnabled(tableType, props)) { return true; }
          +        return false;
          +    }
          +
          +    public static boolean isNamespaceMappingEnabled(PTableType type, ReadOnlyProps readOnlyProps) {
          +        return readOnlyProps.getBoolean(QueryServices.IS_NAMESPACE_MAPPING_ENABLED,
          +                QueryServicesOptions.DEFAULT_IS_NAMESPACE_MAPPING_ENABLED)
          +                && (type == null || !PTableType.SYSTEM.equals(type)
          — End diff –

          Yes, null is allowed for type here.
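
          Editor's note: the separator swap that the quoted SchemaUtil diff performs can be illustrated with a self-contained sketch. This is hypothetical illustration code, not Phoenix's API — the class and method names (NamespaceMappingSketch, physicalHBaseTableName) are invented, and it mirrors only the two branches of getPhysicalHBaseTableName: identity when namespace mapping is disabled, and "." replaced by ":" when it is enabled, so SCHEMA.TABLE maps to the HBase table TABLE in namespace SCHEMA.

          ```java
          // Minimal sketch of Phoenix-schema-to-HBase-namespace name translation.
          // Illustrative only; real Phoenix uses SchemaUtil/QueryConstants.
          public class NamespaceMappingSketch {
              static final String NAME_SEPARATOR = ".";       // Phoenix schema.table separator
              static final String NAMESPACE_SEPARATOR = ":";  // HBase namespace:table separator

              // When mapping is off, the full Phoenix name is the physical name as-is;
              // when on, the schema becomes an HBase namespace.
              static String physicalHBaseTableName(String fullTableName, boolean isNamespaceMapped) {
                  if (!isNamespaceMapped) {
                      return fullTableName;
                  }
                  return fullTableName.replace(NAME_SEPARATOR, NAMESPACE_SEPARATOR);
              }

              public static void main(String[] args) {
                  // prints MY_SCHEMA:MY_TABLE
                  System.out.println(physicalHBaseTableName("MY_SCHEMA.MY_TABLE", true));
                  // prints MY_SCHEMA.MY_TABLE
                  System.out.println(physicalHBaseTableName("MY_SCHEMA.MY_TABLE", false));
              }
          }
          ```

          A table with no schema (no ".") is unchanged in both branches, which matches the quoted code's behavior for default-schema tables.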

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58692384

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java —
          @@ -1279,4 +1304,129 @@ public static boolean truncateStats(HTableInterface metaTable, HTableInterface s
          }
          return false;
          }
          +
          + public static void mapTableToNamespace(HBaseAdmin admin, HTableInterface metatable, String srcTableName,
          + String destTableName, ReadOnlyProps props, Long ts, String phoenixTableName, PTableType pTableType)
          + throws SnapshotCreationException, IllegalArgumentException, IOException, InterruptedException,
          + SQLException {
          + srcTableName = SchemaUtil.normalizeIdentifier(srcTableName);
          +        if (!SchemaUtil.isNamespaceMappingEnabled(
          +                SchemaUtil.isSystemTable(srcTableName.getBytes()) ? PTableType.SYSTEM : null,
          +                props)) { throw new IllegalArgumentException(SchemaUtil.isSystemTable(srcTableName.getBytes())
          +                        ? "For system table " + QueryServices.IS_SYSTEM_TABLE_MAPPED_TO_NAMESPACE
          +                                + " also needs to be enabled along with " + QueryServices.IS_NAMESPACE_MAPPING_ENABLED
          +                        : QueryServices.IS_NAMESPACE_MAPPING_ENABLED + " is not enabled"); }
          +
          + if (PTableType.TABLE.equals(pTableType) || PTableType.INDEX.equals(pTableType)) {
          + admin.snapshot(srcTableName, srcTableName);
          + admin.cloneSnapshot(srcTableName.getBytes(), destTableName.getBytes());
          + admin.disableTable(srcTableName);
          — End diff –

          Deleting srcTable will not delete the snapshot.
          And yes, I thought we should not delete the snapshot so it remains available for a restore.
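
          Editor's note: the migration sequence debated above (snapshot, clone, disable, delete — with the snapshot deliberately retained for restore) can be sketched with an in-memory stand-in. This is hypothetical code: MigrationOrderSketch and its methods are invented names that only record the order of admin calls, they do not touch HBase.

          ```java
          import java.util.ArrayList;
          import java.util.List;

          // Records the admin-call ordering used by the quoted mapTableToNamespace diff.
          public class MigrationOrderSketch {
              final List<String> calls = new ArrayList<>();

              void snapshot(String name)             { calls.add("snapshot:" + name); }
              void cloneSnapshot(String s, String d) { calls.add("clone:" + s + "->" + d); }
              void disableTable(String name)         { calls.add("disable:" + name); }
              void deleteTable(String name)          { calls.add("delete:" + name); }

              // Same order as the diff: the source table is dropped only after the
              // clone exists, and no deleteSnapshot call is issued, so the snapshot
              // survives as a restore point if a later step fails.
              List<String> migrate(String src, String dest) {
                  snapshot(src);
                  cloneSnapshot(src, dest);
                  disableTable(src);
                  deleteTable(src);
                  return calls;
              }
          }
          ```

          Running migrate("S.T", "S:T") yields exactly four calls, beginning with the snapshot and ending with the delete of the source table — which is why, as noted above, the snapshot outlives the deleted table.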

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58700415

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java —
          @@ -1279,4 +1304,129 @@ public static boolean truncateStats(HTableInterface metaTable, HTableInterface s
          }
          return false;
          }
          +
          + public static void mapTableToNamespace(HBaseAdmin admin, HTableInterface metatable, String srcTableName,
          + String destTableName, ReadOnlyProps props, Long ts, String phoenixTableName, PTableType pTableType)
          + throws SnapshotCreationException, IllegalArgumentException, IOException, InterruptedException,
          + SQLException {
          + srcTableName = SchemaUtil.normalizeIdentifier(srcTableName);
          +        if (!SchemaUtil.isNamespaceMappingEnabled(
          +                SchemaUtil.isSystemTable(srcTableName.getBytes()) ? PTableType.SYSTEM : null,
          +                props)) { throw new IllegalArgumentException(SchemaUtil.isSystemTable(srcTableName.getBytes())
          +                        ? "For system table " + QueryServices.IS_SYSTEM_TABLE_MAPPED_TO_NAMESPACE
          +                                + " also needs to be enabled along with " + QueryServices.IS_NAMESPACE_MAPPING_ENABLED
          +                        : QueryServices.IS_NAMESPACE_MAPPING_ENABLED + " is not enabled"); }
          +
          +        if (PTableType.TABLE.equals(pTableType) || PTableType.INDEX.equals(pTableType)) {
          +            admin.snapshot(srcTableName, srcTableName);
          +            admin.cloneSnapshot(srcTableName.getBytes(), destTableName.getBytes());
          +            admin.disableTable(srcTableName);
          +            admin.deleteTable(srcTableName);
          +        }
          +        if (phoenixTableName == null) {
          — End diff –

          phoenixTableName and the physical table name differ in the case of indexes, so it is required there. But in my latest commit I have removed the null check and always pass phoenixTableName.

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58701167

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java —
          @@ -1279,4 +1304,129 @@ public static boolean truncateStats(HTableInterface metaTable, HTableInterface s
          }
          return false;
          }
          +
          + public static void mapTableToNamespace(HBaseAdmin admin, HTableInterface metatable, String srcTableName,
          + String destTableName, ReadOnlyProps props, Long ts, String phoenixTableName, PTableType pTableType)
          + throws SnapshotCreationException, IllegalArgumentException, IOException, InterruptedException,
          + SQLException {
          + srcTableName = SchemaUtil.normalizeIdentifier(srcTableName);
          +        if (!SchemaUtil.isNamespaceMappingEnabled(
          +                SchemaUtil.isSystemTable(srcTableName.getBytes()) ? PTableType.SYSTEM : null,
          +                props)) { throw new IllegalArgumentException(SchemaUtil.isSystemTable(srcTableName.getBytes())
          +                        ? "For system table " + QueryServices.IS_SYSTEM_TABLE_MAPPED_TO_NAMESPACE
          +                                + " also needs to be enabled along with " + QueryServices.IS_NAMESPACE_MAPPING_ENABLED
          +                        : QueryServices.IS_NAMESPACE_MAPPING_ENABLED + " is not enabled"); }
          +
          +        if (PTableType.TABLE.equals(pTableType) || PTableType.INDEX.equals(pTableType)) {
          +            admin.snapshot(srcTableName, srcTableName);
          +            admin.cloneSnapshot(srcTableName.getBytes(), destTableName.getBytes());
          +            admin.disableTable(srcTableName);
          +            admin.deleteTable(srcTableName);
          +        }
          +        if (phoenixTableName == null) {
          +            phoenixTableName = srcTableName;
          +        }
          +        Put put = new Put(SchemaUtil.getTableKey(null, SchemaUtil.getSchemaNameFromFullName(phoenixTableName),
          +                SchemaUtil.getTableNameFromFullName(phoenixTableName)), ts);
          +        put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, PhoenixDatabaseMetaData.IS_NAMESPACE_MAPPED_BYTES,
          +                PBoolean.INSTANCE.toBytes(Boolean.TRUE));
          +        metatable.put(put);
          +    }
          +
          +    public static void mapTableToNamespace(HBaseAdmin admin, HTableInterface metatable, String tableName,
          +            ReadOnlyProps props, Long ts) throws SnapshotCreationException, IllegalArgumentException, IOException,
          +            InterruptedException, SQLException {
          +        String destTablename = SchemaUtil
          +                .normalizeIdentifier(SchemaUtil.getPhysicalTableName(tableName, props).getNameAsString());
          +        mapTableToNamespace(admin, metatable, tableName, destTablename, props, ts, null, PTableType.TABLE);
          +    }
          +
          +    public static void upgradeTable(PhoenixConnection conn, String srcTable) throws SQLException,
          +            SnapshotCreationException, IllegalArgumentException, IOException, InterruptedException {
          +        ReadOnlyProps readOnlyProps = conn.getQueryServices().getProps();
          +        if (conn.getClientInfo(PhoenixRuntime.TENANT_ID_ATTRIB) != null) { throw new SQLException(
          — End diff –

Yes. Now I'm throwing an exception if the schema is set in the connection.

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58703359

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java —
          @@ -1279,4 +1304,129 @@ public static boolean truncateStats(HTableInterface metaTable, HTableInterface s
          }
          return false;
          }
          +
          + public static void mapTableToNamespace(HBaseAdmin admin, HTableInterface metatable, String srcTableName,
          — End diff –

Yeah, I'll add a test for the upgrade code.

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58715199

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java —
          @@ -265,12 +332,18 @@ public SingleTableColumnResolver(PhoenixConnection connection, NamedTableNode ta
          if (def.getColumnDefName().getFamilyName() != null)

            { families.add(new PColumnFamilyImpl(PNameFactory.newName(def.getColumnDefName().getFamilyName()),Collections.<PColumn>emptyList())); }
-        }
-        Long scn = connection.getSCN();
-        PTable theTable = new PTableImpl(connection.getTenantId(), table.getName().getSchemaName(), table.getName().getTableName(), scn == null ? HConstants.LATEST_TIMESTAMP : scn, families);
+        }
+        Long scn = connection.getSCN();
+        String schema = table.getName().getSchemaName();
+        if (connection.getSchema() != null) {
— End diff –

          Actually, when the schema for the table (T) is not present, we try to resolve the table with the connection schema (<connection_schema>.T) if one is set on the connection.
          I have added a test case, UseSchemaIT#testMappedView; is that what you are expecting?
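          The resolution rule described here can be sketched roughly as follows. This is a hypothetical helper for illustration, not the actual FromCompiler code; the method and parameter names are assumptions:

```java
// Hypothetical sketch of the schema resolution rule: an unqualified table
// name T is resolved against the connection's schema when one is set.
public class ResolveSketch {
    // Returns the fully qualified name used to look the table up.
    static String resolve(String connectionSchema, String tableSchema, String tableName) {
        // An explicit schema on the table always wins; otherwise fall back
        // to the schema set on the connection, if any.
        String schema = (tableSchema != null) ? tableSchema : connectionSchema;
        return (schema == null) ? tableName : schema + "." + tableName;
    }

    public static void main(String[] args) {
        System.out.println(resolve("MY_SCHEMA", null, "T")); // MY_SCHEMA.T
        System.out.println(resolve(null, null, "T"));        // T
    }
}
```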

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on the pull request:

          https://github.com/apache/phoenix/pull/153#issuecomment-206406674

          Thanks @samarthjain for the review; I have incorporated the review comments in the last commit.
          I'll write a test for the upgrade code soon, though I have tested it on a local cluster for tables with views and indexes.
          Please proceed with your further review.

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on the pull request:

          https://github.com/apache/phoenix/pull/153#issuecomment-207058780

          @samarthjain, any more review comments?

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58986201

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java —
          @@ -265,12 +332,18 @@ public SingleTableColumnResolver(PhoenixConnection connection, NamedTableNode ta
          if (def.getColumnDefName().getFamilyName() != null)

          { families.add(new PColumnFamilyImpl(PNameFactory.newName(def.getColumnDefName().getFamilyName()),Collections.<PColumn>emptyList())); }
-        }
-        Long scn = connection.getSCN();
-        PTable theTable = new PTableImpl(connection.getTenantId(), table.getName().getSchemaName(), table.getName().getTableName(), scn == null ? HConstants.LATEST_TIMESTAMP : scn, families);
+        }
+        Long scn = connection.getSCN();
+        String schema = table.getName().getSchemaName();
+        if (connection.getSchema() != null) {
— End diff –

          I meant if the connection has a schema present, and the table's schema is different from the schema property in the connection, IMHO we should probably throw an error like SchemaMismatchException. I could think of a few combinations that we should validate and test:

              is namespace mapped | connection schema
              --------------------+------------------
              yes                 | null
              yes                 | different
              yes                 | same
              no                  | null
              no                  | non-null

          You could probably add a method in SchemaUtil to do the above checks.
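          One possible shape for such a check is sketched below. The method name checkSchemaMatch and the exception handling are assumptions for illustration only, not the actual Phoenix API:

```java
// Hypothetical sketch of a SchemaUtil-style validation covering the
// combinations listed above. Not the actual Phoenix implementation.
public class SchemaMatchSketch {
    // Returns the effective schema, or throws when namespace mapping is
    // enabled and the table's schema differs from the connection's schema.
    static String checkSchemaMatch(boolean namespaceMapped, String connectionSchema, String tableSchema) {
        if (!namespaceMapped || connectionSchema == null) {
            return tableSchema; // nothing to reconcile
        }
        if (tableSchema == null) {
            return connectionSchema; // resolve T as <connection_schema>.T
        }
        if (!tableSchema.equals(connectionSchema)) {
            // stand-in for the proposed SchemaMismatchException
            throw new IllegalStateException(
                    "Schema mismatch: table=" + tableSchema + ", connection=" + connectionSchema);
        }
        return tableSchema; // same schema on both sides
    }
}
```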

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58987883

          — Diff: phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryDatabaseMetaDataIT.java —
          @@ -190,14 +190,8 @@ public void testSchemaMetadataScan() throws SQLException {

          rs = dbmd.getSchemas(null, null);
          assertTrue(rs.next());

-        assertEquals(rs.getString("TABLE_SCHEM"),null);
— End diff –

          Is there a reason why these lines are removed?

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58988872

          — Diff: phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/controller/MetadataRpcController.java —
          @@ -36,7 +37,16 @@
          .add(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME)
          .add(PhoenixDatabaseMetaData.SYSTEM_STATS_NAME)
          .add(PhoenixDatabaseMetaData.SYSTEM_SEQUENCE_NAME)

-                .add(PhoenixDatabaseMetaData.SYSTEM_FUNCTION_NAME).build();
+                .add(PhoenixDatabaseMetaData.SYSTEM_FUNCTION_NAME)
+                .add(SchemaUtil.getPhysicalTableName(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES, true)
— End diff –

          Is namespace mapping always enabled for system tables (which is what I am inferring from true being passed for isNamespaceMapped)? Looking at QueryServicesOptions.java, it looks like it isn't.

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58989392

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java —
          @@ -2387,8 +2452,27 @@ public Void call() throws Exception

          { logger.info("Update of SYSTEM.CATALOG complete"); clearCache(); }
-
+
+        if (currentServerSideTableTimeStamp < MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_8_0) {
+            // Add these columns one at a time, each with different timestamps so that if folks
— End diff –

          Please remove this comment as it doesn't make sense here.

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58989498

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/query/QueryServicesOptions.java —
          @@ -133,11 +135,13 @@
          // latency and client-side spooling/buffering. Smaller means less initial
          // latency and less parallelization.
          public static final long DEFAULT_SCAN_RESULT_CHUNK_SIZE = 2999;
          + public static final boolean DEFAULT_IS_NAMESPACE_MAPPING_ENABLED = false;
          + public static final boolean DEFAULT_IS_SYSTEM_TABLE_MAPPED_TO_NAMESPACE = false;

          //
          // Spillable GroupBy - SPGBY prefix
          //

-    // Enable / disable spillable group by
+    // Enable / disablfalsellable group by
— End diff –

          Please undo.

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58989689

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java —
          @@ -631,7 +664,7 @@ private boolean addIndexesFromPhysicalTable(MetaDataMutationResult result, Long
          if (view.getType() != PTableType.VIEW || view.getViewType() == ViewType.MAPPED)

          { return false; }
-        String physicalName = view.getPhysicalName().getString();
+        String physicalName = view.getPhysicalNames().get(0).toString();
— End diff –

          Please undo this change as we still don't allow creating views over multiple tables.

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58990540

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/PTableKey.java —
          @@ -26,7 +28,8 @@
          public PTableKey(PName tenantId, String name) {
          Preconditions.checkNotNull(name);
          this.tenantId = tenantId;

-        this.name = name;
+        this.name = !name.contains(QueryConstants.NAMESPACE_SEPARATOR) ? name
— End diff –

          So if the table name was A:B, the key would become A.B. Can you add some test cases around this to make sure that creating, querying, and upserting to a table named like this work fine?
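          The mapping in question can be sketched as follows. The separator constant is hardcoded here for illustration; the real code uses QueryConstants.NAMESPACE_SEPARATOR:

```java
// Sketch of the PTableKey name normalization being discussed: a physical
// name using the HBase namespace separator ':' maps to the Phoenix schema
// separator '.'.
public class PTableKeySketch {
    static final String NAMESPACE_SEPARATOR = ":";

    // If a physical name like A:B is passed as a key by mistake, resolve it
    // to the Phoenix cache key A.B; otherwise keep the name as-is.
    static String normalizeKey(String name) {
        return !name.contains(NAMESPACE_SEPARATOR)
                ? name
                : name.replace(NAMESPACE_SEPARATOR, ".");
    }
}
```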

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58990594

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/stats/StatisticsCollectorFactory.java —
          @@ -63,6 +64,10 @@ public static StatisticsCollector createStatisticsCollector(
          DISABLE_STATS.add(TableName.valueOf(PhoenixDatabaseMetaData.SYSTEM_FUNCTION_NAME));
          DISABLE_STATS.add(TableName.valueOf(PhoenixDatabaseMetaData.SYSTEM_SEQUENCE_NAME));
          DISABLE_STATS.add(TableName.valueOf(PhoenixDatabaseMetaData.SYSTEM_STATS_NAME));
          + DISABLE_STATS.add(SchemaUtil.getPhysicalTableName(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES,true));
          — End diff –

          Same as in the other place. Is namespace mapping always true?

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58990747

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java —
          @@ -632,6 +645,16 @@ public static ExecutionCommand parseArgs(String[] args)

          { return execCmd; }

          + private static String validateTableName(String tableName) {
          — End diff –

          Looking at this method it seems like table names can't have : in them. Does the change in PTableKey.java still make sense then?

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58990926

          — Diff: phoenix-core/src/it/java/org/apache/phoenix/tx/TxCheckpointIT.java —
          @@ -96,6 +97,7 @@ public void testUpsertSelectDoesntSeeUpsertedData() throws Exception {
          Connection conn = DriverManager.getConnection(getUrl(), props);
          conn.setAutoCommit(true);
          conn.createStatement().execute("CREATE SEQUENCE "+seqName);
          + BaseTest.createSchema(getUrl(), fullTableName, null);
          — End diff –

          Is this change needed?

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58991938

          — Diff: phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryDatabaseMetaDataIT.java —
          @@ -190,14 +190,8 @@ public void testSchemaMetadataScan() throws SQLException {

          rs = dbmd.getSchemas(null, null);
          assertTrue(rs.next());

-        assertEquals(rs.getString("TABLE_SCHEM"),null);
— End diff –

          As of now, we will show only those schemas which are created with the "create schema" command and stored with the table name as an empty string, so null is not expected.

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58992390

          — Diff: phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/controller/MetadataRpcController.java —
          @@ -36,7 +37,16 @@
          .add(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME)
          .add(PhoenixDatabaseMetaData.SYSTEM_STATS_NAME)
          .add(PhoenixDatabaseMetaData.SYSTEM_SEQUENCE_NAME)

-                .add(PhoenixDatabaseMetaData.SYSTEM_FUNCTION_NAME).build();
+                .add(PhoenixDatabaseMetaData.SYSTEM_FUNCTION_NAME)
+                .add(SchemaUtil.getPhysicalTableName(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES, true)
— End diff –

          Yeah, namespace mapping for system tables will not always be true, as it is controlled with phoenix.connection.mapSystemTablesToNamespace,
          but this is just the list of system tables, so it includes both SYSTEM.CATALOG and SYSTEM:CATALOG to set the priority for RPCs accordingly.
          Do you think it could have any impact?

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58993453

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java —
          @@ -631,7 +664,7 @@ private boolean addIndexesFromPhysicalTable(MetaDataMutationResult result, Long
          if (view.getType() != PTableType.VIEW || view.getViewType() == ViewType.MAPPED)

          { return false; }
-        String physicalName = view.getPhysicalName().getString();
+        String physicalName = view.getPhysicalNames().get(0).toString();
— End diff –

          Yes, we don't support views over multiple tables; that's why I used get(0) only. But I have undone it by modifying the other APIs, like SchemaUtil.getTableNameFromFullName(), to resolve the schema and table name correctly even when the namespace separator is present.

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58993926

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/PTableKey.java —
          @@ -26,7 +28,8 @@
          public PTableKey(PName tenantId, String name) {
          Preconditions.checkNotNull(name);
          this.tenantId = tenantId;

-        this.name = name;
+        this.name = !name.contains(QueryConstants.NAMESPACE_SEPARATOR) ? name
— End diff –

          We are still maintaining the Phoenix table name (A.B) in the cache, so if the physical table name is passed as a key by mistake, it should still resolve correctly. I think I kept this as a precaution only. LocalIndexIT/ViewIndexIT cover it, right?

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58994035

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/stats/StatisticsCollectorFactory.java —
          @@ -63,6 +64,10 @@ public static StatisticsCollector createStatisticsCollector(
          DISABLE_STATS.add(TableName.valueOf(PhoenixDatabaseMetaData.SYSTEM_FUNCTION_NAME));
          DISABLE_STATS.add(TableName.valueOf(PhoenixDatabaseMetaData.SYSTEM_SEQUENCE_NAME));
          DISABLE_STATS.add(TableName.valueOf(PhoenixDatabaseMetaData.SYSTEM_STATS_NAME));
          + DISABLE_STATS.add(SchemaUtil.getPhysicalTableName(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES,true));
          — End diff –

          Same as above; it is just to avoid collecting stats for system tables, so the list contains both names.

          Hide
          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58994197

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java —
          @@ -632,6 +645,16 @@ public static ExecutionCommand parseArgs(String[] args)

          { return execCmd; }

          + private static String validateTableName(String tableName) {
          — End diff –

          No, this is to validate during upgrade only, in case the user passes a table name containing ":" by mistake, thinking it is an HBase table name.

          Hide
          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r58994358

          — Diff: phoenix-core/src/it/java/org/apache/phoenix/tx/TxCheckpointIT.java —
          @@ -96,6 +97,7 @@ public void testUpsertSelectDoesntSeeUpsertedData() throws Exception {
          Connection conn = DriverManager.getConnection(getUrl(), props);
          conn.setAutoCommit(true);
          conn.createStatement().execute("CREATE SEQUENCE "+seqName);
          + BaseTest.createSchema(getUrl(), fullTableName, null);
          — End diff –

          No, this is not needed; I have removed it.

          Hide
          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r59075136

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java —
          @@ -265,12 +332,18 @@ public SingleTableColumnResolver(PhoenixConnection connection, NamedTableNode ta
          if (def.getColumnDefName().getFamilyName() != null)

          { families.add(new PColumnFamilyImpl(PNameFactory.newName(def.getColumnDefName().getFamilyName()),Collections.<PColumn>emptyList())); }
          • }
          • Long scn = connection.getSCN();
          • PTable theTable = new PTableImpl(connection.getTenantId(), table.getName().getSchemaName(), table.getName().getTableName(), scn == null ? HConstants.LATEST_TIMESTAMP : scn, families);
            + }
            + Long scn = connection.getSCN();
            + String schema = table.getName().getSchemaName();
            + if (connection.getSchema() != null) {
              • End diff –

          The code in discussion is used during *creation* of *mapped views* only; it is never used anywhere else.
          So when the user provides a table name without a schema, the connection schema should be used if set; when the user provides a table name with a schema, the connection schema should be ignored. I think this is how most databases work.

          For example, with the connection schema set to 'S' (`phoenix> USE 'S'`), if the user creates a mapped view:
          `create view A.T(pk ...)` // this will map to the A.T table only
          `create view T(pk ...)` // this will map to the S.T table

          And to map a table to the default schema (the user can unset the connection schema back to null by using `USE DEFAULT`):
          `create view T(pk ...)` // this will map to the T table

          Maybe I'm missing something; would you mind giving some examples in terms of SQL statements?
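          The resolution rule described above can be sketched as follows. This is a minimal, hypothetical illustration of the rule only, not the actual FromCompiler code; `resolveSchema` is an illustrative name, not a Phoenix API.

```java
public class SchemaResolution {
    // An explicit schema in the statement always wins; otherwise fall back to
    // the connection schema set via USE <SCHEMA>, which may be null (default).
    static String resolveSchema(String explicitSchema, String connectionSchema) {
        if (explicitSchema != null) {
            return explicitSchema;
        }
        return connectionSchema;
    }

    public static void main(String[] args) {
        // Connection schema set to "S" via: phoenix> USE 'S'
        System.out.println(resolveSchema("A", "S"));   // CREATE VIEW A.T -> maps to A.T
        System.out.println(resolveSchema(null, "S"));  // CREATE VIEW T   -> maps to S.T
        // After USE DEFAULT the connection schema is null again
        System.out.println(resolveSchema(null, null)); // CREATE VIEW T   -> maps to T
    }
}
```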

          Hide
          githubbot ASF GitHub Bot added a comment -

          Github user JamesRTaylor commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r59080730

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java —
          @@ -265,12 +332,18 @@ public SingleTableColumnResolver(PhoenixConnection connection, NamedTableNode ta
          if (def.getColumnDefName().getFamilyName() != null)

          { families.add(new PColumnFamilyImpl(PNameFactory.newName(def.getColumnDefName().getFamilyName()),Collections.<PColumn>emptyList())); }
          • }
          • Long scn = connection.getSCN();
          • PTable theTable = new PTableImpl(connection.getTenantId(), table.getName().getSchemaName(), table.getName().getTableName(), scn == null ? HConstants.LATEST_TIMESTAMP : scn, families);
            + }
            + Long scn = connection.getSCN();
            + String schema = table.getName().getSchemaName();
            + if (connection.getSchema() != null) {
              • End diff –

          Yes, this sounds correct, @ankitsinghal. Thanks for clarifying.

          Hide
          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r59082461

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java —
          @@ -265,12 +332,18 @@ public SingleTableColumnResolver(PhoenixConnection connection, NamedTableNode ta
          if (def.getColumnDefName().getFamilyName() != null)

          { families.add(new PColumnFamilyImpl(PNameFactory.newName(def.getColumnDefName().getFamilyName()),Collections.<PColumn>emptyList())); }
          • }
          • Long scn = connection.getSCN();
          • PTable theTable = new PTableImpl(connection.getTenantId(), table.getName().getSchemaName(), table.getName().getTableName(), scn == null ? HConstants.LATEST_TIMESTAMP : scn, families);
            + }
            + Long scn = connection.getSCN();
            + String schema = table.getName().getSchemaName();
            + if (connection.getSchema() != null) {
              • End diff –

          Makes sense. Thanks for the clarification @ankitsinghal

          Hide
          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on the pull request:

          https://github.com/apache/phoenix/pull/153#issuecomment-208446625

          @samarthjain, I have added an upgrade util test case too. Please look at the replies to your review comments and let me know if they are fine.

          Hide
          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on the pull request:

          https://github.com/apache/phoenix/pull/153#issuecomment-208573524

          @ankitsinghal - thanks for adding the upgrade util test. I have a question/suggestion: we should probably disallow users from executing CREATE and USE SCHEMA statements if the namespace feature isn't enabled. I see that you are currently failing the upgrade if the namespace feature isn't enabled, which is good. To be consistent we should do the same for CREATE and USE SCHEMA statements. Another thing to think about is whether we should allow setting the schema property on a connection if the namespace feature is disabled. I lean towards not allowing it.

          Hide
          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r59307651

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java —
          @@ -1279,4 +1304,129 @@ public static boolean truncateStats(HTableInterface metaTable, HTableInterface s
          }
          return false;
          }
          +
          + public static void mapTableToNamespace(HBaseAdmin admin, HTableInterface metatable, String srcTableName,
          + String destTableName, ReadOnlyProps props, Long ts, String phoenixTableName, PTableType pTableType)
          + throws SnapshotCreationException, IllegalArgumentException, IOException, InterruptedException,
          + SQLException {
          + srcTableName = SchemaUtil.normalizeIdentifier(srcTableName);
          + if (!SchemaUtil.isNamespaceMappingEnabled(
          + SchemaUtil.isSystemTable(srcTableName.getBytes()) ? PTableType.SYSTEM : null,
          + props))

          { throw new IllegalArgumentException(SchemaUtil.isSystemTable(srcTableName.getBytes()) + ? "For system table " + QueryServices.IS_SYSTEM_TABLE_MAPPED_TO_NAMESPACE + + " also needs to be enabled along with " + QueryServices.IS_NAMESPACE_MAPPING_ENABLED + : QueryServices.IS_NAMESPACE_MAPPING_ENABLED + " is not enabled"); }

          +
          + if (PTableType.TABLE.equals(pTableType) || PTableType.INDEX.equals(pTableType)) {
          + admin.snapshot(srcTableName, srcTableName);
          + admin.cloneSnapshot(srcTableName.getBytes(), destTableName.getBytes());
          + admin.disableTable(srcTableName);
          — End diff –

          Sounds like we would be leaving snapshots dangling around. We should remove snapshots once we have successfully upgraded the table. If users want to be cautious, they can always snapshot their tables first before running this upgrade. In case of failure, we should try restoring the tables using the snapshot that Phoenix has taken, and attempt to delete the snapshot after the attempt. We should try not to leave any residual state from the upgrade, IMHO.

          Hide
          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r59307776

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/PTableKey.java —
          @@ -26,7 +28,8 @@
          public PTableKey(PName tenantId, String name) {
          Preconditions.checkNotNull(name);
          this.tenantId = tenantId;

          • this.name = name;
            + this.name = !name.contains(QueryConstants.NAMESPACE_SEPARATOR) ? name
              • End diff –

          I think we could potentially mask some bug or have it fail later than it should if we do this. We should remove this check altogether.

          Hide
          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r59386509

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/PTableKey.java —
          @@ -26,7 +28,8 @@
          public PTableKey(PName tenantId, String name) {
          Preconditions.checkNotNull(name);
          this.tenantId = tenantId;

          • this.name = name;
            + this.name = !name.contains(QueryConstants.NAMESPACE_SEPARATOR) ? name
              • End diff –

          Agreed; removed in the latest commit.

          Hide
          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r59387189

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java —
          @@ -1279,4 +1304,129 @@ public static boolean truncateStats(HTableInterface metaTable, HTableInterface s
          }
          return false;
          }
          +
          + public static void mapTableToNamespace(HBaseAdmin admin, HTableInterface metatable, String srcTableName,
          + String destTableName, ReadOnlyProps props, Long ts, String phoenixTableName, PTableType pTableType)
          + throws SnapshotCreationException, IllegalArgumentException, IOException, InterruptedException,
          + SQLException {
          + srcTableName = SchemaUtil.normalizeIdentifier(srcTableName);
          + if (!SchemaUtil.isNamespaceMappingEnabled(
          + SchemaUtil.isSystemTable(srcTableName.getBytes()) ? PTableType.SYSTEM : null,
          + props))

          { throw new IllegalArgumentException(SchemaUtil.isSystemTable(srcTableName.getBytes()) + ? "For system table " + QueryServices.IS_SYSTEM_TABLE_MAPPED_TO_NAMESPACE + + " also needs to be enabled along with " + QueryServices.IS_NAMESPACE_MAPPING_ENABLED + : QueryServices.IS_NAMESPACE_MAPPING_ENABLED + " is not enabled"); }

          +
          + if (PTableType.TABLE.equals(pTableType) || PTableType.INDEX.equals(pTableType)) {
          + admin.snapshot(srcTableName, srcTableName);
          + admin.cloneSnapshot(srcTableName.getBytes(), destTableName.getBytes());
          + admin.disableTable(srcTableName);
          — End diff –

          OK, I have added code to delete the snapshot.

          Retrying from the Phoenix snapshot would be risky: say the upgrade fails and the user keeps using the un-mapped table for a while, and then goes for the upgrade again; the snapshot taken during the last upgrade will be obsolete.
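          The snapshot lifecycle being agreed on here can be sketched generically. This is a hypothetical skeleton with a stand-in `Admin` interface, not the real HBaseAdmin API or the actual UpgradeUtil code: snapshot the source table, clone it to the namespace-mapped name, and delete the snapshot whether or not the clone succeeds, so no residual state is left behind.

```java
import java.util.ArrayList;
import java.util.List;

public class SnapshotMigration {
    // Stand-in for the admin operations used during the upgrade (not HBaseAdmin).
    interface Admin {
        void snapshot(String snapshotName, String table);
        void cloneSnapshot(String snapshotName, String destTable);
        void deleteSnapshot(String snapshotName);
    }

    // Snapshot, clone to the namespace-mapped name, and always clean up the
    // snapshot afterwards, even when the clone fails.
    static void migrate(Admin admin, String srcTable, String destTable) {
        String snapshotName = srcTable; // snapshot named after the source table
        admin.snapshot(snapshotName, srcTable);
        try {
            admin.cloneSnapshot(snapshotName, destTable);
        } finally {
            admin.deleteSnapshot(snapshotName);
        }
    }

    public static void main(String[] args) {
        // Recording fake admin to show the order of operations.
        List<String> calls = new ArrayList<>();
        Admin recording = new Admin() {
            public void snapshot(String s, String t) { calls.add("snapshot:" + s); }
            public void cloneSnapshot(String s, String d) { calls.add("clone:" + d); }
            public void deleteSnapshot(String s) { calls.add("delete:" + s); }
        };
        migrate(recording, "MY_TABLE", "MY_SCHEMA:MY_TABLE");
        System.out.println(calls); // snapshot, clone, then delete
    }
}
```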

          Hide
          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on the pull request:

          https://github.com/apache/phoenix/pull/153#issuecomment-208944403

          @samarthjain , thanks for the review.
          I'm unsure about disabling schema constructs if isNamespaceMapping is not enabled in config. As this property is client side property and even if it is unset , user can still access the table mapped to namespace created by another client which has this property set. This property just ensure schema mapping to namespace during creation of table.

          I think these constructs still make sense independently. but @JamesRTaylor / @samarthjain , if you guys think of disabling it , I'm ok with that too.

          Hide
          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on the pull request:

          https://github.com/apache/phoenix/pull/153#issuecomment-209161997

          Thanks for the updated request, @ankitsinghal. Almost there! Can you tell me more about what would happen in the following scenario if the namespace feature is disabled in the config:

          CREATE SCHEMA S;
          USE SCHEMA S;
          CREATE TABLE T;

          Will the table T be created in the namespace S? If the feature is disabled, it shouldn't be. Would you mind adding a test for this if it hasn't been already?

          Hide
          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on the pull request:

          https://github.com/apache/phoenix/pull/153#issuecomment-209165158

          Also, what about doing a CREATE TABLE SCHEMA.TABLENAME with the feature on? Does it end up creating the namespace named SCHEMA first and then mapping the table to that namespace? What about if the feature is off? It would be nice to have tests around this; or, if you already have them, can you point out which ones cover these cases? Thanks!

          Hide
          githubbot ASF GitHub Bot added a comment -

          Github user JamesRTaylor commented on the pull request:

          https://github.com/apache/phoenix/pull/153#issuecomment-209173008

          There are likely lots of issues if the isNamespaceMapping config property changes from true to false, no? It seems like it's meant to enable/disable the feature. Wouldn't existing namespace-mapped tables not be found if it was changed from true to false after the tables had been created? What would be the purpose of allowing a schema (namespace) to be created if the feature is off? If there is none, then it's probably best to give an error message if isNamespaceMapping is off and CREATE SCHEMA is used.

          Hide
          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on the pull request:

          https://github.com/apache/phoenix/pull/153#issuecomment-209296286

          @samarthjain

          > Will the table T be created in the namespace S? If the feature is disabled, it shouldn't be. Would you mind adding a test for this if it hasn't been already?

          Yes, table T will not be created in namespace S if isNamespaceMapped is disabled. The test is in CreateTableIT#testCreateTable.

          > Also, how about if we do a CREATE TABLE SCHEMA.TABLENAME with the feature on. Does it end up creating the namespace named SCHEMA first and then maps the table to that namespace

          If the feature is on and SCHEMA is not already present, it will fail with SchemaNotFoundException. The user has to create the schema first; then they can create the table if the config is set to true.
          When the config is set to false, it will create a table named SCHEMA.TABLENAME in the default namespace without throwing any SchemaNotFoundException.
          I have added a test (CreateTableIT#testCreateTableWithoutSchema) for this in the latest commit.
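          The naming behavior described here can be sketched as follows. This is a hypothetical illustration only (`physicalName` is not an actual Phoenix API): with the feature on, SCHEMA.TABLENAME lives in the SCHEMA namespace (HBase name SCHEMA:TABLENAME); with it off, the HBase table is simply named SCHEMA.TABLENAME in the default namespace.

```java
public class PhysicalNames {
    // Map a Phoenix schema/table pair to an HBase table name, depending on
    // whether namespace mapping is enabled (':' is the HBase namespace separator).
    static String physicalName(String schema, String table, boolean namespaceMapped) {
        if (schema == null || schema.isEmpty()) {
            return table; // no schema: default namespace either way
        }
        return namespaceMapped ? schema + ":" + table : schema + "." + table;
    }

    public static void main(String[] args) {
        System.out.println(physicalName("S", "T", true));  // table T in namespace S
        System.out.println(physicalName("S", "T", false)); // table "S.T" in default namespace
        System.out.println(physicalName(null, "T", true)); // table T in default namespace
    }
}
```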

          Hide
          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on the pull request:

          https://github.com/apache/phoenix/pull/153#issuecomment-209304973

          @JamesRTaylor ,

          > There are likely lots of issues if isNamespaceMapping config property changes from true to false, no?

          There will not be any problem if *phoenix.connection.isNamespaceMappingEnabled* is changed from true to false, unless the system tables have already been migrated after setting *phoenix.connection.mapSystemTablesToNamespace* to true, as that migrates the SYSTEM tables to the SYSTEM namespace. Both properties need to be set on both client and server so that index failure handling and stats collection, which refer to the system tables directly on the server, work correctly. Let me know if the property names need to be changed to indicate which are server-side and which are client-side.

          > Wouldn't existing namespace mapped tables not be found if it was changed from true to false after tables had been created?

          Mapped tables will still be accessible, although old clients (<v4.8) will not be able to access tables mapped to a namespace.

          > What would be the purpose of allowing a schema (namespace) to be created if the feature is off?If none, then it's probably best to give an error message if isNamespaceMapping is off and CREATE SCHEMA is used.

          Yes, I think we can throw an exception during CREATE SCHEMA only if the property is not set,
          but other constructs (like USE <SCHEMA>) should still be allowed, as mapped tables remain accessible even when isNamespaceMappingEnabled is set to false.
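
          As a configuration sketch, the two properties discussed above would go into hbase-site.xml roughly like this (property names as used in this thread; they may be renamed before release, and per the comment above they must match on both client and server):

          ```xml
          <!-- Enable mapping of Phoenix schemas to HBase namespaces -->
          <property>
            <name>phoenix.connection.isNamespaceMappingEnabled</name>
            <value>true</value>
          </property>
          <!-- Also migrate/map the SYSTEM tables into the SYSTEM namespace -->
          <property>
            <name>phoenix.connection.mapSystemTablesToNamespace</name>
            <value>true</value>
          </property>
          ```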

          githubbot ASF GitHub Bot added a comment -

          Github user JamesRTaylor commented on the pull request:

          https://github.com/apache/phoenix/pull/153#issuecomment-209519420

          Sounds good, @ankitsinghal. I think we'll be good to go after these minor changes. Thanks for all your excellent work here.

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on the pull request:

          https://github.com/apache/phoenix/pull/153#issuecomment-209525622

          @JamesRTaylor, I am done with the changes in the last commit, so is it good to commit now?

          githubbot ASF GitHub Bot added a comment -

          Github user JamesRTaylor commented on the pull request:

          https://github.com/apache/phoenix/pull/153#issuecomment-209542664

          Yes, LGTM @ankitsinghal. @samarthjain - how about you?

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r59607039

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/IllegalOperationException.java —
          @@ -0,0 +1,45 @@
          +/*
          + * Licensed to the Apache Software Foundation (ASF) under one
          + * or more contributor license agreements. See the NOTICE file
          + * distributed with this work for additional information
          + * regarding copyright ownership. The ASF licenses this file
          + * to you under the Apache License, Version 2.0 (the
          + * "License"); you may not use this file except in compliance
          + * with the License. You may obtain a copy of the License at
          + *
          + * http://www.apache.org/licenses/LICENSE-2.0
          + *
          + * Unless required by applicable law or agreed to in writing, software
          + * distributed under the License is distributed on an "AS IS" BASIS,
          + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
          + * See the License for the specific language governing permissions and
          + * limitations under the License.
          + */
          +package org.apache.phoenix.schema;
          +
          +import org.apache.phoenix.exception.SQLExceptionCode;
          +import org.apache.phoenix.exception.SQLExceptionInfo;
          +
          +/**
          + *
          + * Exception thrown when any illegal operation is performed.
          + *
          + *
          + * @since 180
          + */
          +public class IllegalOperationException extends RuntimeException {
          — End diff –

          Not sure if you need a new exception for this. Can you not just throw a SQLException instead?

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r59607658

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java —
          @@ -77,6 +79,7 @@ public SQLException newException(SQLExceptionInfo info) {
          MISSING_MAX_LENGTH(207, "22004", "Max length must be specified for type."),
          NONPOSITIVE_MAX_LENGTH(208, "22006", "Max length must have a positive length for type."),
          DECIMAL_PRECISION_OUT_OF_RANGE(209, "22003", "Decimal precision outside of range. Should be within 1 and " + PDataType.MAX_PRECISION + "."),
          + ILLEGAL_OPERATION(210, "22010", "Illegal Operation."),
          — End diff –

          Change this to something like this:
          CREATE_SCHEMA_NOT_ALLOWED(210, "22010", "Cannot create schema because config " + QueryServicesOptions.IS_NAMESPACE_MAPPING_ENABLED + " for enabling name space mapping isn't enabled.")
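
          A self-contained sketch of the pattern the reviewer is suggesting (class and constant names here are hypothetical, simplified stand-ins, not the actual Phoenix source): an error-code enum whose message embeds the config key, so the exception text tells the user exactly which property to enable.

          ```java
          // Hypothetical, simplified sketch of an error-code enum in the style of
          // SQLExceptionCode; names mirror the review comment but are not Phoenix's.
          public class SchemaErrorDemo {
              // Property name as used in this discussion thread.
              static final String IS_NAMESPACE_MAPPING_ENABLED =
                  "phoenix.connection.isNamespaceMappingEnabled";

              enum Code {
                  // The message embeds the config key so users know what to enable.
                  CREATE_SCHEMA_NOT_ALLOWED(210, "22010",
                      "Cannot create schema because config " + IS_NAMESPACE_MAPPING_ENABLED
                          + " for enabling name space mapping isn't enabled.");

                  final int errorCode;
                  final String sqlState;
                  final String message;

                  Code(int errorCode, String sqlState, String message) {
                      this.errorCode = errorCode;
                      this.sqlState = sqlState;
                      this.message = message;
                  }
              }

              public static void main(String[] args) {
                  Code c = Code.CREATE_SCHEMA_NOT_ALLOWED;
                  System.out.println(c.errorCode + " " + c.sqlState + " " + c.message);
              }
          }
          ```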

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on a diff in the pull request:

          https://github.com/apache/phoenix/pull/153#discussion_r59608279

          — Diff: phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java —
          @@ -2387,8 +2452,27 @@ public Void call() throws Exception

          { logger.info("Update of SYSTEM.CATALOG complete"); clearCache(); }
          • +
            + if (currentServerSideTableTimeStamp < MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_8_0) {
            + // Add these columns one at a time, each with different timestamps so that if folks

              • End diff –

          This comment is just a copy paste of the comment in the previous if block. It would make more sense to move this comment to above if (currentServerSideTableTimeStamp < MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_6_0) {} block as it applies to all the upgrades we will do in future.

          githubbot ASF GitHub Bot added a comment -

          Github user samarthjain commented on the pull request:

          https://github.com/apache/phoenix/pull/153#issuecomment-209605592

          This looks great now @ankitsinghal. I am +1 once you have addressed the last review comments. Thanks a lot for your patience and diligence!

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on the pull request:

          https://github.com/apache/phoenix/pull/153#issuecomment-209773123

          I have made the changes in the last commit.
          Thank you so much @samarthjain and @JamesRTaylor for taking the time to review it. I'll just get this committed.

          ankit.singhal Ankit Singhal added a comment -

          committed to master and 4.x branches.

          hudson Hudson added a comment -

          FAILURE: Integrated in Phoenix-master #1193 (See https://builds.apache.org/job/Phoenix-master/1193/)
          PHOENIX-1311 HBase namespaces surfaced in phoenix (ankitsinghal59: rev de9a2c7b0249cd5a1a75374aa5244d5ee076f3c1)

          • phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/ParallelWriterIndexCommitter.java
          • phoenix-protocol/src/main/PTable.proto
          • phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java
          • phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java
          • phoenix-core/src/main/java/org/apache/phoenix/compile/PostDDLCompiler.java
          • phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java
          • phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/recovery/TrackingParallelWriterIndexCommitter.java
          • phoenix-core/src/main/java/org/apache/phoenix/schema/stats/StatisticsWriter.java
          • phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
          • phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServices.java
          • phoenix-core/src/main/java/org/apache/phoenix/compile/TupleProjectionCompiler.java
          • phoenix-core/src/main/java/org/apache/phoenix/compile/ColumnResolver.java
          • phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
          • phoenix-core/src/main/antlr3/PhoenixSQL.g
          • phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateSchemaIT.java
          • phoenix-core/src/main/java/org/apache/phoenix/util/IndexUtil.java
          • phoenix-core/src/main/java/org/apache/phoenix/parse/ParseNodeFactory.java
          • phoenix-core/src/main/java/org/apache/phoenix/util/MetaDataUtil.java
          • phoenix-core/src/main/java/org/apache/phoenix/compile/CreateSchemaCompiler.java
          • phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
          • phoenix-core/src/main/java/org/apache/phoenix/parse/PSchema.java
          • phoenix-core/src/main/java/org/apache/phoenix/schema/PMetaDataImpl.java
          • phoenix-core/src/main/java/org/apache/phoenix/schema/PTableKey.java
          • phoenix-protocol/src/main/PSchema.proto
          • phoenix-core/src/main/java/org/apache/phoenix/query/DelegateConnectionQueryServices.java
          • phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionlessQueryServicesImpl.java
          • phoenix-core/src/it/java/org/apache/phoenix/end2end/UseSchemaIT.java
          • phoenix-core/src/main/java/org/apache/phoenix/schema/NewerSchemaAlreadyExistsException.java
          • phoenix-core/src/main/java/org/apache/phoenix/compile/JoinCompiler.java
          • phoenix-core/src/it/java/org/apache/phoenix/end2end/DropSchemaIT.java
          • phoenix-core/src/main/java/org/apache/phoenix/mapreduce/MultiHfileOutputFormat.java
          • phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
          • phoenix-core/src/test/java/org/apache/phoenix/query/ParallelIteratorsSplitTest.java
          • phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
          • phoenix-core/src/main/java/org/apache/phoenix/iterate/SerialIterators.java
          • phoenix-core/src/test/java/org/apache/phoenix/util/JDBCUtilTest.java
          • phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ViewIndexIT.java
          • phoenix-core/src/main/java/org/apache/phoenix/schema/SchemaNotFoundException.java
          • phoenix-core/src/test/java/org/apache/phoenix/execute/LiteralResultIteratorPlanTest.java
          • phoenix-core/src/main/java/org/apache/phoenix/util/SchemaUtil.java
          • phoenix-core/src/main/java/org/apache/phoenix/parse/UseSchemaStatement.java
          • phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java
          • phoenix-core/src/main/java/org/apache/phoenix/schema/SchemaAlreadyExistsException.java
          • phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
          • phoenix-core/src/main/java/org/apache/phoenix/query/MetaDataMutated.java
          • phoenix-core/src/main/java/org/apache/phoenix/schema/DelegateTable.java
          • phoenix-core/src/main/java/org/apache/phoenix/compile/UnionCompiler.java
          • phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java
          • phoenix-protocol/src/main/MetaDataService.proto
          • phoenix-core/src/main/java/org/apache/phoenix/parse/DropSchemaStatement.java
          • phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexFailurePolicy.java
          • phoenix-core/src/main/java/org/apache/phoenix/parse/CreateSchemaStatement.java
          • phoenix-core/src/it/java/org/apache/phoenix/end2end/UpgradeIT.java
          • phoenix-core/src/main/java/org/apache/phoenix/query/QueryServicesOptions.java
          • phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/PSchemaProtos.java
          • phoenix-core/src/it/java/org/apache/phoenix/end2end/PhoenixRuntimeIT.java
          • phoenix-core/src/main/java/org/apache/phoenix/protobuf/ProtobufUtil.java
          • phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/controller/MetadataRpcController.java
          • phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
          • phoenix-core/src/main/java/org/apache/phoenix/iterate/ChunkedResultIterator.java
          • phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java
          • phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
          • phoenix-core/src/main/java/org/apache/phoenix/schema/stats/StatisticsCollectorFactory.java
          • phoenix-core/src/main/java/org/apache/phoenix/util/JDBCUtil.java
          • phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
          • phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java
          • phoenix-core/src/main/java/org/apache/phoenix/hbase/index/master/IndexMasterObserver.java
          • phoenix-core/src/it/java/org/apache/phoenix/tx/TxCheckpointIT.java
          • phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/PTableProtos.java
          • phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/MetaDataProtos.java
          • phoenix-core/src/main/java/org/apache/phoenix/mapreduce/CsvBulkImportUtil.java
          • phoenix-core/src/test/java/org/apache/phoenix/execute/CorrelatePlanTest.java
          • phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
          • phoenix-core/src/main/java/org/apache/phoenix/iterate/ParallelIterators.java
          • phoenix-core/src/main/java/org/apache/phoenix/mapreduce/AbstractBulkLoadTool.java
          • phoenix-core/src/it/java/org/apache/phoenix/end2end/NamespaceSchemaMappingIT.java
          • phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
          • phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/IndexTool.java
          • phoenix-core/src/main/java/org/apache/phoenix/schema/PMetaData.java
          • phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryDatabaseMetaDataIT.java
          • phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
          • phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateTableIT.java
          jamestaylor James Taylor added a comment -

          Looks like some test failures perhaps related to your check-in. Please investigate:

          https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1120/
          
          ankit.singhal Ankit Singhal added a comment -

          Yes James Taylor, there is only one failure caused by the above check-in (it got missed because the test passes when run independently):

          org.apache.phoenix.end2end.QueryDatabaseMetaDataIT.testSchemaMetadataScan
          

          I have fixed it in PHOENIX-2838 along with other changes.

          jamestaylor James Taylor added a comment -

          Re-resolving as the fixes are in a different JIRA.

          githubbot ASF GitHub Bot added a comment -

          Github user jimdowling commented on the issue:

          https://github.com/apache/phoenix/pull/153

          Is anything happening on this branch? We're interested in this feature for multi-tenant access to Phoenix...

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal commented on the issue:

          https://github.com/apache/phoenix/pull/153

          This pull request was already merged, and the feature is available in v4.8.0.

          githubbot ASF GitHub Bot added a comment -

          Github user ankitsinghal closed the pull request at:

          https://github.com/apache/phoenix/pull/153


            People

            • Assignee:
              ankit.singhal Ankit Singhal
              Reporter:
              nmaillard nicolas maillard
            • Votes:
              5 Vote for this issue
              Watchers:
              24 Start watching this issue

              Dates

              • Created:
                Updated:
                Resolved:

                Development