Description
Currently, INSERT INTO a partitioned table in the V2 in-memory catalog doesn't create partitions. The test below demonstrates the issue:
test("insert into partitioned table") {
  val t = "testpart.ns1.ns2.tbl"
  withTable(t) {
    spark.sql(
      s"""
         |CREATE TABLE $t (id bigint, name string, data string)
         |USING foo
         |PARTITIONED BY (id, name)""".stripMargin)
    spark.sql(s"INSERT INTO $t PARTITION(id = 1, name = 'Max') SELECT 'abc'")

    val partTable = catalog("testpart").asTableCatalog
      .loadTable(Identifier.of(Array("ns1", "ns2"), "tbl"))
      .asInstanceOf[InMemoryPartitionTable]
    assert(partTable.partitionExists(InternalRow.fromSeq(Seq(1, UTF8String.fromString("Max")))))
  }
}
The partitionExists() method returns false for the partition that should have been created by the INSERT.
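For comparison, a minimal sketch of the expected state, assuming partTable is the InMemoryPartitionTable loaded in the test above: creating the partition explicitly through the SupportsPartitionManagement API makes partitionExists() return true, which is the state INSERT INTO ... PARTITION is expected to produce.

// Sketch only, not the proposed fix: reach the expected post-INSERT state
// by calling the partition-management API directly on the loaded table.
import java.util.Collections

import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.unsafe.types.UTF8String

val ident = InternalRow.fromSeq(Seq(1L, UTF8String.fromString("Max")))  // id is bigint, name is string
partTable.createPartition(ident, Collections.emptyMap[String, String]())
assert(partTable.partitionExists(ident))  // holds once the partition actually exists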