
TAJO-269: Protocol buffer De/Serialization for LogicalNode

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.10.0
    • Component/s: QueryMaster, Worker
    • Labels:
      None

      Description

      In the current implementation, the logical plan is serialized into a JSON object and sent to each worker.
      However, transmitting the JSON object incurs high overhead because of its large size.
      Protocol Buffers are a good alternative: their overhead is quite small, and they are already used in other Tajo modules.
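      For illustration, the intended round trip looks roughly like the sketch below. The serializer/deserializer class names and the deserialize() signature come from the attached patch and the diff quoted later in this issue; the serialize() entry point, the PlanProto import location, and the helper class itself are assumptions made for the sketch, not the exact committed API.

        // Sketch only: LogicalNodeSerializer/LogicalNodeDeserializer are added by this patch
        // in org.apache.tajo.plan.serder; the serialize() signature below is an assumption.
        import org.apache.tajo.OverridableConf;
        import org.apache.tajo.plan.logical.LogicalNode;
        import org.apache.tajo.plan.serder.LogicalNodeDeserializer;
        import org.apache.tajo.plan.serder.LogicalNodeSerializer;
        import org.apache.tajo.plan.serder.PlanProto;

        public class PlanSerdeSketch {
          // QueryMaster side: encode the plan as a compact protobuf message
          // instead of a verbose JSON string before shipping it to workers.
          public static byte[] toWire(LogicalNode plan) {
            PlanProto.LogicalNodeTree tree = LogicalNodeSerializer.serialize(plan); // assumed signature
            return tree.toByteArray();
          }

          // Worker side: parse the bytes and rebuild the LogicalNode tree
          // (this deserialize() signature appears in the diff below).
          public static LogicalNode fromWire(OverridableConf context, byte[] bytes) throws Exception {
            PlanProto.LogicalNodeTree tree = PlanProto.LogicalNodeTree.parseFrom(bytes);
            return LogicalNodeDeserializer.deserialize(context, tree);
          }
        }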

      1. TAJO-269_3.patch
        300 kB
        Hyunsik Choi
      2. TAJO-269_2.patch
        293 kB
        Hyunsik Choi
      3. TAJO-269.patch
        252 kB
        Hyunsik Choi
      There are no Sub-Tasks for this issue.

        Activity

        hudson Hudson added a comment -

        SUCCESS: Integrated in Tajo-master-build #525 (See https://builds.apache.org/job/Tajo-master-build/525/)
        TAJO-269: Protocol buffer De/Serialization for LogicalNode. (hyunsik: rev 32be38d41affc498b01286938f3fea89a8def1a9)

        • tajo-core/src/main/java/org/apache/tajo/engine/planner/global/rewriter/GlobalPlanRewriteEngine.java
        • tajo-core/src/test/java/org/apache/tajo/engine/eval/ExprTestBase.java
        • tajo-core/src/main/java/org/apache/tajo/master/GlobalEngine.java
        • tajo-core/src/test/java/org/apache/tajo/engine/query/TestSelectQuery.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/StoreTableNode.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/UnaryNode.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/rewrite/rules/FilterPushDownRule.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/AlterTablespaceNode.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/CreateTableNode.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/nameresolver/NameResolver.java
        • tajo-core/src/main/java/org/apache/tajo/engine/planner/enforce/Enforcer.java
        • tajo-common/src/main/java/org/apache/tajo/conf/TajoConf.java
        • tajo-core/src/test/java/org/apache/tajo/TajoTestingCluster.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/TableSubQueryNode.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/visitor/BasicLogicalPlanVisitor.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/visitor/ExplainLogicalPlanVisitor.java
        • tajo-core/src/main/java/org/apache/tajo/worker/Task.java
        • tajo-plan/src/main/proto/Plan.proto
        • tajo-core/src/main/java/org/apache/tajo/master/querymaster/QueryMasterTask.java
        • tajo-core/src/main/java/org/apache/tajo/engine/planner/physical/DistinctGroupbySecondAggregationExec.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/nameresolver/ResolverByLegacy.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/rewrite/rules/LogicalPlanEqualityTester.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/serder/EvalTreeProtoDeserializer.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/rewrite/rules/PartitionedTableRewriter.java
        • tajo-storage/tajo-storage-hbase/src/main/java/org/apache/tajo/storage/hbase/HBaseStorageManager.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/WindowSpec.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/TruncateTableNode.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/util/PlannerUtil.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/CreateDatabaseNode.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/NodeType.java
        • tajo-core/src/test/java/org/apache/tajo/engine/query/TestWindowQuery.java
        • tajo-common/src/main/java/org/apache/tajo/util/ProtoUtil.java
        • tajo-core/src/main/java/org/apache/tajo/engine/planner/physical/DistinctGroupbySortAggregationExec.java
        • tajo-core/src/main/java/org/apache/tajo/engine/query/TaskRequest.java
        • tajo-core/src/main/java/org/apache/tajo/engine/planner/global/rewriter/rules/GlobalPlanEqualityTester.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/serder/package-info.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/serder/EvalTreeProtoSerializer.java
        • tajo-core/src/main/java/org/apache/tajo/engine/planner/physical/DistinctGroupbyFirstAggregationExec.java
        • tajo-common/src/main/java/org/apache/tajo/util/ReflectionUtil.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/ProjectionNode.java
        • tajo-core/src/main/java/org/apache/tajo/engine/planner/global/rewriter/GlobalPlanTestRuleProvider.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/LogicalPlanPreprocessor.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/AlterTableNode.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/DropTableNode.java
        • tajo-catalog/tajo-catalog-common/src/main/java/org/apache/tajo/catalog/TableDesc.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/LogicalPlanner.java
        • tajo-core/src/main/java/org/apache/tajo/engine/planner/global/builder/DistinctGroupbyBuilder.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/LogicalNode.java
        • tajo-core/src/main/java/org/apache/tajo/master/exec/QueryExecutor.java
        • tajo-core/src/main/java/org/apache/tajo/master/DefaultTaskScheduler.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/InsertNode.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/rewrite/BaseLogicalPlanRewriteRuleProvider.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/rewrite/QueryRewriteEngine.java
        • tajo-core/src/test/java/org/apache/tajo/engine/query/TestTruncateTable.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/LogicalOptimizer.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/BinaryNode.java
        • CHANGES
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/DropDatabaseNode.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/RelationNode.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/rewrite/rules/ProjectionPushDownRule.java
        • tajo-core/src/main/java/org/apache/tajo/engine/planner/physical/DistinctGroupbyThirdAggregationExec.java
        • tajo-catalog/tajo-catalog-common/src/main/java/org/apache/tajo/catalog/Schema.java
        • tajo-core/src/main/java/org/apache/tajo/engine/query/TaskRequestImpl.java
        • tajo-core/src/main/java/org/apache/tajo/engine/planner/global/rewriter/GlobalPlanRewriteRuleProvider.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/GroupbyNode.java
        • tajo-common/src/main/java/org/apache/tajo/util/TUtil.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/EvalExprNode.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/expr/AggregationFunctionCallEval.java
        • tajo-core/src/main/java/org/apache/tajo/engine/planner/global/rewriter/BaseGlobalPlanRewriteRuleProvider.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/rewrite/LogicalPlanRewriteRuleProvider.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/DistinctGroupbyNode.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/rewrite/RewriteRule.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/rewrite/BaseLogicalPlanRewriteEngine.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/rewrite/LogicalPlanRewriteEngine.java
        • tajo-core/src/main/proto/TajoWorkerProtocol.proto
        • tajo-plan/src/main/java/org/apache/tajo/plan/Target.java
        • tajo-core/src/main/java/org/apache/tajo/engine/planner/global/GlobalPlanner.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/rewrite/BasicQueryRewriteEngine.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/serder/LogicalNodeSerializer.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/serder/EvalNodeDeserializer.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/expr/WindowFunctionEval.java
        • tajo-core/src/test/java/org/apache/tajo/engine/query/TestGroupByQuery.java
        • tajo-storage/tajo-storage-hbase/src/main/java/org/apache/tajo/storage/hbase/AddSortForInsertRewriter.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/ScanNode.java
        • tajo-core/src/main/java/org/apache/tajo/engine/planner/PhysicalPlannerImpl.java
        • tajo-core/src/main/java/org/apache/tajo/engine/planner/physical/DistinctGroupbyHashAggregationExec.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/expr/EvalNode.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/serder/EvalNodeSerializer.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/visitor/LogicalPlanVisitor.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/rewrite/LogicalPlanTestRuleProvider.java
        • tajo-core/src/main/java/org/apache/tajo/engine/utils/test/ErrorInjectionRewriter.java
        • tajo-core/src/main/java/org/apache/tajo/engine/codegen/ExecutorPreCompiler.java
        • tajo-core/src/test/java/org/apache/tajo/master/TestGlobalPlanner.java
        • tajo-storage/tajo-storage-common/src/main/java/org/apache/tajo/storage/StorageManager.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/rewrite/LogicalPlanRewriteRule.java
        • tajo-core/src/main/java/org/apache/tajo/engine/planner/global/rewriter/GlobalPlanRewriteRule.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/SetSessionNode.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/serder/LogicalNodeDeserializer.java
        hudson Hudson added a comment -

        FAILURE: Integrated in Tajo-master-CODEGEN-build #165 (See https://builds.apache.org/job/Tajo-master-CODEGEN-build/165/)
        TAJO-269: Protocol buffer De/Serialization for LogicalNode. (hyunsik: rev 32be38d41affc498b01286938f3fea89a8def1a9)

        • tajo-core/src/main/java/org/apache/tajo/engine/planner/global/rewriter/rules/GlobalPlanEqualityTester.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/rewrite/rules/LogicalPlanEqualityTester.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/rewrite/LogicalPlanRewriteRule.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/rewrite/rules/PartitionedTableRewriter.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/serder/EvalTreeProtoDeserializer.java
        • tajo-core/src/main/java/org/apache/tajo/engine/planner/global/builder/DistinctGroupbyBuilder.java
        • tajo-core/src/main/java/org/apache/tajo/engine/planner/global/rewriter/GlobalPlanRewriteRuleProvider.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/DropDatabaseNode.java
        • tajo-core/src/main/java/org/apache/tajo/engine/planner/physical/DistinctGroupbySecondAggregationExec.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/serder/LogicalNodeDeserializer.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/rewrite/LogicalPlanRewriteEngine.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/nameresolver/ResolverByLegacy.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/serder/EvalNodeDeserializer.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/rewrite/rules/ProjectionPushDownRule.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/LogicalOptimizer.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/util/PlannerUtil.java
        • tajo-catalog/tajo-catalog-common/src/main/java/org/apache/tajo/catalog/Schema.java
        • tajo-core/src/main/java/org/apache/tajo/engine/planner/physical/DistinctGroupbySortAggregationExec.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/nameresolver/NameResolver.java
        • tajo-core/src/main/java/org/apache/tajo/engine/query/TaskRequestImpl.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/AlterTablespaceNode.java
        • tajo-core/src/main/java/org/apache/tajo/master/GlobalEngine.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/UnaryNode.java
        • tajo-storage/tajo-storage-hbase/src/main/java/org/apache/tajo/storage/hbase/AddSortForInsertRewriter.java
        • tajo-common/src/main/java/org/apache/tajo/util/ReflectionUtil.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/CreateDatabaseNode.java
        • tajo-core/src/main/java/org/apache/tajo/engine/utils/test/ErrorInjectionRewriter.java
        • tajo-core/src/main/java/org/apache/tajo/engine/planner/global/rewriter/GlobalPlanRewriteEngine.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/ProjectionNode.java
        • tajo-core/src/test/java/org/apache/tajo/master/TestGlobalPlanner.java
        • tajo-common/src/main/java/org/apache/tajo/util/ProtoUtil.java
        • tajo-core/src/test/java/org/apache/tajo/engine/query/TestSelectQuery.java
        • tajo-core/src/main/proto/TajoWorkerProtocol.proto
        • tajo-plan/src/main/java/org/apache/tajo/plan/LogicalPlanPreprocessor.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/TruncateTableNode.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/CreateTableNode.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/rewrite/RewriteRule.java
        • tajo-common/src/main/java/org/apache/tajo/conf/TajoConf.java
        • tajo-core/src/main/java/org/apache/tajo/engine/planner/global/rewriter/GlobalPlanTestRuleProvider.java
        • tajo-core/src/main/java/org/apache/tajo/engine/planner/enforce/Enforcer.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/rewrite/QueryRewriteEngine.java
        • tajo-core/src/test/java/org/apache/tajo/engine/eval/ExprTestBase.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/expr/EvalNode.java
        • tajo-core/src/main/java/org/apache/tajo/engine/planner/physical/DistinctGroupbyFirstAggregationExec.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/rewrite/LogicalPlanTestRuleProvider.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/visitor/LogicalPlanVisitor.java
        • tajo-core/src/main/java/org/apache/tajo/engine/planner/global/GlobalPlanner.java
        • tajo-core/src/main/java/org/apache/tajo/master/exec/QueryExecutor.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/RelationNode.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/DistinctGroupbyNode.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/expr/AggregationFunctionCallEval.java
        • tajo-core/src/main/java/org/apache/tajo/engine/planner/physical/DistinctGroupbyThirdAggregationExec.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/visitor/BasicLogicalPlanVisitor.java
        • tajo-core/src/main/java/org/apache/tajo/worker/Task.java
        • tajo-core/src/main/java/org/apache/tajo/engine/query/TaskRequest.java
        • tajo-plan/src/main/proto/Plan.proto
        • tajo-plan/src/main/java/org/apache/tajo/plan/rewrite/LogicalPlanRewriteRuleProvider.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/TableSubQueryNode.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/Target.java
        • tajo-core/src/main/java/org/apache/tajo/engine/planner/PhysicalPlannerImpl.java
        • tajo-core/src/main/java/org/apache/tajo/engine/planner/global/rewriter/GlobalPlanRewriteRule.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/DropTableNode.java
        • tajo-core/src/test/java/org/apache/tajo/TajoTestingCluster.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/ScanNode.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/visitor/ExplainLogicalPlanVisitor.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/serder/LogicalNodeSerializer.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/rewrite/BaseLogicalPlanRewriteEngine.java
        • tajo-storage/tajo-storage-hbase/src/main/java/org/apache/tajo/storage/hbase/HBaseStorageManager.java
        • tajo-core/src/main/java/org/apache/tajo/master/querymaster/QueryMasterTask.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/EvalExprNode.java
        • tajo-core/src/test/java/org/apache/tajo/engine/query/TestTruncateTable.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/serder/EvalTreeProtoSerializer.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/SetSessionNode.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/BinaryNode.java
        • tajo-core/src/main/java/org/apache/tajo/engine/planner/physical/DistinctGroupbyHashAggregationExec.java
        • tajo-core/src/main/java/org/apache/tajo/engine/codegen/ExecutorPreCompiler.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/WindowSpec.java
        • tajo-storage/tajo-storage-common/src/main/java/org/apache/tajo/storage/StorageManager.java
        • tajo-core/src/test/java/org/apache/tajo/engine/query/TestGroupByQuery.java
        • tajo-core/src/test/java/org/apache/tajo/engine/query/TestWindowQuery.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/NodeType.java
        • tajo-common/src/main/java/org/apache/tajo/util/TUtil.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/InsertNode.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/rewrite/BasicQueryRewriteEngine.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/LogicalPlanner.java
        • tajo-catalog/tajo-catalog-common/src/main/java/org/apache/tajo/catalog/TableDesc.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/GroupbyNode.java
        • tajo-core/src/main/java/org/apache/tajo/master/DefaultTaskScheduler.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/AlterTableNode.java
        • tajo-core/src/main/java/org/apache/tajo/engine/planner/global/rewriter/BaseGlobalPlanRewriteRuleProvider.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/serder/EvalNodeSerializer.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/rewrite/rules/FilterPushDownRule.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/rewrite/BaseLogicalPlanRewriteRuleProvider.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/StoreTableNode.java
        • CHANGES
        • tajo-plan/src/main/java/org/apache/tajo/plan/expr/WindowFunctionEval.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/logical/LogicalNode.java
        • tajo-plan/src/main/java/org/apache/tajo/plan/serder/package-info.java
        githubbot ASF GitHub Bot added a comment -

        Github user asfgit closed the pull request at:

        https://github.com/apache/tajo/pull/322

        hyunsik Hyunsik Choi added a comment -

        I just committed it to master branch.

        githubbot ASF GitHub Bot added a comment -

        Github user hyunsik commented on the pull request:

        https://github.com/apache/tajo/pull/322#issuecomment-68353868

        I also simplified Schema::getColumnId(). Thank you for the quick review. I'll commit it shortly.

        githubbot ASF GitHub Bot added a comment -

        Github user jihoonson commented on the pull request:

        https://github.com/apache/tajo/pull/322#issuecomment-68346334

        +1
        Thanks for awesome work!

        tajoqa Tajo QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12689469/TAJO-269_3.patch
        against master revision release-0.9.0-rc0-113-g6fde9e5.

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 8 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The applied patch does not increase the total number of javadoc warnings.

        +1 checkstyle. The patch generated 0 code style errors.

        -1 findbugs. The patch appears to cause Findbugs (version 2.0.3) to fail.

        -1 release audit. The applied patch generated 718 release audit warnings.

        +1 core tests. The patch passed unit tests in tajo-catalog/tajo-catalog-common tajo-common tajo-core tajo-plan tajo-storage/tajo-storage-common tajo-storage/tajo-storage-hbase.

        Test results: https://builds.apache.org/job/PreCommit-TAJO-Build/564//testReport/
        Release audit warnings: https://builds.apache.org/job/PreCommit-TAJO-Build/564//artifact/incubator-tajo/patchprocess/patchReleaseAuditProblems.txt
        Findbugs results: https://builds.apache.org/job/PreCommit-TAJO-Build/564//findbugsResult
        Console output: https://builds.apache.org/job/PreCommit-TAJO-Build/564//console

        This message is automatically generated.

        githubbot ASF GitHub Bot added a comment -

        Github user jihoonson commented on a diff in the pull request:

        https://github.com/apache/tajo/pull/322#discussion_r22342445

        — Diff: tajo-plan/src/main/java/org/apache/tajo/plan/serder/LogicalNodeDeserializer.java —
        @@ -0,0 +1,678 @@
        +/*
        + * Licensed to the Apache Software Foundation (ASF) under one
        + * or more contributor license agreements. See the NOTICE file
        + * distributed with this work for additional information
        + * regarding copyright ownership. The ASF licenses this file
        + * to you under the Apache License, Version 2.0 (the
        + * "License"); you may not use this file except in compliance
        + * with the License. You may obtain a copy of the License at
        + *
        + * http://www.apache.org/licenses/LICENSE-2.0
        + *
        + * Unless required by applicable law or agreed to in writing, software
        + * distributed under the License is distributed on an "AS IS" BASIS,
        + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
        + * See the License for the specific language governing permissions and
        + * limitations under the License.
        + */
        +
        +package org.apache.tajo.plan.serder;
        +
        +import com.google.common.collect.Lists;
        +import com.google.common.collect.Maps;
        +import org.apache.hadoop.fs.Path;
        +import org.apache.tajo.OverridableConf;
        +import org.apache.tajo.algebra.JoinType;
        +import org.apache.tajo.catalog.Column;
        +import org.apache.tajo.catalog.Schema;
        +import org.apache.tajo.catalog.SortSpec;
        +import org.apache.tajo.catalog.TableDesc;
        +import org.apache.tajo.catalog.partition.PartitionMethodDesc;
        +import org.apache.tajo.catalog.proto.CatalogProtos;
        +import org.apache.tajo.exception.UnimplementedException;
        +import org.apache.tajo.plan.Target;
        +import org.apache.tajo.plan.expr.AggregationFunctionCallEval;
        +import org.apache.tajo.plan.expr.EvalNode;
        +import org.apache.tajo.plan.expr.FieldEval;
        +import org.apache.tajo.plan.expr.WindowFunctionEval;
        +import org.apache.tajo.plan.logical.*;
        +import org.apache.tajo.util.KeyValueSet;
        +import org.apache.tajo.util.TUtil;
        +
        +import java.util.*;
        +
        +/**
        + * It deserializes a list of serialized logical nodes into a logical node tree.
        + */
        +public class LogicalNodeDeserializer {
        + private static final LogicalNodeDeserializer instance;
        +
        + static {
        + instance = new LogicalNodeDeserializer();
        + }
        +
        + /**
        + * Deserialize a list of nodes into a logical node tree.
        + *
        + * @param context QueryContext
        + * @param tree LogicalNodeTree which contains a list of serialized logical nodes.
        + * @return A logical node tree
        + */
        + public static LogicalNode deserialize(OverridableConf context, PlanProto.LogicalNodeTree tree) {
        + Map<Integer, LogicalNode> nodeMap = Maps.newHashMap();
        +
        + // sort serialized logical nodes in an ascending order of their sids
        + List<PlanProto.LogicalNode> nodeList = Lists.newArrayList(tree.getNodesList());
        + Collections.sort(nodeList, new Comparator<PlanProto.LogicalNode>() {
        + @Override
        + public int compare(PlanProto.LogicalNode o1, PlanProto.LogicalNode o2) {
        + return o1.getSid() - o2.getSid();
        + }
        + });
        +
        + LogicalNode current = null;
        +
        + // The sorted order is the same as a postfix traversal order.
        + // So, it sequentially transforms each serialized node into a LogicalNode instance in the postfix order of
        + // the original logical node tree.
        +
        + Iterator<PlanProto.LogicalNode> it = nodeList.iterator();
        + while (it.hasNext()) {
        + PlanProto.LogicalNode protoNode = it.next();
        +
        + switch (protoNode.getType()) {
        + case ROOT: current = convertRoot(nodeMap, protoNode); break;
        + case SET_SESSION: current = convertSetSession(protoNode); break;
        + case EXPRS: current = convertEvalExpr(context, protoNode); break;
        + case PROJECTION: current = convertProjection(context, nodeMap, protoNode); break;
        + case LIMIT: current = convertLimit(nodeMap, protoNode); break;
        + case SORT: current = convertSort(nodeMap, protoNode); break;
        + case WINDOW_AGG: current = convertWindowAgg(context, nodeMap, protoNode); break;
        + case HAVING: current = convertHaving(context, nodeMap, protoNode); break;
        + case GROUP_BY: current = convertGroupby(context, nodeMap, protoNode); break;
        + case DISTINCT_GROUP_BY: current = convertDistinctGroupby(context, nodeMap, protoNode); break;
        + case SELECTION: current = convertFilter(context, nodeMap, protoNode); break;
        + case JOIN: current = convertJoin(context, nodeMap, protoNode); break;
        + case TABLE_SUBQUERY: current = convertTableSubQuery(context, nodeMap, protoNode); break;
        + case UNION: current = convertUnion(nodeMap, protoNode); break;
        + case PARTITIONS_SCAN: current = convertPartitionScan(context, protoNode); break;
        + case SCAN: current = convertScan(context, protoNode); break;
        +
        + case CREATE_TABLE: current = convertCreateTable(nodeMap, protoNode); break;
        + case INSERT: current = convertInsert(nodeMap, protoNode); break;
        + case DROP_TABLE: current = convertDropTable(protoNode); break;
        +
        + case CREATE_DATABASE: current = convertCreateDatabase(protoNode); break;
        + case DROP_DATABASE: current = convertDropDatabase(protoNode); break;
        +
        + case ALTER_TABLESPACE: current = convertAlterTablespace(protoNode); break;
        + case ALTER_TABLE: current = convertAlterTable(protoNode); break;
        + case TRUNCATE_TABLE: current = convertTruncateTable(protoNode); break;
        +
        + default: throw new RuntimeException("Unknown NodeType: " + protoNode.getType().name());
        + }
        +
        + nodeMap.put(protoNode.getSid(), current);
        + }
        +
        + return current;
        + }
        +
        + private static LogicalRootNode convertRoot(Map<Integer, LogicalNode> nodeMap,
        + PlanProto.LogicalNode protoNode) {
        + PlanProto.RootNode rootProto = protoNode.getRoot();
        +
        + LogicalRootNode root = new LogicalRootNode(protoNode.getPid());
        + root.setChild(nodeMap.get(rootProto.getChildId()));
        + if (protoNode.hasInSchema()) {
        + root.setInSchema(convertSchema(protoNode.getInSchema()));
        + }
        + if (protoNode.hasOutSchema()) {
        + root.setOutSchema(convertSchema(protoNode.getOutSchema()));
        + }
        +
        + return root;
        + }
        +
        + private static SetSessionNode convertSetSession(PlanProto.LogicalNode protoNode) {
        + PlanProto.SetSessionNode setSessionProto = protoNode.getSetSession();
        +
        + SetSessionNode setSession = new SetSessionNode(protoNode.getPid());
        + setSession.init(setSessionProto.getName(), setSessionProto.hasValue() ? setSessionProto.getValue() : null);
        +
        + return setSession;
        + }
        +
        + private static EvalExprNode convertEvalExpr(OverridableConf context, PlanProto.LogicalNode protoNode) {
        + PlanProto.EvalExprNode evalExprProto = protoNode.getExprEval();
        +
        + EvalExprNode evalExpr = new EvalExprNode(protoNode.getPid());
        + evalExpr.setInSchema(convertSchema(protoNode.getInSchema()));
        + evalExpr.setTargets(convertTargets(context, evalExprProto.getTargetsList()));
        +
        + return evalExpr;
        + }
        +
        + private static ProjectionNode convertProjection(OverridableConf context, Map<Integer, LogicalNode> nodeMap,
        + PlanProto.LogicalNode protoNode) {
        + PlanProto.ProjectionNode projectionProto = protoNode.getProjection();
        +
        + ProjectionNode projectionNode = new ProjectionNode(protoNode.getPid());
        + projectionNode.init(projectionProto.getDistinct(), convertTargets(context, projectionProto.getTargetsList()));
        + projectionNode.setChild(nodeMap.get(projectionProto.getChildId()));
        + projectionNode.setInSchema(convertSchema(protoNode.getInSchema()));
        + projectionNode.setOutSchema(convertSchema(protoNode.getOutSchema()));
        +
        + return projectionNode;
        + }
        +
        + private static LimitNode convertLimit(Map<Integer, LogicalNode> nodeMap, PlanProto.LogicalNode protoNode) {
        + PlanProto.LimitNode limitProto = protoNode.getLimit();
        +
        + LimitNode limitNode = new LimitNode(protoNode.getPid());
        + limitNode.setChild(nodeMap.get(limitProto.getChildId()));
        + limitNode.setInSchema(convertSchema(protoNode.getInSchema()));
        + limitNode.setOutSchema(convertSchema(protoNode.getOutSchema()));
        + limitNode.setFetchFirst(limitProto.getFetchFirstNum());
        +
        + return limitNode;
        + }
        +
        + private static SortNode convertSort(Map<Integer, LogicalNode> nodeMap, PlanProto.LogicalNode protoNode) {
        + PlanProto.SortNode sortProto = protoNode.getSort();
        +
        + SortNode sortNode = new SortNode(protoNode.getPid());
        + sortNode.setChild(nodeMap.get(sortProto.getChildId()));
        + sortNode.setInSchema(convertSchema(protoNode.getInSchema()));
        + sortNode.setOutSchema(convertSchema(protoNode.getOutSchema()));
        + sortNode.setSortSpecs(convertSortSpecs(sortProto.getSortSpecsList()));
        +
        + return sortNode;
        + }
        +
        + private static HavingNode convertHaving(OverridableConf context, Map<Integer, LogicalNode> nodeMap,
        + PlanProto.LogicalNode protoNode) {
        + PlanProto.FilterNode havingProto = protoNode.getFilter();
        +
        + HavingNode having = new HavingNode(protoNode.getPid());
        + having.setChild(nodeMap.get(havingProto.getChildId()));
        + having.setQual(EvalNodeDeserializer.deserialize(context, havingProto.getQual()));
        + having.setInSchema(convertSchema(protoNode.getInSchema()));
        + having.setOutSchema(convertSchema(protoNode.getOutSchema()));
        +
        + return having;
        + }
        +
        + private static WindowAggNode convertWindowAgg(OverridableConf context, Map<Integer, LogicalNode> nodeMap,
        + PlanProto.LogicalNode protoNode) {
        + PlanProto.WindowAggNode windowAggProto = protoNode.getWindowAgg();
        +
        + WindowAggNode windowAgg = new WindowAggNode(protoNode.getPid());
        + windowAgg.setChild(nodeMap.get(windowAggProto.getChildId()));
        +
        + if (windowAggProto.getPartitionKeysCount() > 0) {
        + windowAgg.setPartitionKeys(convertColumns(windowAggProto.getPartitionKeysList()));
        + }
        +
        + if (windowAggProto.getWindowFunctionsCount() > 0) {
        + windowAgg.setWindowFunctions(convertWindowFunccEvals(context, windowAggProto.getWindowFunctionsList()));
        + }
        +
        + windowAgg.setDistinct(windowAggProto.getDistinct());
        +
        + if (windowAggProto.getSortSpecsCount() > 0) {
        + windowAgg.setSortSpecs(convertSortSpecs(windowAggProto.getSortSpecsList()));
        + }
        +
        + if (windowAggProto.getTargetsCount() > 0) {
        + windowAgg.setTargets(convertTargets(context, windowAggProto.getTargetsList()));
        + }
        +
        + windowAgg.setInSchema(convertSchema(protoNode.getInSchema()));
        + windowAgg.setOutSchema(convertSchema(protoNode.getOutSchema()));
        +
        + return windowAgg;
        + }
        +
        + private static GroupbyNode convertGroupby(OverridableConf context, Map<Integer, LogicalNode> nodeMap,
        + PlanProto.LogicalNode protoNode) {
        + PlanProto.GroupbyNode groupbyProto = protoNode.getGroupby();
        +
        + GroupbyNode groupby = new GroupbyNode(protoNode.getPid());
        + groupby.setChild(nodeMap.get(groupbyProto.getChildId()));
        + groupby.setDistinct(groupbyProto.getDistinct());
        +
        + if (groupbyProto.getGroupingKeysCount() > 0) {
        + groupby.setGroupingColumns(convertColumns(groupbyProto.getGroupingKeysList()));
        + }
        + if (groupbyProto.getAggFunctionsCount() > 0) {
        + groupby.setAggFunctions(convertAggFuncCallEvals(context, groupbyProto.getAggFunctionsList()));
        + }
        + if (groupbyProto.getTargetsCount() > 0) {
        + groupby.setTargets(convertTargets(context, groupbyProto.getTargetsList()));
        + }
        +
        + groupby.setInSchema(convertSchema(protoNode.getInSchema()));
        + groupby.setOutSchema(convertSchema(protoNode.getOutSchema()));
        +
        + return groupby;
        + }
        +
        + private static DistinctGroupbyNode convertDistinctGroupby(OverridableConf context, Map<Integer, LogicalNode> nodeMap,
        + PlanProto.LogicalNode protoNode) {
        + PlanProto.DistinctGroupbyNode distinctGroupbyProto = protoNode.getDistinctGroupby();
        +
        + DistinctGroupbyNode distinctGroupby = new DistinctGroupbyNode(protoNode.getPid());
        + distinctGroupby.setChild(nodeMap.get(distinctGroupbyProto.getChildId()));
        +
        + if (distinctGroupbyProto.hasGroupbyNode()) {
        + distinctGroupby.setGroupbyPlan(convertGroupby(context, nodeMap, distinctGroupbyProto.getGroupbyNode()));
        + }
        +
        + if (distinctGroupbyProto.getSubPlansCount() > 0) {
        + List<GroupbyNode> subPlans = TUtil.newList();
        + for (int i = 0; i < distinctGroupbyProto.getSubPlansCount(); i++) {
        + subPlans.add(convertGroupby(context, nodeMap, distinctGroupbyProto.getSubPlans(i)));
        + }
        + distinctGroupby.setSubPlans(subPlans);
        + }
        +
        + if (distinctGroupbyProto.getGroupingKeysCount() > 0) {
        + distinctGroupby.setGroupingColumns(convertColumns(distinctGroupbyProto.getGroupingKeysList()));
        + }
        + if (distinctGroupbyProto.getAggFunctionsCount() > 0) {
        + distinctGroupby.setAggFunctions(convertAggFuncCallEvals(context, distinctGroupbyProto.getAggFunctionsList()));
        + }
        + if (distinctGroupbyProto.getTargetsCount() > 0) {
        + distinctGroupby.setTargets(convertTargets(context, distinctGroupbyProto.getTargetsList()));
        + }
        +
        + int [] resultColumnIds = new int[distinctGroupbyProto.getResultIdCount()];
        + for (int i = 0; i < distinctGroupbyProto.getResultIdCount(); i++) {
        + resultColumnIds[i] = distinctGroupbyProto.getResultId(i);
        + }
        + distinctGroupby.setResultColumnIds(resultColumnIds);
        +
        + // TODO - in distinct groupby, output and target are not matched to each other. It does not follow the convention.
        + distinctGroupby.setInSchema(convertSchema(protoNode.getInSchema()));
        + distinctGroupby.setOutSchema(convertSchema(protoNode.getOutSchema()));
        +
        + return distinctGroupby;
        + }
        +
        + private static JoinNode convertJoin(OverridableConf context, Map<Integer, LogicalNode> nodeMap,
        + PlanProto.LogicalNode protoNode) {
        + PlanProto.JoinNode joinProto = protoNode.getJoin();
        +
        + JoinNode join = new JoinNode(protoNode.getPid());
        + join.setLeftChild(nodeMap.get(joinProto.getLeftChildId()));
        + join.setRightChild(nodeMap.get(joinProto.getRightChildId()));
        + join.setJoinType(convertJoinType(joinProto.getJoinType()));
        + join.setInSchema(convertSchema(protoNode.getInSchema()));
        + join.setOutSchema(convertSchema(protoNode.getOutSchema()));
        + if (joinProto.hasJoinQual()) {
        + join.setJoinQual(EvalNodeDeserializer.deserialize(context, joinProto.getJoinQual()));
        + }
        + if (joinProto.getExistsTargets()) {
        + join.setTargets(convertTargets(context, joinProto.getTargetsList()));
        + }
        +
        + return join;
        + }
        +
        + private static SelectionNode convertFilter(OverridableConf context, Map<Integer, LogicalNode> nodeMap,
        + PlanProto.LogicalNode protoNode) {
        + PlanProto.FilterNode filterProto = protoNode.getFilter();
        +
        + SelectionNode selection = new SelectionNode(protoNode.getPid());
        + selection.setInSchema(convertSchema(protoNode.getInSchema()));
        + selection.setOutSchema(convertSchema(protoNode.getOutSchema()));
        + selection.setChild(nodeMap.get(filterProto.getChildId()));
        + selection.setQual(EvalNodeDeserializer.deserialize(context, filterProto.getQual()));
        +
        + return selection;
        + }
        +
        + private static UnionNode convertUnion(Map<Integer, LogicalNode> nodeMap, PlanProto.LogicalNode protoNode) {
        + PlanProto.UnionNode unionProto = protoNode.getUnion();
        +
        + UnionNode union = new UnionNode(protoNode.getPid());
        + union.setInSchema(convertSchema(protoNode.getInSchema()));
        + union.setOutSchema(convertSchema(protoNode.getOutSchema()));
        + union.setLeftChild(nodeMap.get(unionProto.getLeftChildId()));
        + union.setRightChild(nodeMap.get(unionProto.getRightChildId()));
        +
        + return union;
        + }
        +
        + private static ScanNode convertScan(OverridableConf context, PlanProto.LogicalNode protoNode) {
        + ScanNode scan = new ScanNode(protoNode.getPid());
        + fillScanNode(context, protoNode, scan);
        +
        + return scan;
        + }
        +
        + private static void fillScanNode(OverridableConf context, PlanProto.LogicalNode protoNode, ScanNode scan) {
        + PlanProto.ScanNode scanProto = protoNode.getScan();
        + if (scanProto.hasAlias()) {
        + scan.init(new TableDesc(scanProto.getTable()), scanProto.getAlias());
        + } else {
        + scan.init(new TableDesc(scanProto.getTable()));
        + }
        +
        + if (scanProto.getExistTargets()) {
        + scan.setTargets(convertTargets(context, scanProto.getTargetsList()));
        + }
        +
        + if (scanProto.hasQual()) {
        + scan.setQual(EvalNodeDeserializer.deserialize(context, scanProto.getQual()));
        + }
        +
        + scan.setInSchema(convertSchema(protoNode.getInSchema()));
        + scan.setOutSchema(convertSchema(protoNode.getOutSchema()));
        + }
        +
        + private static PartitionedTableScanNode convertPartitionScan(OverridableConf context, PlanProto.LogicalNode protoNode) {
        + PartitionedTableScanNode partitionedScan = new PartitionedTableScanNode(protoNode.getPid());
        + fillScanNode(context, protoNode, partitionedScan);
        +
        + PlanProto.PartitionScanSpec partitionScanProto = protoNode.getPartitionScan();
        + Path [] paths = new Path[partitionScanProto.getPathsCount()];
        + for (int i = 0; i < partitionScanProto.getPathsCount(); i++)

        { + paths[i] = new Path(partitionScanProto.getPaths(i)); + }

        + partitionedScan.setInputPaths(paths);
        + return partitionedScan;
        + }
        +
        + private static TableSubQueryNode convertTableSubQuery(OverridableConf context,
        + Map<Integer, LogicalNode> nodeMap,
        + PlanProto.LogicalNode protoNode) {
        + PlanProto.TableSubQueryNode proto = protoNode.getTableSubQuery();
        +
        + TableSubQueryNode tableSubQuery = new TableSubQueryNode(protoNode.getPid());
        + tableSubQuery.init(proto.getTableName(), nodeMap.get(proto.getChildId()));
        + tableSubQuery.setInSchema(convertSchema(protoNode.getInSchema()));
        + if (proto.getTargetsCount() > 0)

        { + tableSubQuery.setTargets(convertTargets(context, proto.getTargetsList())); + }

        +
        + return tableSubQuery;
        + }
        +
        + private static CreateTableNode convertCreateTable(Map<Integer, LogicalNode> nodeMap,
        + PlanProto.LogicalNode protoNode) {
        + PlanProto.PersistentStoreNode persistentStoreProto = protoNode.getPersistentStore();
        + PlanProto.StoreTableNodeSpec storeTableNodeSpec = protoNode.getStoreTable();
        + PlanProto.CreateTableNodeSpec createTableNodeSpec = protoNode.getCreateTable();
        +
        + CreateTableNode createTable = new CreateTableNode(protoNode.getPid());
        + if (protoNode.hasInSchema())

        { + createTable.setInSchema(convertSchema(protoNode.getInSchema())); + }

        + if (protoNode.hasOutSchema())

        { + createTable.setOutSchema(convertSchema(protoNode.getOutSchema())); + }

        + createTable.setChild(nodeMap.get(persistentStoreProto.getChildId()));
        + createTable.setStorageType(persistentStoreProto.getStorageType());
        + createTable.setOptions(new KeyValueSet(persistentStoreProto.getTableProperties()));
        +
        + createTable.setTableName(storeTableNodeSpec.getTableName());
        + if (storeTableNodeSpec.hasPartitionMethod())

        { + createTable.setPartitionMethod(new PartitionMethodDesc(storeTableNodeSpec.getPartitionMethod())); + }

        +
        + createTable.setTableSchema(convertSchema(createTableNodeSpec.getSchema()));
        + createTable.setExternal(createTableNodeSpec.getExternal());
        + if (createTableNodeSpec.getExternal() && createTableNodeSpec.hasPath())

        { + createTable.setPath(new Path(createTableNodeSpec.getPath())); + }

        + createTable.setIfNotExists(createTableNodeSpec.getIfNotExists());
        +
        + return createTable;
        + }
        +
        + private static InsertNode convertInsert(Map<Integer, LogicalNode> nodeMap,
        + PlanProto.LogicalNode protoNode) {
        + PlanProto.PersistentStoreNode persistentStoreProto = protoNode.getPersistentStore();
        + PlanProto.StoreTableNodeSpec storeTableNodeSpec = protoNode.getStoreTable();
        + PlanProto.InsertNodeSpec insertNodeSpec = protoNode.getInsert();
        +
        + InsertNode insertNode = new InsertNode(protoNode.getPid());
        + if (protoNode.hasInSchema())

        { + insertNode.setInSchema(convertSchema(protoNode.getInSchema())); + }

        + if (protoNode.hasOutSchema())

        { + insertNode.setOutSchema(convertSchema(protoNode.getOutSchema())); + }

        + insertNode.setChild(nodeMap.get(persistentStoreProto.getChildId()));
        + insertNode.setStorageType(persistentStoreProto.getStorageType());
        + insertNode.setOptions(new KeyValueSet(persistentStoreProto.getTableProperties()));
        +
        + if (storeTableNodeSpec.hasTableName())

        { + insertNode.setTableName(storeTableNodeSpec.getTableName()); + }

        + if (storeTableNodeSpec.hasPartitionMethod())

        { + insertNode.setPartitionMethod(new PartitionMethodDesc(storeTableNodeSpec.getPartitionMethod())); + }

        +
        + insertNode.setOverwrite(insertNodeSpec.getOverwrite());
        + insertNode.setTableSchema(convertSchema(insertNodeSpec.getTableSchema()));
        + if (insertNodeSpec.hasTargetSchema())

        { + insertNode.setTargetSchema(convertSchema(insertNodeSpec.getTargetSchema())); + }

        + if (insertNodeSpec.hasProjectedSchema())

        { + insertNode.setProjectedSchema(convertSchema(insertNodeSpec.getProjectedSchema())); + }

        + if (insertNodeSpec.hasPath())

        { + insertNode.setPath(new Path(insertNodeSpec.getPath())); + }

        +
        + return insertNode;
        + }
        +
        + private static DropTableNode convertDropTable(PlanProto.LogicalNode protoNode)

        { + DropTableNode dropTable = new DropTableNode(protoNode.getPid()); + + PlanProto.DropTableNode dropTableProto = protoNode.getDropTable(); + dropTable.init(dropTableProto.getTableName(), dropTableProto.getIfExists(), dropTableProto.getPurge()); + + return dropTable; + }

        +
        + private static CreateDatabaseNode convertCreateDatabase(PlanProto.LogicalNode protoNode)

        { + CreateDatabaseNode createDatabase = new CreateDatabaseNode(protoNode.getPid()); + + PlanProto.CreateDatabaseNode createDatabaseProto = protoNode.getCreateDatabase(); + createDatabase.init(createDatabaseProto.getDbName(), createDatabaseProto.getIfNotExists()); + + return createDatabase; + }

        +
        + private static DropDatabaseNode convertDropDatabase(PlanProto.LogicalNode protoNode)

        { + DropDatabaseNode dropDatabase = new DropDatabaseNode(protoNode.getPid()); + + PlanProto.DropDatabaseNode dropDatabaseProto = protoNode.getDropDatabase(); + dropDatabase.init(dropDatabaseProto.getDbName(), dropDatabaseProto.getIfExists()); + + return dropDatabase; + }

        +
        + private static AlterTablespaceNode convertAlterTablespace(PlanProto.LogicalNode protoNode) {
        + AlterTablespaceNode alterTablespace = new AlterTablespaceNode(protoNode.getPid());
        +
        + PlanProto.AlterTablespaceNode alterTablespaceProto = protoNode.getAlterTablespace();
        + alterTablespace.setTablespaceName(alterTablespaceProto.getTableSpaceName());
        +
        + switch (alterTablespaceProto.getSetType())

        { + case LOCATION: + alterTablespace.setLocation(alterTablespaceProto.getSetLocation().getLocation()); + break; + default: + throw new UnimplementedException("Unknown SET type in ALTER TABLE: " + alterTablespaceProto.getSetType().name()); + }

        +
        + return alterTablespace;
        + }
        +
        + private static AlterTableNode convertAlterTable(PlanProto.LogicalNode protoNode) {
        + AlterTableNode alterTable = new AlterTableNode(protoNode.getPid());
        +
        + PlanProto.AlterTableNode alterTableProto = protoNode.getAlterTable();
        + alterTable.setTableName(alterTableProto.getTableName());
        +
        + switch (alterTableProto.getSetType())

        { + case RENAME_TABLE: + alterTable.setNewTableName(alterTableProto.getRenameTable().getNewName()); + break; + case ADD_COLUMN: + alterTable.setAddNewColumn(new Column(alterTableProto.getAddColumn().getAddColumn())); + break; + case RENAME_COLUMN: + alterTable.setColumnName(alterTableProto.getRenameColumn().getOldName()); + alterTable.setNewColumnName(alterTableProto.getRenameColumn().getNewName()); + break; + default: + throw new UnimplementedException("Unknown SET type in ALTER TABLE: " + alterTableProto.getSetType().name()); + }

        +
        + return alterTable;
        + }
        +
        + private static TruncateTableNode convertTruncateTable(PlanProto.LogicalNode protoNode)

        { + TruncateTableNode truncateTable = new TruncateTableNode(protoNode.getPid()); + + PlanProto.TruncateTableNode truncateTableProto = protoNode.getTruncateTableNode(); + truncateTable.setTableNames(truncateTableProto.getTableNamesList()); + + return truncateTable; + }

        +
        + private static AggregationFunctionCallEval [] convertAggFuncCallEvals(OverridableConf context,
        + List<PlanProto.EvalNodeTree> evalTrees) {
        + AggregationFunctionCallEval [] aggFuncs = new AggregationFunctionCallEval[evalTrees.size()];
        + for (int i = 0; i < aggFuncs.length; i++)

        { + aggFuncs[i] = (AggregationFunctionCallEval) EvalNodeDeserializer.deserialize(context, evalTrees.get(i)); + }

        + return aggFuncs;
        + }
        +
        + private static WindowFunctionEval[] convertWindowFunccEvals(OverridableConf context,
        + List<PlanProto.EvalNodeTree> evalTrees) {
        + WindowFunctionEval[] winFuncEvals = new WindowFunctionEval[evalTrees.size()];
        + for (int i = 0; i < winFuncEvals.length; i++)

        { + winFuncEvals[i] = (WindowFunctionEval) EvalNodeDeserializer.deserialize(context, evalTrees.get(i)); + }

        + return winFuncEvals;
        + }
        +
        + public static Schema convertSchema(CatalogProtos.SchemaProto proto)

        { + return new Schema(proto); + }

        +
        + public static Column[] convertColumns(List<CatalogProtos.ColumnProto> columnProtos) {
        + Column [] columns = new Column[columnProtos.size()];
        + for (int i = 0; i < columns.length; i++)

        { + columns[i] = new Column(columnProtos.get(i)); + }

        + return columns;
        + }
        +
        + public static Target[] convertTargets(OverridableConf context, List<PlanProto.Target> targetsProto) {
        + Target [] targets = new Target[targetsProto.size()];
        + for (int i = 0; i < targets.length; i++) {
        + PlanProto.Target targetProto = targetsProto.get;
        + EvalNode evalNode = EvalNodeDeserializer.deserialize(context, targetProto.getExpr());
        + if (targetProto.hasAlias())

        { + targets[i] = new Target(evalNode, targetProto.getAlias()); + }

        else

        { + targets[i] = new Target((FieldEval) evalNode); + }

        + }
        + return targets;
        + }
        +
        + public static SortSpec[] convertSortSpecs(List<CatalogProtos.SortSpecProto> sortSpecProtos) {
        + SortSpec[] sortSpecs = new SortSpec[sortSpecProtos.size()];
        + int i = 0;
        + for (CatalogProtos.SortSpecProto proto : sortSpecProtos)

        { + sortSpecs[i++] = new SortSpec(proto); + }

        + return sortSpecs;
        + }
        +
        + public static JoinType convertJoinType(PlanProto.JoinType type) {
        + switch (type) {
        + case CROSS_JOIN:
        — End diff –

        Got it.
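The review above walks through the deserializer's core idea: the plan arrives as a flat list of nodes sorted by sid, which amounts to a postorder traversal of the plan tree, so every child can be looked up in a map before its parent is built. The following standalone sketch is only an illustration of that pattern; FlatNode, TreeNode, and PostfixRebuildExample are hypothetical names and are not part of Tajo's PlanProto API.

import java.util.*;

// Minimal stand-in for a serialized plan node: an id (sid) plus the sids of its
// children. These classes are illustrative only, not Tajo's generated classes.
final class FlatNode {
  final int sid;
  final List<Integer> childSids;
  FlatNode(int sid, Integer... childSids) {
    this.sid = sid;
    this.childSids = Arrays.asList(childSids);
  }
}

final class TreeNode {
  final int sid;
  final List<TreeNode> children = new ArrayList<>();
  TreeNode(int sid) { this.sid = sid; }
}

public class PostfixRebuildExample {
  // Rebuild the tree from nodes sorted by ascending sid (a postorder of the tree):
  // each child is materialized before its parent, so a single map lookup suffices.
  static TreeNode rebuild(List<FlatNode> flat) {
    flat.sort(Comparator.comparingInt(n -> n.sid));
    Map<Integer, TreeNode> built = new HashMap<>();
    TreeNode current = null;
    for (FlatNode f : flat) {
      current = new TreeNode(f.sid);
      for (int childSid : f.childSids) {
        current.children.add(built.get(childSid));
      }
      built.put(f.sid, current);
    }
    return current; // the last node in sid order is the root
  }

  public static void main(String[] args) {
    // scan (sid 1), scan (sid 2), join of both (sid 3), root (sid 4)
    List<FlatNode> flat = new ArrayList<>(Arrays.asList(
        new FlatNode(4, 3), new FlatNode(3, 1, 2), new FlatNode(1), new FlatNode(2)));
    TreeNode root = rebuild(flat);
    System.out.println("root sid = " + root.sid + ", children = " + root.children.size());
  }
}

Running the main method prints the root's sid and child count, showing that the parent is only assembled after both scan children already exist in the map, which is exactly why the ascending-sid order makes a single pass sufficient.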

        Hide
        githubbot ASF GitHub Bot added a comment -

        Github user hyunsik commented on a diff in the pull request:

        https://github.com/apache/tajo/pull/322#discussion_r22341820

        — Diff: tajo-plan/src/main/proto/Plan.proto —
        @@ -26,58 +26,280 @@ import "CatalogProtos.proto";
        import "DataTypes.proto";

enum NodeType {
-  BST_INDEX_SCAN = 0;
-  EXCEPT = 1;
+  SET_SESSION = 0;
+
+  ROOT = 1;
   EXPRS = 2;
-  DISTINCT_GROUP_BY = 3;
-  GROUP_BY = 4;
-  HAVING = 5;
-  JOIN = 6;
-  INSERT = 7;
-  INTERSECT = 8;
-  LIMIT = 9;
-  PARTITIONS_SCAN = 10;
-  PROJECTION = 11;
-  ROOT = 12;
-  SCAN = 13;
-  SELECTION = 14;
-  SORT = 15;
-  STORE = 16;
-  TABLE_SUBQUERY = 17;
-  UNION = 18;
-  WINDOW_AGG = 19;
-
-  CREATE_DATABASE = 20;
-  DROP_DATABASE = 21;
-  CREATE_TABLE = 22;
-  DROP_TABLE = 23;
-  ALTER_TABLESPACE = 24;
-  ALTER_TABLE = 25;
-  TRUNCATE_TABLE = 26;
-}
-
-message LogicalPlan {
-  required KeyValueSetProto adjacentList = 1;
+  PROJECTION = 3;
+  LIMIT = 4;
+  WINDOW_AGG = 5;
+  SORT = 6;
+  HAVING = 7;
+  GROUP_BY = 8;
+  DISTINCT_GROUP_BY = 9;
+  SELECTION = 10;
+  JOIN = 11;
+  UNION = 12;
+  INTERSECT = 13;
+  EXCEPT = 14;
+  TABLE_SUBQUERY = 15;
+  SCAN = 16;
+  PARTITIONS_SCAN = 17;
+  BST_INDEX_SCAN = 18;
+  STORE = 19;
+  INSERT = 20;
+
+  CREATE_DATABASE = 21;
+  DROP_DATABASE = 22;
+  CREATE_TABLE = 23;
+  DROP_TABLE = 24;
+  ALTER_TABLESPACE = 25;
+  ALTER_TABLE = 26;
+  TRUNCATE_TABLE = 27;
 }

-message LogicalNode {
-  required int32 pid = 1;
-  required NodeType type = 2;
-  required SchemaProto in_schema = 3;
-  required SchemaProto out_schema = 4;
-  required NodeSpec spec = 5;
+message LogicalNodeTree {
+  repeated LogicalNode nodes = 1;
 }

-message NodeSpec {
-  optional ScanNode scan = 1;
+message LogicalNode {
+  required int32 sid = 1;
— End diff –

        That's good idea. I'll change them.
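The Plan.proto change above replaces the adjacency-list style LogicalPlan message with a LogicalNodeTree that simply repeats LogicalNode entries identified by sid. The flattening counterpart of the deserialization pattern is to assign sids in postorder while walking the tree, so that every child precedes its parent in the repeated list. The sketch below illustrates only that numbering scheme; PlanNode and PostorderFlattenExample are hypothetical and the output is plain strings rather than PlanProto generated builders.

import java.util.*;

// Hypothetical in-memory plan node used only for this illustration.
final class PlanNode {
  final String name;
  final List<PlanNode> children;
  PlanNode(String name, PlanNode... children) {
    this.name = name;
    this.children = Arrays.asList(children);
  }
}

public class PostorderFlattenExample {
  // Flatten a plan tree into a list ordered by ascending sid, assigning sids in
  // postorder so that every child precedes its parent in the flat list.
  static List<String> flatten(PlanNode root) {
    List<String> out = new ArrayList<>();
    assignSids(root, out, new int[]{0});
    return out;
  }

  private static int assignSids(PlanNode node, List<String> out, int[] nextSid) {
    List<Integer> childSids = new ArrayList<>();
    for (PlanNode child : node.children) {
      childSids.add(assignSids(child, out, nextSid));
    }
    int sid = nextSid[0]++;
    out.add("sid=" + sid + " " + node.name + " children=" + childSids);
    return sid;
  }

  public static void main(String[] args) {
    PlanNode plan = new PlanNode("ROOT",
        new PlanNode("JOIN", new PlanNode("SCAN lineitem"), new PlanNode("SCAN orders")));
    for (String line : flatten(plan)) {
      System.out.println(line);
    }
  }
}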

        Hide
        githubbot ASF GitHub Bot added a comment -

        Github user hyunsik commented on a diff in the pull request:

        https://github.com/apache/tajo/pull/322#discussion_r22341678

        — Diff: tajo-plan/src/main/java/org/apache/tajo/plan/serder/LogicalNodeDeserializer.java —
        @@ -0,0 +1,678 @@
        +/*
+ * Licensed to the Apache Software Foundation (ASF) under one
        + * or more contributor license agreements. See the NOTICE file
        + * distributed with this work for additional information
        + * regarding copyright ownership. The ASF licenses this file
        + * to you under the Apache License, Version 2.0 (the
        + * "License"); you may not use this file except in compliance
        + * with the License. You may obtain a copy of the License at
        + *
        + * http://www.apache.org/licenses/LICENSE-2.0
        + *
        + * Unless required by applicable law or agreed to in writing, software
        + * distributed under the License is distributed on an "AS IS" BASIS,
        + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
        + * See the License for the specific language governing permissions and
        + * limitations under the License.
        + */
        +
        +package org.apache.tajo.plan.serder;
        +
        +import com.google.common.collect.Lists;
        +import com.google.common.collect.Maps;
        +import org.apache.hadoop.fs.Path;
        +import org.apache.tajo.OverridableConf;
        +import org.apache.tajo.algebra.JoinType;
        +import org.apache.tajo.catalog.Column;
        +import org.apache.tajo.catalog.Schema;
        +import org.apache.tajo.catalog.SortSpec;
        +import org.apache.tajo.catalog.TableDesc;
        +import org.apache.tajo.catalog.partition.PartitionMethodDesc;
        +import org.apache.tajo.catalog.proto.CatalogProtos;
        +import org.apache.tajo.exception.UnimplementedException;
        +import org.apache.tajo.plan.Target;
        +import org.apache.tajo.plan.expr.AggregationFunctionCallEval;
        +import org.apache.tajo.plan.expr.EvalNode;
        +import org.apache.tajo.plan.expr.FieldEval;
        +import org.apache.tajo.plan.expr.WindowFunctionEval;
        +import org.apache.tajo.plan.logical.*;
        +import org.apache.tajo.util.KeyValueSet;
        +import org.apache.tajo.util.TUtil;
        +
        +import java.util.*;
        +
        +/**
        + * It deserializes a list of serialized logical nodes into a logical node tree.
        + */
        +public class LogicalNodeDeserializer {
        + private static final LogicalNodeDeserializer instance;
        +
+ static {
+   instance = new LogicalNodeDeserializer();
+ }

        +
        + /**
        + * Deserialize a list of nodes into a logical node tree.
        + *
        + * @param context QueryContext
        + * @param tree LogicalNodeTree which contains a list of serialized logical nodes.
        + * @return A logical node tree
        + */
        + public static LogicalNode deserialize(OverridableConf context, PlanProto.LogicalNodeTree tree) {
        + Map<Integer, LogicalNode> nodeMap = Maps.newHashMap();
        +
        + // sort serialized logical nodes in an ascending order of their sids
        + List<PlanProto.LogicalNode> nodeList = Lists.newArrayList(tree.getNodesList());
        + Collections.sort(nodeList, new Comparator<PlanProto.LogicalNode>() {
        + @Override
+ public int compare(PlanProto.LogicalNode o1, PlanProto.LogicalNode o2) {
+   return o1.getSid() - o2.getSid();
+ }

        + });
        +
        + LogicalNode current = null;
        +
        + // The sorted order is the same of a postfix traverse order.
        + // So, it sequentially transforms each serialized node into a LogicalNode instance in a postfix order of
        + // the original logical node tree.
        +
        + Iterator<PlanProto.LogicalNode> it = nodeList.iterator();
        + while (it.hasNext()) {
        + PlanProto.LogicalNode protoNode = it.next();
        +
+ switch (protoNode.getType()) {
+ case ROOT:
+   current = convertRoot(nodeMap, protoNode);
+   break;
+ case SET_SESSION:
+   current = convertSetSession(protoNode);
+   break;
+ case EXPRS:
+   current = convertEvalExpr(context, protoNode);
+   break;
+ case PROJECTION:
+   current = convertProjection(context, nodeMap, protoNode);
+   break;
+ case LIMIT:
+   current = convertLimit(nodeMap, protoNode);
+   break;
+ case SORT:
+   current = convertSort(nodeMap, protoNode);
+   break;
+ case WINDOW_AGG:
+   current = convertWindowAgg(context, nodeMap, protoNode);
+   break;
+ case HAVING:
+   current = convertHaving(context, nodeMap, protoNode);
+   break;
+ case GROUP_BY:
+   current = convertGroupby(context, nodeMap, protoNode);
+   break;
+ case DISTINCT_GROUP_BY:
+   current = convertDistinctGroupby(context, nodeMap, protoNode);
+   break;
+ case SELECTION:
+   current = convertFilter(context, nodeMap, protoNode);
+   break;
+ case JOIN:
+   current = convertJoin(context, nodeMap, protoNode);
+   break;
+ case TABLE_SUBQUERY:
+   current = convertTableSubQuery(context, nodeMap, protoNode);
+   break;
+ case UNION:
+   current = convertUnion(nodeMap, protoNode);
+   break;
+ case PARTITIONS_SCAN:
+   current = convertPartitionScan(context, protoNode);
+   break;
+ case SCAN:
+   current = convertScan(context, protoNode);
+   break;
+
+ case CREATE_TABLE:
+   current = convertCreateTable(nodeMap, protoNode);
+   break;
+ case INSERT:
+   current = convertInsert(nodeMap, protoNode);
+   break;
+ case DROP_TABLE:
+   current = convertDropTable(protoNode);
+   break;
+
+ case CREATE_DATABASE:
+   current = convertCreateDatabase(protoNode);
+   break;
+ case DROP_DATABASE:
+   current = convertDropDatabase(protoNode);
+   break;
+
+ case ALTER_TABLESPACE:
+   current = convertAlterTablespace(protoNode);
+   break;
+ case ALTER_TABLE:
+   current = convertAlterTable(protoNode);
+   break;
+ case TRUNCATE_TABLE:
+   current = convertTruncateTable(protoNode);
+   break;
+
+ default:
+   throw new RuntimeException("Unknown NodeType: " + protoNode.getType().name());
+ }

        +
        + nodeMap.put(protoNode.getSid(), current);
        + }
        +
        + return current;
        + }
        +
        + private static LogicalRootNode convertRoot(Map<Integer, LogicalNode> nodeMap,
        + PlanProto.LogicalNode protoNode) {
        + PlanProto.RootNode rootProto = protoNode.getRoot();
        +
        + LogicalRootNode root = new LogicalRootNode(protoNode.getPid());
        + root.setChild(nodeMap.get(rootProto.getChildId()));
+ if (protoNode.hasInSchema()) {
+   root.setInSchema(convertSchema(protoNode.getInSchema()));
+ }
+ if (protoNode.hasOutSchema()) {
+   root.setOutSchema(convertSchema(protoNode.getOutSchema()));
+ }

        +
        + return root;
        + }
        +
+ private static SetSessionNode convertSetSession(PlanProto.LogicalNode protoNode) {
+   PlanProto.SetSessionNode setSessionProto = protoNode.getSetSession();
+
+   SetSessionNode setSession = new SetSessionNode(protoNode.getPid());
+   setSession.init(setSessionProto.getName(), setSessionProto.hasValue() ? setSessionProto.getValue() : null);
+
+   return setSession;
+ }

        +
+ private static EvalExprNode convertEvalExpr(OverridableConf context, PlanProto.LogicalNode protoNode) {
+   PlanProto.EvalExprNode evalExprProto = protoNode.getExprEval();
+
+   EvalExprNode evalExpr = new EvalExprNode(protoNode.getPid());
+   evalExpr.setInSchema(convertSchema(protoNode.getInSchema()));
+   evalExpr.setTargets(convertTargets(context, evalExprProto.getTargetsList()));
+
+   return evalExpr;
+ }

        +
        + private static ProjectionNode convertProjection(OverridableConf context, Map<Integer, LogicalNode> nodeMap,
+                                                 PlanProto.LogicalNode protoNode) {
+   PlanProto.ProjectionNode projectionProto = protoNode.getProjection();
+
+   ProjectionNode projectionNode = new ProjectionNode(protoNode.getPid());
+   projectionNode.init(projectionProto.getDistinct(), convertTargets(context, projectionProto.getTargetsList()));
+   projectionNode.setChild(nodeMap.get(projectionProto.getChildId()));
+   projectionNode.setInSchema(convertSchema(protoNode.getInSchema()));
+   projectionNode.setOutSchema(convertSchema(protoNode.getOutSchema()));
+
+   return projectionNode;
+ }

        +
+ private static LimitNode convertLimit(Map<Integer, LogicalNode> nodeMap, PlanProto.LogicalNode protoNode) {
+   PlanProto.LimitNode limitProto = protoNode.getLimit();
+
+   LimitNode limitNode = new LimitNode(protoNode.getPid());
+   limitNode.setChild(nodeMap.get(limitProto.getChildId()));
+   limitNode.setInSchema(convertSchema(protoNode.getInSchema()));
+   limitNode.setOutSchema(convertSchema(protoNode.getOutSchema()));
+   limitNode.setFetchFirst(limitProto.getFetchFirstNum());
+
+   return limitNode;
+ }

        +
+ private static SortNode convertSort(Map<Integer, LogicalNode> nodeMap, PlanProto.LogicalNode protoNode) {
+   PlanProto.SortNode sortProto = protoNode.getSort();
+
+   SortNode sortNode = new SortNode(protoNode.getPid());
+   sortNode.setChild(nodeMap.get(sortProto.getChildId()));
+   sortNode.setInSchema(convertSchema(protoNode.getInSchema()));
+   sortNode.setOutSchema(convertSchema(protoNode.getOutSchema()));
+   sortNode.setSortSpecs(convertSortSpecs(sortProto.getSortSpecsList()));
+
+   return sortNode;
+ }

        +
        + private static HavingNode convertHaving(OverridableConf context, Map<Integer, LogicalNode> nodeMap,
+                                         PlanProto.LogicalNode protoNode) {
+   PlanProto.FilterNode havingProto = protoNode.getFilter();
+
+   HavingNode having = new HavingNode(protoNode.getPid());
+   having.setChild(nodeMap.get(havingProto.getChildId()));
+   having.setQual(EvalNodeDeserializer.deserialize(context, havingProto.getQual()));
+   having.setInSchema(convertSchema(protoNode.getInSchema()));
+   having.setOutSchema(convertSchema(protoNode.getOutSchema()));
+
+   return having;
+ }

        +
        + private static WindowAggNode convertWindowAgg(OverridableConf context, Map<Integer, LogicalNode> nodeMap,
        + PlanProto.LogicalNode protoNode) {
        + PlanProto.WindowAggNode windowAggProto = protoNode.getWindowAgg();
        +
        + WindowAggNode windowAgg = new WindowAggNode(protoNode.getPid());
        + windowAgg.setChild(nodeMap.get(windowAggProto.getChildId()));
        +
+ if (windowAggProto.getPartitionKeysCount() > 0) {
+   windowAgg.setPartitionKeys(convertColumns(windowAggProto.getPartitionKeysList()));
+ }
+
+ if (windowAggProto.getWindowFunctionsCount() > 0) {
+   windowAgg.setWindowFunctions(convertWindowFunccEvals(context, windowAggProto.getWindowFunctionsList()));
+ }
+
+ windowAgg.setDistinct(windowAggProto.getDistinct());
+
+ if (windowAggProto.getSortSpecsCount() > 0) {
+   windowAgg.setSortSpecs(convertSortSpecs(windowAggProto.getSortSpecsList()));
+ }
+
+ if (windowAggProto.getTargetsCount() > 0) {
+   windowAgg.setTargets(convertTargets(context, windowAggProto.getTargetsList()));
+ }

        +
        + windowAgg.setInSchema(convertSchema(protoNode.getInSchema()));
        + windowAgg.setOutSchema(convertSchema(protoNode.getOutSchema()));
        +
        + return windowAgg;
        + }
        +
        + private static GroupbyNode convertGroupby(OverridableConf context, Map<Integer, LogicalNode> nodeMap,
        + PlanProto.LogicalNode protoNode) {
        + PlanProto.GroupbyNode groupbyProto = protoNode.getGroupby();
        +
        + GroupbyNode groupby = new GroupbyNode(protoNode.getPid());
        + groupby.setChild(nodeMap.get(groupbyProto.getChildId()));
        + groupby.setDistinct(groupbyProto.getDistinct());
        +
+ if (groupbyProto.getGroupingKeysCount() > 0) {
+   groupby.setGroupingColumns(convertColumns(groupbyProto.getGroupingKeysList()));
+ }
+ if (groupbyProto.getAggFunctionsCount() > 0) {
+   groupby.setAggFunctions(convertAggFuncCallEvals(context, groupbyProto.getAggFunctionsList()));
+ }
+ if (groupbyProto.getTargetsCount() > 0) {
+   groupby.setTargets(convertTargets(context, groupbyProto.getTargetsList()));
+ }

        +
        + groupby.setInSchema(convertSchema(protoNode.getInSchema()));
        + groupby.setOutSchema(convertSchema(protoNode.getOutSchema()));
        +
        + return groupby;
        + }
        +
        + private static DistinctGroupbyNode convertDistinctGroupby(OverridableConf context, Map<Integer, LogicalNode> nodeMap,
        + PlanProto.LogicalNode protoNode) {
        + PlanProto.DistinctGroupbyNode distinctGroupbyProto = protoNode.getDistinctGroupby();
        +
        + DistinctGroupbyNode distinctGroupby = new DistinctGroupbyNode(protoNode.getPid());
        + distinctGroupby.setChild(nodeMap.get(distinctGroupbyProto.getChildId()));
        +
+ if (distinctGroupbyProto.hasGroupbyNode()) {
+   distinctGroupby.setGroupbyPlan(convertGroupby(context, nodeMap, distinctGroupbyProto.getGroupbyNode()));
+ }
+
+ if (distinctGroupbyProto.getSubPlansCount() > 0) {
+   List<GroupbyNode> subPlans = TUtil.newList();
+   for (int i = 0; i < distinctGroupbyProto.getSubPlansCount(); i++) {
+     subPlans.add(convertGroupby(context, nodeMap, distinctGroupbyProto.getSubPlans(i)));
+   }
+   distinctGroupby.setSubPlans(subPlans);
+ }
+
+ if (distinctGroupbyProto.getGroupingKeysCount() > 0) {
+   distinctGroupby.setGroupingColumns(convertColumns(distinctGroupbyProto.getGroupingKeysList()));
+ }
+ if (distinctGroupbyProto.getAggFunctionsCount() > 0) {
+   distinctGroupby.setAggFunctions(convertAggFuncCallEvals(context, distinctGroupbyProto.getAggFunctionsList()));
+ }
+ if (distinctGroupbyProto.getTargetsCount() > 0) {
+   distinctGroupby.setTargets(convertTargets(context, distinctGroupbyProto.getTargetsList()));
+ }
+ int [] resultColumnIds = new int[distinctGroupbyProto.getResultIdCount()];
+ for (int i = 0; i < distinctGroupbyProto.getResultIdCount(); i++) {
+   resultColumnIds[i] = distinctGroupbyProto.getResultId(i);
+ }

        + distinctGroupby.setResultColumnIds(resultColumnIds);
        +
        + // TODO - in distinct groupby, output and target are not matched to each other. It does not follow the convention.
        + distinctGroupby.setInSchema(convertSchema(protoNode.getInSchema()));
        + distinctGroupby.setOutSchema(convertSchema(protoNode.getOutSchema()));
        +
        + return distinctGroupby;
        + }
        +
        + private static JoinNode convertJoin(OverridableConf context, Map<Integer, LogicalNode> nodeMap,
        + PlanProto.LogicalNode protoNode) {
        + PlanProto.JoinNode joinProto = protoNode.getJoin();
        +
        + JoinNode join = new JoinNode(protoNode.getPid());
        + join.setLeftChild(nodeMap.get(joinProto.getLeftChildId()));
        + join.setRightChild(nodeMap.get(joinProto.getRightChildId()));
        + join.setJoinType(convertJoinType(joinProto.getJoinType()));
        + join.setInSchema(convertSchema(protoNode.getInSchema()));
        + join.setOutSchema(convertSchema(protoNode.getOutSchema()));
+ if (joinProto.hasJoinQual()) {
+   join.setJoinQual(EvalNodeDeserializer.deserialize(context, joinProto.getJoinQual()));
+ }
+ if (joinProto.getExistsTargets()) {
+   join.setTargets(convertTargets(context, joinProto.getTargetsList()));
+ }

        +
        + return join;
        + }
        +
        + private static SelectionNode convertFilter(OverridableConf context, Map<Integer, LogicalNode> nodeMap,
+                                            PlanProto.LogicalNode protoNode) {
+   PlanProto.FilterNode filterProto = protoNode.getFilter();
+
+   SelectionNode selection = new SelectionNode(protoNode.getPid());
+   selection.setInSchema(convertSchema(protoNode.getInSchema()));
+   selection.setOutSchema(convertSchema(protoNode.getOutSchema()));
+   selection.setChild(nodeMap.get(filterProto.getChildId()));
+   selection.setQual(EvalNodeDeserializer.deserialize(context, filterProto.getQual()));
+
+   return selection;
+ }

        +
+ private static UnionNode convertUnion(Map<Integer, LogicalNode> nodeMap, PlanProto.LogicalNode protoNode) {
+   PlanProto.UnionNode unionProto = protoNode.getUnion();
+
+   UnionNode union = new UnionNode(protoNode.getPid());
+   union.setInSchema(convertSchema(protoNode.getInSchema()));
+   union.setOutSchema(convertSchema(protoNode.getOutSchema()));
+   union.setLeftChild(nodeMap.get(unionProto.getLeftChildId()));
+   union.setRightChild(nodeMap.get(unionProto.getRightChildId()));
+
+   return union;
+ }

        +
        + private static ScanNode convertScan(OverridableConf context, PlanProto.LogicalNode protoNode)

        { + ScanNode scan = new ScanNode(protoNode.getPid()); + fillScanNode(context, protoNode, scan); + + return scan; + }

        +
        + private static void fillScanNode(OverridableConf context, PlanProto.LogicalNode protoNode, ScanNode scan) {
        + PlanProto.ScanNode scanProto = protoNode.getScan();
        + if (scanProto.hasAlias())

        { + scan.init(new TableDesc(scanProto.getTable()), scanProto.getAlias()); + }

        else

        { + scan.init(new TableDesc(scanProto.getTable())); + }

        +
        + if (scanProto.getExistTargets())

        { + scan.setTargets(convertTargets(context, scanProto.getTargetsList())); + }

        +
        + if (scanProto.hasQual())

        { + scan.setQual(EvalNodeDeserializer.deserialize(context, scanProto.getQual())); + }

        +
        + scan.setInSchema(convertSchema(protoNode.getInSchema()));
        + scan.setOutSchema(convertSchema(protoNode.getOutSchema()));
        + }
        +
        + private static PartitionedTableScanNode convertPartitionScan(OverridableConf context, PlanProto.LogicalNode protoNode) {
        + PartitionedTableScanNode partitionedScan = new PartitionedTableScanNode(protoNode.getPid());
        + fillScanNode(context, protoNode, partitionedScan);
        +
        + PlanProto.PartitionScanSpec partitionScanProto = protoNode.getPartitionScan();
        + Path [] paths = new Path[partitionScanProto.getPathsCount()];
        + for (int i = 0; i < partitionScanProto.getPathsCount(); i++)

        { + paths[i] = new Path(partitionScanProto.getPaths(i)); + }

        + partitionedScan.setInputPaths(paths);
        + return partitionedScan;
        + }
        +
        + private static TableSubQueryNode convertTableSubQuery(OverridableConf context,
        + Map<Integer, LogicalNode> nodeMap,
        + PlanProto.LogicalNode protoNode) {
        + PlanProto.TableSubQueryNode proto = protoNode.getTableSubQuery();
        +
        + TableSubQueryNode tableSubQuery = new TableSubQueryNode(protoNode.getPid());
        + tableSubQuery.init(proto.getTableName(), nodeMap.get(proto.getChildId()));
        + tableSubQuery.setInSchema(convertSchema(protoNode.getInSchema()));
        + if (proto.getTargetsCount() > 0)

        { + tableSubQuery.setTargets(convertTargets(context, proto.getTargetsList())); + }

        +
        + return tableSubQuery;
        + }
        +
        + private static CreateTableNode convertCreateTable(Map<Integer, LogicalNode> nodeMap,
        + PlanProto.LogicalNode protoNode) {
        + PlanProto.PersistentStoreNode persistentStoreProto = protoNode.getPersistentStore();
        + PlanProto.StoreTableNodeSpec storeTableNodeSpec = protoNode.getStoreTable();
        + PlanProto.CreateTableNodeSpec createTableNodeSpec = protoNode.getCreateTable();
        +
        + CreateTableNode createTable = new CreateTableNode(protoNode.getPid());
        + if (protoNode.hasInSchema())

        { + createTable.setInSchema(convertSchema(protoNode.getInSchema())); + }

        + if (protoNode.hasOutSchema())

        { + createTable.setOutSchema(convertSchema(protoNode.getOutSchema())); + }

        + createTable.setChild(nodeMap.get(persistentStoreProto.getChildId()));
        + createTable.setStorageType(persistentStoreProto.getStorageType());
        + createTable.setOptions(new KeyValueSet(persistentStoreProto.getTableProperties()));
        +
        + createTable.setTableName(storeTableNodeSpec.getTableName());
        + if (storeTableNodeSpec.hasPartitionMethod())

        { + createTable.setPartitionMethod(new PartitionMethodDesc(storeTableNodeSpec.getPartitionMethod())); + }

        +
        + createTable.setTableSchema(convertSchema(createTableNodeSpec.getSchema()));
        + createTable.setExternal(createTableNodeSpec.getExternal());
        + if (createTableNodeSpec.getExternal() && createTableNodeSpec.hasPath())

        { + createTable.setPath(new Path(createTableNodeSpec.getPath())); + }

        + createTable.setIfNotExists(createTableNodeSpec.getIfNotExists());
        +
        + return createTable;
        + }
        +
        + private static InsertNode convertInsert(Map<Integer, LogicalNode> nodeMap,
        + PlanProto.LogicalNode protoNode) {
        + PlanProto.PersistentStoreNode persistentStoreProto = protoNode.getPersistentStore();
        + PlanProto.StoreTableNodeSpec storeTableNodeSpec = protoNode.getStoreTable();
        + PlanProto.InsertNodeSpec insertNodeSpec = protoNode.getInsert();
        +
        + InsertNode insertNode = new InsertNode(protoNode.getPid());
        + if (protoNode.hasInSchema())

        { + insertNode.setInSchema(convertSchema(protoNode.getInSchema())); + }

        + if (protoNode.hasOutSchema())

        { + insertNode.setOutSchema(convertSchema(protoNode.getOutSchema())); + }

        + insertNode.setChild(nodeMap.get(persistentStoreProto.getChildId()));
        + insertNode.setStorageType(persistentStoreProto.getStorageType());
        + insertNode.setOptions(new KeyValueSet(persistentStoreProto.getTableProperties()));
        +
        + if (storeTableNodeSpec.hasTableName())

        { + insertNode.setTableName(storeTableNodeSpec.getTableName()); + }

        + if (storeTableNodeSpec.hasPartitionMethod())

        { + insertNode.setPartitionMethod(new PartitionMethodDesc(storeTableNodeSpec.getPartitionMethod())); + }

        +
        + insertNode.setOverwrite(insertNodeSpec.getOverwrite());
        + insertNode.setTableSchema(convertSchema(insertNodeSpec.getTableSchema()));
        + if (insertNodeSpec.hasTargetSchema())

        { + insertNode.setTargetSchema(convertSchema(insertNodeSpec.getTargetSchema())); + }

        + if (insertNodeSpec.hasProjectedSchema())

        { + insertNode.setProjectedSchema(convertSchema(insertNodeSpec.getProjectedSchema())); + }

        + if (insertNodeSpec.hasPath())

        { + insertNode.setPath(new Path(insertNodeSpec.getPath())); + }

        +
        + return insertNode;
        + }
        +
        + private static DropTableNode convertDropTable(PlanProto.LogicalNode protoNode)

        { + DropTableNode dropTable = new DropTableNode(protoNode.getPid()); + + PlanProto.DropTableNode dropTableProto = protoNode.getDropTable(); + dropTable.init(dropTableProto.getTableName(), dropTableProto.getIfExists(), dropTableProto.getPurge()); + + return dropTable; + }

        +
        + private static CreateDatabaseNode convertCreateDatabase(PlanProto.LogicalNode protoNode)

        { + CreateDatabaseNode createDatabase = new CreateDatabaseNode(protoNode.getPid()); + + PlanProto.CreateDatabaseNode createDatabaseProto = protoNode.getCreateDatabase(); + createDatabase.init(createDatabaseProto.getDbName(), createDatabaseProto.getIfNotExists()); + + return createDatabase; + }

        +
        + private static DropDatabaseNode convertDropDatabase(PlanProto.LogicalNode protoNode)

        { + DropDatabaseNode dropDatabase = new DropDatabaseNode(protoNode.getPid()); + + PlanProto.DropDatabaseNode dropDatabaseProto = protoNode.getDropDatabase(); + dropDatabase.init(dropDatabaseProto.getDbName(), dropDatabaseProto.getIfExists()); + + return dropDatabase; + }

        +
        + private static AlterTablespaceNode convertAlterTablespace(PlanProto.LogicalNode protoNode) {
        + AlterTablespaceNode alterTablespace = new AlterTablespaceNode(protoNode.getPid());
        +
        + PlanProto.AlterTablespaceNode alterTablespaceProto = protoNode.getAlterTablespace();
        + alterTablespace.setTablespaceName(alterTablespaceProto.getTableSpaceName());
        +
        + switch (alterTablespaceProto.getSetType())

        { + case LOCATION: + alterTablespace.setLocation(alterTablespaceProto.getSetLocation().getLocation()); + break; + default: + throw new UnimplementedException("Unknown SET type in ALTER TABLE: " + alterTablespaceProto.getSetType().name()); + }

        +
        + return alterTablespace;
        + }
        +
        + private static AlterTableNode convertAlterTable(PlanProto.LogicalNode protoNode) {
        + AlterTableNode alterTable = new AlterTableNode(protoNode.getPid());
        +
        + PlanProto.AlterTableNode alterTableProto = protoNode.getAlterTable();
        + alterTable.setTableName(alterTableProto.getTableName());
        +
        + switch (alterTableProto.getSetType())

        { + case RENAME_TABLE: + alterTable.setNewTableName(alterTableProto.getRenameTable().getNewName()); + break; + case ADD_COLUMN: + alterTable.setAddNewColumn(new Column(alterTableProto.getAddColumn().getAddColumn())); + break; + case RENAME_COLUMN: + alterTable.setColumnName(alterTableProto.getRenameColumn().getOldName()); + alterTable.setNewColumnName(alterTableProto.getRenameColumn().getNewName()); + break; + default: + throw new UnimplementedException("Unknown SET type in ALTER TABLE: " + alterTableProto.getSetType().name()); + }

        +
        + return alterTable;
        + }
        +
        + private static TruncateTableNode convertTruncateTable(PlanProto.LogicalNode protoNode)

        { + TruncateTableNode truncateTable = new TruncateTableNode(protoNode.getPid()); + + PlanProto.TruncateTableNode truncateTableProto = protoNode.getTruncateTableNode(); + truncateTable.setTableNames(truncateTableProto.getTableNamesList()); + + return truncateTable; + }

        +
        + private static AggregationFunctionCallEval [] convertAggFuncCallEvals(OverridableConf context,
        + List<PlanProto.EvalNodeTree> evalTrees) {
        + AggregationFunctionCallEval [] aggFuncs = new AggregationFunctionCallEval[evalTrees.size()];
        + for (int i = 0; i < aggFuncs.length; i++)

        { + aggFuncs[i] = (AggregationFunctionCallEval) EvalNodeDeserializer.deserialize(context, evalTrees.get(i)); + }

        + return aggFuncs;
        + }
        +
        + private static WindowFunctionEval[] convertWindowFunccEvals(OverridableConf context,
        + List<PlanProto.EvalNodeTree> evalTrees) {
        + WindowFunctionEval[] winFuncEvals = new WindowFunctionEval[evalTrees.size()];
        + for (int i = 0; i < winFuncEvals.length; i++)

        { + winFuncEvals[i] = (WindowFunctionEval) EvalNodeDeserializer.deserialize(context, evalTrees.get(i)); + }

        + return winFuncEvals;
        + }
        +
        + public static Schema convertSchema(CatalogProtos.SchemaProto proto)

        { + return new Schema(proto); + }

        +
        + public static Column[] convertColumns(List<CatalogProtos.ColumnProto> columnProtos) {
        + Column [] columns = new Column[columnProtos.size()];
        + for (int i = 0; i < columns.length; i++)

        { + columns[i] = new Column(columnProtos.get(i)); + }

        + return columns;
        + }
        +
        + public static Target[] convertTargets(OverridableConf context, List<PlanProto.Target> targetsProto) {
        + Target [] targets = new Target[targetsProto.size()];
        + for (int i = 0; i < targets.length; i++) {
        + PlanProto.Target targetProto = targetsProto.get;
        + EvalNode evalNode = EvalNodeDeserializer.deserialize(context, targetProto.getExpr());
        + if (targetProto.hasAlias())

        { + targets[i] = new Target(evalNode, targetProto.getAlias()); + }

        else

        { + targets[i] = new Target((FieldEval) evalNode); + }

        + }
        + return targets;
        + }
        +
        + public static SortSpec[] convertSortSpecs(List<CatalogProtos.SortSpecProto> sortSpecProtos) {
        + SortSpec[] sortSpecs = new SortSpec[sortSpecProtos.size()];
        + int i = 0;
        + for (CatalogProtos.SortSpecProto proto : sortSpecProtos)

        { + sortSpecs[i++] = new SortSpec(proto); + }

        + return sortSpecs;
        + }
        +
        + public static JoinType convertJoinType(PlanProto.JoinType type) {
        + switch (type) {
        + case CROSS_JOIN:
        — End diff –

        That's a good idea. Actually, I plan to use the protobuf-based serialized plans in a C++ implementation, and some enum constant names are not available in C++. Please see this protobuf issue (https://code.google.com/p/protobuf/issues/detail?id=515). I think it would be good to keep the separation for a while.

        So, for now, it would be better to keep them separate. Later, we can switch to a single enum type.
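
        For readers following this thread, the C++ constraint comes from the fact that protoc applies C++ scoping rules to enum constants: every constant shares the scope that encloses its enum, so two enums in the same .proto package cannot reuse a constant name, even though the equivalent Java enums could. A minimal, purely illustrative sketch (these enums are not part of Plan.proto):

        // Hypothetical example only. Both constants land in the same enclosing
        // scope, so protoc rejects this with a "FULL_OUTER is already defined"
        // error; this is the kind of name restriction the comment refers to.
        enum AlgebraJoinType {
          FULL_OUTER = 0;
        }
        enum PlanJoinType {
          FULL_OUTER = 0;
        }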

        Hide
        githubbot ASF GitHub Bot added a comment -

        Github user jihoonson commented on the pull request:

        https://github.com/apache/tajo/pull/322#issuecomment-68336860

        Great work!
        This patch contains simple but effective changes.
        I left some comments.
        Please consider them.

        Hide
        githubbot ASF GitHub Bot added a comment -

        Github user jihoonson commented on a diff in the pull request:

        https://github.com/apache/tajo/pull/322#discussion_r22341017

        — Diff: tajo-plan/src/main/proto/Plan.proto —
        @@ -26,58 +26,280 @@ import "CatalogProtos.proto";
        import "DataTypes.proto";

         enum NodeType {
        -  BST_INDEX_SCAN = 0;
        -  EXCEPT = 1;
        +  SET_SESSION = 0;
        +
        +  ROOT = 1;
           EXPRS = 2;
        -  DISTINCT_GROUP_BY = 3;
        -  GROUP_BY = 4;
        -  HAVING = 5;
        -  JOIN = 6;
        -  INSERT = 7;
        -  INTERSECT = 8;
        -  LIMIT = 9;
        -  PARTITIONS_SCAN = 10;
        -  PROJECTION = 11;
        -  ROOT = 12;
        -  SCAN = 13;
        -  SELECTION = 14;
        -  SORT = 15;
        -  STORE = 16;
        -  TABLE_SUBQUERY = 17;
        -  UNION = 18;
        -  WINDOW_AGG = 19;
        -
        -  CREATE_DATABASE = 20;
        -  DROP_DATABASE = 21;
        -  CREATE_TABLE = 22;
        -  DROP_TABLE = 23;
        -  ALTER_TABLESPACE = 24;
        -  ALTER_TABLE = 25;
        -  TRUNCATE_TABLE = 26;
        -}
        -
        -message LogicalPlan {
        -  required KeyValueSetProto adjacentList = 1;
        +  PROJECTION = 3;
        +  LIMIT = 4;
        +  WINDOW_AGG = 5;
        +  SORT = 6;
        +  HAVING = 7;
        +  GROUP_BY = 8;
        +  DISTINCT_GROUP_BY = 9;
        +  SELECTION = 10;
        +  JOIN = 11;
        +  UNION = 12;
        +  INTERSECT = 13;
        +  EXCEPT = 14;
        +  TABLE_SUBQUERY = 15;
        +  SCAN = 16;
        +  PARTITIONS_SCAN = 17;
        +  BST_INDEX_SCAN = 18;
        +  STORE = 19;
        +  INSERT = 20;
        +
        +  CREATE_DATABASE = 21;
        +  DROP_DATABASE = 22;
        +  CREATE_TABLE = 23;
        +  DROP_TABLE = 24;
        +  ALTER_TABLESPACE = 25;
        +  ALTER_TABLE = 26;
        +  TRUNCATE_TABLE = 27;
         }

        -message LogicalNode {
        -  required int32 pid = 1;
        -  required NodeType type = 2;
        -  required SchemaProto in_schema = 3;
        -  required SchemaProto out_schema = 4;
        -  required NodeSpec spec = 5;
        +message LogicalNodeTree {
        +  repeated LogicalNode nodes = 1;
         }

        -message NodeSpec {
        -  optional ScanNode scan = 1;
        +message LogicalNode {
        +  required int32 sid = 1;
        — End diff –

        The names ```sid``` and ```pid``` make it hard to guess their meanings.
        Would you add some comments?
        It would be even better if you could change these names to something more expressive.
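
        One possible way to address this, sketched here only for illustration (the comments and the pid field number are assumptions, not taken from the patch), is to document both identifiers directly in Plan.proto:

        message LogicalNode {
          // Serialization id: the position of this node in the serialized node list.
          // Nodes are written bottom-up, so sorting by sid reproduces a post-order walk of the plan tree.
          required int32 sid = 1;
          // Plan node id assigned by the logical planner to the original LogicalNode.
          required int32 pid = 2;
          ...
        }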

        Hide
        githubbot ASF GitHub Bot added a comment -

        Github user jihoonson commented on a diff in the pull request:

        https://github.com/apache/tajo/pull/322#discussion_r22340959

        — Diff: tajo-plan/src/main/java/org/apache/tajo/plan/serder/LogicalNodeDeserializer.java —
        @@ -0,0 +1,678 @@
        +/*
        + * Licensed to the Apache Software Foundation (ASF) under one
        + * or more contributor license agreements. See the NOTICE file
        + * distributed with this work for additional information
        + * regarding copyright ownership. The ASF licenses this file
        + * to you under the Apache License, Version 2.0 (the
        + * "License"); you may not use this file except in compliance
        + * with the License. You may obtain a copy of the License at
        + *
        + * http://www.apache.org/licenses/LICENSE-2.0
        + *
        + * Unless required by applicable law or agreed to in writing, software
        + * distributed under the License is distributed on an "AS IS" BASIS,
        + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
        + * See the License for the specific language governing permissions and
        + * limitations under the License.
        + */
        +
        +package org.apache.tajo.plan.serder;
        +
        +import com.google.common.collect.Lists;
        +import com.google.common.collect.Maps;
        +import org.apache.hadoop.fs.Path;
        +import org.apache.tajo.OverridableConf;
        +import org.apache.tajo.algebra.JoinType;
        +import org.apache.tajo.catalog.Column;
        +import org.apache.tajo.catalog.Schema;
        +import org.apache.tajo.catalog.SortSpec;
        +import org.apache.tajo.catalog.TableDesc;
        +import org.apache.tajo.catalog.partition.PartitionMethodDesc;
        +import org.apache.tajo.catalog.proto.CatalogProtos;
        +import org.apache.tajo.exception.UnimplementedException;
        +import org.apache.tajo.plan.Target;
        +import org.apache.tajo.plan.expr.AggregationFunctionCallEval;
        +import org.apache.tajo.plan.expr.EvalNode;
        +import org.apache.tajo.plan.expr.FieldEval;
        +import org.apache.tajo.plan.expr.WindowFunctionEval;
        +import org.apache.tajo.plan.logical.*;
        +import org.apache.tajo.util.KeyValueSet;
        +import org.apache.tajo.util.TUtil;
        +
        +import java.util.*;
        +
        +/**
        + * It deserializes a list of serialized logical nodes into a logical node tree.
        + */
        +public class LogicalNodeDeserializer {
        + private static final LogicalNodeDeserializer instance;
        +
        +  static {
        +    instance = new LogicalNodeDeserializer();
        +  }
        +
        +  /**
        +   * Deserialize a list of nodes into a logical node tree.
        +   *
        +   * @param context QueryContext
        +   * @param tree LogicalNodeTree which contains a list of serialized logical nodes.
        +   * @return A logical node tree
        +   */
        +  public static LogicalNode deserialize(OverridableConf context, PlanProto.LogicalNodeTree tree) {
        +    Map<Integer, LogicalNode> nodeMap = Maps.newHashMap();
        +
        +    // sort serialized logical nodes in an ascending order of their sids
        +    List<PlanProto.LogicalNode> nodeList = Lists.newArrayList(tree.getNodesList());
        +    Collections.sort(nodeList, new Comparator<PlanProto.LogicalNode>() {
        +      @Override
        +      public int compare(PlanProto.LogicalNode o1, PlanProto.LogicalNode o2) {
        +        return o1.getSid() - o2.getSid();
        +      }
        +    });
        +
        +    LogicalNode current = null;
        +
        +    // The sorted order is the same of a postfix traverse order.
        +    // So, it sequentially transforms each serialized node into a LogicalNode instance in a postfix order of
        +    // the original logical node tree.
        +
        +    Iterator<PlanProto.LogicalNode> it = nodeList.iterator();
        +    while (it.hasNext()) {
        +      PlanProto.LogicalNode protoNode = it.next();
        +
        +      switch (protoNode.getType()) {
        +      case ROOT:
        +        current = convertRoot(nodeMap, protoNode);
        +        break;
        +      case SET_SESSION:
        +        current = convertSetSession(protoNode);
        +        break;
        +      case EXPRS:
        +        current = convertEvalExpr(context, protoNode);
        +        break;
        +      case PROJECTION:
        +        current = convertProjection(context, nodeMap, protoNode);
        +        break;
        +      case LIMIT:
        +        current = convertLimit(nodeMap, protoNode);
        +        break;
        +      case SORT:
        +        current = convertSort(nodeMap, protoNode);
        +        break;
        +      case WINDOW_AGG:
        +        current = convertWindowAgg(context, nodeMap, protoNode);
        +        break;
        +      case HAVING:
        +        current = convertHaving(context, nodeMap, protoNode);
        +        break;
        +      case GROUP_BY:
        +        current = convertGroupby(context, nodeMap, protoNode);
        +        break;
        +      case DISTINCT_GROUP_BY:
        +        current = convertDistinctGroupby(context, nodeMap, protoNode);
        +        break;
        +      case SELECTION:
        +        current = convertFilter(context, nodeMap, protoNode);
        +        break;
        +      case JOIN:
        +        current = convertJoin(context, nodeMap, protoNode);
        +        break;
        +      case TABLE_SUBQUERY:
        +        current = convertTableSubQuery(context, nodeMap, protoNode);
        +        break;
        +      case UNION:
        +        current = convertUnion(nodeMap, protoNode);
        +        break;
        +      case PARTITIONS_SCAN:
        +        current = convertPartitionScan(context, protoNode);
        +        break;
        +      case SCAN:
        +        current = convertScan(context, protoNode);
        +        break;
        +
        +      case CREATE_TABLE:
        +        current = convertCreateTable(nodeMap, protoNode);
        +        break;
        +      case INSERT:
        +        current = convertInsert(nodeMap, protoNode);
        +        break;
        +      case DROP_TABLE:
        +        current = convertDropTable(protoNode);
        +        break;
        +
        +      case CREATE_DATABASE:
        +        current = convertCreateDatabase(protoNode);
        +        break;
        +      case DROP_DATABASE:
        +        current = convertDropDatabase(protoNode);
        +        break;
        +
        +      case ALTER_TABLESPACE:
        +        current = convertAlterTablespace(protoNode);
        +        break;
        +      case ALTER_TABLE:
        +        current = convertAlterTable(protoNode);
        +        break;
        +      case TRUNCATE_TABLE:
        +        current = convertTruncateTable(protoNode);
        +        break;
        +
        +      default:
        +        throw new RuntimeException("Unknown NodeType: " + protoNode.getType().name());
        +      }
        +
        +      nodeMap.put(protoNode.getSid(), current);
        +    }
        +
        +    return current;
        +  }
        +
        +  private static LogicalRootNode convertRoot(Map<Integer, LogicalNode> nodeMap,
        +                                             PlanProto.LogicalNode protoNode) {
        +    PlanProto.RootNode rootProto = protoNode.getRoot();
        +
        +    LogicalRootNode root = new LogicalRootNode(protoNode.getPid());
        +    root.setChild(nodeMap.get(rootProto.getChildId()));
        +    if (protoNode.hasInSchema()) {
        +      root.setInSchema(convertSchema(protoNode.getInSchema()));
        +    }
        +    if (protoNode.hasOutSchema()) {
        +      root.setOutSchema(convertSchema(protoNode.getOutSchema()));
        +    }
        +
        +    return root;
        +  }
        +
        +  private static SetSessionNode convertSetSession(PlanProto.LogicalNode protoNode) {
        +    PlanProto.SetSessionNode setSessionProto = protoNode.getSetSession();
        +
        +    SetSessionNode setSession = new SetSessionNode(protoNode.getPid());
        +    setSession.init(setSessionProto.getName(), setSessionProto.hasValue() ? setSessionProto.getValue() : null);
        +
        +    return setSession;
        +  }
        +
        +  private static EvalExprNode convertEvalExpr(OverridableConf context, PlanProto.LogicalNode protoNode) {
        +    PlanProto.EvalExprNode evalExprProto = protoNode.getExprEval();
        +
        +    EvalExprNode evalExpr = new EvalExprNode(protoNode.getPid());
        +    evalExpr.setInSchema(convertSchema(protoNode.getInSchema()));
        +    evalExpr.setTargets(convertTargets(context, evalExprProto.getTargetsList()));
        +
        +    return evalExpr;
        +  }
        +
        +  private static ProjectionNode convertProjection(OverridableConf context, Map<Integer, LogicalNode> nodeMap,
        +                                                  PlanProto.LogicalNode protoNode) {
        +    PlanProto.ProjectionNode projectionProto = protoNode.getProjection();
        +
        +    ProjectionNode projectionNode = new ProjectionNode(protoNode.getPid());
        +    projectionNode.init(projectionProto.getDistinct(), convertTargets(context, projectionProto.getTargetsList()));
        +    projectionNode.setChild(nodeMap.get(projectionProto.getChildId()));
        +    projectionNode.setInSchema(convertSchema(protoNode.getInSchema()));
        +    projectionNode.setOutSchema(convertSchema(protoNode.getOutSchema()));
        +
        +    return projectionNode;
        +  }
        +
        +  private static LimitNode convertLimit(Map<Integer, LogicalNode> nodeMap, PlanProto.LogicalNode protoNode) {
        +    PlanProto.LimitNode limitProto = protoNode.getLimit();
        +
        +    LimitNode limitNode = new LimitNode(protoNode.getPid());
        +    limitNode.setChild(nodeMap.get(limitProto.getChildId()));
        +    limitNode.setInSchema(convertSchema(protoNode.getInSchema()));
        +    limitNode.setOutSchema(convertSchema(protoNode.getOutSchema()));
        +    limitNode.setFetchFirst(limitProto.getFetchFirstNum());
        +
        +    return limitNode;
        +  }
        +
        +  private static SortNode convertSort(Map<Integer, LogicalNode> nodeMap, PlanProto.LogicalNode protoNode) {
        +    PlanProto.SortNode sortProto = protoNode.getSort();
        +
        +    SortNode sortNode = new SortNode(protoNode.getPid());
        +    sortNode.setChild(nodeMap.get(sortProto.getChildId()));
        +    sortNode.setInSchema(convertSchema(protoNode.getInSchema()));
        +    sortNode.setOutSchema(convertSchema(protoNode.getOutSchema()));
        +    sortNode.setSortSpecs(convertSortSpecs(sortProto.getSortSpecsList()));
        +
        +    return sortNode;
        +  }
        +
        +  private static HavingNode convertHaving(OverridableConf context, Map<Integer, LogicalNode> nodeMap,
        +                                          PlanProto.LogicalNode protoNode) {
        +    PlanProto.FilterNode havingProto = protoNode.getFilter();
        +
        +    HavingNode having = new HavingNode(protoNode.getPid());
        +    having.setChild(nodeMap.get(havingProto.getChildId()));
        +    having.setQual(EvalNodeDeserializer.deserialize(context, havingProto.getQual()));
        +    having.setInSchema(convertSchema(protoNode.getInSchema()));
        +    having.setOutSchema(convertSchema(protoNode.getOutSchema()));
        +
        +    return having;
        +  }
        +
        +  private static WindowAggNode convertWindowAgg(OverridableConf context, Map<Integer, LogicalNode> nodeMap,
        +                                                PlanProto.LogicalNode protoNode) {
        +    PlanProto.WindowAggNode windowAggProto = protoNode.getWindowAgg();
        +
        +    WindowAggNode windowAgg = new WindowAggNode(protoNode.getPid());
        +    windowAgg.setChild(nodeMap.get(windowAggProto.getChildId()));
        +
        +    if (windowAggProto.getPartitionKeysCount() > 0) {
        +      windowAgg.setPartitionKeys(convertColumns(windowAggProto.getPartitionKeysList()));
        +    }
        +
        +    if (windowAggProto.getWindowFunctionsCount() > 0) {
        +      windowAgg.setWindowFunctions(convertWindowFunccEvals(context, windowAggProto.getWindowFunctionsList()));
        +    }
        +
        +    windowAgg.setDistinct(windowAggProto.getDistinct());
        +
        +    if (windowAggProto.getSortSpecsCount() > 0) {
        +      windowAgg.setSortSpecs(convertSortSpecs(windowAggProto.getSortSpecsList()));
        +    }
        +
        +    if (windowAggProto.getTargetsCount() > 0) {
        +      windowAgg.setTargets(convertTargets(context, windowAggProto.getTargetsList()));
        +    }
        +
        +    windowAgg.setInSchema(convertSchema(protoNode.getInSchema()));
        +    windowAgg.setOutSchema(convertSchema(protoNode.getOutSchema()));
        +
        +    return windowAgg;
        +  }
        +
        +  private static GroupbyNode convertGroupby(OverridableConf context, Map<Integer, LogicalNode> nodeMap,
        +                                            PlanProto.LogicalNode protoNode) {
        +    PlanProto.GroupbyNode groupbyProto = protoNode.getGroupby();
        +
        +    GroupbyNode groupby = new GroupbyNode(protoNode.getPid());
        +    groupby.setChild(nodeMap.get(groupbyProto.getChildId()));
        +    groupby.setDistinct(groupbyProto.getDistinct());
        +
        +    if (groupbyProto.getGroupingKeysCount() > 0) {
        +      groupby.setGroupingColumns(convertColumns(groupbyProto.getGroupingKeysList()));
        +    }
        +    if (groupbyProto.getAggFunctionsCount() > 0) {
        +      groupby.setAggFunctions(convertAggFuncCallEvals(context, groupbyProto.getAggFunctionsList()));
        +    }
        +    if (groupbyProto.getTargetsCount() > 0) {
        +      groupby.setTargets(convertTargets(context, groupbyProto.getTargetsList()));
        +    }
        +
        +    groupby.setInSchema(convertSchema(protoNode.getInSchema()));
        +    groupby.setOutSchema(convertSchema(protoNode.getOutSchema()));
        +
        +    return groupby;
        +  }
        +
        +  private static DistinctGroupbyNode convertDistinctGroupby(OverridableConf context, Map<Integer, LogicalNode> nodeMap,
        +                                                            PlanProto.LogicalNode protoNode) {
        +    PlanProto.DistinctGroupbyNode distinctGroupbyProto = protoNode.getDistinctGroupby();
        +
        +    DistinctGroupbyNode distinctGroupby = new DistinctGroupbyNode(protoNode.getPid());
        +    distinctGroupby.setChild(nodeMap.get(distinctGroupbyProto.getChildId()));
        +
        +    if (distinctGroupbyProto.hasGroupbyNode()) {
        +      distinctGroupby.setGroupbyPlan(convertGroupby(context, nodeMap, distinctGroupbyProto.getGroupbyNode()));
        +    }
        +
        +    if (distinctGroupbyProto.getSubPlansCount() > 0) {
        +      List<GroupbyNode> subPlans = TUtil.newList();
        +      for (int i = 0; i < distinctGroupbyProto.getSubPlansCount(); i++) {
        +        subPlans.add(convertGroupby(context, nodeMap, distinctGroupbyProto.getSubPlans(i)));
        +      }
        +      distinctGroupby.setSubPlans(subPlans);
        +    }
        +
        +    if (distinctGroupbyProto.getGroupingKeysCount() > 0) {
        +      distinctGroupby.setGroupingColumns(convertColumns(distinctGroupbyProto.getGroupingKeysList()));
        +    }
        +    if (distinctGroupbyProto.getAggFunctionsCount() > 0) {
        +      distinctGroupby.setAggFunctions(convertAggFuncCallEvals(context, distinctGroupbyProto.getAggFunctionsList()));
        +    }
        +    if (distinctGroupbyProto.getTargetsCount() > 0) {
        +      distinctGroupby.setTargets(convertTargets(context, distinctGroupbyProto.getTargetsList()));
        +    }
        +    int [] resultColumnIds = new int[distinctGroupbyProto.getResultIdCount()];
        +    for (int i = 0; i < distinctGroupbyProto.getResultIdCount(); i++) {
        +      resultColumnIds[i] = distinctGroupbyProto.getResultId(i);
        +    }
        +    distinctGroupby.setResultColumnIds(resultColumnIds);
        +
        +    // TODO - in distinct groupby, output and target are not matched to each other. It does not follow the convention.
        +    distinctGroupby.setInSchema(convertSchema(protoNode.getInSchema()));
        +    distinctGroupby.setOutSchema(convertSchema(protoNode.getOutSchema()));
        +
        +    return distinctGroupby;
        +  }
        +
        +  private static JoinNode convertJoin(OverridableConf context, Map<Integer, LogicalNode> nodeMap,
        +                                      PlanProto.LogicalNode protoNode) {
        +    PlanProto.JoinNode joinProto = protoNode.getJoin();
        +
        +    JoinNode join = new JoinNode(protoNode.getPid());
        +    join.setLeftChild(nodeMap.get(joinProto.getLeftChildId()));
        +    join.setRightChild(nodeMap.get(joinProto.getRightChildId()));
        +    join.setJoinType(convertJoinType(joinProto.getJoinType()));
        +    join.setInSchema(convertSchema(protoNode.getInSchema()));
        +    join.setOutSchema(convertSchema(protoNode.getOutSchema()));
        +    if (joinProto.hasJoinQual()) {
        +      join.setJoinQual(EvalNodeDeserializer.deserialize(context, joinProto.getJoinQual()));
        +    }
        +    if (joinProto.getExistsTargets()) {
        +      join.setTargets(convertTargets(context, joinProto.getTargetsList()));
        +    }
        +
        +    return join;
        +  }
        +
        +  private static SelectionNode convertFilter(OverridableConf context, Map<Integer, LogicalNode> nodeMap,
        +                                             PlanProto.LogicalNode protoNode) {
        +    PlanProto.FilterNode filterProto = protoNode.getFilter();
        +
        +    SelectionNode selection = new SelectionNode(protoNode.getPid());
        +    selection.setInSchema(convertSchema(protoNode.getInSchema()));
        +    selection.setOutSchema(convertSchema(protoNode.getOutSchema()));
        +    selection.setChild(nodeMap.get(filterProto.getChildId()));
        +    selection.setQual(EvalNodeDeserializer.deserialize(context, filterProto.getQual()));
        +
        +    return selection;
        +  }
        +
        +  private static UnionNode convertUnion(Map<Integer, LogicalNode> nodeMap, PlanProto.LogicalNode protoNode) {
        +    PlanProto.UnionNode unionProto = protoNode.getUnion();
        +
        +    UnionNode union = new UnionNode(protoNode.getPid());
        +    union.setInSchema(convertSchema(protoNode.getInSchema()));
        +    union.setOutSchema(convertSchema(protoNode.getOutSchema()));
        +    union.setLeftChild(nodeMap.get(unionProto.getLeftChildId()));
        +    union.setRightChild(nodeMap.get(unionProto.getRightChildId()));
        +
        +    return union;
        +  }
        +
        +  private static ScanNode convertScan(OverridableConf context, PlanProto.LogicalNode protoNode) {
        +    ScanNode scan = new ScanNode(protoNode.getPid());
        +    fillScanNode(context, protoNode, scan);
        +
        +    return scan;
        +  }
        +
        +  private static void fillScanNode(OverridableConf context, PlanProto.LogicalNode protoNode, ScanNode scan) {
        +    PlanProto.ScanNode scanProto = protoNode.getScan();
        +    if (scanProto.hasAlias()) {
        +      scan.init(new TableDesc(scanProto.getTable()), scanProto.getAlias());
        +    } else {
        +      scan.init(new TableDesc(scanProto.getTable()));
        +    }
        +
        +    if (scanProto.getExistTargets()) {
        +      scan.setTargets(convertTargets(context, scanProto.getTargetsList()));
        +    }
        +
        +    if (scanProto.hasQual())

        { + scan.setQual(EvalNodeDeserializer.deserialize(context, scanProto.getQual())); + }

        +
        + scan.setInSchema(convertSchema(protoNode.getInSchema()));
        + scan.setOutSchema(convertSchema(protoNode.getOutSchema()));
        + }
        +
        + private static PartitionedTableScanNode convertPartitionScan(OverridableConf context, PlanProto.LogicalNode protoNode) {
        + PartitionedTableScanNode partitionedScan = new PartitionedTableScanNode(protoNode.getPid());
        + fillScanNode(context, protoNode, partitionedScan);
        +
        + PlanProto.PartitionScanSpec partitionScanProto = protoNode.getPartitionScan();
        + Path [] paths = new Path[partitionScanProto.getPathsCount()];
        + for (int i = 0; i < partitionScanProto.getPathsCount(); i++)

        { + paths[i] = new Path(partitionScanProto.getPaths(i)); + }

        + partitionedScan.setInputPaths(paths);
        + return partitionedScan;
        + }
        +
        + private static TableSubQueryNode convertTableSubQuery(OverridableConf context,
        + Map<Integer, LogicalNode> nodeMap,
        + PlanProto.LogicalNode protoNode) {
        + PlanProto.TableSubQueryNode proto = protoNode.getTableSubQuery();
        +
        + TableSubQueryNode tableSubQuery = new TableSubQueryNode(protoNode.getPid());
        + tableSubQuery.init(proto.getTableName(), nodeMap.get(proto.getChildId()));
        + tableSubQuery.setInSchema(convertSchema(protoNode.getInSchema()));
        + if (proto.getTargetsCount() > 0)

        { + tableSubQuery.setTargets(convertTargets(context, proto.getTargetsList())); + }

        +
        + return tableSubQuery;
        + }
        +
        + private static CreateTableNode convertCreateTable(Map<Integer, LogicalNode> nodeMap,
        + PlanProto.LogicalNode protoNode) {
        + PlanProto.PersistentStoreNode persistentStoreProto = protoNode.getPersistentStore();
        + PlanProto.StoreTableNodeSpec storeTableNodeSpec = protoNode.getStoreTable();
        + PlanProto.CreateTableNodeSpec createTableNodeSpec = protoNode.getCreateTable();
        +
        + CreateTableNode createTable = new CreateTableNode(protoNode.getPid());
        + if (protoNode.hasInSchema())

        { + createTable.setInSchema(convertSchema(protoNode.getInSchema())); + }

        + if (protoNode.hasOutSchema())

        { + createTable.setOutSchema(convertSchema(protoNode.getOutSchema())); + }

        + createTable.setChild(nodeMap.get(persistentStoreProto.getChildId()));
        + createTable.setStorageType(persistentStoreProto.getStorageType());
        + createTable.setOptions(new KeyValueSet(persistentStoreProto.getTableProperties()));
        +
        + createTable.setTableName(storeTableNodeSpec.getTableName());
        + if (storeTableNodeSpec.hasPartitionMethod())

        { + createTable.setPartitionMethod(new PartitionMethodDesc(storeTableNodeSpec.getPartitionMethod())); + }

        +
        + createTable.setTableSchema(convertSchema(createTableNodeSpec.getSchema()));
        + createTable.setExternal(createTableNodeSpec.getExternal());
        + if (createTableNodeSpec.getExternal() && createTableNodeSpec.hasPath())

        { + createTable.setPath(new Path(createTableNodeSpec.getPath())); + }

        + createTable.setIfNotExists(createTableNodeSpec.getIfNotExists());
        +
        + return createTable;
        + }
        +
        + private static InsertNode convertInsert(Map<Integer, LogicalNode> nodeMap,
        + PlanProto.LogicalNode protoNode) {
        + PlanProto.PersistentStoreNode persistentStoreProto = protoNode.getPersistentStore();
        + PlanProto.StoreTableNodeSpec storeTableNodeSpec = protoNode.getStoreTable();
        + PlanProto.InsertNodeSpec insertNodeSpec = protoNode.getInsert();
        +
        + InsertNode insertNode = new InsertNode(protoNode.getPid());
        + if (protoNode.hasInSchema())

        { + insertNode.setInSchema(convertSchema(protoNode.getInSchema())); + }

        + if (protoNode.hasOutSchema())

        { + insertNode.setOutSchema(convertSchema(protoNode.getOutSchema())); + }

        + insertNode.setChild(nodeMap.get(persistentStoreProto.getChildId()));
        + insertNode.setStorageType(persistentStoreProto.getStorageType());
        + insertNode.setOptions(new KeyValueSet(persistentStoreProto.getTableProperties()));
        +
        + if (storeTableNodeSpec.hasTableName())

        { + insertNode.setTableName(storeTableNodeSpec.getTableName()); + }

        + if (storeTableNodeSpec.hasPartitionMethod())

        { + insertNode.setPartitionMethod(new PartitionMethodDesc(storeTableNodeSpec.getPartitionMethod())); + }

        +
        + insertNode.setOverwrite(insertNodeSpec.getOverwrite());
        + insertNode.setTableSchema(convertSchema(insertNodeSpec.getTableSchema()));
        + if (insertNodeSpec.hasTargetSchema())

        { + insertNode.setTargetSchema(convertSchema(insertNodeSpec.getTargetSchema())); + }

        + if (insertNodeSpec.hasProjectedSchema())

        { + insertNode.setProjectedSchema(convertSchema(insertNodeSpec.getProjectedSchema())); + }

        + if (insertNodeSpec.hasPath())

        { + insertNode.setPath(new Path(insertNodeSpec.getPath())); + }

        +
        + return insertNode;
        + }
        +
        + private static DropTableNode convertDropTable(PlanProto.LogicalNode protoNode)

        { + DropTableNode dropTable = new DropTableNode(protoNode.getPid()); + + PlanProto.DropTableNode dropTableProto = protoNode.getDropTable(); + dropTable.init(dropTableProto.getTableName(), dropTableProto.getIfExists(), dropTableProto.getPurge()); + + return dropTable; + }

        +
        + private static CreateDatabaseNode convertCreateDatabase(PlanProto.LogicalNode protoNode)

        { + CreateDatabaseNode createDatabase = new CreateDatabaseNode(protoNode.getPid()); + + PlanProto.CreateDatabaseNode createDatabaseProto = protoNode.getCreateDatabase(); + createDatabase.init(createDatabaseProto.getDbName(), createDatabaseProto.getIfNotExists()); + + return createDatabase; + }

        +
        + private static DropDatabaseNode convertDropDatabase(PlanProto.LogicalNode protoNode)

        { + DropDatabaseNode dropDatabase = new DropDatabaseNode(protoNode.getPid()); + + PlanProto.DropDatabaseNode dropDatabaseProto = protoNode.getDropDatabase(); + dropDatabase.init(dropDatabaseProto.getDbName(), dropDatabaseProto.getIfExists()); + + return dropDatabase; + }

        +
        + private static AlterTablespaceNode convertAlterTablespace(PlanProto.LogicalNode protoNode) {
        + AlterTablespaceNode alterTablespace = new AlterTablespaceNode(protoNode.getPid());
        +
        + PlanProto.AlterTablespaceNode alterTablespaceProto = protoNode.getAlterTablespace();
        + alterTablespace.setTablespaceName(alterTablespaceProto.getTableSpaceName());
        +
        + switch (alterTablespaceProto.getSetType())

        { + case LOCATION: + alterTablespace.setLocation(alterTablespaceProto.getSetLocation().getLocation()); + break; + default: + throw new UnimplementedException("Unknown SET type in ALTER TABLE: " + alterTablespaceProto.getSetType().name()); + }

        +
        + return alterTablespace;
        + }
        +
        + private static AlterTableNode convertAlterTable(PlanProto.LogicalNode protoNode) {
        + AlterTableNode alterTable = new AlterTableNode(protoNode.getPid());
        +
        + PlanProto.AlterTableNode alterTableProto = protoNode.getAlterTable();
        + alterTable.setTableName(alterTableProto.getTableName());
        +
        + switch (alterTableProto.getSetType())

        { + case RENAME_TABLE: + alterTable.setNewTableName(alterTableProto.getRenameTable().getNewName()); + break; + case ADD_COLUMN: + alterTable.setAddNewColumn(new Column(alterTableProto.getAddColumn().getAddColumn())); + break; + case RENAME_COLUMN: + alterTable.setColumnName(alterTableProto.getRenameColumn().getOldName()); + alterTable.setNewColumnName(alterTableProto.getRenameColumn().getNewName()); + break; + default: + throw new UnimplementedException("Unknown SET type in ALTER TABLE: " + alterTableProto.getSetType().name()); + }

        +
        + return alterTable;
        + }
        +
        + private static TruncateTableNode convertTruncateTable(PlanProto.LogicalNode protoNode)

        { + TruncateTableNode truncateTable = new TruncateTableNode(protoNode.getPid()); + + PlanProto.TruncateTableNode truncateTableProto = protoNode.getTruncateTableNode(); + truncateTable.setTableNames(truncateTableProto.getTableNamesList()); + + return truncateTable; + }

        +
        + private static AggregationFunctionCallEval [] convertAggFuncCallEvals(OverridableConf context,
        + List<PlanProto.EvalNodeTree> evalTrees) {
        + AggregationFunctionCallEval [] aggFuncs = new AggregationFunctionCallEval[evalTrees.size()];
        + for (int i = 0; i < aggFuncs.length; i++)

        { + aggFuncs[i] = (AggregationFunctionCallEval) EvalNodeDeserializer.deserialize(context, evalTrees.get(i)); + }

        + return aggFuncs;
        + }
        +
        + private static WindowFunctionEval[] convertWindowFunccEvals(OverridableConf context,
        + List<PlanProto.EvalNodeTree> evalTrees) {
        + WindowFunctionEval[] winFuncEvals = new WindowFunctionEval[evalTrees.size()];
        + for (int i = 0; i < winFuncEvals.length; i++)

        { + winFuncEvals[i] = (WindowFunctionEval) EvalNodeDeserializer.deserialize(context, evalTrees.get(i)); + }

        + return winFuncEvals;
        + }
        +
        + public static Schema convertSchema(CatalogProtos.SchemaProto proto)

        { + return new Schema(proto); + }

        +
        + public static Column[] convertColumns(List<CatalogProtos.ColumnProto> columnProtos) {
        + Column [] columns = new Column[columnProtos.size()];
        + for (int i = 0; i < columns.length; i++)

        { + columns[i] = new Column(columnProtos.get(i)); + }

        + return columns;
        + }
        +
        + public static Target[] convertTargets(OverridableConf context, List<PlanProto.Target> targetsProto) {
        + Target [] targets = new Target[targetsProto.size()];
        + for (int i = 0; i < targets.length; i++) {
        + PlanProto.Target targetProto = targetsProto.get;
        + EvalNode evalNode = EvalNodeDeserializer.deserialize(context, targetProto.getExpr());
        + if (targetProto.hasAlias())

        { + targets[i] = new Target(evalNode, targetProto.getAlias()); + }

        else

        { + targets[i] = new Target((FieldEval) evalNode); + }

        + }
        + return targets;
        + }
        +
        + public static SortSpec[] convertSortSpecs(List<CatalogProtos.SortSpecProto> sortSpecProtos) {
        + SortSpec[] sortSpecs = new SortSpec[sortSpecProtos.size()];
        + int i = 0;
        + for (CatalogProtos.SortSpecProto proto : sortSpecProtos)

        { + sortSpecs[i++] = new SortSpec(proto); + }

        + return sortSpecs;
        + }
        +
        + public static JoinType convertJoinType(PlanProto.JoinType type) {
        + switch (type) {
        + case CROSS_JOIN:
        — End diff –

How about keeping only the enum variables of PB?
It would avoid unnecessary type conversions and reduce maintenance overhead.
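
A minimal sketch of that idea, using stand-in enums rather than the real PlanProto/algebra types, assuming (hypothetically) that JoinNode carried the protobuf enum directly:

```java
// Stand-ins for PlanProto.JoinType and org.apache.tajo.algebra.JoinType (not the real classes).
enum PbJoinType { CROSS_JOIN, INNER, LEFT_OUTER }
enum AlgebraJoinType { CROSS, INNER, LEFT_OUTER }

class JoinTypeSketch {
  // Current approach: every deserialization maps the PB enum to the internal enum.
  static AlgebraJoinType convertJoinType(PbJoinType type) {
    switch (type) {
      case CROSS_JOIN: return AlgebraJoinType.CROSS;
      case INNER:      return AlgebraJoinType.INNER;
      case LEFT_OUTER: return AlgebraJoinType.LEFT_OUTER;
      default: throw new IllegalArgumentException("Unknown join type: " + type);
    }
  }

  // Suggested approach: keep the PB enum end-to-end, so no mapping method is needed at all.
  static PbJoinType keepPbEnum(PbJoinType type) {
    return type;
  }
}
```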

        githubbot ASF GitHub Bot added a comment -

        Github user jihoonson commented on a diff in the pull request:

        https://github.com/apache/tajo/pull/322#discussion_r22339914

        — Diff: tajo-catalog/tajo-catalog-common/src/main/java/org/apache/tajo/catalog/Schema.java —
        @@ -199,6 +199,12 @@ private RuntimeException throwAmbiguousFieldException(Collection<Integer> idList
        }

        public int getColumnId(String name) {
        + // if the same column exists, immediately return that column.
        + if (fieldsByQualifiedName.containsKey(name)) {
        — End diff –

        The same condition is checked at Line 210.
Furthermore, the line below also checks whether the given name is qualified or not.

        githubbot ASF GitHub Bot added a comment -

        Github user jihoonson commented on the pull request:

        https://github.com/apache/tajo/pull/322#issuecomment-68329683

        Interesting benchmark results.
It looks like the deserialization process of Protocol Buffers requires as much CPU processing as that of JSON.
        I'll review this patch.

        githubbot ASF GitHub Bot added a comment -

        Github user hyunsik commented on the pull request:

        https://github.com/apache/tajo/pull/322#issuecomment-68273863

        Here is a benchmark code.
        https://gist.github.com/hyunsik/756449060cbeb254c4d7

        githubbot ASF GitHub Bot added a comment -

        Github user hyunsik commented on the pull request:

        https://github.com/apache/tajo/pull/322#issuecomment-68272663

I carried out a simple benchmark of serialized data size and (de)serialization speed. Protobuf-based serialization outperforms JSON in both serialized size and serialization speed, while deserialization speed is similar for both.

        Test query:
        ```
        create table store1 as select p.deptName, sumtest(score) from dept as p, score group by p.deptName.
        ```

  1. Size
        • JSON serialized size: 9,597 bytes
        • Protobuf serialized size: 2,131 bytes
  2. Speed
    I used 10,000 iterations to measure serialization and deserialization times.

        Json

        • Serialization: 5,265 msec
        • Deserialization: 10,269 msec

        Protobuf

        • Serialization: 1,779 msec
        • Deserialization: 10,244 msec
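
For reference, the protobuf side of such a measurement loop might look roughly like the sketch below. This is not the linked gist; how the serialized plan (`tree`) is built and the package location of the generated PlanProto classes are assumptions here.

```java
import org.apache.tajo.plan.serder.PlanProto;  // assumed package of the generated Plan.proto classes

public class PlanSerdeBenchSketch {
  static final int ITERATIONS = 10_000;

  // Measures protobuf serialization/deserialization time for an already-built LogicalNodeTree.
  public static void bench(PlanProto.LogicalNodeTree tree) throws Exception {
    long start = System.currentTimeMillis();
    byte[] bytes = null;
    for (int i = 0; i < ITERATIONS; i++) {
      bytes = tree.toByteArray();                    // serialization
    }
    long serMsec = System.currentTimeMillis() - start;

    start = System.currentTimeMillis();
    for (int i = 0; i < ITERATIONS; i++) {
      PlanProto.LogicalNodeTree.parseFrom(bytes);    // deserialization
    }
    long deserMsec = System.currentTimeMillis() - start;

    System.out.println("serialized size: " + bytes.length + " bytes");
    System.out.println("serialization:   " + serMsec + " msec");
    System.out.println("deserialization: " + deserMsec + " msec");
  }
}
```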
        tajoqa Tajo QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12689341/TAJO-269_2.patch
        against master revision release-0.9.0-rc0-112-gfd49bff.

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 8 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The applied patch does not increase the total number of javadoc warnings.

        +1 checkstyle. The patch generated 0 code style errors.

        -1 findbugs. The patch appears to introduce 269 new Findbugs (version 2.0.3) warnings.

        -1 release audit. The applied patch generated 661 release audit warnings.

        +1 core tests. The patch passed unit tests in tajo-catalog/tajo-catalog-common tajo-common tajo-core tajo-plan tajo-storage/tajo-storage-common tajo-storage/tajo-storage-hbase.

        Test results: https://builds.apache.org/job/PreCommit-TAJO-Build/562//testReport/
        Release audit warnings: https://builds.apache.org/job/PreCommit-TAJO-Build/562//artifact/incubator-tajo/patchprocess/patchReleaseAuditProblems.txt
        Findbugs warnings: https://builds.apache.org/job/PreCommit-TAJO-Build/562//artifact/incubator-tajo/patchprocess/newPatchFindbugsWarningstajo-common.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-TAJO-Build/562//artifact/incubator-tajo/patchprocess/newPatchFindbugsWarningstajo-core.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-TAJO-Build/562//artifact/incubator-tajo/patchprocess/newPatchFindbugsWarningstajo-plan.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-TAJO-Build/562//artifact/incubator-tajo/patchprocess/newPatchFindbugsWarningstajo-storage-hbase.html
        Findbugs warnings: https://builds.apache.org/job/PreCommit-TAJO-Build/562//artifact/incubator-tajo/patchprocess/newPatchFindbugsWarningstajo-storage-common.html
        Console output: https://builds.apache.org/job/PreCommit-TAJO-Build/562//console

        This message is automatically generated.

        githubbot ASF GitHub Bot added a comment -

        Github user hyunsik commented on the pull request:

        https://github.com/apache/tajo/pull/322#issuecomment-68266704

        I've added more comments and renamed some class names.

        tajoqa Tajo QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12689319/TAJO-269.patch
        against master revision release-0.9.0-rc0-112-gfd49bff.

        +1 @author. The patch does not contain any @author tags.

        +1 tests included. The patch appears to include 8 new or modified test files.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. The applied patch does not increase the total number of javadoc warnings.

        +1 checkstyle. The patch generated 0 code style errors.

        -1 findbugs. The patch appears to cause Findbugs (version 2.0.3) to fail.

        -1 release audit. The applied patch generated 661 release audit warnings.

        +1 core tests. The patch passed unit tests in tajo-catalog/tajo-catalog-common tajo-common tajo-core tajo-plan tajo-storage/tajo-storage-common tajo-storage/tajo-storage-hbase.

        Test results: https://builds.apache.org/job/PreCommit-TAJO-Build/561//testReport/
        Release audit warnings: https://builds.apache.org/job/PreCommit-TAJO-Build/561//artifact/incubator-tajo/patchprocess/patchReleaseAuditProblems.txt
        Findbugs results: https://builds.apache.org/job/PreCommit-TAJO-Build/561//findbugsResult
        Console output: https://builds.apache.org/job/PreCommit-TAJO-Build/561//console

        This message is automatically generated.

        githubbot ASF GitHub Bot added a comment -

        Github user hyunsik commented on the pull request:

        https://github.com/apache/tajo/pull/322#issuecomment-68247429

@jihoonson Thank you for your comment. I fixed some compilation errors which occur on JVM 6.

        githubbot ASF GitHub Bot added a comment -

        Github user hyunsik commented on the pull request:

        https://github.com/apache/tajo/pull/322#issuecomment-68246166

        In this patch, I mainly did as follows:

        • Change TaskRequestProto to use a protobuf-serialized plan instead of a JSON-serialized plan
        • Implement LogicalNodeTreeSerializer and LogicalNodeTreeDeserializer

In order to verify the de/serialization of the logical plan, I injected some test code that checks equality between the original plan and the plan restored from the serialized protobuf form (a rough sketch of this check follows the list below). To do so, I improved GlobalPlanner to have a rewrite engine, and I changed TajoTestingCluster to inject additional rewrite rules that check the equality. As a result, I added two rewrite rules:

        • GlobalPlanEqualityTester
        • LogicalPlanEqualityTester
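
As a rough sketch of what such an equality check does (this is not the committed LogicalPlanEqualityTester; the serializer entry point and the deepEquals comparison are assumptions):

```java
// Round-trip check: serialize the plan to protobuf, deserialize it back, and compare.
public static void verifyRoundTrip(OverridableConf context, LogicalNode original) {
  PlanProto.LogicalNodeTree tree = LogicalNodeSerializer.serialize(original);  // assumed serializer API
  LogicalNode restored = LogicalNodeDeserializer.deserialize(context, tree);   // signature from the diff above
  if (!original.deepEquals(restored)) {                                        // assumed equality method
    throw new IllegalStateException("Plan restored from protobuf differs from the original plan");
  }
}
```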

While working on it, I also corrected some wrong or bad names, as follows.

        • Rename BasicLogicalPlanVisitor::visitDistinct to visitDistinctGroupby
        • Rename RelationNode::getLogicalSchema to getTableSchema
        • Rename DistinctGroupbyNode::getGroupByNodes to getSubPlans
        githubbot ASF GitHub Bot added a comment -

        Github user jihoonson commented on the pull request:

        https://github.com/apache/tajo/pull/322#issuecomment-68245200

        Great work!
The Travis CI build failed with a compilation error.
        Would you check that error please?

        githubbot ASF GitHub Bot added a comment -

        GitHub user hyunsik opened a pull request:

        https://github.com/apache/tajo/pull/322

        TAJO-269: Protocol buffer De/Serialization for LogicalNode.

This has been a long-running piece of work. This patch completely replaces JSON (de)serialization of the logical plan with Protocol Buffers. This is the first patch; I'll do some cleanup and add more comments.

        You can merge this pull request into a Git repository by running:

        $ git pull https://github.com/hyunsik/tajo TAJO-269

        Alternatively you can review and apply these changes as the patch at:

        https://github.com/apache/tajo/pull/322.patch

        To close this pull request, make a commit to your master/trunk branch
        with (at least) the following in the commit message:

        This closes #322


        commit 2ec6e684d3f7df7cfda195a1355d9c1e45eedc4b
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-08-10T15:59:47Z

        initial work for JitVecTestBase.

        commit 7df2c6192a3158fc225b54b424f50f0272ceef5c
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-08-11T18:55:34Z

        Added more utility methods to Eval, and Added basic layout of LogicalPlanConvertor.

        commit 0e910132b3d3349e8af18632f6759fbbb5a04429
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-08-12T16:55:44Z

        Added (de)serializer for const eval.

        commit 1bde69303e69582b81ecaeec7f08aa0551abe8fb
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-08-12T17:20:15Z

        Added (de)serializer for field eval.

        commit a5e3317f88f87a1182a2a1dbc9d6a805bc47b5a4
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-08-12T18:12:27Z

        Added (de)serializer for function.

        commit ecbdb340954940b01ab48a602191c5d579bc7a84
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-08-13T22:33:22Z

        Added (de)serializer for in clause and rowconstant.

        commit 95cd8ffa358da732d81c927a0dedb20742376d63
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-08-15T03:50:03Z

        Fixed (de)serialization bug for function, and implemented IntervalDatum.

        commit c58f469aa8aa31df1109374ceb9c158d9a77213c
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-08-15T11:25:14Z

        Added (de)serialization for between.

        commit 621967232d90dd0768fcfbd9e78aa64497fb0478
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-08-15T12:41:30Z

        Added (de)serialization for case when.

        commit 654b8b946b64b4572de41d59f3ad585e4b32f0b3
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-08-15T13:01:08Z

        Merge branch 'master' of https://git-wip-us.apache.org/repos/asf/tajo into JitVecTestBase

        Conflicts:
        tajo-core/src/main/java/org/apache/tajo/engine/eval/BinaryEval.java
        tajo-core/src/main/java/org/apache/tajo/engine/eval/FunctionEval.java

        commit 0b257fd924a747075c68200a676e29e352b15c99
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-08-15T13:03:32Z

        Renamed setExpr to setChild.

        commit f5e994df1f64980ae48a2d9cb5e13c68dc0fe611
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-08-15T13:21:43Z

        Refactored eval tree proto.

        commit ade69f784e2369aa2f6950ce4f95dd316f4526e3
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-08-15T13:51:40Z

        TAJO-1008: Protocol buffer De/Serialization for EvalNode.

        commit c9929b946e2aae762165c5583d475d06f039238f
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-08-16T20:53:35Z

        Merge branch 'TAJO-1008' of github.com:hyunsik/tajo into JitVecTestBase

        Conflicts:
        tajo-core/src/main/java/org/apache/tajo/engine/eval/BetweenPredicateEval.java
        tajo-core/src/main/java/org/apache/tajo/engine/eval/BinaryEval.java
        tajo-core/src/main/java/org/apache/tajo/engine/plan/EvalTreeProtoDeserializer.java
        tajo-core/src/main/java/org/apache/tajo/engine/plan/EvalTreeProtoSerializer.java
        tajo-core/src/test/java/org/apache/tajo/engine/eval/ExprTestBase.java
        tajo-core/src/test/java/org/apache/tajo/engine/eval/TestEvalTree.java

        commit 56df752d142d0cf0d2bf1d968ba1f99fc2a89428
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-08-17T06:54:21Z

        Refactor some methods and variables.

        commit b5a1ed853e8342c89872b9a312756264ec1f4630
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-08-17T06:56:34Z

        Refactor ScanNode and added (de)serializeion for ScanNode.

        commit 059c10be3efa7c4d91d37fecaa5ba9d4ea0897cf
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-08-17T06:57:11Z

        Merge branch 'TAJO-1008' of github.com:hyunsik/tajo into JitVecTestBase

        commit 5d30cef87977e5d7582857525e37512881934a56
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-08-17T12:39:40Z

        Add (de)serialization scan, filter, groupby, sort, and limit nodes.

        commit 5a69b8512eefd5f73f02360976a7cfb649af7a3e
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-08-17T17:10:03Z

        Add (de)serialization root, having, and exprs.

        commit eb50db0b6e73dde90945c43273e59cc1fe472bab
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-08-17T17:10:51Z

        Fixed compilation error.

        commit b1eb0e9482bdd5e41a698f96738e1fe5af0b1a61
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-08-18T02:10:18Z

        Add (de)serialization for other nodes.

        commit 3fd70b59d769b1263025708a839793df1492db7a
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-08-18T09:46:32Z

        Updated NodeType, and added more (de)serialization code.

        commit 800e8068f2768a2ec51d77ac0a7923e259af7c94
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-09-16T10:26:04Z

        Merge branch 'master' of https://git-wip-us.apache.org/repos/asf/tajo into JitVecTestBase

        Conflicts:
        pom.xml
        tajo-core/src/main/java/org/apache/tajo/engine/eval/EvalType.java
        tajo-core/src/main/java/org/apache/tajo/engine/eval/FunctionEval.java
        tajo-core/src/main/java/org/apache/tajo/engine/plan/EvalTreeProtoSerializer.java
        tajo-core/src/main/java/org/apache/tajo/engine/planner/logical/ScanNode.java
        tajo-core/src/main/proto/Plan.proto
        tajo-core/src/test/java/org/apache/tajo/engine/eval/ExprTestBase.java
        tajo-project/pom.xml

        commit 8c55224f668b12287b434fb83467f67e79c77c7c
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-12-24T06:34:17Z

        Merge branch 'master' of https://git-wip-us.apache.org/repos/asf/tajo into JitVecTestBase

        Conflicts:
        tajo-plan/src/main/java/org/apache/tajo/plan/Target.java
        tajo-plan/src/main/java/org/apache/tajo/plan/expr/CaseWhenEval.java
        tajo-plan/src/main/java/org/apache/tajo/plan/expr/EvalNode.java
        tajo-plan/src/main/java/org/apache/tajo/plan/logical/EvalExprNode.java
        tajo-plan/src/main/java/org/apache/tajo/plan/visitor/BasicLogicalPlanVisitor.java
        tajo-plan/src/main/java/org/apache/tajo/plan/visitor/LogicalPlanVisitor.java
        tajo-plan/src/main/proto/Plan.proto

        commit 8a69a46cfc2825704f6bd689e80aac25a543d99c
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-12-25T20:30:29Z

        All unit tests are passed.

        commit e55e168161628812a37b2ed82a9c71d9f34086a7
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-12-26T08:06:47Z

        Fix test failures.

        commit 5ac3f67cd89922e9263d13b092a4a2e5bf1ca199
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-12-27T09:12:53Z

        Fixed manu test failures.

        commit ef71d30f92ea83e9311d2e3d289733ab1cd5f880
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-12-27T13:45:01Z

        Fix test failures.

        commit 8fe6b7e042bcd80d4332b2c12e2fbe3127cbdb6a
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-12-27T17:11:21Z

        Merge branch 'master' of https://git-wip-us.apache.org/repos/asf/tajo into JitVecTestBase

        commit dbfab9721dd7225a7f855617c9ac0c1660eec67c
        Author: Hyunsik Choi <hyunsik@apache.org>
        Date: 2014-12-28T13:06:16Z

        Injected test code.


        hyunsik Hyunsik Choi added a comment -

        I'll submit the patch soon.

        hyunsik Hyunsik Choi added a comment -

        Thanks Jihoon!

        jihoonson Jihoon Son added a comment -

        Sure.
        Please go ahead.

        hyunsik Hyunsik Choi added a comment -

        If you haven't started on this issue yet, could I take it? I actually need this work soon.

        hyunsik Hyunsik Choi added a comment -

        I'm rescheduling it to a future release.

        jihoonson Jihoon Son added a comment -

        Right.

        This work includes de/serialization of all kinds of LogicalNode, as well as EvalNode, Target, and so on.
        It will take quite a long time.

        hyunsik Hyunsik Choi added a comment -

        +1

        Actually, I've thought this approach is better than the current implementation. However, I couldn't start this work because it requires a great deal of effort.


          People

          • Assignee:
            hyunsik Hyunsik Choi
            Reporter:
            jihoonson Jihoon Son
          • Votes:
            0
            Watchers:
            4
