Details
- Type: Bug
- Status: Resolved
- Priority: Minor
- Resolution: Fixed
Description
For an ORDER BY query with a LIMIT, Spark normally optimizes the plan. But because we place a decoder between the Limit and TungstenSort nodes, Spark is unable to apply this optimization. See the plan below:
== Physical Plan ==
Limit 2
 ConvertToSafe
  CarbonDictionaryDecoder CarbonDecoderRelation(Map(name#3 -> name#3),CarbonDatasourceRelation(`default`.`dict`,None)), ExcludeProfile(ArrayBuffer(name#3)), CarbonAliasDecoderRelation()
   TungstenSort name#3 ASC, true, 0
    ConvertToUnsafe
     Exchange rangepartitioning(name#3 ASC)
      ConvertToSafe
       CarbonDictionaryDecoder CarbonDecoderRelation(Map(name#3 -> name#3),CarbonDatasourceRelation(`default`.`dict`,None)), IncludeProfile(ArrayBuffer(name#3)), CarbonAliasDecoderRelation()
        CarbonScan name#3, (CarbonRelation default, dict, CarbonMetaData(ArrayBuffer(name),ArrayBuffer(default_dummy_measure),org.apache.carbondata.core.carbon.metadata.schema.table.CarbonTable@6021d179,DictionaryMap(Map(name -> true))), org.apache.carbondata.spark.merger.TableMeta@4c3f903d, None), (name#3 = hello), false
Code Generation: true
We should place the outer decoder on top of the Limit, so that Limit sits directly above TungstenSort and Spark can optimize the sort-plus-limit again.
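The proposed rewrite can be sketched as a plan transformation: match `Limit(Decoder(child))` and rewrite it to `Decoder(Limit(child))`. The sketch below is plain Scala with no Spark dependency; the node classes (`Limit`, `Decoder`, `Sort`, `Scan`) are simplified stand-ins for the real physical operators, not the actual CarbonData classes.

```scala
// Toy model of a physical plan tree.
sealed trait Plan
case class Limit(n: Int, child: Plan) extends Plan
case class Decoder(child: Plan) extends Plan   // stands in for CarbonDictionaryDecoder
case class Sort(child: Plan) extends Plan      // stands in for TungstenSort
case object Scan extends Plan                  // stands in for CarbonScan

// Hoist a decoder that sits directly under a Limit above it,
// so that Limit ends up adjacent to the sort underneath.
def hoistDecoder(p: Plan): Plan = p match {
  case Limit(n, Decoder(c)) => Decoder(Limit(n, hoistDecoder(c)))
  case Limit(n, c)          => Limit(n, hoistDecoder(c))
  case Decoder(c)           => Decoder(hoistDecoder(c))
  case Sort(c)              => Sort(hoistDecoder(c))
  case Scan                 => Scan
}

// Before: Limit 2 -> Decoder -> TungstenSort -> Scan  (mirrors the plan above)
val before = Limit(2, Decoder(Sort(Scan)))
// After:  Decoder -> Limit 2 -> TungstenSort -> Scan
val after  = hoistDecoder(before)
```

With the decoder on top, `Limit` and `Sort` are adjacent, which is the shape Spark's sort-limit optimization expects; the real fix would apply an equivalent rule when building the Carbon physical plan.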