The ORC scanner uses an external library to read ORC files. The library reads the file contents into its own in-memory representation, which is a vectorized (columnar) representation similar to the Arrow format.
Impala then needs to convert each ORC row batch to an Impala row batch. Currently the conversion happens row by row via virtual function calls:
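A minimal sketch of what the current row-wise path looks like. All names here (OrcColumnReader, ReadValue, MaterializeRowWise) are illustrative stand-ins, not Impala's actual API; the point is the virtual dispatch per cell:

```cpp
// Hypothetical sketch of row-wise conversion; names are illustrative,
// not Impala's actual classes.
#include <cassert>
#include <cstdint>
#include <memory>
#include <vector>

struct OrcColumnReader {
  virtual ~OrcColumnReader() = default;
  // Copies the value at 'row_idx' from the ORC batch into 'slot'.
  virtual void ReadValue(int row_idx, int64_t* slot) = 0;
};

struct Int64ColumnReader : OrcColumnReader {
  const std::vector<int64_t>* data;  // stands in for the ORC vector batch
  explicit Int64ColumnReader(const std::vector<int64_t>* d) : data(d) {}
  void ReadValue(int row_idx, int64_t* slot) override {
    *slot = (*data)[row_idx];
  }
};

// Row-wise materialization: one virtual call per cell, so for a batch of
// R rows and C columns we pay R*C indirect calls, and the reads jump
// between columns on every row.
std::vector<std::vector<int64_t>> MaterializeRowWise(
    const std::vector<std::unique_ptr<OrcColumnReader>>& readers,
    int num_rows) {
  std::vector<std::vector<int64_t>> rows(
      num_rows, std::vector<int64_t>(readers.size()));
  for (int r = 0; r < num_rows; ++r) {
    for (size_t c = 0; c < readers.size(); ++c) {
      readers[c]->ReadValue(r, &rows[r][c]);  // virtual call per cell
    }
  }
  return rows;
}
```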
Instead, the ORC scanner could work similarly to the Parquet scanner, which fills the columns one by one into a scratch batch and then evaluates the conjuncts on the scratch batch. For more details see HdfsParquetScanner::AssembleRows():
This way we'd need far fewer virtual function calls, and the memory reads and writes would be much more localized and predictable.
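A sketch of the proposed column-wise path, in the spirit of HdfsParquetScanner::AssembleRows(). Again, all names (ScratchBatch, FillColumn, AssembleAndFilter) are hypothetical: the key difference is one virtual call per column per batch, with a tight, devirtualized copy loop inside, followed by conjunct evaluation over the scratch batch:

```cpp
// Hypothetical column-wise sketch; names are illustrative, not Impala's API.
#include <cassert>
#include <cstdint>
#include <functional>
#include <vector>

// Columnar scratch area: one contiguous array per column.
struct ScratchBatch {
  std::vector<std::vector<int64_t>> cols;
  int num_rows = 0;
};

struct OrcColumnReader {
  virtual ~OrcColumnReader() = default;
  // One virtual call per *batch*; the inner copy loop is non-virtual.
  virtual void FillColumn(std::vector<int64_t>* out) = 0;
};

struct Int64ColumnReader : OrcColumnReader {
  const std::vector<int64_t>* data;  // stands in for the ORC vector batch
  explicit Int64ColumnReader(const std::vector<int64_t>* d) : data(d) {}
  void FillColumn(std::vector<int64_t>* out) override {
    // Sequential, cache-friendly copy of the whole column.
    out->assign(data->begin(), data->end());
  }
};

// Fill the scratch batch column by column, then evaluate the conjunct on
// each materialized row and return the indices of the rows that survive.
std::vector<int> AssembleAndFilter(
    const std::vector<OrcColumnReader*>& readers, int num_rows,
    const std::function<bool(const ScratchBatch&, int)>& conjunct) {
  ScratchBatch scratch;
  scratch.cols.resize(readers.size());
  scratch.num_rows = num_rows;
  for (size_t c = 0; c < readers.size(); ++c) {
    readers[c]->FillColumn(&scratch.cols[c]);  // one virtual call per column
  }
  std::vector<int> surviving_rows;
  for (int r = 0; r < num_rows; ++r) {
    if (conjunct(scratch, r)) surviving_rows.push_back(r);
  }
  return surviving_rows;
}
```

For R rows and C columns this replaces R*C virtual calls with C, and each column is written to a contiguous buffer, which is what makes the access pattern predictable for the hardware prefetcher.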