Hive / HIVE-11073

ORC FileDump utility ignores errors when writing output



    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 1.2.0
    • Fix Version/s: 1.3.0, 2.0.0
    • Component/s: Hive
    • Release Note:
      orcfiledump exits if errors are detected when writing to stdout.


      The Hive command line provides the --orcfiledump utility for inspecting ORC files; with the -d option it dumps the row data contained within them. It is often useful to pipe this output into other commands and utilities, such as less, so that the CLI user can page through or otherwise transform the data (for example: hive --orcfiledump -d <orc-file> | less).

      When such command pipelines are constructed, the underlying implementation in org.apache.hadoop.hive.ql.io.orc.FileDump#printJsonData is oblivious to errors that occur when writing to its output stream. Such errors are commonplace when a user presses Ctrl+C to kill the downstream (leaf) process. In that event the leaf process terminates immediately, but the Hive CLI process continues to run until the full contents of the ORC file have been read.
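      A note on why the CLI keeps running while native tools would not: a process writing to a pipe whose reader has exited is normally killed by SIGPIPE, which is how ordinary shell pipelines shut down. The JVM instead surfaces a broken pipe as an IOException on write, which System.out (a PrintStream) silently swallows, so a Java writer that never checks for errors keeps going. The native behavior can be seen with standard tools:

```shell
# 'yes' would stream "y" forever, but it stops as soon as 'head' exits and
# closes the read end of the pipe: the next write fails with a broken pipe.
yes | head -n 3
```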

      By making FileDump aware of output stream errors, the process will terminate as soon as the destination process exits (i.e. when the user kills less), and control will be returned to the user as expected.
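      One way to detect such errors (a minimal sketch, not the actual HIVE-11073 patch) relies on the fact that java.io.PrintStream, the type of System.out, catches IOExceptions internally and records them in an error flag. A write loop can therefore poll PrintStream.checkError() after each row and stop as soon as the flag trips:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.PrintStream;

public class CheckErrorDemo {
    /** An OutputStream that fails on every write, simulating a closed pipe. */
    static class BrokenStream extends OutputStream {
        @Override
        public void write(int b) throws IOException {
            throw new IOException("Broken pipe");
        }
    }

    public static void main(String[] args) {
        // A healthy destination: checkError() stays false.
        PrintStream ok = new PrintStream(new ByteArrayOutputStream());
        ok.println("{\"row\": 1}");
        System.out.println("healthy stream error flag: " + ok.checkError());

        // A broken destination: println() throws no exception, but
        // checkError() flushes the stream and reports the failure, so a
        // dump loop polling it can terminate early instead of reading on.
        PrintStream broken = new PrintStream(new BrokenStream());
        broken.println("{\"row\": 1}");
        System.out.println("broken stream error flag: " + broken.checkError());
    }
}
```

A loop such as printJsonData could check this flag between rows and return early, which is what allows control to come back to the user as soon as the pager exits.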


        1. HIVE-11073.1.patch (5 kB, Elliot West)



            • Assignee:
              Elliot West (teabot)
            • Votes:
              0
            • Watchers:
              3


              • Created: