Flume / FLUME-776

Create generic APIs for input / output formats and serialization

    Details

    • Type: New Feature
    • Status: Resolved
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: NG alpha 1
    • Fix Version/s: v1.2.0
    • Component/s: None
    • Labels: None

      Description

      Flume should have a generic set of APIs to handle input and output formats as well as event serialization.

      These APIs should offer the same level of abstraction as Hadoop's InputFormat, OutputFormat, RecordReader, RecordWriter, and serializer interfaces / classes. The only rationale for not using Hadoop's implementations of these APIs is that we want to avoid that dependency and everything that comes with it. Examples of API usage would be:

      • HDFS sink, text file output, events serialized as JSON
      • HDFS sink, text file output, events serialized as text, Snappy compressed
      • HDFS sink, Avro file output, events serialized as Avro records, GZIP compressed
      • HBase sink, event fields[1] serialized as Thrift

      [1] The case of HBase is odd in that the event needs to be broken into individual fields (i.e. extracted to a complex type). This means some kind of custom mapping / extraction code or configuration needs to be supplied by the user; we're not overly concerned with that for this issue.

      The implementations of the formats (text file, Avro), serializations (JSON, Avro, Thrift), and compression codecs (Snappy, GZIP) listed above are just examples. We'll open separate JIRAs for implementations. The scope of this JIRA is the framework / infrastructure.
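
      As a rough illustration of the abstraction level being proposed, the sketch below separates per-event serialization from container-file layout, in the spirit of Hadoop's RecordWriter / OutputFormat split. Every name in it is hypothetical; the API that actually shipped in v1.2.0 is linked in the comments below.

          import java.io.IOException;
          import java.io.OutputStream;

          /** Minimal stand-in for Flume's event type (headers + byte[] body). */
          interface Event {
            byte[] getBody();
          }

          /** Hypothetical: turns one event into bytes (JSON, text, Avro record, ...). */
          interface EventSerialization {
            void serialize(Event event, OutputStream out) throws IOException;
          }

          /** Hypothetical: owns the container layout (text file, Avro data file, ...). */
          interface EventOutputFormat {
            /** A compression codec (Snappy, GZIP, ...) could wrap the raw stream here. */
            EventWriter open(OutputStream raw, EventSerialization serialization) throws IOException;
          }

          /** Hypothetical analogue of Hadoop's RecordWriter, but for Flume events. */
          interface EventWriter {
            void append(Event event) throws IOException;
            void close() throws IOException;
          }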

        Activity

        Mingjie Lai added a comment -

        @esammer. Will pre-NG sources/sinks be compatible with NG? Sounds like no.
        E. Sammer added a comment -

        Mingjie:

        We're not aiming for backward compatibility, no. I think we'd like to make sure we capture what is really important to people. Is there anything specific you're thinking of?
        Mingjie Lai added a comment -

        Not really, for the existing sources/sinks. But we're using a customized HBase sink and UDP source now, so I hope the API won't change too much.
        E. Sammer added a comment -

        Moving to the next milestone. Unlikely to happen by the end of this week.
        Joe Crobak added a comment -

        We are generating Avro events at the client, encoding them as bytes, and storing them in the body of a FlumeEvent. When these events reach HDFS, it would be great to write out an Avro data file with the schema of the events in the FlumeEvent body (or as a Record with a nested Record in the body). I was thinking we could give the sink a pointer to the .avsc file with the schema to use for writing the data file.

        Perhaps it's a special case, but I thought I'd throw it out there as a use case to consider.
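
        A minimal sketch of what such a sink-side writer could do, using the standard Avro Java API; the class name and the idea of handing the sink an .avsc file are illustrative of this suggestion, not an existing Flume feature:

            import java.io.File;
            import java.io.IOException;
            import java.io.OutputStream;

            import org.apache.avro.Schema;
            import org.apache.avro.file.DataFileWriter;
            import org.apache.avro.generic.GenericDatumReader;
            import org.apache.avro.generic.GenericDatumWriter;
            import org.apache.avro.generic.GenericRecord;
            import org.apache.avro.io.BinaryDecoder;
            import org.apache.avro.io.DecoderFactory;

            /** Illustrative: decodes Avro-encoded event bodies and appends them to an Avro data file. */
            public class AvroBodyFileWriter {
              private final GenericDatumReader<GenericRecord> bodyReader;
              private final DataFileWriter<GenericRecord> fileWriter;

              public AvroBodyFileWriter(File avscFile, OutputStream out) throws IOException {
                // Parse the user-supplied schema (the ".avsc pointer" suggested above).
                Schema schema = new Schema.Parser().parse(avscFile);
                bodyReader = new GenericDatumReader<>(schema);
                fileWriter = new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(schema));
                fileWriter.create(schema, out);
              }

              /** Decodes one Avro-encoded FlumeEvent body and appends it to the data file. */
              public void append(byte[] eventBody) throws IOException {
                BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(eventBody, null);
                GenericRecord record = bodyReader.read(null, decoder);
                fileWriter.append(record);
              }

              public void close() throws IOException {
                fileWriter.close();
              }
            }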

        Mike Percy added a comment -

        I think this issue is resolved.

        File serialization: http://flume.apache.org/releases/content/1.2.0/apidocs/org/apache/flume/serialization/EventSerializer.html
        Avro file serialization: http://flume.apache.org/releases/content/1.2.0/apidocs/org/apache/flume/serialization/AbstractAvroEventSerializer.html
        HBase serialization: http://flume.apache.org/releases/content/1.2.0/apidocs/org/apache/flume/sink/hbase/HbaseEventSerializer.html
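
        For reference, a minimal custom serializer against the 1.2.0 EventSerializer contract linked above might look like the sketch below. The class name and the newline-delimited output format are illustrative (they roughly mirror the built-in body-text serializer), not a prescribed implementation:

            import java.io.IOException;
            import java.io.OutputStream;

            import org.apache.flume.Context;
            import org.apache.flume.Event;
            import org.apache.flume.serialization.EventSerializer;

            /** Illustrative serializer: writes each event body followed by a newline. */
            public class NewlineBodyEventSerializer implements EventSerializer {
              private final OutputStream out;

              private NewlineBodyEventSerializer(OutputStream out) {
                this.out = out;
              }

              @Override public void afterCreate() throws IOException { /* no file header to write */ }
              @Override public void afterReopen() throws IOException { /* nothing to re-sync */ }

              @Override
              public void write(Event event) throws IOException {
                out.write(event.getBody());
                out.write('\n');
              }

              @Override public void flush() throws IOException { /* nothing buffered locally */ }
              @Override public void beforeClose() throws IOException { /* no file trailer */ }
              @Override public boolean supportsReopen() { return true; }

              /** Builder hook used to instantiate the serializer from configuration. */
              public static class Builder implements EventSerializer.Builder {
                @Override
                public EventSerializer build(Context context, OutputStream out) {
                  return new NewlineBodyEventSerializer(out);
                }
              }
            }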

        Brock Noland added a comment -

        I agree with Mike. Marking this as resolved for the 1.2 release since that is when those APIs were introduced.

          People

          • Assignee: Unassigned
          • Reporter: E. Sammer
          • Votes: 0
          • Watchers: 6
