Details
- Type: Bug
- Status: Resolved
- Priority: P2
- Resolution: Duplicate
Description
For portability, and in the spirit of Apache Beam, the SparkRunner should use the Beam implementation of KafkaIO instead of its own.
That said, the runner may translate the KafkaIO defined in the pipeline into its own internal implementation, but it should still map the properties the user defined in the pipeline (brokers, topic, etc.) so that the IO behaves the same.
Eventually, the SparkRunner will implement reading from Kafka using Spark's KafkaUtils.createDirectStream(), as described here: http://spark.apache.org/docs/1.6.2/streaming-kafka-integration.html#approach-2-direct-approach-no-receivers
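As a rough sketch of the property mapping described above: the user's KafkaIO settings (brokers, topics) would be translated into the kafkaParams map and topic set that Spark's KafkaUtils.createDirectStream() expects. The helper and key names below are assumptions for illustration, not the actual runner code; "metadata.broker.list" is the broker-list key used by the Spark 1.6 direct approach.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class KafkaParamsMapping {

    // Hypothetical translation step: take the broker list a user would have
    // configured on Beam's KafkaIO and produce the kafkaParams map that
    // KafkaUtils.createDirectStream() consumes (Spark 1.6 / Kafka 0.8 API).
    static Map<String, String> toKafkaParams(String bootstrapServers) {
        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", bootstrapServers);
        return kafkaParams;
    }

    public static void main(String[] args) {
        Map<String, String> kafkaParams = toKafkaParams("broker1:9092,broker2:9092");
        Set<String> topics = new HashSet<>();
        topics.add("events");
        // In the runner, kafkaParams and topics would then be passed to
        // KafkaUtils.createDirectStream(jssc, keyClass, valueClass,
        //     keyDecoder, valueDecoder, kafkaParams, topics).
        System.out.println(kafkaParams.get("metadata.broker.list") + " " + topics);
    }
}
```

The point of the direct (receiver-less) approach is that Spark queries Kafka offsets itself, so the runner only needs to forward the pipeline's connection properties rather than manage receivers.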
Issue Links
- duplicates: BEAM-658 Support Read.Unbounded primitive (Resolved)