IMPALA-4120

Incorrect results with LEAD() analytic function

      Description

      For the following query:

      SELECT
        FROM_UNIXTIME( UNIX_TIMESTAMP( CONCAT(CAST( ssm.ymd as STRING ),CAST( pl.time AS string )), 'yyyyMMddHH:mm:ss' ) ) AS datetime_click,
        FROM_UNIXTIME( LEAD( UNIX_TIMESTAMP( CONCAT( CAST( ssm.ymd as STRING ), CAST( pl.time AS string ) ), 'yyyyMMddHH:mm:ss' ), 1 ) OVER (PARTITION BY ssm.tracking_int_id ORDER BY cl.date_id) ) AS datetime_next_click_v1,
        LEAD( FROM_UNIXTIME( UNIX_TIMESTAMP( CONCAT( CAST( ssm.ymd as STRING ), CAST( pl.time AS string ) ), 'yyyyMMddHH:mm:ss' ) ), 1 ) OVER (PARTITION BY ssm.tracking_int_id ORDER BY cl.date_id) AS datetime_next_click_v2,
        LEAD( ssm.ymd, 1 ) OVER (PARTITION BY ssm.tracking_int_id ORDER BY cl.date_id) AS ymd_next_click,
        LEAD( pl.time, 1 ) OVER (PARTITION BY ssm.tracking_int_id ORDER BY cl.date_id) AS time_next_click
      FROM
        trivago_analytic.session_stats_master ssm
      JOIN ssm.co_log_entries AS cl
      JOIN ssm.page_log_entries AS pl
        ON pl.date_id = cl.date_id
      WHERE
        ssm.ymd BETWEEN 20160501 AND 20160503
        AND ssm.crawler_id = 0
        AND cl.page_id = 8001
      ORDER BY ssm.ymd, cl.date_id;
      

      datetime_next_click_v1 returns different values than datetime_next_click_v2, even though, to my understanding, they should be identical. datetime_next_click_v1 is the correct one.

      I attached the table structure (reduced to the relevant columns), query, query plan, and profile.
      impalad version 2.6.0-cdh5.8.0 RELEASE

      Please let me know if you need more information.

      Attachments

      1. explain
        1 kB
        Clemens Valiente
      2. query_profile
        613 kB
        Clemens Valiente
      3. query_result.csv
        7 kB
        Clemens Valiente
      4. query.sql
        1 kB
        Clemens Valiente
      5. table_definition.hql
        0.4 kB
        Clemens Valiente
      6. thrift_profile_b64a3852190d9ac2-412186998b9ee4a1.txt
        102 kB
        Clemens Valiente


          Activity

          Alexander Behm added a comment -

          I could reproduce this issue on master as follows:

          CREATE TABLE test AS
          SELECT CAST(timestamp_col AS STRING) AS S, int_col, id FROM functional.alltypes;
          
          SELECT
            FROM_UNIXTIME(LEAD(UNIX_TIMESTAMP(s), 1) OVER (PARTITION BY int_col ORDER BY id), 'yyyyMMddHH:mm:ss') AS a,
            LEAD(FROM_UNIXTIME(UNIX_TIMESTAMP(s), 'yyyyMMddHH:mm:ss'), 1) OVER (PARTITION BY int_col ORDER BY id) AS b
          FROM test
          

          Most a, b values are identical, but not all of them. Here's a snippet of the results:

          | 2010060300:22:00 | 2010102003:12:08 |
          | 2010060400:32:01 | 2010101903:02:08 | <--- b matches a below 
          | 2010060500:42:01 | 2010101802:52:07 |
          | 2010060600:52:02 | 2010101702:42:07 |
          | 2010060701:02:02 | 2010101602:32:06 |
          | 2010060801:12:03 | 2010101502:22:06 |
          | 2010060901:22:03 | 2010101402:12:05 |
          | 2010061001:32:04 | 2010101302:02:05 |
          | 2010061101:42:04 | 2010101201:52:04 |
          | 2010061201:52:04 | 2010101101:42:04 |
          | 2010061302:02:05 | 2010101001:32:04 |
          | 2010061402:12:05 | 2010100901:22:03 |
          | 2010061502:22:06 | 2010100801:12:03 |
          | 2010061602:32:06 | 2010100701:02:02 |
          | 2010061702:42:07 | 2010100600:52:02 |
          | 2010061802:52:07 | 2010100500:42:01 |
          | 2010061903:02:08 | 2010100400:32:01 | <--- a matches above b
          | 2010062003:12:08 | 2010100300:22:00 |
          

          What's interesting is that all values appear to be there, but shifted.

          Matthew Jacobs added a comment -

          Looks to me like there's a bug in the memory management around row batches. I tried this on an even smaller data set (alltypessmall) with set batch_size=10; and was able to reproduce it easily. Most of the results looked wonky, but interestingly the last 10 results in the output looked correct, which suggests to me some failure to handle the memory correctly. There's some tricky logic in analytic-eval-node around copying memory between MemPools (double buffering) so it can be returned with the output row batches, and there could be an issue there. I'd also check whether the analytic node is considering the correct data types, which may have ended up incorrect somewhere between the conversions in FROM_UNIXTIME(LEAD(UNIX_TIMESTAMP())) vs. LEAD(FROM_UNIXTIME(UNIX_TIMESTAMP())).

          [localhost:21000] > SELECT
            FROM_UNIXTIME(LEAD(UNIX_TIMESTAMP(s), 1) OVER ( ORDER BY id), 'yyyyMMddHH:mm:ss') AS a,
            LEAD(FROM_UNIXTIME(UNIX_TIMESTAMP(s), 'yyyyMMddHH:mm:ss'), 1) OVER ( ORDER BY id) AS b, 
          id, from_unixtime(unix_timestamp(s)) FROM test ;
          
          +------------------+------------------+----+----------------------------------+
          | a                | b                | id | from_unixtime(unix_timestamp(s)) |
          +------------------+------------------+----+----------------------------------+
          | 2009010100:01:00 | 2009010200:18:00 | 0  | 2009-01-01 00:00:00              |  <--- wonky
          | 2009010100:02:00 | 2009010200:17:00 | 1  | 2009-01-01 00:01:00              |
          
          ...
          | 2009040100:06:00 | 2009040300:23:00 | 80 | 2009-04-01 00:05:00              |
          | 2009040100:07:00 | 2009040300:22:00 | 81 | 2009-04-01 00:06:00              |
          | 2009040100:08:00 | 2009040300:21:00 | 82 | 2009-04-01 00:07:00              |
          | 2009040100:09:00 | 2009040300:20:00 | 83 | 2009-04-01 00:08:00              |
          | 2009040200:10:00 | 2009040200:19:00 | 84 | 2009-04-01 00:09:00              |
          | 2009040200:11:00 | 2009040200:18:00 | 85 | 2009-04-02 00:10:00              |
          | 2009040200:12:00 | 2009040200:17:00 | 86 | 2009-04-02 00:11:00              |
          | 2009040200:13:00 | 2009040200:16:00 | 87 | 2009-04-02 00:12:00              |
          | 2009040200:14:00 | 2009040200:15:00 | 88 | 2009-04-02 00:13:00              |
          | 2009040200:15:00 | 2009040200:15:00 | 89 | 2009-04-02 00:14:00              |   <---- correct from here on
          | 2009040200:16:00 | 2009040200:16:00 | 90 | 2009-04-02 00:15:00              |
          | 2009040200:17:00 | 2009040200:17:00 | 91 | 2009-04-02 00:16:00              |
          | 2009040200:18:00 | 2009040200:18:00 | 92 | 2009-04-02 00:17:00              |
          | 2009040200:19:00 | 2009040200:19:00 | 93 | 2009-04-02 00:18:00              |
          | 2009040300:20:00 | 2009040300:20:00 | 94 | 2009-04-02 00:19:00              |
          | 2009040300:21:00 | 2009040300:21:00 | 95 | 2009-04-03 00:20:00              |
          | 2009040300:22:00 | 2009040300:22:00 | 96 | 2009-04-03 00:21:00              |
          | 2009040300:23:00 | 2009040300:23:00 | 97 | 2009-04-03 00:22:00              |
          | 2009040300:24:00 | 2009040300:24:00 | 98 | 2009-04-03 00:23:00              |
          | NULL             | NULL             | 99 | 2009-04-03 00:24:00              |
          +------------------+------------------+----+----------------------------------+
          Fetched 100 row(s) in 0.36s
          
          Matthew Jacobs added a comment -

          Ok, I'm pretty sure the issue is that the function implementing lead/lag needs to copy the memory and isn't. The problem is that from_unixtime() allocates local memory from the UDF context, and lead() (implemented by OffsetFnUpdate()) doesn't actually copy the string data, so the next time the local ctx memory is freed (in QueryMaintenance(), every row batch), the result of from_unixtime() is freed out from under it.

          Here's the current implementation of OffsetFnUpdate in be/src/exprs/aggregate-functions-ir.cc :

          template <typename T>
          void AggregateFunctions::OffsetFnUpdate(FunctionContext* ctx, const T& src,
              const BigIntVal&, const T& default_value, T* dst) {
            *dst = src;
          }
          

          It should be specialized for StringVals so that it copies src's string data.

          Matthew Jacobs added a comment -

          Here's a simple repro that doesn't require creating any temp tables:

          [localhost:21000] > select id, from_unixtime(unix_timestamp(c)), lead(from_unixtime(unix_timestamp(c)), 1) over (order by id) from (select id, cast(timestamp_col as string) c from alltypestiny) t1;
          Query: select id, from_unixtime(unix_timestamp(c)), lead(from_unixtime(unix_timestamp(c)), 1) over (order by id) from (select id, cast(timestamp_col as string) c from alltypestiny) t1
          Query submitted at: 2016-09-15 14:20:30 (Coordinator: http://mj-desktop.ca.cloudera.com:25000)
          Query progress can be monitored at: http://mj-desktop.ca.cloudera.com:25000/query_plan?query_id=2b43eba99635cf4c:921672b100000000
          +----+----------------------------------+-----------------------------------------------------------+
          | id | from_unixtime(unix_timestamp(c)) | lead(from_unixtime(unix_timestamp(c)), 1, NULL) OVER(...) |
          +----+----------------------------------+-----------------------------------------------------------+
          | 0  | 2009-01-01 00:00:00              | 2009-02-01 00:01:00                                       |
          | 1  | 2009-01-01 00:01:00              | 2009-03-01 00:00:00                                       |
          | 2  | 2009-02-01 00:00:00              | 2009-03-01 00:01:00                                       |
          | 3  | 2009-02-01 00:01:00              | 2009-04-01 00:00:00                                       |
          | 4  | 2009-03-01 00:00:00              | 2009-04-01 00:01:00                                       |
          | 5  | 2009-03-01 00:01:00              | 2009-04-01 00:01:00                                       |
          | 6  | 2009-04-01 00:00:00              | 2009-04-01 00:01:00                                       |
          | 7  | 2009-04-01 00:01:00              | NULL                                                      |
          +----+----------------------------------+-----------------------------------------------------------+
          

          The last column is clearly wrong: 2009-04-01 00:01:00 is repeated 3 times.

          Matthew Jacobs added a comment -

          I think we can just change OffsetFnUpdate to use the new-ish UpdateVal fn:

          template <typename T>
          void AggregateFunctions::OffsetFnUpdate(FunctionContext* ctx, const T& src,
              const BigIntVal&, const T& default_value, T* dst) {
            UpdateVal(ctx, src, dst);
          }
          

          But the Init fn needs to be specialized for StringVal to make a copy of the input parameter. E.g. something like the following (I haven't tested this):

          template <>
          void AggregateFunctions::OffsetFnInit(FunctionContext* ctx, StringVal* dst) {
            DCHECK_EQ(ctx->GetNumArgs(), 3);
            DCHECK(ctx->IsArgConstant(1));
            DCHECK(ctx->IsArgConstant(2));
            DCHECK_EQ(ctx->GetArgType(0)->type, ctx->GetArgType(2)->type);
          
            CopyStringVal(ctx, *static_cast<StringVal*>(ctx->GetConstantArg(2)), dst);
          }
          

          We'll also need to call a finalize/serialize fn to free the memory; StringValSerializeOrFinalize() should work.

          Michael Ho added a comment -

          The proposed solution works, but it uncovers other existing problems with the memory management of the analytic-eval node. In particular, the local allocations in fn_ctxs_ are not freed periodically by QueryMaintenance(), as they are apparently not attached to evaluators_[i]->input_expr_ctxs(). Oddly enough, AnalyticEvalNode::AddResultTuple() also doesn't seem to copy the underlying string in a StringVal returned from AggFnEvaluator::GetValue(), so the existing code seems to ship strings allocated via local allocations upstream along with the row batch. They don't seem to be freed until fn_ctxs_ are destroyed.

          I may be misunderstanding the code, so Matthew Jacobs, please feel free to correct any mistakes in my analysis above.

          Michael Ho added a comment -

          https://github.com/apache/incubator-impala/commit/51268c053ffe41dc1aa9f1b250878113d4225258

          IMPALA-4120: Incorrect results with LEAD() analytic function
          This change fixes a memory management problem with the LEAD()/LAG()
          analytic functions which led to incorrect results. In particular,
          the update functions specified for these analytic functions only
          make a shallow copy of a StringVal (i.e. copying only the pointer
          and the length of the string) without copying the string itself.
          This may lead to problems if the string is created by UDFs
          which do local allocations whose buffer may be freed and reused
          before the result tuple is copied out. This change fixes the
          problem by allocating a buffer in the Init() functions of these
          analytic functions to track the intermediate value. In addition,
          when the value is copied out in GetValue(), it is copied into
          the MemPool belonging to the AnalyticEvalNode and attached to the
          outgoing row batches. This change also fixes a missing free of
          local allocations in QueryMaintenance().

          Change-Id: I85bb1745232d8dd383a6047c86019c6378ab571f
          Reviewed-on: http://gerrit.cloudera.org:8080/4740
          Reviewed-by: Michael Ho <kwho@cloudera.com>
          Tested-by: Internal Jenkins


            People

            • Assignee:
              Michael Ho
            • Reporter:
              Clemens Valiente (clemens.valiente@trivago.com)