Description
When displaying large integers in Zeppelin's table display, the values are rounded as if they were double-precision floats.
This does not happen in spark-shell. See below for an example:
// This code produces the correct results
spark.sql("""select 2010090230001410131 , cast("2010090230001410131" as string) , cast("S2010090230001410131" as string) , cast(2010090230001410131 as bigint)""").show

+-------------------+-----------------------------------+------------------------------------+-----------------------------------+
|2010090230001410131|CAST(2010090230001410131 AS STRING)|CAST(S2010090230001410131 AS STRING)|CAST(2010090230001410131 AS BIGINT)|
+-------------------+-----------------------------------+------------------------------------+-----------------------------------+
|2010090230001410131|                2010090230001410131|                S2010090230001410131|                2010090230001410131|
+-------------------+-----------------------------------+------------------------------------+-----------------------------------+
However, if we display the same query using Zeppelin's table mode, we get a different (rounded) result:
%sql select 2010090230001410131 , cast("2010090230001410131" as string) , cast("S2010090230001410131" as string) , cast(2010090230001410131 as bigint)
Attachments
Issue Links
- relates to ZEPPELIN-1915 [Umbrella] Improve built-in visualizations (Open)