Phoenix / PHOENIX-877

Snappy native library is not available


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Cannot Reproduce
    • Affects Version/s: 3.1.0, 4.1.0, 4.2.0, 3.2.0
    • Fix Version/s: None
    • Component/s: None
    • Labels: None

    Description

      Still getting this error with the most recent Phoenix v3.0 (I think it had been fixed in 2.2.3).

      "Snappy native library is not available" when running SELECT DISTINCT on a large table (>300k rows) in sqlline, on 64-bit Linux (Intel).

      In order to fix it, I had to add the following to incubator-phoenix/bin/sqlline.py:
      ' -Djava.library.path=/var/lib/hadoop/lib/native/Linux-amd64-64'+\

      The snappy binaries were installed:
      sudo yum install snappy snappy-devel
      ln -sf /usr/lib64/libsnappy.so /var/lib/hadoop/lib/native/Linux-amd64-64/.
      ln -sf /usr/lib64/libsnappy.so /var/lib/hbase/lib/native/Linux-amd64-64/.
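
      A quick sanity check (illustrative, not part of the original report) is to confirm that each native-library directory actually contains a resolvable libsnappy.so before restarting anything; a dangling symlink will still trigger the error. The directory list below mirrors the symlink targets used above and should be adjusted to your layout:

      ```python
      import os

      def missing_snappy(dirs):
          """Return the directories that lack a resolvable libsnappy.so.

          os.path.exists follows symlinks, so a dangling link is
          correctly reported as missing.
          """
          return [d for d in dirs
                  if not os.path.exists(os.path.join(d, "libsnappy.so"))]

      # Paths taken from the symlink commands above; adjust as needed.
      native_dirs = [
          "/var/lib/hadoop/lib/native/Linux-amd64-64",
          "/var/lib/hbase/lib/native/Linux-amd64-64",
      ]
      # An empty result means every directory has a usable libsnappy.so.
      print(missing_snappy(native_dirs))
      ```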
      -------------------------------------------------------------------------------------------

      Edit (Dec 2014): still getting this error with Phoenix 3.1 and 3.2.
      Please eliminate this dependency or package it with the Phoenix core and client jars.

      Here is the exception and the steps to make it work:

      jdbc:phoenix:localhost> SELECT COUNT (DISTINCT ROWKEY) FROM table_with_1000000_rows;
      +------------------------+
      | DISTINCT_COUNT(ROWKEY) |
      +------------------------+
      java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
      at org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method)
      at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:62)
      at org.apache.hadoop.io.compress.SnappyCodec.getDecompressorType(SnappyCodec.java:185)
      at org.apache.hadoop.io.compress.CodecPool.getDecompressor(CodecPool.java:131)
      at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.getDecompressor(Compression.java:331)
      at org.apache.phoenix.expression.aggregator.DistinctValueWithCountClientAggregator.aggregate(DistinctValueWithCountClientAggregator.java:66)
      at org.apache.phoenix.expression.aggregator.ClientAggregators.aggregate(ClientAggregators.java:63)
      at org.apache.phoenix.iterate.GroupedAggregatingResultIterator.next(GroupedAggregatingResultIterator.java:75)
      at org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
      at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:732)
      at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2429)
      at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2074)
      at sqlline.SqlLine.print(SqlLine.java:1735)
      at sqlline.SqlLine$Commands.execute(SqlLine.java:3683)
      at sqlline.SqlLine$Commands.sql(SqlLine.java:3584)
      at sqlline.SqlLine.dispatch(SqlLine.java:821)
      at sqlline.SqlLine.begin(SqlLine.java:699)
      at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:441)
      at sqlline.SqlLine.main(SqlLine.java:424)

      To fix this, several configuration files need to be updated to enable snappy compression in Phoenix, Hadoop and HBase:

      vim phoenix/hadoop2/bin/sqlline.py
      extrajars="/etc/hadoop/conf:/etc/hbase/conf:/etc/zookeeper/conf:/usr/lib/hbase/hbase-0.94.15-cdh4.7.0-security.jar:/opt/app/extlib/hadoop-common-2.0.0-cdh4.7.0.jar:/usr/lib/zookeeper/zookeeper-3.4.5-cdh4.7.0.jar:/opt/app/extlib/hadoop-auth-2.0.0-cdh4.7.0.jar:/opt/app/extlib/commons-collections-3.2.1.jar:/opt/app/phoenix/common/phoenix-core-3.1.0.jar:/opt/app/extlib/snappy-java-1.1.1.3.jar"

      someflags="-Djava.library.path=/usr/lib/hadoop/lib/native"
      java_cmd = 'java '+ someflags+' -cp ".' + os.pathsep + extrajars+ os.pathsep+ phoenix_utils.phoenix_client_jar + \
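
      The sqlline.py edit above can be sketched as a small helper (a hedged reconstruction: the flag and path values come from this report, while the jar paths below are placeholders rather than the real phoenix_utils values):

      ```python
      import os

      def build_java_cmd(extrajars, client_jar,
                         library_path="/usr/lib/hadoop/lib/native"):
          """Assemble a java command line the way the patched sqlline.py
          does: prepend -Djava.library.path and join the classpath
          entries with the platform path separator."""
          someflags = "-Djava.library.path=" + library_path
          classpath = os.pathsep.join(["."] + extrajars + [client_jar])
          return 'java ' + someflags + ' -cp "' + classpath + '"'

      # Placeholder jar paths for illustration only.
      cmd = build_java_cmd(
          ["/etc/hadoop/conf", "/etc/hbase/conf"],
          "/opt/app/phoenix/common/phoenix-core-3.1.0.jar")
      ```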

      vim systemd/hbase-regionserver.service
      ExecStartPre=/usr/bin/mkdir -p /usr/lib/hbase/lib/native/Linux-amd64-64
      ExecStartPre=/usr/bin/ln -sf /usr/lib64/libsnappy.so /usr/lib/hbase/lib/native/Linux-amd64-64/.
      ExecStartPre=/usr/bin/chown -R hbase:hbase /usr/lib/hbase/lib

      vim systemd/hadoop-hdfs-datanode.service
      ExecStartPre=/usr/bin/mkdir -p /usr/lib/hadoop/lib/native/Linux-amd64-64
      ExecStartPre=/usr/bin/ln -sf /usr/lib64/libsnappy.so /usr/lib/hadoop/lib/native/Linux-amd64-64/.
      ExecStartPre=/usr/bin/chown -R hdfs:hdfs /usr/lib/hadoop/lib

      vim hadoop/core-site.xml
      <property>
      <name>io.compression.codecs</name>
      <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.SnappyCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec</value>
      </property>

      <property>
      <name>io.compression.codec.lzo.class</name>
      <value>com.hadoop.compression.lzo.LzoCodec</value>
      </property>
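
      Since a typo in io.compression.codecs silently disables the codec, it may help to verify the edited core-site.xml programmatically before restarting Hadoop (an illustrative check, not part of the original report):

      ```python
      import xml.etree.ElementTree as ET

      def snappy_codec_enabled(core_site_xml):
          """Return True if SnappyCodec is listed in the
          io.compression.codecs property of the given core-site.xml
          content (passed as a string)."""
          root = ET.fromstring(core_site_xml)
          for prop in root.iter("property"):
              if prop.findtext("name") == "io.compression.codecs":
                  return "SnappyCodec" in (prop.findtext("value") or "")
          return False
      ```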

      vim hbase/hbase-site.xml
      <property>
      <name>hbase.regionserver.codecs</name>
      <value>snappy</value>
      </property>

      vim hadoop/hadoop-env.sh
      export HADOOP_HOME=/usr/lib/hadoop
      export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native/Linux-amd64-64:$HADOOP_HOME/lib/native
      export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HADOOP_MAPRED_HOME:$HADOOP_HDFS_HOME:$HADOOP_YARN_HOME:$HADOOP_HOME:$HADOOP_CONF_DIR:$YARN_CONF_DIR:$JSVC_HOME:$HADOOP_HOME/lib/native/Linux-amd64-64:$HADOOP_HOME/lib/native

      vim hbase/hbase-env.sh
      export HBASE_HOME=/usr/lib/hbase
      export HBASE_LIBRARY_PATH=$HBASE_HOME/lib/native/Linux-amd64-64:$HBASE_HOME/lib/native
      export HBASE_CLASSPATH_PREFIX=/opt/app/phoenix/common/phoenix-core-3.1.0.jar


          People

            Assignee: Mujtaba Chohan (mujtabachohan)
            Reporter: alex kamil (alexdl)
            Votes: 2
            Watchers: 8
