java - Hadoop Snappy native library error with flume agents


Hey, I am getting an error when Flume uses the Snappy compression library. It returns the following error:

java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
    at org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method)
    at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:63)
    at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:136)
    at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:150)
    at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:165)
    at org.apache.hadoop.io.SequenceFile$Writer.init(SequenceFile.java:1273)
    at org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1166)
    at org.apache.hadoop.io.SequenceFile$BlockCompressWriter.<init>(SequenceFile.java:1521)
    at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:284)
    at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:589)
    at org.apache.flume.sink.hdfs.HDFSSequenceFile.open(HDFSSequenceFile.java:97)
    at org.apache.flume.sink.hdfs.HDFSSequenceFile.open(HDFSSequenceFile.java:78)
    at com.mapquest.daas.flume.components.sink.s3.BucketWriter$1.call(BucketWriter.java:244)
    at com.mapquest.daas.flume.components.sink.s3.BucketWriter$1.call(BucketWriter.java:227)
    at com.mapquest.daas.flume.components.sink.s3.BucketWriter$9$1.run(BucketWriter.java:658)
    at org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50)
    at com.mapquest.daas.flume.components.sink.s3.BucketWriter$9.call(BucketWriter.java:655)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
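For reference, I believe the failure can be reproduced outside of Flume with a small standalone check, run with the same classpath and JVM options as the agent (SnappyCheck is just a scratch name for the sketch):

import org.apache.hadoop.io.compress.SnappyCodec;
import org.apache.hadoop.util.NativeCodeLoader;

public class SnappyCheck {
    public static void main(String[] args) {
        // Directories the JVM will search for libhadoop.so / libsnappy.so
        System.out.println("java.library.path = " + System.getProperty("java.library.path"));
        // True only if libhadoop.so was actually found and loaded
        System.out.println("native hadoop loaded = " + NativeCodeLoader.isNativeCodeLoaded());
        // Should throw the same error as the Flume sink when native Snappy is unusable
        SnappyCodec.checkNativeCodeLoaded();
        System.out.println("native snappy OK");
    }
}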

I have tried the fixes suggested by the top Google results for similar issues,

i.e. creating core-site.xml and mapred-site.xml:

core-site.xml

<configuration>
  <property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.SnappyCodec</value>
  </property>
</configuration>

mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapreduce.map.output.compress</name>
    <value>true</value>
  </property>
  <property>
    <name>mapred.map.output.compress.codec</name>
    <value>org.apache.hadoop.io.compress.SnappyCodec</value>
  </property>
  <property>
    <name>mapreduce.admin.user.env</name>
    <value>LD_LIBRARY_PATH=/opt/flume/plugins.d/hadoop/native</value>
  </property>
</configuration>
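As far as I understand, mapred-site.xml is read by MapReduce tasks rather than by the Flume agent itself, so as a sanity check that at least core-site.xml is visible on the agent's classpath, something like this should print the codec list (ConfigCheck is just a scratch name; new Configuration() loads core-site.xml from the classpath):

import org.apache.hadoop.conf.Configuration;

public class ConfigCheck {
    public static void main(String[] args) {
        // new Configuration() picks up core-site.xml from the classpath;
        // null here would mean the file is not visible to the agent at all
        Configuration conf = new Configuration();
        System.out.println("io.compression.codecs = " + conf.get("io.compression.codecs"));
    }
}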

I have tried adding the snappy .so and hadoop .so files to the .../jre/amd64/ path.
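One thing I am unsure about here is naming: as far as I know, on Linux the JVM maps a library name to lib<name>.so, so the files would have to be called libsnappy.so and libhadoop.so for this to work. A minimal check of what I mean (LoadCheck is just a scratch name):

public class LoadCheck {
    public static void main(String[] args) {
        // On Linux, loadLibrary("hadoop") looks for a file named libhadoop.so
        // on java.library.path; a file named plain hadoop.so is never found
        System.loadLibrary("hadoop");
        System.loadLibrary("snappy");
        System.out.println("both native libraries resolved");
    }
}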

I have tried specifying environment variables inside the flume-ng.sh script that handles starting the jar:

export JAVA_LIBRARY_PATH=$JAVA_LIBRARY_PATH:/opt/flume/plugins.d/hadoop/native
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/flume/plugins.d/hadoop/native
export SPARK_YARN_USER_ENV="JAVA_LIBRARY_PATH=$JAVA_LIBRARY_PATH,LD_LIBRARY_PATH=$LD_LIBRARY_PATH"
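I am not certain those exports actually survive into the agent JVM, so a tiny check like this, launched the same way the agent is (EnvCheck is just a scratch name), would confirm whether they arrive:

public class EnvCheck {
    public static void main(String[] args) {
        // If the exports in flume-ng.sh took effect, this should include the
        // /opt/flume/plugins.d/... entries; null means they never reached the JVM
        System.out.println("LD_LIBRARY_PATH = " + System.getenv("LD_LIBRARY_PATH"));
    }
}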

I have also tried adding the snappy/hadoop libs directly to the java command. Here is the relevant blurb from the flume-ng.sh script:

DAAS_NATIVE_LIBS="/opt/flume/plugins.d/hadoop/native/*:/opt/flume/plugins.d/snappy/native/*"

$EXEC $JAVA_HOME/bin/java $JAVA_OPTS $FLUME_JAVA_OPTS "${arr_java_props[@]}" -cp "$FLUME_CLASSPATH:$DAAS_FLUME_HOME" \
      -Djava.library.path=$DAAS_NATIVE_LIBS:$FLUME_JAVA_LIBRARY_PATH "$FLUME_APPLICATION_CLASS" $*
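One doubt I have about this: as far as I know, java.library.path entries are plain directories, and a trailing /* is taken literally rather than expanded the way a classpath wildcard is. A sketch along these lines (PathCheck is just a scratch name) would show what each entry actually resolves to:

import java.io.File;

public class PathCheck {
    public static void main(String[] args) {
        // Each java.library.path entry should be a directory; a trailing /*
        // is treated as a literal directory name, not a glob
        for (String dir : System.getProperty("java.library.path").split(File.pathSeparator)) {
            boolean hadoop = new File(dir, "libhadoop.so").isFile();
            boolean snappy = new File(dir, "libsnappy.so").isFile();
            System.out.println(dir + " -> libhadoop.so=" + hadoop + ", libsnappy.so=" + snappy);
        }
    }
}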

I am completely out of ideas on how to fix this. If anyone has an idea of how to solve it, I would appreciate it.

