mapred-site.xml:

<property>
  <name>mapred.compress.map.output</name>
  <value>true</value>
</property>
<property>
  <name>mapred.map.output.compression.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
You can also set these properties per job instead of cluster-wide, as in the sketch below.
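For example, a job that runs through ToolRunner/GenericOptionsParser accepts the same properties as -D options on the command line (a minimal sketch; the jar, class, and path names are hypothetical):

hadoop jar my-job.jar com.example.MyJob \
    -Dmapred.compress.map.output=true \
    -Dmapred.map.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec \
    /input /output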
The following properties control compression of a MapReduce job's final output; they too can be set per job.
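The property list itself appears to have been dropped from the text; a reconstruction using the same old-style property names as the map-output example above would be:

<property>
  <name>mapred.output.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapred.output.compression.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
<property>
  <name>mapred.output.compression.type</name>
  <value>BLOCK</value>
</property>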
Hive uses the same properties as MapReduce. Use them to compress Hive output that is stored as SequenceFiles.
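A minimal sketch of the corresponding session-level settings, run in the Hive CLI before the query that writes the output:

SET hive.exec.compress.output=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
SET mapred.output.compression.type=BLOCK;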
Depending on the architecture of the machine you are installing on, add one of the following lines to /usr/lib/flume/bin/flume-env.sh:

For 32-bit platforms:
export JAVA_LIBRARY_PATH=/usr/lib/hadoop/lib/native/Linux-i386-32

For 64-bit platforms:
export JAVA_LIBRARY_PATH=/usr/lib/hadoop/lib/native/Linux-amd64-64

The following section explains how to take advantage of Snappy compression.

Using Snappy compression in Flume Sinks

You can specify Snappy as a compression codec in Flume's configuration language. For example, the following specifies a Snappy-compressed SequenceFile sink on HDFS:

customdfs("hdfs://namenode/path", seqfile("snappy"))
For Sqoop imports, enable Snappy compression by passing the following option on the command line:
--compression-codec org.apache.hadoop.io.compress.SnappyCodec
It is a good idea to use the --as-sequencefile option with this compression option.
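Putting the two options together, a full import command might look like this (the JDBC URL and table name are placeholders):

sqoop import \
    --connect jdbc:mysql://db.example.com/mydb \
    --table mytable \
    --as-sequencefile \
    --compression-codec org.apache.hadoop.io.compress.SnappyCodec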
You need to configure HBase to use Snappy only if you installed Hadoop and HBase from tarballs; if you installed them from RPM or Debian packages, Snappy requires no HBase configuration. Depending on the architecture of the machine you are installing on, add one of the following lines to /etc/hbase/conf/hbase-env.sh:
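The export lines themselves are missing from the text; by analogy with the Flume step above, and assuming the HBASE_LIBRARY_PATH variable used by CDH-era HBase, they would be:

For 32-bit platforms:
export HBASE_LIBRARY_PATH=/usr/lib/hadoop/lib/native/Linux-i386-32

For 64-bit platforms:
export HBASE_LIBRARY_PATH=/usr/lib/hadoop/lib/native/Linux-amd64-64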
To use Snappy compression in HBase Tables, specify the column family compression as snappy. For example, in the shell:
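A sketch of the shell command (the table and column family names are hypothetical):

create 'mytable', {NAME => 'mycolumnfamily', COMPRESSION => 'snappy'}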