Common compression algorithms and differences in HDFS

Big data is fun 2021-01-21 19:58:38


Cloudera has put forward the following basic principles of data compression:

  • Whether to compress data, and which compression format to use, has an important impact on performance.
  • You need to balance the processing cost of compressing and decompressing the data, the disk I/O needed to read and write it, and the network bandwidth needed to send it across the network.

Beyond that, which compression formats should be used, and why choose them over other formats?

The main considerations are:

  • Whether the combination of file format and compression algorithm supports splitting. MapReduce needs to read data in parallel, which requires that a compressed file can be read in independent splits (a small check sketch follows this list).
  • I/O read performance. For the same amount of information, a compressed file not only takes up less storage space but also improves disk I/O efficiency and therefore program speed.
  • CPU cost is also a factor in choosing a compression algorithm. Generally, the more aggressive the compression, the greater the gain in I/O efficiency and storage utilization, but the more CPU it consumes, so a balance has to be found.
  • Compatibility: whether the file format is supported across languages and services. For example, Hadoop's main serialization format is Writables, but Writables only supports Java, which is why formats such as Avro and Thrift followed. Likewise, ORCFile is a columnar storage format designed for Hive, but it is not supported by Impala, which limits how widely the data can be shared.
  • Error-handling ability: with some formats, a corrupted part of a file ruins the whole table; with some, only the data that follows the corruption is affected; with others, only the broken data block itself is lost (Avro).
  • Read and load efficiency: RCFile loads slowly but queries quickly, so it is relatively better suited to data-warehouse workloads that load once and read many times.
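To make the splittability point concrete, here is a minimal sketch, assuming Hadoop's Java client on the classpath and a hypothetical file name, that resolves the codec from the file extension and checks whether it advertises split support:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.io.compress.SplittableCompressionCodec;

public class SplitCheck {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    CompressionCodecFactory factory = new CompressionCodecFactory(conf);

    // Hypothetical file name; the codec is looked up from the extension.
    Path path = new Path("/data/logs/day01.bz2");
    CompressionCodec codec = factory.getCodec(path);

    if (codec == null) {
      System.out.println("Not compressed: each HDFS block can become its own split.");
    } else if (codec instanceof SplittableCompressionCodec) {
      // bzip2, for example, implements SplittableCompressionCodec.
      System.out.println(codec.getClass().getSimpleName() + " is splittable.");
    } else {
      // gzip and snappy land here: one mapper must read the whole file.
      System.out.println(codec.getClass().getSimpleName() + " is NOT splittable.");
    }
  }
}
```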

File types in HDFS

  • File-based storage, for example SequenceFile
  • Serialization and columnar storage, for example Avro, RCFile and Parquet
  • Compressed storage, for example Snappy, LZO, etc.

Let's go through each of these.

File-based storage: SequenceFile

A SequenceFile is a flat file designed by Hadoop for storing [Key, Value] pairs in binary form. You can think of a SequenceFile as a container: packing many small files into a SequenceFile lets you store and process them efficiently. A SequenceFile does not store its records sorted by Key; its inner Writer class provides an append operation. The Key and Value in a SequenceFile can be any Writable type, including custom Writables.

In terms of storage structure, a SequenceFile consists of a Header followed by one or more Records. The Header mainly holds the Key class name, the Value class name, the compression codec, user-defined metadata and other information, plus some sync markers used to quickly locate record boundaries. Each Record is stored as a key-value pair, and its byte layout can be parsed into the record length, the Key length, the Key bytes and the Value bytes; the structure of the Value depends on whether the record is compressed.

SequenceFile supports three record storage modes:

  • No compression: poor I/O efficiency; compared with the compressed modes it has no advantage.
  • Record-level compression: each record is compressed individually; the compression efficiency is fairly average.
  • Block-level compression: the block here is not the same concept as an HDFS block. Binary data is accumulated up to the configured block size and compressed as a single block. Compared with record-level compression, block-level compression achieves a better ratio; in practice SequenceFiles are generally used with block-level compression.

However, SequenceFile only supports Java. SequenceFile is generally used as a container for small files, to prevent large numbers of small files from consuming too much NameNode memory for storing their metadata, including the DataNode locations of their blocks.
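As a minimal sketch of how such a container is typically written, assuming Hadoop's Java API, a hypothetical output path, and block-level compression with the Snappy codec as an illustrative choice:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.SnappyCodec;

public class SmallFilePacker {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path out = new Path("/tmp/small-files.seq");   // hypothetical destination

    try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
        SequenceFile.Writer.file(out),
        SequenceFile.Writer.keyClass(Text.class),            // e.g. the original file name
        SequenceFile.Writer.valueClass(BytesWritable.class),  // e.g. the file contents
        SequenceFile.Writer.compression(
            SequenceFile.CompressionType.BLOCK, new SnappyCodec()))) {

      // Records are appended; keys do not have to be sorted.
      writer.append(new Text("part-0001.log"), new BytesWritable("hello".getBytes("UTF-8")));
      writer.append(new Text("part-0002.log"), new BytesWritable("world".getBytes("UTF-8")));
    }
  }
}
```

The SnappyCodec here requires the Hadoop native snappy library; any other installed CompressionCodec could be passed instead.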

Serialization formats and columnar storage

Serialization is the process of converting structured data into a byte stream, mainly for remote transmission or storage. The main serialization format used in Hadoop is Writables, but it only supports the Java language, which is why formats such as Thrift and Avro appeared later.

  • Thrift

Thrift is a framework developed by Facebook for exposing services and interfaces across languages, enabling cross-platform communication. However, Thrift does not support splitting and lacks native MapReduce support, so we can set this format aside here.

  • Avro

Avro is a subproject of Hadoop and also a standalone Apache project. It is a high-performance middleware based on binary data transfer, and other Hadoop projects, such as the clients of HBase and Hive, also use it for data transfer between client and server.

Avro is a language-independent data serialization system; it appeared mainly to address the lack of cross-language portability in Writables. Avro stores the schema in the file header, so every file is self-describing, and Avro also supports schema evolution: the schema used to read a file does not have to match the schema used to write it, and when new requirements arise, new fields can be added to the schema.

  • Avro supports splitting, even after the data has been Gzip-compressed (a minimal write sketch follows this list)
  • Cross-language support
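A minimal write sketch using Avro's Java API; the schema, field names and output file are hypothetical, and the deflate codec stands in for the per-block compression mentioned above:

```java
import java.io.File;
import org.apache.avro.Schema;
import org.apache.avro.file.CodecFactory;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;

public class AvroWriteExample {
  public static void main(String[] args) throws Exception {
    // The schema travels in the file header, which is what makes the file self-describing.
    Schema schema = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"Event\",\"fields\":["
        + "{\"name\":\"id\",\"type\":\"long\"},"
        + "{\"name\":\"msg\",\"type\":\"string\"}]}");

    GenericRecord record = new GenericData.Record(schema);
    record.put("id", 1L);
    record.put("msg", "hello avro");

    try (DataFileWriter<GenericRecord> writer =
             new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(schema))) {
      writer.setCodec(CodecFactory.deflateCodec(6));  // compression is applied per block, so the file stays splittable
      writer.create(schema, new File("events.avro")); // hypothetical output file
      writer.append(record);
    }
  }
}
```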
  • ORCFile

ORC stands for Optimized Row Columnar. The ORC file format is a columnar storage format in the Hadoop ecosystem. It appeared as early as the beginning of 2013 and originated in Apache Hive, with the goal of reducing storage space in Hadoop and speeding up Hive queries. Like Parquet, it is not a purely columnar format: the whole table is first split into row groups, and within each row group the data is stored column by column. ORC files are self-describing, their metadata is serialized with Protocol Buffers, and the data in the file is compressed as much as possible to reduce storage consumption. It is now also supported by query engines such as Spark SQL and Presto. In 2015 the ORC project was promoted by the Apache Software Foundation to a top-level Apache project. ORC has the following advantages:

  • ORC is a columnar store with multiple file compression options and a high compression ratio
  • Files are splittable (Split). Therefore, when ORC is used as the storage format of a Hive table, it not only saves HDFS storage but also reduces the input data volume of query tasks and the number of MapTasks used
  • It provides several kinds of indexes: row group index and bloom filter index
  • ORC can support complex data structures (such as Map)
  • It supports all Hive types, including the composite types structs, lists, maps and unions
  • It supports splitting
  • A query can return only the columns it needs, reducing I/O and improving performance
  • It can be combined with Zlib, LZO and Snappy for further compression (a minimal write sketch follows this list)
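A minimal write sketch using the ORC core Java API; the schema, output path and the choice of Snappy are hypothetical illustrations:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector;
import org.apache.hadoop.hive.ql.exec.vector.LongColumnVector;
import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;
import org.apache.orc.CompressionKind;
import org.apache.orc.OrcFile;
import org.apache.orc.TypeDescription;
import org.apache.orc.Writer;

public class OrcWriteExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    TypeDescription schema = TypeDescription.fromString("struct<id:bigint,name:string>");

    Writer writer = OrcFile.createWriter(new Path("/tmp/example.orc"),
        OrcFile.writerOptions(conf)
            .setSchema(schema)
            .compress(CompressionKind.SNAPPY));   // Zlib is the default codec

    // Rows are written in columnar batches.
    VectorizedRowBatch batch = schema.createRowBatch();
    LongColumnVector id = (LongColumnVector) batch.cols[0];
    BytesColumnVector name = (BytesColumnVector) batch.cols[1];
    for (int i = 0; i < 10; i++) {
      int row = batch.size++;
      id.vector[row] = i;
      name.setVal(row, ("row-" + i).getBytes("UTF-8"));
    }
    writer.addRowBatch(batch);
    writer.close();
  }
}
```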
Compression algorithms

gzip compression

Advantages: high compression ratio, and compression/decompression speed is also relatively fast; Hadoop supports it, and an application can process gzip files just as if they were plain text; there is a Hadoop native library for it; most Linux systems ship with the gzip command, so it is easy to use.
Disadvantages: it does not support splitting.
Use cases: when each file after compression is within about 130 MB (within one block size), gzip is worth considering. For example, one day's or one hour's logs can be compressed into a single gzip file, and a MapReduce job can then run concurrently over multiple gzip files. Hive programs, streaming programs, and MapReduce programs written in Java treat it exactly like plain text, so after compression the original programs need no changes.
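A minimal sketch of gzip-compressing a MapReduce job's final output (job setup only; the job name is hypothetical, and the mapper, reducer and paths would be set as usual):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class GzipOutputSetup {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "gzip-output-example");

    // Compress the job's final output files with gzip; downstream Hive,
    // streaming and Java MapReduce programs read them like plain text.
    FileOutputFormat.setCompressOutput(job, true);
    FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);

    // For a splittable, higher-ratio (but slower) archive format,
    // org.apache.hadoop.io.compress.BZip2Codec could be passed here instead.
  }
}
```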

lzo compression

Advantages: compression/decompression speed is relatively fast with a reasonable compression ratio; it supports splitting and is one of the most popular compression formats in Hadoop; it is supported by the Hadoop native library; the lzop command can be installed on Linux, so it is easy to use.
Disadvantages: the compression ratio is a little lower than gzip; Hadoop itself does not support it, so it needs to be installed; using lzo files in an application requires some special handling (to support splitting, an index has to be built, and the InputFormat has to be set to the lzo format).
Use cases: large text files that are still larger than about 200 MB after compression; the larger the single file, the more obvious lzo's advantage.
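A sketch of the special handling mentioned above. It assumes the third-party hadoop-lzo package, whose class names may vary between versions, so treat the import below as an assumption; the job name is hypothetical:

```java
import com.hadoop.mapreduce.LzoTextInputFormat;  // from the third-party hadoop-lzo project (assumed class name)
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class LzoInputSetup {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "lzo-input-example");

    // Use an LZO-aware input format so that indexed .lzo files can be
    // split across mappers instead of being read by a single task.
    job.setInputFormatClass(LzoTextInputFormat.class);

    // The .lzo files must first be indexed with hadoop-lzo's indexer tool;
    // without an index, each .lzo file is processed as a single split.
  }
}
```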

snappy compression

Advantages: high compression speed and a reasonable compression ratio; supported by the Hadoop native library.
Disadvantages: it does not support splitting; the compression ratio is lower than gzip; Hadoop itself does not support it, so it needs to be installed; there is no corresponding command on Linux systems.
Use cases: when a MapReduce job's map output is large, snappy works well as the compression format for the intermediate map-to-reduce data; or as the output of one MapReduce job that serves as the input of another.
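A minimal sketch of the intermediate map-output use case, assuming the Hadoop 2+ MapReduce property names and the native snappy library on the cluster nodes:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.SnappyCodec;
import org.apache.hadoop.mapreduce.Job;

public class SnappyMapOutputSetup {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Compress the intermediate map -> reduce data with snappy.
    conf.setBoolean("mapreduce.map.output.compress", true);
    conf.setClass("mapreduce.map.output.compress.codec",
        SnappyCodec.class, CompressionCodec.class);

    Job job = Job.getInstance(conf, "snappy-map-output-example");
    // Mapper, reducer, input and output paths would be set as usual.
  }
}
```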

bzip2 compression

Advantages: it supports splitting; it has a high compression ratio, higher than gzip; Hadoop supports it, although without a native library implementation; Linux systems ship with the bzip2 command, so it is easy to use.
Disadvantages: compression/decompression is slow; no native library support.
Use cases: when speed requirements are not high but a higher compression ratio is needed, it can be used as the output format of MapReduce jobs; when the output data is relatively large and the processed data needs to be compressed and archived to save disk space while being accessed rarely afterwards; or when a single large text file needs to be compressed to save storage while still supporting splitting and remaining compatible with previous applications (that is, the applications need no changes).

Finally, a side-by-side comparison of these four compression formats, summarizing the points above:

  Format  | Splittable       | Compression ratio | Speed     | Shipped with Hadoop | Native library
  gzip    | no               | high              | fast      | yes                 | yes
  lzo     | yes (with index) | lower than gzip   | fast      | no, needs install   | yes
  snappy  | no               | lower than gzip   | very fast | no, needs install   | yes
  bzip2   | yes              | higher than gzip  | slow      | yes                 | no

This article is from the WeChat official account "Big data is fun" (havefun_bigdata). Original publication: 2021-01-11. Source: https://javamana.com/2021/01/20210121195549723o.html
