HDFS distributed file system

chain_xx_wdm 2020-11-09 23:49:01


A brief introduction to HDFS

HDFS is the core storage layer of Hadoop: a distributed storage service.

HDFS is one of many distributed file systems.

Important concepts of HDFS

HDFS locates files through a unified namespace directory tree.
It is also distributed: many servers work together to provide its functionality, and each server in the cluster plays its own role (the essence of a distributed system is to split the work so that each node attends to its own duties).
  • Typical Master/Slave architecture
    The architecture of HDFS is a typical Master/Slave structure.
    An HDFS cluster usually consists of one NameNode plus multiple DataNodes.
    The NameNode is the master node of the cluster, and the DataNodes are the slave nodes.
  • Block storage (block mechanism)
    Files in HDFS are physically stored as blocks, and the block size can be specified by a configuration parameter (see the sketch after this list).
    In Hadoop 2.x the default block size is 128 MB.
  • Namespace (NameSpace)
    HDFS supports a traditional hierarchical file organization.
    Users or applications can create directories and then store files in those directories.
    The hierarchy of the file system namespace is similar to that of most existing file systems: users can create, delete, move, or rename files.

    The NameNode is responsible for maintaining the file system namespace; any change to the namespace or its properties is recorded by the NameNode.

  • NameNode metadata management
    The directory structure and the file-to-block mapping are together called metadata.
    For each file, the NameNode's metadata records the corresponding block information (the block IDs and the DataNode nodes they reside on).
  • DataNode data storage
    The concrete storage and management of each file block is handled by the DataNode nodes.
    A block is stored on multiple DataNodes, and each DataNode periodically reports the blocks it holds to the NameNode.
  • Replica mechanism
    For fault tolerance, every block of a file has replicas.
    The block size and replication factor are configurable for each file.
    An application can specify the number of replicas of a file.
    The replication factor can be specified when the file is created and can be changed later.
    The default number of replicas is 3.
  • Write once, read many
    HDFS is designed for write-once, read-many scenarios and does not support random modification of files (appending is supported; random updates are not).
    Because of this, HDFS is well suited as the underlying storage for big data analysis, but not for applications such as network drives (modification is inconvenient, latency is high, network overhead is high, and the overall cost is too high).
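
To make the block-size, replica, namespace, and append concepts above concrete, here is a minimal sketch using the Hadoop Java FileSystem API. The NameNode address hdfs://namenode:9000, the paths, and the numbers chosen for block size and replication are placeholders for illustration, not recommendations.

```java
import java.net.URI;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsConceptsDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder NameNode address; use your cluster's fs.defaultFS value.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:9000"), conf);

        // Namespace: directories and files form a hierarchy maintained by the NameNode.
        Path dir = new Path("/user/demo");
        Path file = new Path(dir, "notes.txt");
        fs.mkdirs(dir);

        // Block size and replication factor can be chosen per file at create time.
        short replication = 3;                 // the default replica count
        long blockSize = 128L * 1024 * 1024;   // 128 MB, the Hadoop 2.x default
        try (FSDataOutputStream out =
                 fs.create(file, true, 4096, replication, blockSize)) {
            out.write("first write\n".getBytes(StandardCharsets.UTF_8));
        }

        // Write once, read many: appending is allowed, in-place edits are not.
        try (FSDataOutputStream out = fs.append(file)) {
            out.write("appended line\n".getBytes(StandardCharsets.UTF_8));
        }

        // The replication factor can also be changed after the file exists.
        fs.setReplication(file, (short) 2);

        // Namespace operation: rename (move) works like in an ordinary file system.
        fs.rename(file, new Path(dir, "notes-renamed.txt"));
        fs.close();
    }
}
```

The create() overload with an explicit replication factor and block size, setReplication(), and append() correspond to the per-file settings described above; whether an append succeeds also depends on the cluster configuration.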

HDFS architecture


  • NameNode: the manager of the HDFS cluster, the Master

    • Maintains and manages the HDFS namespace (NameSpace)
    • Maintains the replica policy
    • Records the mapping information of file blocks (Block)
    • Handles read and write requests from clients
  • DataNode: the NameNode issues commands and the DataNode performs the actual operations, the Slave

    • Stores the actual data blocks
    • Handles reads and writes of data blocks
  • Client: the client

    • When uploading a file to HDFS, the Client splits the file into Blocks and then uploads them
    • Interacts with the NameNode to obtain file location information
    • Interacts with DataNodes to read or write file data
    • The Client can use commands or APIs to manage or access HDFS (see the sketch below)
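
As a concrete illustration of how the Client asks the NameNode for metadata, here is a minimal sketch using the Hadoop Java FileSystem API; the NameNode address and the file path are placeholders.

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListBlockLocations {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder NameNode address.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:9000"), conf);

        // Hypothetical file path, used only for illustration.
        Path file = new Path("/user/demo/data.txt");
        FileStatus status = fs.getFileStatus(file);

        // The NameNode answers this metadata query: which blocks make up the file
        // and which DataNodes hold each replica.
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation block : blocks) {
            System.out.printf("offset=%d length=%d hosts=%s%n",
                    block.getOffset(), block.getLength(),
                    String.join(",", block.getHosts()));
        }
        fs.close();
    }
}
```

The block data itself never passes through the NameNode; it only serves metadata, which is why the client then talks to the DataNodes directly for reads and writes.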


HDFS read and write analysis

HDFS read data flow


  • The client requests the file to download from the NameNode through DistributedFileSystem; the NameNode queries its metadata to find the DataNode addresses that hold the file's blocks
  • The client chooses a DataNode (nearest first, then at random) and requests to read the data
  • The DataNode starts transferring data to the client (it reads the data from disk into an input stream and verifies checksums in units of Packets)
  • The client receives the data in units of Packets, caches it locally first, and then writes it to the target file
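
From application code, all of these steps are handled inside the client library, so a read looks like a single input stream. A minimal sketch, with placeholder NameNode address and path:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class ReadFromHdfs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder NameNode address.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:9000"), conf);

        // open() asks the NameNode for block locations, then streams the blocks
        // from the chosen DataNodes; packet transfer and checksum verification
        // happen inside the returned stream.
        try (FSDataInputStream in = fs.open(new Path("/user/demo/data.txt"))) {
            IOUtils.copyBytes(in, System.out, 4096, false); // copy file contents to stdout
        }
        fs.close();
    }
}
```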

HDFS write data flow


  • The client requests to upload a file to the NameNode through the DistributedFileSystem module; the NameNode checks whether the target file already exists and whether the parent directory exists
  • The NameNode replies whether the upload is allowed
  • The client asks which DataNode servers the first Block should be uploaded to
  • The NameNode returns 3 DataNode nodes, namely dn1, dn2, and dn3
  • The client requests dn1 to upload data through the FSDataOutputStream module; on receiving the request dn1 calls dn2, and dn2 then calls dn3, establishing the communication pipeline
  • dn1, dn2, and dn3 acknowledge the client step by step
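
Likewise, the replication pipeline is set up by the client library; application code only sees an output stream. A minimal sketch with placeholder NameNode address and path:

```java
import java.net.URI;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteToHdfs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder NameNode address.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:9000"), conf);

        // create() first contacts the NameNode (target-exists and parent-directory
        // checks), then writes through the DataNode pipeline the NameNode assigns.
        try (FSDataOutputStream out = fs.create(new Path("/user/demo/output.txt"))) {
            out.write("hello hdfs\n".getBytes(StandardCharsets.UTF_8));
        }
        fs.close();
    }
}
```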

NameNode and Secondary NameNode (2NN)

NameNode fault handling

Hadoop quotas, archiving, and cluster safe mode

Copyright notice
This article was created by [chain_xx_wdm]. Please include a link to the original when reposting. Thank you.
