Do you know all 40 of these commonly used HDFS commands?



Preface

         As everyone knows, Hadoop provides a command-line interface for managing files in HDFS, covering operations such as reading files, creating directories, moving files, copying files, deleting directories, uploading files, downloading files, and listing directory contents. In this article, I'm going to walk through Hadoop's command-line interface. I hope you gain something from reading it.

        HDFS command lines have the following general format:

hadoop fs -cmd <args>

         Here, cmd is the specific command to execute, and <args> are the arguments passed to the command (a command may take more than one argument).

         To see the help for the command-line interface, just enter the following command on the command line:

hadoop fs

         That is, when no specific command is given, Hadoop lists the help information for the command-line interface, as shown below:

[root@node01 ~]# hadoop fs
Usage: hadoop fs [generic options]
[-appendToFile <localsrc> ... <dst>]
[-cat [-ignoreCrc] <src> ...]
[-checksum <src> ...]
[-chgrp [-R] GROUP PATH...]
[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
[-chown [-R] [OWNER][:[GROUP]] PATH...]
[-copyFromLocal [-f] [-p] [-l] <localsrc> ... <dst>]
[-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-count [-q] [-h] [-v] [-x] <path> ...]
[-cp [-f] [-p | -p[topax]] <src> ... <dst>]
[-createSnapshot <snapshotDir> [<snapshotName>]]
[-deleteSnapshot <snapshotDir> <snapshotName>]
[-df [-h] [<path> ...]]
[-du [-s] [-h] [-x] <path> ...]
[-expunge]
[-find <path> ... <expression> ...]
[-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-getfacl [-R] <path>]
[-getfattr [-R] {-n name | -d} [-e en] <path>]
[-getmerge [-nl] <src> <localdst>]
[-help [cmd ...]]
[-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [<path> ...]]
[-mkdir [-p] <path> ...]
[-moveFromLocal <localsrc> ... <dst>]
[-moveToLocal <src> <localdst>]
[-mv <src> ... <dst>]
[-put [-f] [-p] [-l] <localsrc> ... <dst>]
[-renameSnapshot <snapshotDir> <oldName> <newName>]
[-rm [-f] [-r|-R] [-skipTrash] <src> ...]
[-rmdir [--ignore-fail-on-non-empty] <dir> ...]
[-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
[-setfattr {-n name [-v value] | -x name} <path>]
[-setrep [-R] [-w] <rep> <path> ...]
[-stat [format] <path> ...]
[-tail [-f] <file>]
[-test -[defsz] <path>]
[-text [-ignoreCrc] <src> ...]
[-touchz <path> ...]
[-usage [cmd ...]]
Generic options supported are
-conf <configuration file> specify an application configuration file
-D <property=value> use value for given property
-fs <local|namenode:port> specify a namenode
-jt <local|resourcemanager:port> specify a ResourceManager
-files <comma separated list of files> specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars> specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives> specify comma separated archives to be unarchived on the compute machines.
The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]

1、File preparation

         Create a data.txt file locally on the server for testing. The contents of the file are as follows:

hello hadoop
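
         If you want to reproduce the setup, a minimal sketch (assuming the /data directory used throughout the examples does not exist yet) looks like this:

echo "hello hadoop" > data.txt
hadoop fs -mkdir -p /data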

2、-appendToFile

         Append a local file on the server to the specified file in HDFS. Running the command with the same arguments multiple times appends the same lines to the HDFS file again each time. The example code is as follows:

hadoop fs -appendToFile data.txt /data/data.txt
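
         As a side note, -appendToFile can also read from standard input when the local source is given as - (a dash); a quick sketch:

echo "hello again" | hadoop fs -appendToFile - /data/data.txt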

3、-cat

         -cat is mainly used to view the contents of uncompressed files in HDFS. The example code is as follows:

[root@node01 ~]# hadoop fs -cat /data/data.txt
hello hadoop
hello hadoop

4、-checksum

         View the checksum of a file in HDFS. The example code is as follows:

[root@node01 ~]# hadoop fs -checksum /data/data.txt
/data/data.txt MD5-of-0MD5-of-512CRC32C 000002000000000000000000c8e21d30c9ed5817cd5ff40768a34389

5、-chgrp

         Change the group of a file or directory in HDFS. The -R option applies the change recursively to everything under a directory. The user executing this command must be the owner of the file or directory, or a superuser. The example code is as follows:

hadoop fs -chgrp hadoop /data/data.txt

6、-chmod

         Change the permissions of a file or directory in HDFS. The -R option applies the change recursively to everything under a directory. The user executing this command must be the owner of the file or directory, or a superuser. The example code is as follows:

hadoop fs -chmod 700 /data/data.txt

         At this point, the permissions of the data.txt file have been changed to "-rwx------".
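
         Besides octal modes, -chmod also accepts symbolic modes, as the MODE[,MODE]... form in the usage listing suggests; a small sketch against the same file:

hadoop fs -chmod u+x,g-w /data/data.txt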

7、-chown

         Change the owner of a file or directory. The -R option applies the change recursively to everything under a directory. The user executing this command must be a superuser. The example code is as follows:

hadoop fs -chown alice:alice /data/data.txt

8、-copyFromLocal

         Copy files from the local server into HDFS. The example code is as follows:

hadoop fs -copyFromLocal a.txt /data/

9、-copyToLocal

         Copy files from HDFS to the local server. The example code is as follows:

hadoop fs -copyToLocal /data/data.txt /home/hadoop/input

10、-count

         Display the number of subdirectories, the number of files, and the number of bytes used under a directory, followed by the path name. The -q option additionally displays name and space quota information for the directory. The example code is as follows:

[root@node01 zwj]# hadoop fs -count /data/
4 9 456 /data
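
         A sketch of the -q variant mentioned above, combined with -h for human-readable sizes (the output adds quota and remaining-quota columns before the counts):

hadoop fs -count -q -h /data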

11、-cp

         Copy files or directories. If there is more than one source file or directory, the target must be a directory. The example code is as follows:

hadoop fs -cp /data/data.txt /data/data.tmp

12、-createSnapshot

         Create a snapshot of a directory in HDFS. The example code is as follows:
         First, create the /sn directory in HDFS and enable snapshots on it, as shown below:

[root@node01 zwj]# hadoop fs -mkdir /sn
[root@node01 zwj]# hdfs dfsadmin -allowSnapshot /sn
Allowing snaphot on /sn succeeded

         Next, create a snapshot, as shown below:

[root@node01 zwj]# hadoop fs -createSnapshot /sn s1
Created snapshot /sn/.snapshot/s1

         The snapshot was created successfully.

13、-deleteSnapshot

         Delete a snapshot of an HDFS directory. The example code is as follows:

hadoop fs -deleteSnapshot /sn s1

         This deletes the snapshot s1 of the /sn directory (the one created in the -createSnapshot example).

14、-df

         View the space usage of the HDFS filesystem. The example code is as follows:

[root@node01 zwj]# hadoop fs -df -h /data
Filesystem Size Used Available Use%
hdfs://node01:8020 130.1 G 13.7 G 57.8 G 11%

15、-du

         View the size of files or directories in HDFS. The -s option shows an aggregate summary, -h prints human-readable sizes, and -x excludes snapshots. The example code is as follows:

[root@node01 zwj]# hadoop fs -du -h -s -x /data
456 1.3 K /data

16、-expunge

         Empty the HDFS trash. The example code is as follows:

[root@node01 zwj]# hadoop fs -expunge
20/12/27 20:41:48 INFO fs.TrashPolicyDefault: TrashPolicyDefault#deleteCheckpoint for trashRoot: hdfs://node01:8020/user/root/.Trash
20/12/27 20:41:48 INFO fs.TrashPolicyDefault: TrashPolicyDefault#deleteCheckpoint for trashRoot: hdfs://node01:8020/user/root/.Trash
20/12/27 20:41:48 INFO fs.TrashPolicyDefault: Deleted trash checkpoint: /user/root/.Trash/201028063715
20/12/27 20:41:48 INFO fs.TrashPolicyDefault: Deleted trash checkpoint: /user/root/.Trash/201031181139
20/12/27 20:41:48 INFO fs.TrashPolicyDefault: TrashPolicyDefault#createCheckpoint for trashRoot: hdfs://node01:8020/user/root/.Trash
20/12/27 20:41:48 INFO fs.TrashPolicyDefault: Created trash checkpoint: /user/root/.Trash/201227204148

17、-find

         Search for files under the specified HDFS paths. With no expression given, every path found is printed. The example code is as follows:

[root@node01 zwj]# hadoop fs -find /data /data/data.txt
/data
/data/a.txt
/data/data.txt

18、-get

         Copy files from HDFS to the local server. The example code is as follows:

hadoop fs -get /data/data.txt /home/hadoop/input

19、-getfacl

         View the access control list (ACL) of a file or directory in HDFS. The -R option lists the ACLs of everything under a directory recursively. The example code is as follows:

[root@node01 zwj]# hadoop fs -getfacl /data
# file: /data
# owner: root
# group: supergroup

20、-getfattr

         View the extended attributes of a file or directory in HDFS. The -R option recursively lists the extended attributes of files under all subdirectories, and -d dumps all extended attributes. The example code is as follows:

[root@node01 zwj]# hadoop fs -getfattr -R -d /data
# file: /data
# file: /data/a.txt
# file: /data/data.txt
# file: /data/input

21、-getmerge

         Merge multiple files in HDFS into a single file and copy it to the local server. The example code is as follows:

hadoop fs -getmerge /data/a.txt /data/b.txt /home/hadoop/input/data.local
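
         A useful variant: the -nl flag (shown in the usage listing) inserts a newline between merged files, which helps when the source files do not end with one; a sketch:

hadoop fs -getmerge -nl /data/a.txt /data/b.txt /home/hadoop/input/data.local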

22、-head

         View the beginning of a file in HDFS (the first 1KB), head-style. The argument must be a file, not a directory. The example code is as follows:

[root@node01 zwj]# hadoop fs -head /data/data.txt
hello hadoop
hello hadoop

23、-help

         View the help information for a specific Hadoop command. The example code is as follows:

[root@node01 zwj]# hadoop fs -help cat
-cat [-ignoreCrc] <src> ... :
Fetch all files that match the file pattern <src> and display their content on
stdout.

24、-ls

         List information about files and directories under the specified HDFS path. The example code is as follows:

[root@node01 zwj]# hadoop fs -ls /data
Found 3 items
-rw-r--r-- 3 root supergroup 6 2020-12-27 20:11 /data/a.txt
-rw-r--r-- 3 root supergroup 26 2020-12-27 18:59 /data/data.txt
drwxr-xr-x - root supergroup 0 2020-09-18 19:16 /data/input

25、-mkdir

         Create a directory in HDFS (note that the example below fails if the parent directory does not already exist; see the sketch after the example). The example code is as follows:

hadoop fs -mkdir /test/data
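
         As noted above, the command fails if /test does not already exist; a sketch using -p to create missing parent directories:

hadoop fs -mkdir -p /test/data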

26、-moveFromLocal

         Move a file from the local server into HDFS (the local copy is removed). The example code is as follows:

hadoop fs -moveFromLocal /home/hadoop/input/data.local /data/

27、-moveToLocal

         Move a file from HDFS to a directory on the local server.

hadoop fs -moveToLocal /data/data.txt /home/hadoop/input/

Note: as of Hadoop 3.2.0, this command has not yet been implemented.

28、-mv

         Move a file from one HDFS directory to another. The example code is as follows:

hadoop fs -mv /data/data.local /test

29、-put

         Copy a local file to a directory in HDFS. The example code is as follows:

hadoop fs -put /home/hadoop/input/data.local /data

30、-renameSnapshot

         Rename a snapshot of an HDFS directory. The example code is as follows:
         First, as in the -createSnapshot example, create the /sn directory in HDFS and enable snapshots on it, as shown below:

[root@node01 zwj]# hadoop fs -mkdir /sn
[root@node01 zwj]# hdfs dfsadmin -allowSnapshot /sn
Allowing snaphot on /sn succeeded

         Next, create a snapshot, as shown below:

[root@node01 zwj]# hadoop fs -createSnapshot /sn s1
Created snapshot /sn/.snapshot/s1

         The snapshot was created successfully.
         The following renames the snapshot s1 of the /sn directory to s2, as shown below:

hadoop fs -renameSnapshot /sn s1 s2

31、-rm

         Delete files or directories. The example code is as follows:

hadoop fs -rm /data/data.local
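
         A sketch of the recursive form: -r (or -R) deletes a directory and everything under it, and -skipTrash bypasses the trash so the data is removed immediately (the path here is just an illustration; use with care):

hadoop fs -rm -r -skipTrash /tmp/old-data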

32、-rmdir

         Delete a directory in HDFS; the directory must be empty. The example code is as follows:

hadoop fs -rmdir /test

33、-setrep

         Set the target replication factor of files in HDFS; for a directory, the change applies recursively to the files under it. The -w option waits until replication actually reaches the target value, which can take a long time; the -R flag is accepted for backwards compatibility and has no effect. The example code is as follows:

hadoop fs -setrep 5 /data/data.txt
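
         A sketch with the -w option described above, which blocks until the file actually has the requested number of replicas:

hadoop fs -setrep -w 3 /data/data.txt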

34、-stat

         View statistics of a file or directory in HDFS, printed according to the given format string. The available format specifiers are as follows:

  1. %b: file size in bytes
  2. %g: group name of the owner
  3. %n: file name
  4. %o: block size
  5. %r: replication factor
  6. %u: user name of the owner
  7. %y: modification time

         The example code is as follows:

[root@node01 zwj]$ hadoop fs -stat %b,%g,%n,%o,%r,%u,%y /data
0,hive,data,0,0,hive,2020-11-16 07:54:04

35、-tail

         Display the last part of a file, normally its final 1KB. The -f option follows the file: when new content is appended, it is displayed in real time. The example code is as follows:

[root@node01 zwj]# hadoop fs -tail /data/data.txt
hello hadoop
hello hadoop
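
         A sketch of the follow mode described above; press Ctrl+C to stop watching:

hadoop fs -tail -f /data/data.txt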

36、-test

         Check path information. The command prints no output; instead it returns an exit code according to the following options:

  1. -d: return 0 if the path is a directory
  2. -e: return 0 if the path exists
  3. -f: return 0 if the path is a file
  4. -s: return 0 if the file at the path is larger than 0 bytes
  5. -w: return 0 if the path exists and is writable
  6. -r: return 0 if the path exists and is readable
  7. -z: return 0 if the file at the path is 0 bytes, otherwise return 1

         The example code is as follows:

hadoop fs -test -d /data
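
         Because -test communicates only through its exit code, it is usually combined with shell logic; a minimal sketch:

hadoop fs -test -e /data/data.txt && echo "exists" || echo "missing"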

37、-text

         View file contents. Unlike -cat, which can only display the contents of uncompressed text files, -text can also display the contents of compressed files. The example code is as follows:

[root@node01 zwj]# hadoop fs -text /data/data.txt
hello hadoop
hello hadoop
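
         Against a compressed file, -text decompresses before printing; a sketch (the path data.txt.gz here is hypothetical):

hadoop fs -text /data/data.txt.gz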

38、-touch

         Create a file in HDFS. If the file already exists, the command updates its timestamps rather than reporting an error. The example code is as follows:

hadoop fs -touch /data/data.touch

39、-truncate

         Truncate a file in HDFS to the specified length. The example code is as follows:

[root@node01 zwj]# hadoop fs -truncate 26 /data/data.txt
Truncate /data/data.txt to length: 26
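
         If the new length does not fall on a block boundary, the last block goes through recovery before it can be read again; the -w flag waits for that to finish, as in this sketch:

hadoop fs -truncate -w 26 /data/data.txt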

40、-usage

         List the usage syntax of the specified command. The example code is as follows:

[root@node01 zwj]# hadoop fs -usage cat
Usage: hadoop fs [generic options] -cat [-ignoreCrc] <src> ...

Summary

         This article has introduced 40 commonly used HDFS commands; a few less common ones were left out for interested readers to explore on their own. In later articles I'll round out the FlinkSQL content, keep taking notes from my daily work, and write up more hardcore knowledge summaries. Once the review series is mostly done, I'll start a real-time data warehouse project, so follow along if you're interested to get the material as soon as it's out. The more you know, the more you realize you don't know. I'm Alice, see you in the next issue!

        

This column is continuously updated. Search WeChat for 「Ape man fungus」 to read new articles first and get mind maps, big data books, high-frequency big data interview questions, and plenty of first-hand interview experience from major companies. Looking forward to your follow!


Copyright notice
This article was created by [Alice's bacteria]. When reprinting, please include a link to the original. Thanks.
https://javamana.com/2021/01/20210120215156743Y.html
