Hadoop 2.7.7 alicloud installation and deployment

itread01 2021-01-23 14:21:29

On Alibaba Cloud the network environment needs no configuration from us. If you are instead using a virtual machine on your own computer, search for VM installation steps online. This guide covers a single-node installation (cluster mode is also mentioned along the way).
Use Xshell to connect to the Alibaba Cloud server, then upload the downloaded installation package with the following commands.

# Install lrzsz first; it will be convenient later.
# rz uploads a file; sz <filename> downloads one.
[root@fda ~]# yum -y install lrzsz
# The following command opens a dialog to choose the file to upload
[root@fda ~]# rz

Turn off the firewall

On Alibaba Cloud the firewall is usually already off; if it is not, run the relevant commands below.

# Check the firewall status
[root@fda ~]# systemctl status firewalld
# Stop the firewall
[root@fda ~]# systemctl stop firewalld
# Disable the firewall at boot
[root@fda ~]# systemctl disable firewalld
# Start the firewall
[root@fda ~]# systemctl start firewalld
# Enable the firewall at boot
[root@fda ~]# systemctl enable firewalld
# Restart the firewall
[root@fda ~]# systemctl restart firewalld

Configure passwordless login

Modify the hosts file and add entries like the following (one line per host):

[root@fda ~]# vim /etc/hosts
# Add the host mappings, one <ip> <hostname> pair per line, e.g.
<private-ip> fda
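As a sketch of an idempotent way to add the mapping (the IP below is a placeholder, and a temporary file stands in for /etc/hosts so the sketch is safe to run anywhere):

```shell
# Append the entry only if it is not already present; running the
# commands twice still leaves exactly one line in the file.
HOSTS_FILE=$(mktemp)                  # use /etc/hosts on a real machine
ENTRY="172.16.0.10 fda"               # hypothetical private IP
grep -qxF "$ENTRY" "$HOSTS_FILE" || echo "$ENTRY" >> "$HOSTS_FILE"
grep -qxF "$ENTRY" "$HOSTS_FILE" || echo "$ENTRY" >> "$HOSTS_FILE"   # no-op
cat "$HOSTS_FILE"
```

The second invocation is a no-op, so the snippet can be re-run safely after adding new hosts.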

Configure SSH

# On each machine, first run ssh once so that the ~/.ssh directory is created
# (ssh is followed by a host name); answering no at the prompt is enough
[root@fda ~]# ssh fda
# On every machine, enter the ~/.ssh directory
[root@fda ~]# cd ~/.ssh
# Generate the public and private keys; press Enter at every prompt
[root@fda .ssh]# ssh-keygen -t rsa -P ''
# Output like the following indicates success
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:6YO1h1emM9gcWvv9OT6ftHxLnjP9u8p25x1o30oq3No root@fda
The key's randomart image is:
+---[RSA 2048]----+
(randomart omitted)
+----[SHA256]-----+
# On each machine, copy the id_rsa.pub public key into the authorized_keys file
[root@fda .ssh]# cp id_rsa.pub authorized_keys
# With several machines, merge all authorized_keys files into one; the
# following command appends this machine's keys to the master's file
[root@fda .ssh]# cat ~/.ssh/authorized_keys | ssh root@fda 'cat >> ~/.ssh/authorized_keys'
# Check the authorized_keys file on the master; its content looks similar to:
[root@fda .ssh]# more authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC5iw8+LlLxo0d77uaTChOKKJqfMHzp2jgzqV2hFAneFXqqWmrZ4/FrMUPenmdss19bP4Up9G7PGbJu29yZDvkDwlmuqnVajYyDOsCl7PPXPWXMIlxMGUHgSXLnQQi6QnWp04vJKDs0EbiRTd0ZYCSQefzJcZ8jbQ7bLYt6jtil7FfUupTdHTeexKKd8Mq3K7YFZHumKvhzs6wWiM+n41jANS083ss3OYmAdO2cU0w1BhLVvJhdzd6fNG3RXVCXI2v0XxCUHiqI9Oewl2qPOfKzeyy09bJxo371Ezjmt8GMrkA/Ecepkvx12qwNzC9bSPLfbnPWVo2gIxe4mMaFqCFJ root@fda
# With several machines, distribute the master's authorized_keys to the other hosts
[root@fda .ssh]# scp ~/.ssh/authorized_keys root@fda1:~/.ssh/
# With several machines, perform a passwordless ssh login from every machine
# to every other machine, including itself
[root@fda ~]# ssh fda1
[root@fda1 ~]# ssh fda
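Where ssh-copy-id is available, it combines the copy-and-append steps above into one command per host. A dry-run sketch that only prints the commands (the host list is an assumption; remove the echo to actually run them):

```shell
# Print one ssh-copy-id command per target host; each command, when run,
# appends ~/.ssh/id_rsa.pub to that host's authorized_keys.
for host in fda fda1; do
  echo "ssh-copy-id -i ~/.ssh/id_rsa.pub root@${host}"
done
```

This avoids the manual cp/cat/scp merging when the number of hosts grows.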

Install the Java environment

Uninstall the original JDK
If a JDK is already installed in CentOS, you can uninstall it before installing the new one; installing the new JDK without removing the old also works. To uninstall, use the commands below.

# Query all currently installed jdk packages
[root@fda ~]# rpm -qa | grep jdk
# If nothing is shown, no jdk is installed and there is nothing to uninstall.
# If packages such as the following appear, they can be removed:
copy-jdk-configs-2.2-3.el7.noarch
java-1.8.0-openjdk-
# Uninstall a jdk package like this:
[root@fda ~]# yum -y remove copy-jdk-configs-2.2-3.el7.noarch
# Check the installed jdk packages again
[root@fda ~]# rpm -qa | grep jdk
Install the new JDK

# Create the target directories on the master node
[root@fda ~]# mkdir -p /opt/module/Java
[root@fda ~]# mkdir -p /opt/module/Hadoop
# Enter the Java directory
[root@fda ~]# cd /opt/module/Java
# Use rz to upload the jdk archive from the Windows host
[root@fda Java]# rz
# Extract into the current directory
[root@fda Java]# tar -zxvf jdk-8u181-linux-x64.tar.gz
# Configure environment variables
[root@fda Java]# vim /etc/profile
# Append the following to the file
export JAVA_HOME=/opt/module/Java/jdk1.8.0_181
export JRE_HOME=/opt/module/Java/jdk1.8.0_181/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib/rt.jar
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
# Apply the settings
[root@fda Java]# source /etc/profile
# Verify the configuration
[root@fda jdk1.8.0_181]# java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)
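A quick sanity check that JAVA_HOME points at a real JDK directory; here a temporary directory with a stub bin/java stands in for the real path, so the sketch is safe to run anywhere:

```shell
# On a real node, set JAVA_HOME=/opt/module/Java/jdk1.8.0_181 instead
# of the mktemp stand-in below.
JAVA_HOME=$(mktemp -d)
mkdir -p "$JAVA_HOME/bin" && touch "$JAVA_HOME/bin/java"
# A valid JAVA_HOME must contain bin/java
if [ -e "$JAVA_HOME/bin/java" ]; then
  echo "JAVA_HOME looks valid"
else
  echo "JAVA_HOME is wrong"
fi
```

The same check catches the most common cause of "JAVA_HOME is not set" errors later when Hadoop starts.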

Install the Hadoop environment

Upload files to the server

# Upload
[root@fda ~]# cd /opt/module/Hadoop
[root@fda Hadoop]# rz
# Extract
[root@fda Hadoop]# tar -zxvf hadoop-2.7.7.tar.gz

Create the corresponding directories: creating them in advance makes it clear which configuration entry refers to which directory. Strictly speaking this is optional, because Hadoop will create them automatically.

# Enter the hadoop-2.7.7 home directory
[root@fda Hadoop]# cd hadoop-2.7.7
# Create the following directories for later use
[root@fda hadoop-2.7.7]# mkdir tmp
[root@fda hadoop-2.7.7]# mkdir logs
[root@fda hadoop-2.7.7]# mkdir -p dfs/name
[root@fda hadoop-2.7.7]# mkdir -p dfs/data
[root@fda hadoop-2.7.7]# mkdir -p dfs/namesecondary
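Under bash, the five directories above can also be created in a single command with brace expansion; a minimal sketch, run in a temporary directory here so it is safe to try anywhere:

```shell
# mktemp -d is used only to keep the sketch self-contained; on the real
# node you would stay in /opt/module/Hadoop/hadoop-2.7.7.
cd "$(mktemp -d)"
# Brace expansion creates dfs/name, dfs/data and dfs/namesecondary at once
mkdir -p tmp logs dfs/{name,data,namesecondary}
ls dfs
```

`ls dfs` should list data, name and namesecondary.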

Modify the configuration files: the following Hadoop configuration files need to be changed.
Environment scripts

[root@fda hadoop-2.7.7]# vim etc/hadoop/hadoop-env.sh
# Set JAVA_HOME to the following, otherwise Hadoop may fail to start
export JAVA_HOME=/opt/module/Java/jdk1.8.0_181
[root@fda hadoop-2.7.7]# vim etc/hadoop/yarn-env.sh
# Set JAVA_HOME to the following
export JAVA_HOME=/opt/module/Java/jdk1.8.0_181
[root@fda hadoop-2.7.7]# vim etc/hadoop/mapred-env.sh
# Set JAVA_HOME to the following
export JAVA_HOME=/opt/module/Java/jdk1.8.0_181

Core configuration file modification

[root@fda hadoop-2.7.7]# vim etc/hadoop/core-site.xml
<!-- Add the following between <configuration></configuration> -->
<!-- URI and port of the HDFS namenode [required] -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://fda:9000</value>
</property>
<!-- Temporary storage directory used by Hadoop at run time [required] -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>file:/opt/module/Hadoop/hadoop-2.7.7/tmp</value>
</property>
<!-- Read/write buffer size used for SequenceFiles [optional] -->
<property>
  <name>io.file.buffer.size</name>
  <value>131072</value>
</property>
<!-- The following two proxy-user settings are not needed for now [optional] -->
<property>
  <name>hadoop.proxyuser.hadoopuser.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hadoopuser.groups</name>
  <value>*</value>
</property>
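A quick way to confirm a property landed in the file is to grep for it. A minimal sketch, shown here against a here-doc stand-in; on the real node, point CONF at etc/hadoop/core-site.xml instead:

```shell
# Stand-in config file so the sketch is self-contained
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://fda:9000</value>
  </property>
</configuration>
EOF
# Print the configured namenode URI: the line after the <name> tag
grep -A1 '<name>fs.defaultFS</name>' "$CONF" | grep -o 'hdfs://[^<]*'
```

The command prints `hdfs://fda:9000`; an empty result means the property is missing or mistyped.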

HDFS configuration file

[root@fda hadoop-2.7.7]# vi etc/hadoop/hdfs-site.xml
<!-- Add the following between <configuration></configuration> -->
<!-- Metadata storage directory of the namenode [required] -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/opt/module/Hadoop/hadoop-2.7.7/dfs/name</value>
</property>
<!-- Actual data storage directory of the datanode [required] -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/opt/module/Hadoop/hadoop-2.7.7/dfs/data</value>
</property>
<!-- Number of block replicas; it must not exceed the number of
     DataNodes. The default is 3 [required] -->
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<!-- Working directory of the SecondaryNameNode [required] -->
<property>
  <name>dfs.namenode.checkpoint.dir</name>
  <value>file:/opt/module/Hadoop/hadoop-2.7.7/dfs/namesecondary</value>
</property>
<!-- HTTP address of the SecondaryNameNode [required] -->
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>fda:50090</value>
</property>
<!-- HTTPS address of the SecondaryNameNode [optional] -->
<property>
  <name>dfs.namenode.secondary.https-address</name>
  <value>fda:50091</value>
</property>
<!-- Must be true, otherwise HDFS file information cannot be viewed
     through the web interface [required] -->
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>

YARN configuration file

[root@fda hadoop-2.7.7]# vim etc/hadoop/yarn-site.xml
<!-- How Reducers fetch data [required] -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<!-- Class implementing the shuffle service; can be customized
     [optional, this is the default] -->
<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<!-- ResourceManager host name; once set, the other address settings need
     no configuration unless custom ports are required [required] -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>fda</value>
</property>
<!-- Memory available to the NodeManager node, in MB [required] -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>1024</value>
</property>
<!-- Log aggregation [optional] -->
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
<!-- Log retention time: 7 days [optional] -->
<property>
  <name>yarn.log-aggregation.retain-seconds</name>
  <value>604800</value>
</property>

MapReduce configuration file
This file does not exist by default; copy the template first, then edit it.

# Copy the template with cp; do not create the file by hand
[root@fda hadoop-2.7.7]# cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml
[root@fda hadoop-2.7.7]# vim etc/hadoop/mapred-site.xml
<!-- Run MapReduce programs on yarn [required] -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<!-- MapReduce JobHistory Server address [optional] -->
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>fda:10020</value>
</property>
<!-- MapReduce JobHistory Server web UI address [optional] -->
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>fda:19888</value>
</property>

The slaves file

[root@fda hadoop-2.7.7]# vim etc/hadoop/slaves
# Add all data nodes here, one host name per line, and delete the
# original localhost entry [required]
fda

If there are multiple machines, distribute the installation to each host with the following command:

[root@fda hadoop]# scp -r /opt/module root@fda1:/opt/

Set environment variables

# Edit /etc/profile on each node
[root@fda hadoop-2.7.7]# vim /etc/profile
# Append the following
export HADOOP_HOME=/opt/module/Hadoop/hadoop-2.7.7
export HADOOP_LOG_DIR=$HADOOP_HOME/logs
export YARN_LOG_DIR=$HADOOP_LOG_DIR
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
# Apply the settings immediately
[root@fda hadoop-2.7.7]# source /etc/profile
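A small sanity check that the PATH additions took effect. HADOOP_HOME below is a temporary stand-in directory so the sketch is safe to run anywhere; on a real node it would already be set by /etc/profile:

```shell
# Stand-in for /opt/module/Hadoop/hadoop-2.7.7
HADOOP_HOME=$(mktemp -d)
mkdir -p "$HADOOP_HOME/bin" "$HADOOP_HOME/sbin"
PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
# POSIX-portable membership test on the colon-separated PATH
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "hadoop bin on PATH" ;;
  *) echo "hadoop bin missing" ;;
esac
```

On a correctly configured node, `which hadoop` would give the same answer more directly.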

Format the NameNode

[root@fda hadoop-2.7.7]# hdfs namenode -format
21/01/22 23:43:54 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = fda
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.7.7
STARTUP_MSG: java = 1.8.0_181
************************************************************/
... (classpath and configuration output omitted) ...
21/01/22 23:43:55 INFO common.Storage: Storage directory /opt/module/Hadoop/hadoop-2.7.7/dfs/name has been successfully formatted.
21/01/22 23:43:55 INFO util.ExitUtil: Exiting with status 0
21/01/22 23:43:55 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at fda
************************************************************/

The "has been successfully formatted" line together with "Exiting with status 0" indicates the format succeeded.

Start Hadoop
Run the following commands on the master node:

[root@fda hadoop-2.7.7]# start-dfs.sh
[root@fda hadoop-2.7.7]# start-yarn.sh
[root@fda hadoop-2.7.7]# jps
19587 SecondaryNameNode
19429 DataNode
19833 NodeManager
19738 ResourceManager
19308 NameNode
20126 Jps
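The jps output can also be checked mechanically for the five expected daemons. A sketch with the output simulated as a string; on a real node replace the assignment with `JPS_OUT="$(jps)"`:

```shell
# Simulated jps output so the sketch runs anywhere
JPS_OUT="19587 SecondaryNameNode
19429 DataNode
19833 NodeManager
19738 ResourceManager
19308 NameNode
20126 Jps"
# Every one of these five daemons must be present on a single-node setup
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  echo "$JPS_OUT" | grep -q "$d" && echo "$d: running" || echo "$d: MISSING"
done
```

Any "MISSING" line points at the daemon whose log under $HADOOP_HOME/logs should be inspected first.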


If the web pages open when you visit the server address in a browser, the configuration succeeded; replace the address with your Alibaba Cloud public IP (in Hadoop 2.x the defaults are port 50070 for the HDFS UI and 8088 for the YARN UI).
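The URLs to check can be assembled from the public IP; 50070 (HDFS) and 8088 (YARN) are the Hadoop 2.x defaults, and 1.2.3.4 below is a placeholder:

```shell
PUBLIC_IP="1.2.3.4"    # replace with your Alibaba Cloud public IP
echo "HDFS web UI: http://${PUBLIC_IP}:50070"
echo "YARN web UI: http://${PUBLIC_IP}:8088"
```

Remember that on Alibaba Cloud these ports must also be opened in the instance's security group, or the pages will not load even though the daemons are running.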

