On Alibaba Cloud the network needs no extra configuration; if you are instead using a virtual machine on your own computer, VM installation guides are easy to find online. This article covers a standalone installation, with notes on what changes in cluster mode.
Use Xshell to connect to the Alibaba Cloud host and upload the downloaded installation packages to the server with the following commands.

# Install lrzsz first; it is used for file transfer below
[root@fda ~]# yum -y install lrzsz
# rz uploads a file; sz <filename> downloads one
# Running rz opens a dialog to choose the file to upload
[root@fda ~]# rz

Turn off the firewall

On Alibaba Cloud the firewall is already off; if yours is running, use the relevant commands below.

# Check the firewall status
[root@fda ~]# systemctl status firewalld
# Stop the firewall
[root@fda ~]# systemctl stop firewalld
# Disable the firewall at boot
[root@fda ~]# systemctl disable firewalld
# Start the firewall
[root@fda ~]# systemctl start firewalld
# Enable the firewall at boot
[root@fda ~]# systemctl enable firewalld
# Restart the firewall
[root@fda ~]# systemctl restart firewalld

Configure passwordless login

Edit the hosts file and add entries like the following (one line per host in your setup).

[root@fda ~]# vim /etc/hosts
# Add the following line: 172.22.110.228 fda
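Hosts-file edits are easy to duplicate when a setup script is re-run. A minimal sketch of an idempotent append, demonstrated on a temporary file (`add_host_entry` is just a helper name used here; on the server you would pass `/etc/hosts`):

```shell
# add_host_entry FILE "IP HOSTNAME" -- append the entry only if it is missing
add_host_entry() {
    local file="$1" entry="$2"
    grep -qxF "$entry" "$file" || echo "$entry" >> "$file"
}

tmp_hosts=$(mktemp)
printf '127.0.0.1 localhost\n' > "$tmp_hosts"
add_host_entry "$tmp_hosts" "172.22.110.228 fda"
add_host_entry "$tmp_hosts" "172.22.110.228 fda"   # second call changes nothing
grep -c 'fda' "$tmp_hosts"   # prints 1
```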

Configure SSH

# On each machine, run ssh once first so that the .ssh folder gets created
# ssh is followed by the hostname
[root@fda ~]# ssh fda
# Answering no at the prompt is enough
# On each machine, enter the ~/.ssh directory
[root@fda ~]# cd ~/.ssh
# Run the following command and press Enter through all prompts to generate the key pair
[root@fda .ssh]# ssh-keygen -t rsa -P ''
# Output like the following means generation succeeded
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:6YO1h1emM9gcWvv9OT6ftHxLnjP9u8p25x1o30oq3No root@node01
The key's randomart image is:
+---[RSA 2048]----+
| |
| |
| |
| . |
| S o o |
| + O * . |
| . B.X. o.+.|
| +o=+=**%|
| .oEo*^^|
+----[SHA256]-----+
# Copy the id_rsa.pub public key into the authorized_keys file
[root@fda .ssh]# cp id_rsa.pub authorized_keys
# Merge all authorized_keys files; with multiple machines, the following
# command appends one host's keys into the master's file
[root@fda1 .ssh]# cat ~/.ssh/authorized_keys | ssh root@fda 'cat >> ~/.ssh/authorized_keys'
# Check the authorized_keys file on the master; its content looks like this
[root@fda .ssh]# more authorized_keys
ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABAQC5iw8+LlLxo0d77uaTChOKKJqfMHzp2jgzqV2hFAneFXqqWmr
Z4/FrMUPenmdss19bP4Up9G7PGbJu29yZDvkDwlmuqnVajYyDOsCl7PPXPWXMIlxMGUHgSXLnQQi6QnWp04v
JKD
s0EbiRTd0ZYCSQefzJcZ8jbQ7bLYt6jtil7FfUupTdHTeexKKd8Mq3K7YFZHumKvhzs6wWiM+n41jANS083s
s3O
YmAdO2cU0w1BhLVvJhdzd6fNG3RXVCXI2v0XxCUHiqI9Oewl2qPOfKzeyy09bJxo371Ezjmt8GMrkA/Ecepk
vx1
2qwNzC9bSPLfbnPWVo2gIxe4mMaFqCFJ root@fda
# With multiple machines, distribute the master's authorized_keys file to the other hosts
[root@fda .ssh]# scp ~/.ssh/authorized_keys root@fda1:~/.ssh/
# With multiple machines, test passwordless ssh from every machine to every other machine, including itself
[root@fda ~]# ssh fda1
[root@fda1 ~]# ssh fda
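If ssh still prompts for a password after this, the usual culprit is file permissions: sshd (with its default StrictModes) silently ignores keys when `~/.ssh` or `authorized_keys` is writable by group or others. A quick sketch of the expected permissions, demonstrated on a temporary directory rather than the real home:

```shell
# sshd expects ~/.ssh to be 700 and authorized_keys to be 600; looser
# permissions make it reject the key. Demo on a temp directory.
sshdir="$(mktemp -d)/.ssh"
mkdir -p "$sshdir"
touch "$sshdir/authorized_keys"
chmod 700 "$sshdir"
chmod 600 "$sshdir/authorized_keys"
stat -c '%a %n' "$sshdir" "$sshdir/authorized_keys"
```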

Install the Java environment

Uninstall the original JDK
If CentOS already has a JDK installed, you can uninstall it and install the new one, though the existing JDK is often usable as-is. If you do want to uninstall it, use the commands below.

# List all currently installed JDK packages
[root@fda ~]# rpm -qa|grep jdk
# If nothing is printed, no JDK is installed and there is nothing to uninstall; packages like the following can be removed
copy-jdk-configs-2.2-3.el7.noarch
java-1.8.0-openjdk-1.8.0.131-11.b12.el7.x86_64
java-1.8.0-openjdk-headless-1.8.0.131-11.b12.el7.x86_64
# Uninstall a JDK package as follows
[root@fda ~]# yum -y remove copy-jdk-configs-2.2-3.el7.noarch
# Check the installed JDK packages again
[root@fda ~]# rpm -qa|grep jdk
# Create the target directories on the master node
[root@fda ~]# mkdir -p /opt/module/Java
[root@fda ~]# mkdir -p /opt/module/Hadoop
# Enter the Java directory
[root@fda ~]# cd /opt/module/Java
# Use rz to upload the JDK tarball from the Windows host
[root@fda Java]# rz
# Extract into the current directory
[root@fda Java]# tar -zxvf jdk-8u181-linux-x64.tar.gz
# Configure environment variables
[root@fda Java]# vim /etc/profile
# Append the following to the end of the file
export JAVA_HOME=/opt/module/Java/jdk1.8.0_181
export JRE_HOME=/opt/module/Java/jdk1.8.0_181/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib/rt.jar
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
# Make the settings take effect
[root@fda Java]# source /etc/profile
# Verify the configuration
[root@fda jdk1.8.0_181]# java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)
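Appending to /etc/profile is not idempotent: re-running the setup duplicates the export lines. One way to guard against that, sketched on a temporary file (on the server the file would be /etc/profile; the `# JDK settings` marker is an assumed convention, and `JRE_HOME` is written in terms of `JAVA_HOME`, which is equivalent to the literal path above):

```shell
# Append the JDK exports only if the marker line is not present yet
profile=$(mktemp)
marker='# JDK settings'

if ! grep -qF "$marker" "$profile"; then
    cat >> "$profile" <<'EOF'
# JDK settings
export JAVA_HOME=/opt/module/Java/jdk1.8.0_181
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib/rt.jar
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
EOF
fi
```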

Install the Hadoop environment

Upload files to the server

# Upload 
[root@fda ~]# cd /opt/module/Hadoop
[root@fda Hadoop]# rz
# Extract
[root@fda Hadoop]# tar -zxvf hadoop-2.7.7.tar.gz

Create the working directories: creating them yourself makes it clear which directory each setting refers to. This is optional; Hadoop will create them automatically.

# Enter the hadoop-2.7.7 home directory
[root@fda Hadoop]# cd hadoop-2.7.7
# Create the following directories for later use
[root@fda hadoop-2.7.7]# mkdir tmp
[root@fda hadoop-2.7.7]# mkdir logs
[root@fda hadoop-2.7.7]# mkdir -p dfs/name
[root@fda hadoop-2.7.7]# mkdir -p dfs/data
[root@fda hadoop-2.7.7]# mkdir -p dfs/namesecondary
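The five mkdir calls above can be collapsed into one loop, which is handy when scripting the setup. Sketched here against a temporary directory standing in for the hadoop-2.7.7 home:

```shell
# Create all Hadoop working directories in one pass
HADOOP_DIR=$(mktemp -d)    # stand-in for the hadoop-2.7.7 home
for d in tmp logs dfs/name dfs/data dfs/namesecondary; do
    mkdir -p "$HADOOP_DIR/$d"
done
```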

Modify the configuration files: the following Hadoop configuration files need to be changed.
Script configuration

[root@fda hadoop-2.7.7]# vim etc/hadoop/hadoop-env.sh
# Set JAVA_HOME explicitly; otherwise Hadoop often fails to start
export JAVA_HOME=/opt/module/Java/jdk1.8.0_181
[root@fda hadoop-2.7.7]# vim etc/hadoop/yarn-env.sh
# Set JAVA_HOME
export JAVA_HOME=/opt/module/Java/jdk1.8.0_181
[root@fda hadoop-2.7.7]# vim etc/hadoop/mapred-env.sh
# Set JAVA_HOME
export JAVA_HOME=/opt/module/Java/jdk1.8.0_181

Core configuration file

[root@fda hadoop-2.7.7]# vim etc/hadoop/core-site.xml
<!-- Add the following between <configuration> and </configuration> -->
<!-- URI and port of the HDFS namenode [required] -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://fda:9000</value>
</property>
<!-- Temporary storage directory used by Hadoop at runtime [required] -->
<property>
<name>hadoop.tmp.dir</name>
<value>file:/opt/module/Hadoop/hadoop-2.7.7/tmp</value>
</property>
<!-- Read/write buffer size for sequence files [optional] -->
<property>
<name>io.file.buffer.size</name>
<value>131702</value>
</property>
<!-- The following two proxy-user settings are not needed for now [optional] -->
<property>
<name>hadoop.proxyuser.hadoopuser.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hadoopuser.groups</name>
<value>*</value>
</property>
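The `<property>` blocks in every *-site.xml file share the same shape, so a tiny shell helper can cut down on copy-paste mistakes while editing them. `hadoop_property` is just a local convenience function sketched here, not part of Hadoop:

```shell
# hadoop_property NAME VALUE -- print one Hadoop <property> block
hadoop_property() {
    printf '<property>\n  <name>%s</name>\n  <value>%s</value>\n</property>\n' "$1" "$2"
}

hadoop_property fs.defaultFS hdfs://fda:9000
```

Piping a few calls into the file (or into the clipboard) keeps the XML uniform.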

HDFS configuration file

[root@fda hadoop-2.7.7]# vi etc/hadoop/hdfs-site.xml
<!-- Add the following between <configuration> and </configuration> -->
<!-- Directory where the namenode stores its metadata [required] -->
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/opt/module/Hadoop/hadoop-2.7.7/dfs/name</value>
</property>
<!-- Directory where the datanode stores the actual data blocks [required] -->
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/opt/module/Hadoop/hadoop-2.7.7/dfs/data</value>
</property>
<!-- Number of block replicas; at most the number of DataNodes, default 3 [required] -->
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<!-- Working directory of the SecondaryNameNode [required] -->
<property>
<name>dfs.namenode.checkpoint.dir</name>
<value>file:/opt/module/Hadoop/hadoop-2.7.7/dfs/namesecondary</value>
</property>
<!-- HTTP address of the SecondaryNameNode [required] -->
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>fda:50090</value>
</property>
<!-- HTTPS address of the SecondaryNameNode [optional] -->
<property>
<name>dfs.namenode.secondary.https-address</name>
<value>fda:50091</value>
</property>
<!-- Must be true, otherwise HDFS file information cannot be viewed over the web UI [required] -->
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>

YARN configuration file

[root@fda hadoop-2.7.7]# vim etc/hadoop/yarn-site.xml
<!-- How the Reducer obtains data [required] -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<!-- Class implementing the shuffle service; customizable, this is the default [optional] -->
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<!-- ResourceManager hostname; once set, the other addresses need no configuration unless you want custom ports [required] -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>fda</value>
</property>
<!-- Memory available to the NodeManager, in MB [required] -->
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>1024</value>
</property>
<!-- Enable log aggregation [optional] -->
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<!-- Log retention time, set to 7 days [optional] -->
<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>604800</value>
</property>

MapReduce configuration file
This file does not exist by default; copy the template before editing it.

# Copy it with cp; do not create it by hand
[root@fda hadoop-2.7.7]# cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml
[root@fda hadoop-2.7.7]# vim etc/hadoop/mapred-site.xml
<!-- Run MapReduce programs on YARN [required] -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<!-- MapReduce JobHistory Server address [optional] -->
<property>
<name>mapreduce.jobhistory.address</name>
<value>fda:10020</value>
</property>
<!-- MapReduce JobHistory Server web UI address [optional] -->
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>fda:19888</value>
</property>

The slaves file

[root@fda hadoop-2.7.7]# vim etc/hadoop/slaves
# Add all data nodes here, one per line, and delete the original localhost [required]
fda

With multiple machines, distribute the installation to each host with the following command:

[root@fda hadoop]# scp -r /opt/module root@fda1:/opt/
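With several workers the scp is simply repeated per host. A sketch of that loop, with `WORKERS` as an assumed hostname list; the commands are only printed here (a dry run) so they can be inspected before being executed:

```shell
# Build the list of distribution commands, one per worker host
WORKERS="fda1 fda2 fda3"
cmds=""
for w in $WORKERS; do
    cmds="${cmds}scp -r /opt/module root@${w}:/opt/
"
done
printf '%s' "$cmds"
```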

Set the environment variable

# Edit the /etc/profile file on every node
[root@fda hadoop-2.7.7]# vim /etc/profile
# Append the following
export HADOOP_HOME=/opt/module/Hadoop/hadoop-2.7.7
export HADOOP_LOG_DIR=$HADOOP_HOME/logs
export YARN_LOG_DIR=$HADOOP_LOG_DIR
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
# Make settings take effect immediately
[root@fda hadoop-2.7.7]# source /etc/profile

Format the NameNode

[root@fda hadoop-2.7.7]# hdfs namenode -format
21/01/22 23:43:54 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = fda/172.22.110.228
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.7.7
STARTUP_MSG: classpath = /opt/module/Hadoop/hadoop-2.7.7/etc/hadoop:... (hundreds of jar paths omitted)
STARTUP_MSG: build = Unknown -r c1aad84bd27cd79c3d1a7dd58202a8c3ee1ed3ac; compiled by 'stevel' on 2018-07-18T22:47Z
STARTUP_MSG: java = 1.8.0_181
************************************************************/
21/01/22 23:43:54 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
21/01/22 23:43:54 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-4f331720-6e78-42d2-80b8-54733c52f1be
21/01/22 23:43:55 INFO namenode.FSNamesystem: No KeyProvider found.
21/01/22 23:43:55 INFO namenode.FSNamesystem: fsLock is fair: true
21/01/22 23:43:55 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
21/01/22 23:43:55 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
21/01/22 23:43:55 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
21/01/22 23:43:55 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
21/01/22 23:43:55 INFO blockmanagement.BlockManager: The block deletion will start around 2021 Jan 22 23:43:55
21/01/22 23:43:55 INFO util.GSet: Computing capacity for map BlocksMap
21/01/22 23:43:55 INFO util.GSet: VM type = 64-bit
21/01/22 23:43:55 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
21/01/22 23:43:55 INFO util.GSet: capacity = 2^21 = 2097152 entries
21/01/22 23:43:55 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
21/01/22 23:43:55 INFO blockmanagement.BlockManager: defaultReplication = 1
21/01/22 23:43:55 INFO blockmanagement.BlockManager: maxReplication = 512
21/01/22 23:43:55 INFO blockmanagement.BlockManager: minReplication = 1
21/01/22 23:43:55 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
21/01/22 23:43:55 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
21/01/22 23:43:55 INFO blockmanagement.BlockManager: encryptDataTransfer = false
21/01/22 23:43:55 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
21/01/22 23:43:55 INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
21/01/22 23:43:55 INFO namenode.FSNamesystem: supergroup = supergroup
21/01/22 23:43:55 INFO namenode.FSNamesystem: isPermissionEnabled = true
21/01/22 23:43:55 INFO namenode.FSNamesystem: HA Enabled: false
21/01/22 23:43:55 INFO namenode.FSNamesystem: Append Enabled: true
21/01/22 23:43:55 INFO util.GSet: Computing capacity for map INodeMap
21/01/22 23:43:55 INFO util.GSet: VM type = 64-bit
21/01/22 23:43:55 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
21/01/22 23:43:55 INFO util.GSet: capacity = 2^20 = 1048576 entries
21/01/22 23:43:55 INFO namenode.FSDirectory: ACLs enabled? false
21/01/22 23:43:55 INFO namenode.FSDirectory: XAttrs enabled? true
21/01/22 23:43:55 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
21/01/22 23:43:55 INFO namenode.NameNode: Caching file names occuring more than 10 times
21/01/22 23:43:55 INFO util.GSet: Computing capacity for map cachedBlocks
21/01/22 23:43:55 INFO util.GSet: VM type = 64-bit
21/01/22 23:43:55 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
21/01/22 23:43:55 INFO util.GSet: capacity = 2^18 = 262144 entries
21/01/22 23:43:55 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
21/01/22 23:43:55 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
21/01/22 23:43:55 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
21/01/22 23:43:55 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
21/01/22 23:43:55 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
21/01/22 23:43:55 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
21/01/22 23:43:55 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
21/01/22 23:43:55 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
21/01/22 23:43:55 INFO util.GSet: Computing capacity for map NameNodeRetryCache
21/01/22 23:43:55 INFO util.GSet: VM type = 64-bit
21/01/22 23:43:55 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
21/01/22 23:43:55 INFO util.GSet: capacity = 2^15 = 32768 entries
21/01/22 23:43:55 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1450880783-172.22.110.228-1611330235399
21/01/22 23:43:55 INFO common.Storage: Storage directory /opt/module/Hadoop/hadoop-2.7.7/dfs/name has been successfully formatted.
21/01/22 23:43:55 INFO namenode.FSImageFormatProtobuf: Saving image file /opt/module/Hadoop/hadoop-2.7.7/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
21/01/22 23:43:55 INFO namenode.FSImageFormatProtobuf: Image file /opt/module/Hadoop/hadoop-2.7.7/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 321 bytes saved in 0 seconds.
21/01/22 23:43:55 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
21/01/22 23:43:55 INFO util.ExitUtil: Exiting with status 0
21/01/22 23:43:55 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at fda/172.22.110.228
************************************************************/

Start Hadoop
Run the following commands on the master node:

[root@fda hadoop-2.7.7]# start-dfs.sh
[root@fda hadoop-2.7.7]# start-yarn.sh
[root@fda hadoop-2.7.7]# jps
19587 SecondaryNameNode
19429 DataNode
19833 NodeManager
19738 ResourceManager
19308 NameNode
20126 Jps
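A quick way to confirm that every expected daemon is present in the jps output. The sample below reuses the output captured above; on the server you would feed in `$(jps)` instead:

```shell
# Check that all five Hadoop daemons appear in the (sample) jps output
sample='19587 SecondaryNameNode
19429 DataNode
19833 NodeManager
19738 ResourceManager
19308 NameNode'

missing=""
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
    echo "$sample" | grep -q " ${d}\$" || missing="$missing $d"
done
if [ -z "$missing" ]; then echo "all daemons running"; else echo "missing:$missing"; fi
```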

Here's the picture

Open http://192.168.100.101:50070 in a browser. If the NameNode web UI loads, the configuration succeeded.
Replace 192.168.100.101 with the public IP of your Alibaba Cloud instance.
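The same reachability check can be done from the command line. A minimal sketch, assuming the NameNode web UI is on port 50070 (the Hadoop 2.x default) and that `curl` is installed; the function name is illustrative:

```shell
# check_namenode_ui: probe the NameNode web UI on the given host and
# report whether it answered with HTTP 200.
check_namenode_ui() {
  host="$1"
  # -m 5 caps the wait at 5 seconds; -w prints only the HTTP status code
  code=$(curl -s -m 5 -o /dev/null -w '%{http_code}' "http://$host:50070/" 2>/dev/null)
  if [ "$code" = "200" ]; then
    echo "NameNode UI reachable"
  else
    echo "NameNode UI not reachable (HTTP $code)"
  fi
}

# Usage (replace with your Alibaba Cloud public IP):
# check_namenode_ui 192.168.100.101
```

If the probe fails from outside while `jps` shows a running NameNode, check that port 50070 is open in the Alibaba Cloud security group.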
