Hadoop 2.7.7 Alibaba Cloud installation and deployment

Programming my everything 2021-01-23 10:57:59


Alibaba Cloud's network environment needs no configuration on our part. If you are instead using a virtual machine on your own computer, installation guides are easy to find online. This article covers a stand-alone installation (with notes on cluster mode along the way).
Use Xshell to connect to the Alibaba Cloud host, then upload the downloaded installation packages to the server with the following commands.

# Install lrzsz first; it will come in handy later
[root@fda ~]# yum -y install lrzsz
# rz uploads a file; sz <filename> downloads one
# The following command opens a dialog to choose the file to upload
[root@fda ~]# rz
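
If you would rather not use rz, a plain scp from your local machine works just as well (a sketch; the IP placeholder and file names are illustrative):

# Run these on your local machine, not on the server
scp jdk-8u181-linux-x64.tar.gz root@<your-public-ip>:/root/
scp hadoop-2.7.7.tar.gz root@<your-public-ip>:/root/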

Turn off the firewall

Alibaba Cloud instances usually ship with the firewall already off; if yours is enabled, the relevant commands are:

# Check the firewall status
[root@fda ~]# systemctl status firewalld
# Stop the firewall
[root@fda ~]# systemctl stop firewalld
# Disable the firewall at boot
[root@fda ~]# systemctl disable firewalld
# Start the firewall
[root@fda ~]# systemctl start firewalld
# Enable the firewall at boot
[root@fda ~]# systemctl enable firewalld
# Restart the firewall
[root@fda ~]# systemctl restart firewalld
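
If you prefer to keep firewalld running, you can instead open just the ports used in this article (a sketch; add any other ports your setup needs):

# Open the HDFS namenode RPC port and web UI port, then reload the rules
[root@fda ~]# firewall-cmd --permanent --add-port=9000/tcp
[root@fda ~]# firewall-cmd --permanent --add-port=50070/tcp
[root@fda ~]# firewall-cmd --reload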

Passwordless login configuration

Modify the hosts file and add entries like the following (one line per host you have):

[root@fda ~]# vim /etc/hosts
# Add the following
172.22.110.228 fda
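
A quick check that the hostname now resolves (the IP above is this machine's private address; yours will differ):

[root@fda ~]# ping -c 3 fda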

Configure SSH

# On each machine, run ssh once first so that the .ssh folder gets created
# ssh is followed by the hostname
[root@fda ~]# ssh fda
# Then answer no at the prompt; that is enough
# On every machine, work inside the ~/.ssh directory
[root@fda ~]# cd ~/.ssh
# Run the following and press Enter through every prompt to generate the public and private keys
[root@fda .ssh]# ssh-keygen -t rsa -P ''
# Output like the following indicates success
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:6YO1h1emM9gcWvv9OT6ftHxLnjP9u8p25x1o30oq3No root@node01
The key's randomart image is:
+---[RSA 2048]----+
| |
| |
| |
| . |
| S o o |
| + O * . |
| . B.X. o.+.|
| +o=+=**%|
| .oEo*^^|
+----[SHA256]-----+
# Copy the id_rsa.pub public key into the authorized_keys file
[root@fda .ssh]# cp id_rsa.pub authorized_keys
# Merge all authorized_keys files
# With multiple machines, you can merge the files with a command like this
[root@fda1 .ssh]# cat ~/.ssh/authorized_keys | ssh root@fda 'cat >> ~/.ssh/authorized_keys'
# Inspect authorized_keys on the master; its contents should look similar to the following
[root@fda .ssh]# more authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC5iw8+LlLxo0d77uaTChOKKJqfMHzp2jgzqV2hFAneFXqqWmrZ4/FrMUPenmdss19bP4Up9G7PGbJu29yZDvkDwlmuqnVajYyDOsCl7PPXPWXMIlxMGUHgSXLnQQi6QnWp04vJKDs0EbiRTd0ZYCSQefzJcZ8jbQ7bLYt6jtil7FfUupTdHTeexKKd8Mq3K7YFZHumKvhzs6wWiM+n41jANS083ss3OYmAdO2cU0w1BhLVvJhdzd6fNG3RXVCXI2v0XxCUHiqI9Oewl2qPOfKzeyy09bJxo371Ezjmt8GMrkA/Ecepkvx12qwNzC9bSPLfbnPWVo2gIxe4mMaFqCFJ root@fda
# With multiple machines, distribute the master's authorized_keys file to the other hosts
[root@fda .ssh]# scp ~/.ssh/authorized_keys root@fda1:~/.ssh/
# With multiple machines, ssh between every pair of hosts (including each host to itself) to confirm passwordless login
[root@fda ~]# ssh fda1
[root@fda1 ~]# ssh fda
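
A non-interactive check that passwordless login works; it should print the hostname without prompting for a password:

[root@fda ~]# ssh fda 'hostname'
fda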

Install the Java environment

Uninstall the original JDK
If CentOS already has a JDK installed, you can uninstall it and install a new one, though keeping the existing one usually works fine too. If you want to uninstall it, use the code below.

# List all currently installed JDK packages
[root@fda ~]# rpm -qa|grep jdk
# No output means no JDK is installed and there is nothing to uninstall; output like the following means there is something to uninstall
copy-jdk-configs-2.2-3.el7.noarch
java-1.8.0-openjdk-1.8.0.131-11.b12.el7.x86_64
java-1.8.0-openjdk-headless-1.8.0.131-11.b12.el7.x86_64
# Uninstall a JDK package like this
[root@fda ~]# yum -y remove copy-jdk-configs-2.2-3.el7.noarch
# Check which JDK packages are still installed
[root@fda ~]# rpm -qa|grep jdk
# Create the target directories on the master node
[root@fda ~]# mkdir -p /opt/module/Java
[root@fda ~]# mkdir -p /opt/module/Hadoop
# Enter the Java directory
[root@fda ~]# cd /opt/module/Java
# Use rz to upload the JDK tarball from the Windows host
[root@fda Java]# rz
# Extract it into the current directory
[root@fda Java]# tar -zxvf jdk-8u181-linux-x64.tar.gz
# Configure environment variables
[root@fda Java]# vim /etc/profile
# Append the following to the end of the file
export JAVA_HOME=/opt/module/Java/jdk1.8.0_181
export JRE_HOME=/opt/module/Java/jdk1.8.0_181/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib/rt.jar
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
# Apply the settings
[root@fda Java]# source /etc/profile
# Verify the configuration
[root@fda jdk1.8.0_181]# java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)

Install the Hadoop environment

Upload the files to the server

# Upload
[root@fda ~]# cd /opt/module/Hadoop
[root@fda Hadoop]# rz
# Extract
[root@fda Hadoop]# tar -zxvf hadoop-2.7.7.tar.gz

Create the corresponding directories: creating them up front makes it clear, when configuring, which setting maps to which directory. You can skip this step; Hadoop will create them itself.

# Enter the hadoop-2.7.7 home directory
[root@fda Hadoop]# cd hadoop-2.7.7
# Create the following directories for later use
[root@fda hadoop-2.7.7]# mkdir tmp
[root@fda hadoop-2.7.7]# mkdir logs
[root@fda hadoop-2.7.7]# mkdir -p dfs/name
[root@fda hadoop-2.7.7]# mkdir -p dfs/data
[root@fda hadoop-2.7.7]# mkdir -p dfs/namesecondary

Modify the configuration files: the following Hadoop configuration files need to be changed.
Script configuration

[root@fda hadoop-2.7.7]# vim etc/hadoop/hadoop-env.sh
# Set JAVA_HOME explicitly; otherwise Hadoop can easily fail to start
export JAVA_HOME=/opt/module/Java/jdk1.8.0_181
[root@fda hadoop-2.7.7]# vim etc/hadoop/yarn-env.sh
# Set JAVA_HOME
export JAVA_HOME=/opt/module/Java/jdk1.8.0_181
[root@fda hadoop-2.7.7]# vim etc/hadoop/mapred-env.sh
# Set JAVA_HOME
export JAVA_HOME=/opt/module/Java/jdk1.8.0_181
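
A quick way to confirm all three scripts picked up the change (a sketch):

[root@fda hadoop-2.7.7]# grep -H '^export JAVA_HOME' etc/hadoop/hadoop-env.sh etc/hadoop/yarn-env.sh etc/hadoop/mapred-env.sh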

Core configuration file

[root@fda hadoop-2.7.7]# vim etc/hadoop/core-site.xml
<!-- Add the following between <configuration></configuration> -->
<!-- URI and port of the HDFS namenode [required] -->
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://fda:9000</value>
</property>
<!-- Hadoop's temporary storage directory at runtime [required] -->
<property>
    <name>hadoop.tmp.dir</name>
    <value>file:/opt/module/Hadoop/hadoop-2.7.7/tmp</value>
</property>
<!-- Read/write buffer size used when processing SequenceFiles [optional] -->
<property>
    <name>io.file.buffer.size</name>
    <value>131702</value>
</property>
<!-- The following two proxy-user settings are not needed for now [optional] -->
<property>
    <name>hadoop.proxyuser.hadoopuser.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.hadoopuser.groups</name>
    <value>*</value>
</property>

HDFS configuration file

[root@fda hadoop-2.7.7]# vim etc/hadoop/hdfs-site.xml
<!-- Add the following between <configuration></configuration> -->
<!-- Metadata storage directory of the namenode [required] -->
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/opt/module/Hadoop/hadoop-2.7.7/dfs/name</value>
</property>
<!-- Actual data storage directory of the datanode [required] -->
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/opt/module/Hadoop/hadoop-2.7.7/dfs/data</value>
</property>
<!-- Number of replicas per block; it should not exceed the number of DataNodes; the default is 3 [required] -->
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<!-- Working directory of the SecondaryNamenode [required] -->
<property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>file:/opt/module/Hadoop/hadoop-2.7.7/dfs/namesecondary</value>
</property>
<!-- HTTP address of the SecondaryNamenode [required] -->
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>fda:50090</value>
</property>
<!-- HTTPS address of the SecondaryNamenode [optional] -->
<property>
    <name>dfs.namenode.secondary.https-address</name>
    <value>fda:50091</value>
</property>
<!-- Must be true, otherwise HDFS file information cannot be viewed over the web [required] -->
<property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
</property>
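
Once both files are saved, Hadoop's getconf tool can confirm the values actually in effect (a sketch):

[root@fda hadoop-2.7.7]# bin/hdfs getconf -confKey fs.defaultFS
hdfs://fda:9000
[root@fda hadoop-2.7.7]# bin/hdfs getconf -confKey dfs.replication
1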

YARN configuration file

[root@fda hadoop-2.7.7]# vim etc/hadoop/yarn-site.xml
<!-- How the Reducer obtains data [required] -->
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<!-- Class backing the shuffle service the Reducer fetches data through; customizable [optional], this is the default -->
<property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<!-- ResourceManager hostname; once this is set the other address properties need not be configured, unless you need custom ports [required] -->
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>fda</value>
</property>
<!-- Memory available to the NodeManager node, in MB [required] -->
<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>1024</value>
</property>
<!-- Enable log aggregation [optional] -->
<property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
</property>
<!-- Keep aggregated logs for 7 days [optional] -->
<property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
</property>

MapReduce configuration file
This file does not exist by default; copy it from the template before editing.

# Copy it with cp; do not create the file by hand
[root@fda hadoop-2.7.7]# cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml
[root@fda hadoop-2.7.7]# vim etc/hadoop/mapred-site.xml
<!-- Run MapReduce jobs on YARN [required] -->
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
<!-- History server [optional]: MapReduce JobHistory Server address -->
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>fda:10020</value>
</property>
<!-- MapReduce JobHistory Server web UI address -->
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>fda:19888</value>
</property>
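
Note that the history server configured above is not started by the usual start scripts; once the cluster is up (see the start-up section below), it can be launched like this:

[root@fda hadoop-2.7.7]# sbin/mr-jobhistory-daemon.sh start historyserver
# Its web UI is then available at http://fda:19888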

slaves file

[root@fda hadoop-2.7.7]# vim etc/hadoop/slaves
# Add the following: list every data node here, one per line, and delete the original localhost entry [required]
fda

If you have more than one machine, distribute the installation to each host with the following command

[root@fda hadoop]# scp -r /opt/module root@fda1:/opt/

Set the environment variables

# Edit /etc/profile on every node
[root@fda hadoop-2.7.7]# vim /etc/profile
# Append the following
export HADOOP_HOME=/opt/module/Hadoop/hadoop-2.7.7
export HADOOP_LOG_DIR=$HADOOP_HOME/logs
export YARN_LOG_DIR=$HADOOP_LOG_DIR
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
# Make the settings take effect immediately
[root@fda hadoop-2.7.7]# source /etc/profile
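
A quick sanity check that the Hadoop binaries are now on the PATH:

[root@fda hadoop-2.7.7]# hadoop version
Hadoop 2.7.7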

Format Hadoop

[root@fda hadoop-2.7.7]# hdfs namenode -format
21/01/22 23:43:54 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = fda/172.22.110.228
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.7.7
STARTUP_MSG: classpath = /opt/module/Hadoop/hadoop-2.7.7/etc/hadoop:... (long list of bundled jars omitted)
STARTUP_MSG: build = Unknown -r c1aad84bd27cd79c3d1a7dd58202a8c3ee1ed3ac; compiled by 'stevel' on 2018-07-18T22:47Z
STARTUP_MSG: java = 1.8.0_181
************************************************************/
21/01/22 23:43:54 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
21/01/22 23:43:54 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-4f331720-6e78-42d2-80b8-54733c52f1be
21/01/22 23:43:55 INFO namenode.FSNamesystem: No KeyProvider found.
21/01/22 23:43:55 INFO namenode.FSNamesystem: fsLock is fair: true
21/01/22 23:43:55 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
21/01/22 23:43:55 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
21/01/22 23:43:55 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
21/01/22 23:43:55 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
21/01/22 23:43:55 INFO blockmanagement.BlockManager: The block deletion will start around 2021 Jan 22 23:43:55
21/01/22 23:43:55 INFO util.GSet: Computing capacity for map BlocksMap
21/01/22 23:43:55 INFO util.GSet: VM type = 64-bit
21/01/22 23:43:55 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
21/01/22 23:43:55 INFO util.GSet: capacity = 2^21 = 2097152 entries
21/01/22 23:43:55 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
21/01/22 23:43:55 INFO blockmanagement.BlockManager: defaultReplication = 1
21/01/22 23:43:55 INFO blockmanagement.BlockManager: maxReplication = 512
21/01/22 23:43:55 INFO blockmanagement.BlockManager: minReplication = 1
21/01/22 23:43:55 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
21/01/22 23:43:55 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
21/01/22 23:43:55 INFO blockmanagement.BlockManager: encryptDataTransfer = false
21/01/22 23:43:55 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
21/01/22 23:43:55 INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
21/01/22 23:43:55 INFO namenode.FSNamesystem: supergroup = supergroup
21/01/22 23:43:55 INFO namenode.FSNamesystem: isPermissionEnabled = true
21/01/22 23:43:55 INFO namenode.FSNamesystem: HA Enabled: false
21/01/22 23:43:55 INFO namenode.FSNamesystem: Append Enabled: true
21/01/22 23:43:55 INFO util.GSet: Computing capacity for map INodeMap
21/01/22 23:43:55 INFO util.GSet: VM type = 64-bit
21/01/22 23:43:55 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
21/01/22 23:43:55 INFO util.GSet: capacity = 2^20 = 1048576 entries
21/01/22 23:43:55 INFO namenode.FSDirectory: ACLs enabled? false
21/01/22 23:43:55 INFO namenode.FSDirectory: XAttrs enabled? true
21/01/22 23:43:55 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
21/01/22 23:43:55 INFO namenode.NameNode: Caching file names occuring more than 10 times
21/01/22 23:43:55 INFO util.GSet: Computing capacity for map cachedBlocks
21/01/22 23:43:55 INFO util.GSet: VM type = 64-bit
21/01/22 23:43:55 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
21/01/22 23:43:55 INFO util.GSet: capacity = 2^18 = 262144 entries
21/01/22 23:43:55 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
21/01/22 23:43:55 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
21/01/22 23:43:55 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
21/01/22 23:43:55 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
21/01/22 23:43:55 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
21/01/22 23:43:55 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
21/01/22 23:43:55 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
21/01/22 23:43:55 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
21/01/22 23:43:55 INFO util.GSet: Computing capacity for map NameNodeRetryCache
21/01/22 23:43:55 INFO util.GSet: VM type = 64-bit
21/01/22 23:43:55 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
21/01/22 23:43:55 INFO util.GSet: capacity = 2^15 = 32768 entries
21/01/22 23:43:55 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1450880783-172.22.110.228-1611330235399
21/01/22 23:43:55 INFO common.Storage: Storage directory /opt/module/Hadoop/hadoop-2.7.7/dfs/name has been successfully formatted.
21/01/22 23:43:55 INFO namenode.FSImageFormatProtobuf: Saving image file /opt/module/Hadoop/hadoop-2.7.7/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
21/01/22 23:43:55 INFO namenode.FSImageFormatProtobuf: Image file /opt/module/Hadoop/hadoop-2.7.7/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 321 bytes saved in 0 seconds.
21/01/22 23:43:55 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
21/01/22 23:43:55 INFO util.ExitUtil: Exiting with status 0
21/01/22 23:43:55 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at fda/172.22.110.228
************************************************************/

Start Hadoop
Execute the following commands on the master node

[root@fda hadoop-2.7.7]# start-dfs.sh
[root@fda hadoop-2.7.7]# start-yarn.sh
[root@fda hadoop-2.7.7]# jps
19587 SecondaryNameNode
19429 DataNode
19833 NodeManager
19738 ResourceManager
19308 NameNode
20126 Jps
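
With all six processes up, a small example job makes a good end-to-end smoke test (a sketch using the examples jar bundled with the release):

# Estimate Pi with 2 map tasks and 10 samples each; the job should finish by printing an estimated value of Pi
[root@fda hadoop-2.7.7]# hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.7.jar pi 2 10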

Visiting http://192.168.100.101:50070 should bring up the HDFS web UI, which indicates the configuration succeeded.
Replace 192.168.100.101 with your Alibaba Cloud public IP, and make sure port 50070 is reachable from outside (open it in the instance's security group if needed).
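
If the page does not load, first confirm from the command line that HDFS itself is healthy; the report should list one live datanode with nonzero configured capacity:

[root@fda hadoop-2.7.7]# hdfs dfsadmin -report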

Copyright notice
This article was written by [Programming my everything]. Please include a link to the original when reposting; thank you.
https://javamana.com/2021/01/20210123105643766g.html
