Hadoop 2.7.7 Installation and Deployment on Alibaba Cloud

编程我的一切 2021-01-23 10:57:18


On Alibaba Cloud the network environment is already set up for us; if you are working in a virtual machine on your own computer instead, VM installation guides are easy to find online. This article covers a single-node installation (with notes on what changes for a multi-node cluster).
Connect to the Alibaba Cloud host with Xshell and upload the installation packages you downloaded to the server:

# Install lrzsz first so we can transfer files later
[root@fda ~]# yum -y install lrzsz
# rz uploads a file; sz <filename> downloads one
# Running the command below opens a dialog for choosing the file to upload
[root@fda ~]# rz
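
If you would rather not install lrzsz, a plain scp from your local machine does the same job (a sketch; substitute your own public IP and file names):

# Run this on your local machine, not on the server
scp hadoop-2.7.7.tar.gz jdk-8u181-linux-x64.tar.gz root@<your-public-ip>:/root/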

Disable the firewall

On Alibaba Cloud the firewall is disabled by default; if yours is not, use the relevant commands below:

# Check whether the firewall is running
[root@fda ~]# systemctl status firewalld
# Stop the firewall
[root@fda ~]# systemctl stop firewalld
# Keep the firewall from starting at boot
[root@fda ~]# systemctl disable firewalld
# Start the firewall
[root@fda ~]# systemctl start firewalld
# Start the firewall at boot
[root@fda ~]# systemctl enable firewalld
# Restart the firewall
[root@fda ~]# systemctl restart firewalld

Configure passwordless SSH

Edit the hosts file and add the following (one entry per machine you have):

[root@fda ~]# vim /etc/hosts
# Add the following
172.22.110.228 fda
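
To confirm the new entry actually resolves, a quick lookup helps (a minimal check):

# Should print the IP you just added
[root@fda ~]# getent hosts fda
172.22.110.228  fda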

Configure SSH

# On every machine, run ssh once first so a .ssh folder is created in the home directory
# ssh is followed by the hostname
[root@fda ~]# ssh fda
# Answer no at the host-key prompt; the login itself is not needed, only the .ssh directory
# On every machine, do the following inside the ~/.ssh directory
[root@fda ~]# cd ~/.ssh
# Run the command below and press Enter through the prompts to generate the public/private key pair
[root@fda .ssh]# ssh-keygen -t rsa -P ''
# Output like the following means the keys were generated successfully
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:6YO1h1emM9gcWvv9OT6ftHxLnjP9u8p25x1o30oq3No root@node01
The key's randomart image is:
+---[RSA 2048]----+
| ...randomart... |
+----[SHA256]-----+
# On each machine, copy the id_rsa.pub public key into the authorized_keys file
[root@fda .ssh]# cp id_rsa.pub authorized_keys
# Merge all the authorized_keys files together
# With multiple machines, the following command appends one host's keys to another's
[root@fda1 .ssh]# cat ~/.ssh/authorized_keys | ssh root@fda 'cat >> ~/.ssh/authorized_keys'
# Check the authorized_keys file on the master; content similar to the following is fine
[root@fda .ssh]# more authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC5iw8+LlLxo0d77uaTChOKKJqfMHzp2jgzqV2hFAneFXqqWmrZ4/FrMUPenmdss19bP4Up9G7PGbJu29yZDvkDwlmuqnVajYyDOsCl7PPXPWXMIlxMGUHgSXLnQQi6QnWp04vJKDs0EbiRTd0ZYCSQefzJcZ8jbQ7bLYt6jtil7FfUupTdHTeexKKd8Mq3K7YFZHumKvhzs6wWiM+n41jANS083ss3OYmAdO2cU0w1BhLVvJhdzd6fNG3RXVCXI2v0XxCUHiqI9Oewl2qPOfKzeyy09bJxo371Ezjmt8GMrkA/Ecepkvx12qwNzC9bSPLfbnPWVo2gIxe4mMaFqCFJ root@fda
# With multiple machines, distribute the master's authorized_keys file to the other hosts
[root@fda .ssh]# scp ~/.ssh/authorized_keys root@fda1:~/.ssh/
# With multiple machines, test passwordless ssh between every pair of hosts, including each host to itself
[root@fda ~]# ssh fda1
[root@fda1 ~]# ssh fda
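
A non-interactive check makes it obvious whether the key exchange worked; with BatchMode on, ssh fails outright instead of falling back to a password prompt (a minimal sketch):

# Prints the remote hostname without a password prompt if keys are set up correctly
[root@fda ~]# ssh -o BatchMode=yes fda hostname
fda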

Install the Java environment

Removing an old JDK
If CentOS already has a JDK installed, you can remove it before installing the new one; leaving it in place also works fine. If you do want to remove it, follow the commands below.

# List every installed JDK package
[root@fda ~]# rpm -qa|grep jdk
# No output means no JDK is installed and there is nothing to remove; packages like the following can be removed
copy-jdk-configs-2.2-3.el7.noarch
java-1.8.0-openjdk-1.8.0.131-11.b12.el7.x86_64
java-1.8.0-openjdk-headless-1.8.0.131-11.b12.el7.x86_64
# Remove a JDK package like this
[root@fda ~]# yum -y remove copy-jdk-configs-2.2-3.el7.noarch
# List the installed JDK packages again
[root@fda ~]# rpm -qa|grep jdk
# Create the target directories on the master node
[root@fda ~]# mkdir -p /opt/module/Java
[root@fda ~]# mkdir -p /opt/module/Hadoop
# Enter the Java directory
[root@fda ~]# cd /opt/module/Java
# Use rz to upload the JDK archive from the Windows machine
[root@fda Java]# rz
# Extract into the current directory
[root@fda Java]# tar -zxvf jdk-8u181-linux-x64.tar.gz
# Configure the environment variables
[root@fda Java]# vim /etc/profile
# Append the following at the end of the file
export JAVA_HOME=/opt/module/Java/jdk1.8.0_181
export JRE_HOME=/opt/module/Java/jdk1.8.0_181/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib/rt.jar
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
# Apply the new settings
[root@fda Java]# source /etc/profile
# Verify the configuration
[root@fda jdk1.8.0_181]# java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)

Install the Hadoop environment

Upload the file to the server

# Upload
[root@fda ~]# cd /opt/module/Hadoop
[root@fda Hadoop]# rz
# Extract
[root@fda Hadoop]# tar -zxvf hadoop-2.7.7.tar.gz

Create the matching directories. Creating them up front makes it clear which directory each configuration entry refers to; strictly speaking you can skip this, since Hadoop creates them automatically.

# Enter the hadoop-2.7.7 home directory
[root@fda Hadoop]# cd hadoop-2.7.7
# Create the following directories for later use
[root@fda hadoop-2.7.7]# mkdir tmp
[root@fda hadoop-2.7.7]# mkdir logs
[root@fda hadoop-2.7.7]# mkdir -p dfs/name
[root@fda hadoop-2.7.7]# mkdir -p dfs/data
[root@fda hadoop-2.7.7]# mkdir -p dfs/namesecondary
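
The five mkdir calls above can also be collapsed into a single command if you prefer (equivalent, using bash brace expansion):

[root@fda hadoop-2.7.7]# mkdir -p tmp logs dfs/{name,data,namesecondary}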

Modify the configuration files: the following Hadoop configuration files need changes.
Script configuration

[root@fda hadoop-2.7.7]# vim etc/hadoop/hadoop-env.sh
# Set JAVA_HOME to the following, otherwise Hadoop may fail to start
export JAVA_HOME=/opt/module/Java/jdk1.8.0_181
[root@fda hadoop-2.7.7]# vim etc/hadoop/yarn-env.sh
# Set JAVA_HOME to the following
export JAVA_HOME=/opt/module/Java/jdk1.8.0_181
[root@fda hadoop-2.7.7]# vim etc/hadoop/mapred-env.sh
# Set JAVA_HOME to the following
export JAVA_HOME=/opt/module/Java/jdk1.8.0_181
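
A quick grep confirms all three scripts now point at the same JDK (a minimal sanity check):

# Each file should show the path set above
[root@fda hadoop-2.7.7]# grep '^export JAVA_HOME' etc/hadoop/hadoop-env.sh etc/hadoop/yarn-env.sh etc/hadoop/mapred-env.sh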

Core configuration file

[root@fda hadoop-2.7.7]# vim etc/hadoop/core-site.xml
<!-- Add the following between <configuration> and </configuration> -->
<!-- URI and port of the HDFS namenode [required] -->
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://fda:9000</value>
</property>
<!-- Temporary storage directory used by Hadoop at runtime [required] -->
<property>
    <name>hadoop.tmp.dir</name>
    <value>file:/opt/module/Hadoop/hadoop-2.7.7/tmp</value>
</property>
<!-- Read/write buffer size used when handling SequenceFiles [optional] -->
<property>
    <name>io.file.buffer.size</name>
    <value>131702</value>
</property>
<!-- The next two settings are not needed yet [optional] -->
<property>
    <name>hadoop.proxyuser.hadoopuser.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.hadoopuser.groups</name>
    <value>*</value>
</property>
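
If xmllint is available (it ships with the libxml2 package on CentOS), it gives a cheap syntax check after editing; the same check works for each of the other XML files edited below:

# No output means the file is well-formed XML
[root@fda hadoop-2.7.7]# xmllint --noout etc/hadoop/core-site.xml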

HDFS configuration file

[root@fda hadoop-2.7.7]# vi etc/hadoop/hdfs-site.xml
<!-- Add the following between <configuration> and </configuration> -->
<!-- Metadata storage directory of the namenode [required] -->
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/opt/module/Hadoop/hadoop-2.7.7/dfs/name</value>
</property>
<!-- Actual data storage directory of the datanode [required] -->
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/opt/module/Hadoop/hadoop-2.7.7/dfs/data</value>
</property>
<!-- Number of block replicas; must not exceed the number of DataNodes; default is 3 [required] -->
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<!-- Working directory of the SecondaryNameNode [required] -->
<property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>file:/opt/module/Hadoop/hadoop-2.7.7/dfs/namesecondary</value>
</property>
<!-- HTTP address of the SecondaryNameNode [required] -->
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>fda:50090</value>
</property>
<!-- HTTPS address of the SecondaryNameNode [optional] -->
<property>
    <name>dfs.namenode.secondary.https-address</name>
    <value>fda:50091</value>
</property>
<!-- Must be true, otherwise files on HDFS cannot be browsed through the web UI [required] -->
<property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
</property>

YARN configuration file

[root@fda hadoop-2.7.7]# vim etc/hadoop/yarn-site.xml
<!-- How reducers fetch data [required] -->
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<!-- Class implementing the shuffle service; can be customized [optional], this is the default -->
<property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<!-- ResourceManager hostname; once set, the individual address settings need no configuration unless you want custom ports [required] -->
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>fda</value>
</property>
<!-- Memory available to the NodeManager, in MB [required] -->
<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>1024</value>
</property>
<!-- Log aggregation [not needed for now] -->
<property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
</property>
<!-- Keep aggregated logs for 7 days [not needed for now] -->
<property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
</property>

MapReduce configuration file
This file does not exist yet; copy the template first, then open the copy.

# Copy the template with cp; do not create the file by hand
[root@fda hadoop-2.7.7]# cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml
[root@fda hadoop-2.7.7]# vim etc/hadoop/mapred-site.xml
<!-- Run MapReduce jobs on YARN [required] -->
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
<!-- History server [not needed for now]: MapReduce JobHistory Server address -->
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>fda:10020</value>
</property>
<!-- MapReduce JobHistory Server web UI address -->
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>fda:19888</value>
</property>
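
The JobHistory server configured above is not started by start-dfs.sh or start-yarn.sh; if you want it later, it has its own daemon script in Hadoop 2.x's sbin directory:

# Start the MapReduce JobHistory server; its web UI then answers on fda:19888
[root@fda hadoop-2.7.7]# mr-jobhistory-daemon.sh start historyserver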

The slaves file

[root@fda hadoop-2.7.7]# vim etc/hadoop/slaves
# List every data node here, one hostname per line, and delete the original localhost entry [required]
fda

With multiple machines, the following command distributes everything to the other hosts:

[root@fda hadoop]# scp -r /opt/module root@fda1:/opt/

Set the environment variables

# Edit /etc/profile on every node
[root@fda hadoop-2.7.7]# vim /etc/profile
# Add the following
export HADOOP_HOME=/opt/module/Hadoop/hadoop-2.7.7
export HADOOP_LOG_DIR=$HADOOP_HOME/logs
export YARN_LOG_DIR=$HADOOP_LOG_DIR
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
# Apply the settings immediately
[root@fda hadoop-2.7.7]# source /etc/profile
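
If the PATH change took effect, the hadoop command is now available from any directory (a quick check):

# Should print the Hadoop version banner
[root@fda hadoop-2.7.7]# hadoop version
Hadoop 2.7.7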

Format Hadoop

[root@fda hadoop-2.7.7]# hdfs namenode -format
21/01/22 23:43:54 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = fda/172.22.110.228
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.7.7
STARTUP_MSG: classpath = /opt/module/Hadoop/hadoop-2.7.7/etc/hadoop:/opt/module/Hadoop/hadoop-2.7.7/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar: ... (long classpath truncated) ... :/opt/module/Hadoop/hadoop-2.7.7/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = Unknown -r c1aad84bd27cd79c3d1a7dd58202a8c3ee1ed3ac; compiled by 'stevel' on 2018-07-18T22:47Z
STARTUP_MSG: java = 1.8.0_181
************************************************************/
21/01/22 23:43:54 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
21/01/22 23:43:54 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-4f331720-6e78-42d2-80b8-54733c52f1be
21/01/22 23:43:55 INFO namenode.FSNamesystem: No KeyProvider found.
21/01/22 23:43:55 INFO namenode.FSNamesystem: fsLock is fair: true
21/01/22 23:43:55 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
21/01/22 23:43:55 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
21/01/22 23:43:55 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
21/01/22 23:43:55 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
21/01/22 23:43:55 INFO blockmanagement.BlockManager: The block deletion will start around 2021 Jan 22 23:43:55
21/01/22 23:43:55 INFO util.GSet: Computing capacity for map BlocksMap
21/01/22 23:43:55 INFO util.GSet: VM type = 64-bit
21/01/22 23:43:55 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
21/01/22 23:43:55 INFO util.GSet: capacity = 2^21 = 2097152 entries
21/01/22 23:43:55 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
21/01/22 23:43:55 INFO blockmanagement.BlockManager: defaultReplication = 1
21/01/22 23:43:55 INFO blockmanagement.BlockManager: maxReplication = 512
21/01/22 23:43:55 INFO blockmanagement.BlockManager: minReplication = 1
21/01/22 23:43:55 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
21/01/22 23:43:55 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
21/01/22 23:43:55 INFO blockmanagement.BlockManager: encryptDataTransfer = false
21/01/22 23:43:55 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
21/01/22 23:43:55 INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
21/01/22 23:43:55 INFO namenode.FSNamesystem: supergroup = supergroup
21/01/22 23:43:55 INFO namenode.FSNamesystem: isPermissionEnabled = true
21/01/22 23:43:55 INFO namenode.FSNamesystem: HA Enabled: false
21/01/22 23:43:55 INFO namenode.FSNamesystem: Append Enabled: true
21/01/22 23:43:55 INFO util.GSet: Computing capacity for map INodeMap
21/01/22 23:43:55 INFO util.GSet: VM type = 64-bit
21/01/22 23:43:55 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
21/01/22 23:43:55 INFO util.GSet: capacity = 2^20 = 1048576 entries
21/01/22 23:43:55 INFO namenode.FSDirectory: ACLs enabled? false
21/01/22 23:43:55 INFO namenode.FSDirectory: XAttrs enabled? true
21/01/22 23:43:55 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
21/01/22 23:43:55 INFO namenode.NameNode: Caching file names occuring more than 10 times
21/01/22 23:43:55 INFO util.GSet: Computing capacity for map cachedBlocks
21/01/22 23:43:55 INFO util.GSet: VM type = 64-bit
21/01/22 23:43:55 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
21/01/22 23:43:55 INFO util.GSet: capacity = 2^18 = 262144 entries
21/01/22 23:43:55 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
21/01/22 23:43:55 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
21/01/22 23:43:55 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
21/01/22 23:43:55 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
21/01/22 23:43:55 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
21/01/22 23:43:55 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
21/01/22 23:43:55 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
21/01/22 23:43:55 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
21/01/22 23:43:55 INFO util.GSet: Computing capacity for map NameNodeRetryCache
21/01/22 23:43:55 INFO util.GSet: VM type = 64-bit
21/01/22 23:43:55 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
21/01/22 23:43:55 INFO util.GSet: capacity = 2^15 = 32768 entries
21/01/22 23:43:55 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1450880783-172.22.110.228-1611330235399
21/01/22 23:43:55 INFO common.Storage: Storage directory /opt/module/Hadoop/hadoop-2.7.7/dfs/name has been successfully formatted.
21/01/22 23:43:55 INFO namenode.FSImageFormatProtobuf: Saving image file /opt/module/Hadoop/hadoop-2.7.7/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
21/01/22 23:43:55 INFO namenode.FSImageFormatProtobuf: Image file /opt/module/Hadoop/hadoop-2.7.7/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 321 bytes saved in 0 seconds.
21/01/22 23:43:55 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
21/01/22 23:43:55 INFO util.ExitUtil: Exiting with status 0
21/01/22 23:43:55 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at fda/172.22.110.228
************************************************************/

Start Hadoop
Run the following commands on the master node:

[root@fda hadoop-2.7.7]# start-dfs.sh
[root@fda hadoop-2.7.7]# start-yarn.sh
[root@fda hadoop-2.7.7]# jps
19587 SecondaryNameNode
19429 DataNode
19833 NodeManager
19738 ResourceManager
19308 NameNode
20126 Jps
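
With all five daemons showing in jps, an optional smoke test exercises both HDFS and YARN end to end (a sketch using the examples jar bundled with Hadoop 2.7.7):

# Round-trip a file through HDFS
[root@fda hadoop-2.7.7]# hdfs dfs -mkdir -p /test
[root@fda hadoop-2.7.7]# hdfs dfs -put etc/hadoop/core-site.xml /test/
[root@fda hadoop-2.7.7]# hdfs dfs -ls /test
# Run the bundled pi estimator as a small MapReduce job on YARN
[root@fda hadoop-2.7.7]# hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.7.jar pi 2 10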

If opening http://192.168.100.101:50070 in a browser brings up the HDFS web page, the configuration succeeded; replace 192.168.100.101 with the public IP of your Alibaba Cloud instance. [screenshots omitted]

Copyright notice
This article was written by [编程我的一切]; please include a link to the original when reposting. Thank you.
https://www.cnblogs.com/qishun/p/14316544.html
