1. Build a Hadoop experimental platform

Running cat brother 2021-01-22 09:09:53


Node role planning

Operating system: CentOS 7.2 (1511)

Java JDK version: jdk-8u65-linux-x64.tar.gz

Hadoop version: hadoop-2.8.3.tar.gz

Download address:

Link: https://pan.baidu.com/s/1iQfjO-d2ojA6mAeOOKb6CA
Extraction code: l0qp
Node   Roles
node1  NameNode, DataNode, NodeManager, HistoryServer
node2  ResourceManager, SecondaryNameNode, DataNode, NodeManager
node3  DataNode, NodeManager

Configure host IP addresses and hostnames

The three hosts are named node1, node2, and node3. The correspondence between IP addresses and hostnames is as follows:

No.  Hostname  IP address      Remarks
1    node1     192.168.100.11  Master node
2    node2     192.168.100.12  Slave node
3    node3     192.168.100.13  Slave node

Modify hostname

Execute the following command on each of the three nodes to set its hostname:

node1:

[root@localhost ~]# hostnamectl set-hostname node1

node2:

[root@localhost ~]# hostnamectl set-hostname node2

node3:

[root@localhost ~]# hostnamectl set-hostname node3

Press the Ctrl+D shortcut or type exit to close the terminal session. After logging in again, the shell prompt should show the new hostname.

Modify the IP addresses

Using node1 as an example, perform the IP address change on all three nodes (note that the network interface name may vary from machine to machine; on node1, for example, it is eno16777736):

[root@node1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eno16777736

Set the IP addresses of node1, node2, and node3 to 192.168.100.11, 192.168.100.12, and 192.168.100.13, respectively.
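For reference, a minimal static configuration for node1 might look like the following (a sketch; the GATEWAY and DNS1 values are assumptions and must match your own network):

TYPE=Ethernet
BOOTPROTO=static
NAME=eno16777736
DEVICE=eno16777736
ONBOOT=yes
IPADDR=192.168.100.11
NETMASK=255.255.255.0
# GATEWAY and DNS1 are assumed values; adjust for your network
GATEWAY=192.168.100.2
DNS1=192.168.100.2

After saving, restart the network service to apply the change: systemctl restart network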


Modify host mapping

Perform the following operation on each of the three nodes to add the hostname-to-IP-address mappings:

[root@node1 ~]# vi /etc/hosts

Append the following lines:

192.168.100.11 node1
192.168.100.12 node2
192.168.100.13 node3

Configure passwordless SSH login between the nodes

Generate each node's public key

Execute the key generation command on node1, node2, and node3 (whenever you are prompted for a choice, just press Enter):

[root@node1 ~]# ssh-keygen


Enter the .ssh directory and view the generated key pair:

[root@node1 ~]# cd ~/.ssh/
[root@node1 .ssh]# ls
id_rsa id_rsa.pub
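If you prefer to skip the interactive prompts entirely, the key pair can also be generated non-interactively (a sketch using standard ssh-keygen options; -N "" sets an empty passphrase and -f sets the key file path):

[root@node1 ~]# ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa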

Copy the public key

Copy the generated public key to every node (including the node itself):

On node1:

[root@node1 .ssh]# ssh-copy-id -i id_rsa.pub root@node1
The authenticity of host 'node1 (192.168.100.11)' can't be established.
ECDSA key fingerprint is e1:6c:f3:7f:be:79:dc:87:15:97:51:4d:e5:b4:56:78.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node1's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@node1'"
and check to make sure that only the key(s) you wanted were added.
[root@node1 .ssh]# ssh-copy-id -i id_rsa.pub root@node2
The authenticity of host 'node2 (192.168.100.12)' can't be established.
ECDSA key fingerprint is e1:6c:f3:7f:be:79:dc:87:15:97:51:4d:e5:b4:56:78.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node2's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@node2'"
and check to make sure that only the key(s) you wanted were added.
[root@node1 .ssh]# ssh-copy-id -i id_rsa.pub root@node3
The authenticity of host 'node3 (192.168.100.13)' can't be established.
ECDSA key fingerprint is e1:6c:f3:7f:be:79:dc:87:15:97:51:4d:e5:b4:56:78.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node3's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@node3'"
and check to make sure that only the key(s) you wanted were added.

On node2:

[root@node2 .ssh]# ssh-copy-id -i id_rsa.pub root@node1
The authenticity of host 'node1 (192.168.100.11)' can't be established.
ECDSA key fingerprint is e1:6c:f3:7f:be:79:dc:87:15:97:51:4d:e5:b4:56:78.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node1's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@node1'"
and check to make sure that only the key(s) you wanted were added.
[root@node2 .ssh]# ssh-copy-id -i id_rsa.pub root@node2
The authenticity of host 'node2 (192.168.100.12)' can't be established.
ECDSA key fingerprint is e1:6c:f3:7f:be:79:dc:87:15:97:51:4d:e5:b4:56:78.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node2's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@node2'"
and check to make sure that only the key(s) you wanted were added.
[root@node2 .ssh]# ssh-copy-id -i id_rsa.pub root@node3
The authenticity of host 'node3 (192.168.100.13)' can't be established.
ECDSA key fingerprint is e1:6c:f3:7f:be:79:dc:87:15:97:51:4d:e5:b4:56:78.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node3's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@node3'"
and check to make sure that only the key(s) you wanted were added.

On node3:

[root@node3 .ssh]# ssh-copy-id -i id_rsa.pub root@node1
The authenticity of host 'node1 (192.168.100.11)' can't be established.
ECDSA key fingerprint is e1:6c:f3:7f:be:79:dc:87:15:97:51:4d:e5:b4:56:78.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node1's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@node1'"
and check to make sure that only the key(s) you wanted were added.
[root@node3 .ssh]# ssh-copy-id -i id_rsa.pub root@node2
The authenticity of host 'node2 (192.168.100.12)' can't be established.
ECDSA key fingerprint is e1:6c:f3:7f:be:79:dc:87:15:97:51:4d:e5:b4:56:78.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node2's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@node2'"
and check to make sure that only the key(s) you wanted were added.
[root@node3 .ssh]# ssh-copy-id -i id_rsa.pub root@node3
The authenticity of host 'node3 (192.168.100.13)' can't be established.
ECDSA key fingerprint is e1:6c:f3:7f:be:79:dc:87:15:97:51:4d:e5:b4:56:78.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node3's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@node3'"
and check to make sure that only the key(s) you wanted were added.
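Since the same three ssh-copy-id commands are run on every node, a small shell loop can save typing (a sketch; each iteration still prompts for the target node's root password):

[root@node1 .ssh]# for h in node1 node2 node3; do ssh-copy-id -i ~/.ssh/id_rsa.pub root@$h; done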

Test passwordless login

On each of the three nodes, ssh to every node (including itself). If no password is required, the setup succeeded (the operations on node3 are shown as an example):

[root@node3 .ssh]# ssh node1
Last login: Thu Jan 21 11:32:29 2021 from 192.168.100.1
[root@node1 ~]# exit
logout
Connection to node1 closed.
[root@node3 .ssh]# ssh node2
Last login: Thu Jan 21 16:01:47 2021 from node1
[root@node2 ~]# exit
logout
Connection to node2 closed.
[root@node3 .ssh]# ssh node3
Last login: Thu Jan 21 16:01:59 2021 from node1
[root@node3 ~]# exit
logout
Connection to node3 closed.

Turn off the firewall

Execute the following on all three nodes:

[root@node1 .ssh]# systemctl stop firewalld
[root@node1 .ssh]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
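To confirm the firewall is stopped and will not start at boot, the service state can be checked; it should report inactive and disabled, respectively:

[root@node1 ~]# systemctl is-active firewalld
inactive
[root@node1 ~]# systemctl is-enabled firewalld
disabled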

Set up SELinux

On all three nodes, set SELinux to disabled:

[root@node1 ~]# vi /etc/selinux/config

Set the following line in the file:

SELINUX=disabled

After setting SELinux to disabled, a reboot is required for the change to take effect. Alternatively, execute the following commands to set SELinux to permissive immediately (also on all three nodes):

[root@node1 ~]# setenforce 0
[root@node1 ~]# getenforce
Permissive

Configure the Java environment

On node1, create the directory /opt/jdk and upload the JDK package to it:

[root@node1 ~]# mkdir -p /opt/jdk
[root@node1 ~]# cd /opt/jdk
[root@node1 jdk]# ls
jdk-8u65-linux-x64.tar.gz

Extract jdk-8u65-linux-x64.tar.gz into the current directory, then delete the package:

[root@node1 jdk]# tar zxvf jdk-8u65-linux-x64.tar.gz
[root@node1 jdk]# rm -f jdk-8u65-linux-x64.tar.gz

Modify the /etc/profile file to add the Java environment configuration:

[root@node1 jdk]# vi /etc/profile
#Java Start
export JAVA_HOME=/opt/jdk/jdk1.8.0_65
export PATH=$PATH:${JAVA_HOME}/bin
export CLASSPATH=.:${JAVA_HOME}/jre/lib/rt.jar:${JAVA_HOME}/lib/dt.jar:${JAVA_HOME}/lib/tools.jar
#Java End

Make the Java environment configuration take effect and verify:

[root@node1 jdk]# source /etc/profile
[root@node1 jdk]# java -version
java version "1.8.0_65"
Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)

Configure the Hadoop environment

On node1, create the directory /opt/hadoop and upload the Hadoop package to it:

[root@node1 ~]# mkdir -p /opt/hadoop
[root@node1 ~]# cd /opt/hadoop/
[root@node1 hadoop]# ls
hadoop-2.8.3.tar.gz

Extract hadoop-2.8.3.tar.gz into the current directory, then delete the package:

[root@node1 hadoop]# tar zxvf hadoop-2.8.3.tar.gz
[root@node1 hadoop]# rm -f hadoop-2.8.3.tar.gz

Add the Java environment information

Edit hadoop-env.sh, mapred-env.sh, and yarn-env.sh under the etc/hadoop directory in turn, pointing the JDK path in each to /opt/jdk/jdk1.8.0_65. Note that when editing, remove the leading "#" from the export JAVA_HOME line if it is commented out:

[root@node1 ~]# cd /opt/hadoop/hadoop-2.8.3/etc/hadoop/
[root@node1 hadoop]# vi hadoop-env.sh

export JAVA_HOME=/opt/jdk/jdk1.8.0_65

[root@node1 hadoop]# vi mapred-env.sh

export JAVA_HOME=/opt/jdk/jdk1.8.0_65

[root@node1 hadoop]# vi yarn-env.sh

export JAVA_HOME=/opt/jdk/jdk1.8.0_65

Configure core-site.xml

Create the Hadoop temporary directory /opt/datas/tmp on each of the three nodes:

[root@node1 ~]# mkdir -p /opt/datas/tmp
[root@node2 ~]# mkdir -p /opt/datas/tmp
[root@node3 ~]# mkdir -p /opt/datas/tmp

On node1, modify the core-site.xml configuration:

[root@node1 ~]# vi /opt/hadoop/hadoop-2.8.3/etc/hadoop/core-site.xml

Add the following content:

<configuration>
    <!-- NameNode host address and port -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node1:8020</value>
    </property>
    <!-- Hadoop temporary directory (the /opt/datas/tmp directory created above) -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/datas/tmp</value>
    </property>
</configuration>

Configure hdfs-site.xml

On each of the three nodes, create the directory /opt/datas/dfs/namenode for NameNode data and /opt/datas/dfs/datanode for DataNode data (node1 is shown as an example; the operations on node2 and node3 are the same):

[root@node1 ~]# mkdir -p /opt/datas/dfs/namenode
[root@node1 ~]# mkdir -p /opt/datas/dfs/datanode

Edit the hdfs-site.xml file and configure the related properties:

[root@node1 ~]# vi /opt/hadoop/hadoop-2.8.3/etc/hadoop/hdfs-site.xml
<configuration>
    <!-- Number of replicas to create -->
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <!-- SecondaryNameNode address and port; node2 serves as the SecondaryNameNode -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>node2:50090</value>
    </property>
    <!-- NameNode data storage path -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/opt/datas/dfs/namenode</value>
    </property>
    <!-- DataNode data storage path -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/opt/datas/dfs/datanode</value>
    </property>
</configuration>

Configure slaves

The slaves file specifies the HDFS DataNode worker nodes. Edit the slaves file:

[root@node1 ~]# vi /opt/hadoop/hadoop-2.8.3/etc/hadoop/slaves

Change the file content to:

node1
node2
node3

Configure yarn-site.xml

Edit the yarn-site.xml file:

[root@node1 ~]# vi /opt/hadoop/hadoop-2.8.3/etc/hadoop/yarn-site.xml

Modify the file content to:

<configuration>
    <!-- Auxiliary service run by the NodeManager; must be set to mapreduce_shuffle for MapReduce programs to run -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- ResourceManager host -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>node2</value>
    </property>
    <!-- Enable log aggregation -->
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <!-- Maximum time, in seconds, that aggregated logs are retained on HDFS -->
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>106800</value>
    </property>
</configuration>

Configure mapred-site.xml

Using mapred-site.xml.template as a template, copy it to mapred-site.xml:

[root@node1 ~]# cp /opt/hadoop/hadoop-2.8.3/etc/hadoop/mapred-site.xml.template /opt/hadoop/hadoop-2.8.3/etc/hadoop/mapred-site.xml

Edit the mapred-site.xml file:

[root@node1 ~]# vi /opt/hadoop/hadoop-2.8.3/etc/hadoop/mapred-site.xml
<configuration>
    <!-- Run MapReduce jobs on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- MapReduce history server address and port -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>node1:10020</value>
    </property>
    <!-- MapReduce history server web address and port -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>node1:19888</value>
    </property>
</configuration>

Configure the Hadoop environment information in the profile file

Edit the environment file /etc/profile:

[root@node1 ~]# vi /etc/profile
#Hadoop Start
export HADOOP_HOME=/opt/hadoop/hadoop-2.8.3
export PATH=$PATH:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin
#Hadoop End

Make the environment configuration take effect:

[root@node1 ~]# source /etc/profile

Distribute the files to the other nodes

Create the directories /opt/jdk and /opt/hadoop on node2 and node3 (the commands are shown for node2; run the same on node3):

[root@node2 ~]# mkdir -p /opt/jdk
[root@node2 ~]# mkdir -p /opt/hadoop

Distribute the JDK to node2 and node3:

[root@node1 ~]# scp -r /opt/jdk/jdk1.8.0_65/ node2:/opt/jdk
[root@node1 ~]# scp -r /opt/jdk/jdk1.8.0_65/ node3:/opt/jdk

Distribute Hadoop to node2 and node3:

[root@node1 ~]# scp -r /opt/hadoop/hadoop-2.8.3/ node2:/opt/hadoop
[root@node1 ~]# scp -r /opt/hadoop/hadoop-2.8.3/ node3:/opt/hadoop

Distribute the profile to node2 and node3:

[root@node1 ~]# scp /etc/profile node2:/etc/profile
[root@node1 ~]# scp /etc/profile node3:/etc/profile

Execute the command on node2 and node3 to make the configuration take effect:

node2:

[root@node2 ~]# source /etc/profile

node3:

[root@node3 ~]# source /etc/profile
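As a quick check that the distribution worked, the Java and Hadoop versions can be verified on node2 and node3; both commands should report the versions installed on node1 (1.8.0_65 and 2.8.3):

[root@node2 ~]# java -version
[root@node2 ~]# hadoop version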

Format the NameNode

If you ever need to reformat the NameNode, first delete all files under the original NameNode and DataNode directories; otherwise errors will follow. Every format creates a new cluster ID by default and writes it into the VERSION files of the NameNode and DataNodes (the VERSION files live under dfs/namenode/current and dfs/datanode/current). If you reformat without deleting the old directories, the NameNode's VERSION file holds the new cluster ID while the DataNodes still hold the old one; this inconsistency causes errors. Alternatively, pass the cluster ID parameter when formatting and specify the old cluster ID.
The NameNode and DataNode directories are the ones configured in hdfs-site.xml via dfs.namenode.name.dir and dfs.datanode.data.dir.
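For a fresh format, the cleanup might look like this (a sketch; run on every node, using the data paths configured earlier in this guide):

[root@node1 ~]# rm -rf /opt/datas/dfs/namenode/* /opt/datas/dfs/datanode/* /opt/datas/tmp/*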

[root@node1 ~]# cd /opt/hadoop/hadoop-2.8.3/bin/
[root@node1 bin]# ./hdfs namenode -format

If formatting succeeds, the output should include a message stating that the storage directory has been successfully formatted.

Start the cluster

Start HDFS

[root@node1 ~]# cd /opt/hadoop/hadoop-2.8.3/sbin/
[root@node1 sbin]# ./start-dfs.sh
Starting namenodes on [node1]
node1: starting namenode, logging to /opt/hadoop/hadoop-2.8.3/logs/hadoop-root-namenode-node1.out
node3: starting datanode, logging to /opt/hadoop/hadoop-2.8.3/logs/hadoop-root-datanode-node3.out
node2: starting datanode, logging to /opt/hadoop/hadoop-2.8.3/logs/hadoop-root-datanode-node2.out
node1: starting datanode, logging to /opt/hadoop/hadoop-2.8.3/logs/hadoop-root-datanode-node1.out
Starting secondary namenodes [node2]
node2: starting secondarynamenode, logging to /opt/hadoop/hadoop-2.8.3/logs/hadoop-root-secondarynamenode-node2.out
[root@node1 sbin]#

Use the jps command to check the started processes; node1 has started the NameNode and DataNode processes:

[root@node1 sbin]# jps
1588 NameNode
1717 DataNode
1930 Jps

Start YARN

Execute the commands on node2:

[root@node2 ~]# cd /opt/hadoop/hadoop-2.8.3/sbin/
[root@node2 sbin]# ./start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop/hadoop-2.8.3/logs/yarn-root-resourcemanager-node2.out
node3: starting nodemanager, logging to /opt/hadoop/hadoop-2.8.3/logs/yarn-root-nodemanager-node3.out
node1: starting nodemanager, logging to /opt/hadoop/hadoop-2.8.3/logs/yarn-root-nodemanager-node1.out
node2: starting nodemanager, logging to /opt/hadoop/hadoop-2.8.3/logs/yarn-root-nodemanager-node2.out
[root@node2 sbin]#

Use the jps command to check; node2 has started the ResourceManager process (alongside the NodeManager, DataNode, and SecondaryNameNode):

[root@node2 sbin]# jps
2629 NodeManager
2937 Jps
1434 DataNode
1531 SecondaryNameNode
2525 ResourceManager
[root@node2 sbin]#

Note: if the $HADOOP_HOME/sbin/start-yarn.sh command is not executed on the host where the ResourceManager is meant to run, the ResourceManager process will not start. In that case, execute ./yarn-daemon.sh start resourcemanager on the ResourceManager host to start it, as shown below.
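For example, with node2 as the ResourceManager host:

[root@node2 sbin]# ./yarn-daemon.sh start resourcemanager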

Start the log server

Start the MapReduce log/history service on node1:

[root@node1 sbin]# ./mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /opt/hadoop/hadoop-2.8.3/logs/mapred-root-historyserver-node1.out
[root@node1 sbin]#

Use the jps command to check; node1 has started the JobHistoryServer process:

[root@node1 sbin]# jps
1588 NameNode
1717 DataNode
2502 Jps
2462 JobHistoryServer
2303 NodeManager
[root@node1 sbin]#

View the HDFS web page

The address is the IP of the host running the NameNode process, on port 50070 (URL: http://192.168.100.11:50070).


View the YARN web page

The address is node2's IP, on port 8088 (URL: http://192.168.100.12:8088).


View the JobHistory web page

The address is node1's IP, on port 19888 (URL: http://192.168.100.11:19888/jobhistory).


Test case (count the word frequencies of a sample file)

Prepare the sample file on node1:

[root@node1 ~]# vi example.txt

Add the following content to the example.txt file:

hadoop mapreduce hive
hbase spark storm
sqoop hadoop hive
spark hadoop

Create the input directory /datas/input in HDFS:

[root@node1 ~]# hadoop fs -mkdir -p /datas/input

Upload the sample file example.txt to the HDFS directory:

[root@node1 ~]# hadoop fs -put ~/example.txt /datas/input
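To confirm the upload before running the job, list the directory; example.txt should appear:

[root@node1 ~]# hadoop fs -ls /datas/input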

Run the MapReduce wordcount demo program that ships with Hadoop:

[root@node1 ~]# hadoop jar /opt/hadoop/hadoop-2.8.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.3.jar wordcount /datas/input/example.txt /datas/output


View the output file:

[root@node1 ~]# hadoop fs -cat /datas/output/part-r-00000
hadoop 3
hbase 1
hive 2
mapreduce 1
spark 2
sqoop 1
storm 1
[root@node1 ~]#
Copyright notice: this article was written by [Running cat brother]; please include the original link when reposting: https://javamana.com/2021/01/20210122090730008x.html
