1. Building a Hadoop Experimental Platform

itread01 2021-01-22 12:29:38
Tags: big data, Hadoop, cluster setup, itread01


# Node Role Planning

- Operating system: CentOS 7.2 (1511)
- Java JDK: jdk-8u65-linux-x64.tar.gz
- Hadoop: hadoop-2.8.3.tar.gz
- Download (Baidu Netdisk):

~~~html
Link: https://pan.baidu.com/s/1iQfjO-d2ojA6mAeOOKb6CA   Extraction code: l0qp
~~~

| node1 | node2 | node3 |
| ------------- | --------------- | ----------------- |
| NameNode | ResourceManager | |
| DataNode | DataNode | DataNode |
| NodeManager | NodeManager | NodeManager |
| HistoryServer | | SecondaryNameNode |

# Configuring Host IP Addresses and Hostnames

The three hosts are named node1, node2 and node3. Their hostnames and IP addresses map as follows:

| No. | Hostname | IP address | Role |
| ---- | -------- | -------------- | ----------- |
| 1 | node1 | 192.168.100.11 | master node |
| 2 | node2 | 192.168.100.12 | worker node |
| 3 | node3 | 192.168.100.13 | worker node |

## Changing the hostnames

Run the hostname change on each of the three nodes:

node1:

~~~bash
[root@node1 ~]# hostnamectl set-hostname node1
~~~

node2:

~~~bash
[root@node2 ~]# hostnamectl set-hostname node2
~~~

node3:

~~~bash
[root@node3 ~]# hostnamectl set-hostname node3
~~~

Press Ctrl+D or type exit to leave the terminal. After logging back in, check the hostname, as shown below:

![image-20210121191713446](https://img2020.cnblogs.com/blog/47143/202101/47143-20210122083334016-293593373.png)

## Changing the IP addresses

Taking node1 as an example, edit the IP address on each of the three nodes (note that the NIC name can differ from machine to machine; on node1 it is eno16777736):

~~~bash
[root@node1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eno16777736
~~~

Set the IP addresses of node1, node2 and node3 to 192.168.100.11, 192.168.100.12 and 192.168.100.13 respectively.

![image-20210121192437937](https://img2020.cnblogs.com/blog/47143/202101/47143-20210122083334344-1060761246.png)

## Configuring host mappings

On each of the three nodes, add the hostname-to-IP mappings:

~~~bash
[root@node1 ~]# vi /etc/hosts
~~~

![image-20210121195138068](https://img2020.cnblogs.com/blog/47143/202101/47143-20210122083334707-1614134806.png)
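The mapping itself only appears in the screenshot above. Based on the address table earlier, the lines appended to /etc/hosts on every node would presumably look like this (a sketch; the stock loopback entries are left untouched):

~~~
192.168.100.11 node1
192.168.100.12 node2
192.168.100.13 node3
~~~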
# Configuring Passwordless SSH Between Nodes

## Generating each node's key pair

On node1, node2 and node3, generate a key pair (press Enter at every prompt to accept the defaults):

~~~bash
[root@node1 ~]# ssh-keygen
~~~

![image-20210121194454019](https://img2020.cnblogs.com/blog/47143/202101/47143-20210122083334959-902589208.png)

Enter the .ssh directory and check the generated key pair:

~~~bash
[root@node1 ~]# cd ~/.ssh/
[root@node1 .ssh]# ls
id_rsa  id_rsa.pub
~~~

## Copying the public keys

Copy the generated public key to every node (including the node itself).

node1:

~~~bash
[root@node1 .ssh]# ssh-copy-id -i id_rsa.pub root@node1
The authenticity of host 'node1 (192.168.100.11)' can't be established.
ECDSA key fingerprint is e1:6c:f3:7f:be:79:dc:87:15:97:51:4d:e5:b4:56:78.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node1's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@node1'"
and check to make sure that only the key(s) you wanted were added.
~~~

~~~bash
[root@node1 .ssh]# ssh-copy-id -i id_rsa.pub root@node2
[root@node1 .ssh]# ssh-copy-id -i id_rsa.pub root@node3
~~~

node2:

~~~bash
[root@node2 .ssh]# ssh-copy-id -i id_rsa.pub root@node1
[root@node2 .ssh]# ssh-copy-id -i id_rsa.pub root@node2
[root@node2 .ssh]# ssh-copy-id -i id_rsa.pub root@node3
~~~

node3:

~~~bash
[root@node3 .ssh]# ssh-copy-id -i id_rsa.pub root@node1
[root@node3 .ssh]# ssh-copy-id -i id_rsa.pub root@node2
[root@node3 .ssh]# ssh-copy-id -i id_rsa.pub root@node3
~~~

The output of each remaining copy is essentially the same as the first one: answer yes at the host-key prompt and enter the target node's root password when asked.

## Testing passwordless login

On each node, ssh to every node (including itself); if no password is requested, the setup succeeded (shown here from node3):

~~~bash
[root@node3 .ssh]# ssh node1
Last login: Thu Jan 21 11:32:29 2021 from 192.168.100.1
[root@node1 ~]# exit
logout
Connection to node1 closed.
[root@node3 .ssh]# ssh node2
Last login: Thu Jan 21 16:01:47 2021 from node1
[root@node2 ~]# exit
logout
Connection to node2 closed.
[root@node3 .ssh]# ssh node3
Last login: Thu Jan 21 16:01:59 2021 from node1
[root@node3 ~]# exit
logout
Connection to node3 closed.
~~~

# Disabling the Firewall

Run on all three nodes:

~~~bash
[root@node1 .ssh]# systemctl stop firewalld
[root@node1 .ssh]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
~~~

# Configuring SELinux

On all three nodes, set SELinux to disabled:

~~~bash
[root@node1 ~]# vi /etc/selinux/config
~~~

![image-20210121203234663](https://img2020.cnblogs.com/blog/47143/202101/47143-20210122083335249-1197621270.png)

Setting SELinux to disabled only takes effect after a reboot. Alternatively, run the following (again on all three nodes) to switch it to permissive immediately:

~~~bash
[root@node1 ~]# setenforce 0
[root@node1 ~]# getenforce
Permissive
~~~

# Configuring the Java Environment

On node1, create the directory /opt/jdk and upload the JDK package to it:

~~~bash
[root@node1 ~]# mkdir -p /opt/jdk
[root@node1 ~]# cd /opt/jdk
[root@node1 jdk]# ls
jdk-8u65-linux-x64.tar.gz
~~~

Extract jdk-8u65-linux-x64.tar.gz into the current directory and delete the archive afterwards:

~~~bash
[root@node1 jdk]# tar zxvf jdk-8u65-linux-x64.tar.gz
[root@node1 jdk]# rm -f jdk-8u65-linux-x64.tar.gz
~~~

Edit /etc/profile and add the Java environment settings:

~~~bash
[root@node1 jdk]# vi /etc/profile
~~~

~~~shell
#Java Start
export JAVA_HOME=/opt/jdk/jdk1.8.0_65
export PATH=$PATH:${JAVA_HOME}/bin
export CLASSPATH=.:${JAVA_HOME}/jre/lib/rt.jar:${JAVA_HOME}/lib/dt.jar:${JAVA_HOME}/lib/tools.jar
#Java End
~~~

Apply the Java environment settings and verify:

~~~bash
[root@node1 jdk]# source /etc/profile
[root@node1 jdk]# java -version
java version "1.8.0_65"
Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)
~~~

# Configuring the Hadoop Environment

On node1, create the directory /opt/hadoop and upload the Hadoop package to it:

~~~bash
[root@node1 ~]# mkdir -p /opt/hadoop
[root@node1 ~]# cd /opt/hadoop/
[root@node1 hadoop]# ls
hadoop-2.8.3.tar.gz
~~~

Extract hadoop-2.8.3.tar.gz into the current directory and delete the archive afterwards:

~~~bash
[root@node1 hadoop]# tar zxvf hadoop-2.8.3.tar.gz
[root@node1 hadoop]# rm -f hadoop-2.8.3.tar.gz
~~~

## Adding the Java environment to Hadoop

Edit hadoop-env.sh, mapred-env.sh and yarn-env.sh in the etc/hadoop directory and point the JDK path in each of them to /opt/jdk/jdk1.8.0_65/. When editing, remember to remove the leading "#" from the commented-out export line:

~~~bash
[root@node1 ~]# cd /opt/hadoop/hadoop-2.8.3/etc/hadoop/
~~~

~~~bash
[root@node1 hadoop]# vi hadoop-env.sh
~~~

![image-20210121211337260](https://img2020.cnblogs.com/blog/47143/202101/47143-20210122083335440-486079932.png)

~~~bash
[root@node1 hadoop]# vi mapred-env.sh
~~~

![image-20210121211846828](https://img2020.cnblogs.com/blog/47143/202101/47143-20210122083335624-884046726.png)

~~~bash
[root@node1 hadoop]# vi yarn-env.sh
~~~

![image-20210121211714422](https://img2020.cnblogs.com/blog/47143/202101/47143-20210122083335791-1848160796.png)

## Configuring core-site.xml

Create the Hadoop temporary directory /opt/datas/tmp on each of the three nodes:

~~~bash
[root@node1 ~]# mkdir -p /opt/datas/tmp
~~~

~~~bash
[root@node2 ~]# mkdir -p /opt/datas/tmp
~~~

~~~bash
[root@node3 ~]# mkdir -p /opt/datas/tmp
~~~

On node1, edit core-site.xml:

~~~bash
[root@node1 ~]# vi /opt/hadoop/hadoop-2.8.3/etc/hadoop/core-site.xml
~~~

Add the following content.
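The XML block is empty in the source of this post, so the original values are not recoverable. A minimal core-site.xml consistent with the rest of the walkthrough would point the default filesystem at the NameNode on node1 and the temporary directory at the /opt/datas/tmp created above; the 8020 RPC port is an assumption (it is the usual Hadoop 2 default, but the article may have used another port):

~~~xml
<configuration>
    <!-- default filesystem: the NameNode runs on node1 (port 8020 assumed) -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node1:8020</value>
    </property>
    <!-- base for temporary files, matching the /opt/datas/tmp directory created above -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/datas/tmp</value>
    </property>
</configuration>
~~~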
## Configuring hdfs-site.xml

On each of the three nodes, create the directory that will hold NameNode data, /opt/datas/dfs/namenode, and the directory that will hold DataNode data, /opt/datas/dfs/datanode (shown on node1; the steps on node2 and node3 are identical):

~~~bash
[root@node1 ~]# mkdir -p /opt/datas/dfs/namenode
[root@node1 ~]# mkdir -p /opt/datas/dfs/datanode
~~~

Edit hdfs-site.xml and configure the relevant settings:

~~~bash
[root@node1 ~]# vi /opt/hadoop/hadoop-2.8.3/etc/hadoop/hdfs-site.xml
~~~
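As with core-site.xml, the original hdfs-site.xml block is empty. A sketch that matches the directories created above and the startup log shown later (where the SecondaryNameNode comes up on node2) follows; the replication factor of 3 and the 50090 port are assumptions:

~~~xml
<configuration>
    <!-- where the NameNode stores fsimage/edits (directory created above) -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/opt/datas/dfs/namenode</value>
    </property>
    <!-- where each DataNode stores its blocks (directory created above) -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/opt/datas/dfs/datanode</value>
    </property>
    <!-- assumed: one replica per DataNode -->
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <!-- the start-dfs.sh output later in this article shows the SecondaryNameNode on node2 -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>node2:50090</value>
    </property>
</configuration>
~~~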
## Configuring slaves

The slaves file lists the HDFS DataNode worker nodes. Edit it:

~~~bash
[root@node1 ~]# vi /opt/hadoop/hadoop-2.8.3/etc/hadoop/slaves
~~~

Change the file content to:

![image-20210121223526312](https://img2020.cnblogs.com/blog/47143/202101/47143-20210122083335979-114006850.png)
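The file content is only visible in the screenshot. Since the planning table puts a DataNode on all three hosts, it would contain one hostname per line:

~~~
node1
node2
node3
~~~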
## Configuring yarn-site.xml

Edit yarn-site.xml:

~~~bash
[root@node1 ~]# vi /opt/hadoop/hadoop-2.8.3/etc/hadoop/yarn-site.xml
~~~

Modify the file content (a sketch for this file and for mapred-site.xml follows after the next step).

## Configuring mapred-site.xml

Copy mapred-site.xml.template to create mapred-site.xml:

~~~bash
[root@node1 ~]# cp /opt/hadoop/hadoop-2.8.3/etc/hadoop/mapred-site.xml.template /opt/hadoop/hadoop-2.8.3/etc/hadoop/mapred-site.xml
~~~

Edit mapred-site.xml:

~~~bash
[root@node1 ~]# vi /opt/hadoop/hadoop-2.8.3/etc/hadoop/mapred-site.xml
~~~
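Both XML blocks are empty in the source, so the following are hedged sketches rather than the original values. For yarn-site.xml, the ResourceManager is placed on node2 (as in the planning table and the start-yarn.sh log later) and the MapReduce shuffle service is enabled on the NodeManagers:

~~~xml
<configuration>
    <!-- ResourceManager runs on node2 per the planning table -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>node2</value>
    </property>
    <!-- auxiliary shuffle service required by MapReduce on YARN -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
~~~

For mapred-site.xml, MapReduce is switched to the YARN framework and the JobHistory server is placed on node1; the 10020 RPC port is an assumption, while 19888 matches the JobHistory web address used later:

~~~xml
<configuration>
    <!-- run MapReduce jobs on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- JobHistory server on node1 (10020 assumed) -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>node1:10020</value>
    </property>
    <!-- JobHistory web UI, matching http://192.168.100.11:19888 below -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>node1:19888</value>
    </property>
</configuration>
~~~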
## Adding Hadoop to /etc/profile

Edit the environment configuration file /etc/profile:

~~~bash
[root@node1 ~]# vi /etc/profile
~~~

~~~shell
#Hadoop Start
export HADOOP_HOME=/opt/hadoop/hadoop-2.8.3
export PATH=$PATH:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin
#Hadoop End
~~~

Apply the settings:

~~~bash
[root@node1 ~]# source /etc/profile
~~~

# Distributing Files to the Other Nodes

## Create the directories /opt/jdk and /opt/hadoop on node2 and node3

~~~bash
[root@node2 ~]# mkdir -p /opt/jdk
[root@node2 ~]# mkdir -p /opt/hadoop
~~~

## Distribute the JDK to node2 and node3

~~~bash
[root@node1 ~]# scp -r /opt/jdk/jdk1.8.0_65/ node2:/opt/jdk
[root@node1 ~]# scp -r /opt/jdk/jdk1.8.0_65/ node3:/opt/jdk
~~~

## Distribute Hadoop to node2 and node3

~~~bash
[root@node1 ~]# scp -r /opt/hadoop/hadoop-2.8.3/ node2:/opt/hadoop
[root@node1 ~]# scp -r /opt/hadoop/hadoop-2.8.3/ node3:/opt/hadoop
~~~

## Distribute /etc/profile to node2 and node3

~~~bash
[root@node1 ~]# scp /etc/profile node2:/etc/profile
[root@node1 ~]# scp /etc/profile node3:/etc/profile
~~~

## Apply the configuration on node2 and node3

node2:

~~~bash
[root@node2 ~]# source /etc/profile
~~~

node3:

~~~bash
[root@node3 ~]# source /etc/profile
~~~

# Formatting the NameNode

If you ever need to re-format the NameNode, first delete everything under the existing NameNode and DataNode directories, otherwise the next startup will fail. Every format generates a new cluster ID and writes it into the VERSION files of the NameNode and DataNode (located under dfs/namenode/current and dfs/datanode/current). If the old directories are not removed, the NameNode's VERSION file ends up with the new cluster ID while the DataNodes keep the old one, and the mismatch causes errors. Alternatively, pass the old cluster ID as a parameter when formatting. The NameNode and DataNode directories are the ones configured in hdfs-site.xml via dfs.namenode.name.dir and dfs.datanode.data.dir.

~~~bash
[root@node1 ~]# cd /opt/hadoop/hadoop-2.8.3/bin/
[root@node1 bin]# ./hdfs namenode -format
~~~

![image-20210121232343453](https://img2020.cnblogs.com/blog/47143/202101/47143-20210122083336222-965876884.png)

# Starting the Cluster

## Starting HDFS

~~~bash
[root@node1 ~]# cd /opt/hadoop/hadoop-2.8.3/sbin/
~~~

~~~bash
[root@node1 sbin]# ./start-dfs.sh
Starting namenodes on [node1]
node1: starting namenode, logging to /opt/hadoop/hadoop-2.8.3/logs/hadoop-root-namenode-node1.out
node3: starting datanode, logging to /opt/hadoop/hadoop-2.8.3/logs/hadoop-root-datanode-node3.out
node2: starting datanode, logging to /opt/hadoop/hadoop-2.8.3/logs/hadoop-root-datanode-node2.out
node1: starting datanode, logging to /opt/hadoop/hadoop-2.8.3/logs/hadoop-root-datanode-node1.out
Starting secondary namenodes [node2]
node2: starting secondarynamenode, logging to /opt/hadoop/hadoop-2.8.3/logs/hadoop-root-secondarynamenode-node2.out
[root@node1 sbin]#
~~~

Check the processes with jps; node1 should now be running the NameNode and DataNode processes:

~~~bash
[root@node1 sbin]# jps
1588 NameNode
1717 DataNode
1930 Jps
~~~

## Starting YARN

Run on node2:

~~~bash
[root@node2 ~]# cd /opt/hadoop/hadoop-2.8.3/sbin/
[root@node2 sbin]# ./start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop/hadoop-2.8.3/logs/yarn-root-resourcemanager-node2.out
node3: starting nodemanager, logging to /opt/hadoop/hadoop-2.8.3/logs/yarn-root-nodemanager-node3.out
node1: starting nodemanager, logging to /opt/hadoop/hadoop-2.8.3/logs/yarn-root-nodemanager-node1.out
node2: starting nodemanager, logging to /opt/hadoop/hadoop-2.8.3/logs/yarn-root-nodemanager-node2.out
[root@node2 sbin]#
~~~

Check the processes with jps; node2 should now be running the ResourceManager process:

~~~bash
[root@node2 sbin]# jps
2629 NodeManager
2937 Jps
1434 DataNode
1531 SecondaryNameNode
2525 ResourceManager
[root@node2 sbin]#
~~~

Note: if $HADOOP_HOME/sbin/start-yarn.sh is not run on the ResourceManager host, the ResourceManager process will not start; in that case, log in to the ResourceManager host and start it with ./yarn-daemon.sh start resourcemanager.

## Starting the history server

Start the MapReduce history service on node1:

~~~bash
[root@node1 sbin]# ./mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /opt/hadoop/hadoop-2.8.3/logs/mapred-root-historyserver-node1.out
[root@node1 sbin]#
~~~

Check the processes with jps; node1 should now also be running the JobHistoryServer process:

~~~bash
[root@node1 sbin]# jps
1588 NameNode
1717 DataNode
2502 Jps
2462 JobHistoryServer
2303 NodeManager
[root@node1 sbin]#
~~~

## Viewing the HDFS web UI

The address is the IP of the host running the NameNode process, port 50070 (http://192.168.100.11:50070):

![image-20210121234119461](https://img2020.cnblogs.com/blog/47143/202101/47143-20210122083336446-1370959865.png)

## Viewing the YARN web UI

The address is node2's IP, port 8088 (http://192.168.100.12:8088):

![image-20210121234304905](https://img2020.cnblogs.com/blog/47143/202101/47143-20210122083336718-675515982.png)

## Viewing the JobHistory web UI

The address is node1's IP, port 19888 (http://192.168.100.11:19888/jobhistory):

![image-20210121234541746](https://img2020.cnblogs.com/blog/47143/202101/47143-20210122083337051-132828003.png)

# Test Case (Counting Word Frequencies in a Sample File)

## Prepare the sample file on node1

~~~bash
[root@node1 ~]# vi example.txt
~~~

Add the following content to example.txt:

~~~bash
hadoop mapreduce hive
hbase spark storm
sqoop hadoop hive
spark hadoop
~~~

## Create the input directory /datas/input in HDFS

~~~bash
[root@node1 ~]# hadoop fs -mkdir -p /datas/input
~~~

## Upload the sample file example.txt to the HDFS directory

~~~bash
[root@node1 ~]# hadoop fs -put ~/example.txt /datas/input
~~~

## Run the MapReduce demo program that ships with Hadoop

~~~bash
[root@node1 ~]# hadoop jar /opt/hadoop/hadoop-2.8.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.3.jar wordcount /datas/input/example.txt /datas/output
~~~

![image-20210122000546399](https://img2020.cnblogs.com/blog/47143/202101/47143-20210122083337417-757464521.png)

## View the output file

~~~bash
[root@node1 ~]# hadoop fs -cat /datas/output/part-r-00000
hadoop	3
hbase	1
hive	2
mapreduce	1
spark	2
sqoop	1
storm	1
[root@node1 ~]#
~~~
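As an extra sanity check (not part of the original walkthrough), listing the output directory should show the _SUCCESS marker that a completed MapReduce job writes next to the part file read above:

~~~bash
# hypothetical follow-up check: a successful run leaves _SUCCESS alongside part-r-00000
[root@node1 ~]# hadoop fs -ls /datas/output
~~~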
Copyright notice
This article was written by [itread01]. Please include a link to the original when reposting. Thanks.
https://www.itread01.com/content/1611289683.html
