This article walks you through the installation process of the Hive data warehouse

Big data super brother 2020-11-07 20:55:46


Hive is a data warehouse built on the Hadoop framework. It can map structured data files stored on HDFS to database tables, and it lets you process (query) that structured data with SQL-like statements. In other words, structured data is treated like a MySQL table and queried with SQL.

Structured data is row-oriented data that can be represented in a two-dimensional table structure. Unstructured data is data that cannot be represented that way, including office documents of all formats, plain text, pictures, XML, HTML, various reports, and image and audio/video information.

In essence, Hive converts SQL statements into MapReduce jobs, so that users unfamiliar with MapReduce can conveniently use HQL to process and compute over structured data on HDFS. It is well suited to offline batch computation.
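As a sketch of this mapping (the table name and HDFS path below are made up for illustration), a comma-separated file already sitting in HDFS can be exposed as a table and queried with ordinary SQL; Hive compiles the SELECT into a MapReduce job:

```sql
-- Map an existing comma-separated HDFS file to a table (path is hypothetical):
CREATE EXTERNAL TABLE users (
  id   INT,
  name STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/data/users';

-- This query is compiled into a MapReduce job behind the scenes:
SELECT name, COUNT(*) AS cnt FROM users GROUP BY name;
```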

Hive official website

The website describes Hive as follows:

The Apache Hive data warehouse software facilitates reading, writing, and managing large datasets residing in distributed storage using SQL. Structure can be projected onto data already in storage. A command line tool and JDBC driver are provided to connect users to Hive.

1 MySQL installation

By default, Hive stores its metadata in the embedded Derby database, but Derby allows only one session connection at a time, which is not suitable for a real production environment. This article therefore uses MySQL to store Hive's metadata.

MySQL Download address

Install MySQL with the yum tool:

# Download mysql80-community-release-el8-1.noarch.rpm
wget https://dev.mysql.com/get/mysql80-community-release-el8-1.noarch.rpm
# Install the rpm package
yum localinstall mysql80-community-release-el8-1.noarch.rpm
# Install the MySQL server
yum install mysql-community-server
# Start the MySQL server and enable it at boot
systemctl start mysqld
systemctl enable mysqld
systemctl daemon-reload
# Before logging in with the MySQL client, change the default password of the root account.
# Find root's default password in the log file
grep 'temporary password' /var/log/mysqld.log
# Log in with the MySQL client, entering the password printed by the command above
mysql -uroot -p
# Change the password; by default it must contain upper and lower case letters, digits, and a special character.
ALTER USER 'root'@'localhost' IDENTIFIED BY 'Pass-9999';
# After changing the password, restart the server.
systemctl restart mysqld
# Log in to the client
mysql -uroot -p'Pass-9999'
# Allow root to connect to MySQL remotely.
# (MySQL 8 no longer accepts IDENTIFIED BY inside GRANT, so create the '%' user first.)
CREATE USER 'root'@'%' IDENTIFIED BY 'Pass-9999';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
# Refresh MySQL's privilege tables
FLUSH PRIVILEGES;
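As an optional hardening step not in the original instructions, you can create a dedicated metastore account instead of exposing root remotely; hive-site.xml would then use this account (the user name and password here are illustrative):

```sql
-- Create a dedicated account for the Hive metastore (name and password are examples):
CREATE USER 'hive'@'%' IDENTIFIED BY 'Pass-9999';
-- Grant it privileges only on the metastore database
-- (MySQL allows grants on a database that has not been created yet):
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%';
FLUSH PRIVILEGES;
```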

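The temporary password printed by the `grep` step above can also be captured into a shell variable, which is handy in scripts. A minimal sketch, assuming the standard mysqld log line format (the sample line and password below are made up):

```shell
# A sample mysqld.log line (format as emitted by MySQL 8; the password itself is made up):
LOG_LINE='2020-11-07T12:00:00.000000Z 6 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: q!r5sT#u9vW'
# The password is the last whitespace-separated field on the line:
TMP_PASS=$(printf '%s\n' "$LOG_LINE" | awk '{print $NF}')
echo "$TMP_PASS"   # prints q!r5sT#u9vW
```

Against the real log file, the same extraction would be `grep 'temporary password' /var/log/mysqld.log | tail -1 | awk '{print $NF}'`.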
MySQL stores Hive's metadata. The tables that make up this metadata, and what each one means, are as follows:


SELECT * FROM `VERSION`;
SELECT * FROM `DBS`;
SELECT * FROM `TBLS`;
  • VERSION table: Hive version information
  • DBS table: metadata about Hive databases
  • TBLS table: metadata about Hive tables and views
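The tables above can also be joined directly in MySQL to see which tables belong to which Hive database; a small sketch run against the `hive` metastore database:

```sql
-- List every Hive table with its owning database and type
-- (TBL_TYPE is e.g. MANAGED_TABLE, EXTERNAL_TABLE, or VIRTUAL_VIEW):
SELECT d.NAME AS db_name, t.TBL_NAME, t.TBL_TYPE
FROM TBLS t
JOIN DBS d ON t.DB_ID = d.DB_ID;
```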

2 Hive installation

The package downloaded here is apache-hive-3.1.2-bin.tar.gz; the download link can be found in the article on building the Hadoop cluster.

# Extract to /usr/local
tar -zxvf apache-hive-3.1.2-bin.tar.gz -C /usr/local
# Rename the directory
mv /usr/local/apache-hive-3.1.2-bin /usr/local/hive-3.1.2
# Configure environment variables
vi /etc/profile
# Add the following at the end of the file
export HIVE_HOME=/usr/local/hive-3.1.2
export HIVE_CONF_DIR=$HIVE_HOME/conf
export PATH=$HIVE_HOME/bin:$PATH
# Make the environment variables take effect immediately
source /etc/profile
# cd into the conf directory
cd /usr/local/hive-3.1.2/conf
# Copy the template file to create hive-site.xml
cp hive-default.xml.template hive-site.xml
# Empty the contents of hive-site.xml and add the following
<configuration>
<property><!-- Database connection URL: store metadata in MySQL, creating the hive database if it does not exist -->
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://master:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
</property>
<property><!-- Database driven -->
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property><!-- Database user name -->
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
<description>Username to use against metastore database</description>
</property>
<property><!-- password -->
<name>javax.jdo.option.ConnectionPassword</name>
<value>Pass-9999</value>
<description>password to use against metastore database</description>
</property>
<property><!-- Local scratch directory: stores execution plans and intermediate output of the map/reduce stages -->
<name>hive.exec.local.scratchdir</name>
<value>/tmp/hive</value>
</property>
<property><!-- Directory where Hive query logs are written (local filesystem path) -->
<name>hive.querylog.location</name>
<value>/tmp/logs</value>
</property>
<property><!-- Default warehouse location for Hive tables (an HDFS path) -->
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
</property>
<property><!-- Use a local (embedded) metastore. Note: this property is deprecated in Hive 3; the mode is now inferred from hive.metastore.uris -->
<name>hive.metastore.local</name>
<value>true</value>
</property>
<property><!-- Directory for HiveServer2 operation logs -->
<name>hive.server2.logging.operation.log.location</name>
<value>/tmp/logs</value>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>/tmp/hive/${hive.session.id}_resources</value>
</property>
</configuration>
# Modify the Hive startup configuration
# cd into the bin directory
cd /usr/local/hive-3.1.2/bin
# Add the following to hive-config.sh
export HADOOP_HEAPSIZE=${HADOOP_HEAPSIZE:-256}
export JAVA_HOME=/usr/local/jdk1.8.0_261
export HADOOP_HOME=/usr/local/hadoop-3.2.1
export HIVE_HOME=/usr/local/hive-3.1.2
# Add the MySQL JDBC driver
cd /usr/local/hive-3.1.2/lib
# Put the driver jar (downloaded separately) into the lib directory:
mysql-connector-java-5.1.49-bin.jar
# Initialize and start Hive
# cd into the bin directory and initialize Hive: this mainly initializes MySQL, creating the hive metadata database.
cd /usr/local/hive-3.1.2/bin
schematool -initSchema -dbType mysql
# Start Hive by simply typing hive
hive
# View databases and tables
show databases;
show tables;
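To confirm that the metastore is wired up correctly, a quick smoke test (the table name below is made up) can be run from the Hive prompt:

```sql
-- Create a throwaway table, confirm it shows up, then drop it:
CREATE TABLE smoke_test (id INT, msg STRING);
SHOW TABLES;            -- smoke_test should be listed
DROP TABLE smoke_test;
```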


Copyright notice
This article was created by [Big data super brother]. When reposting, please include a link to the original. Thank you.
