Hive is a data warehouse built on the Hadoop framework. It maps structured data files stored on HDFS to database tables and lets you process (query) that data with SQL-like statements; in other words, you treat the structured data as if it were a MySQL table and query it with SQL.
Structured data is row-oriented data that can be represented in a two-dimensional table. Unstructured data is data that cannot be represented that way, including office documents of all formats, plain text, pictures, XML, HTML, assorted reports, and image/audio/video content.
Under the hood, Hive converts SQL statements into MapReduce jobs, so users unfamiliar with MapReduce can conveniently use HQL to process and compute over structured data on HDFS. It is best suited to offline batch computation.
The official website describes it as:
The Apache Hive data warehouse software facilitates reading,writing, and managing large datasets residing in distributed storage using SQL. Structure can be projected onto data already in storage. A command line tool and JDBC driver are provided to connect users to Hive.
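To make the SQL-to-MapReduce idea concrete, here is a sketch of running an HQL query from the shell. The table name `employees` is a hypothetical example, not something created in this article; the guard lets the snippet degrade gracefully before Hive is installed.

```shell
# Hypothetical example: "employees" is an assumed table, not created in this article.
# Hive compiles the GROUP BY into a MapReduce job and runs it on the cluster.
if command -v hive >/dev/null 2>&1; then
  hive -e "SELECT department, COUNT(*) AS cnt FROM employees GROUP BY department;"
else
  echo "hive not on PATH; run this after the installation steps below"
fi
```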
1 MySQL Installation
By default, Hive stores its metadata in an embedded Derby database, but Derby allows only a single session at a time, which is unsuitable for a production environment. This article therefore uses MySQL to store the Hive metadata.
Install MySQL with the yum tool:
# Download the MySQL repository rpm
wget https://dev.mysql.com/get/mysql80-community-release-el8-1.noarch.rpm
# Install the repository rpm package
yum localinstall mysql80-community-release-el8-1.noarch.rpm
# Install the MySQL server
yum install mysql-community-server
# Start the MySQL server and enable it at boot
systemctl start mysqld
systemctl enable mysqld
systemctl daemon-reload
# Log in with the mysql client and first change the root account's default password
# Find the default root password in the MySQL log file
grep 'temporary password' /var/log/mysqld.log
# Log in with the mysql client, entering the password printed by the command above
mysql -uroot -p
# Change the password. The default policy requires upper- and lower-case letters, digits, and a special character.
ALTER USER 'root'@'localhost' IDENTIFIED BY 'Pass-9999';
After changing the password, restart the server:
systemctl restart mysqld
# Log in to the client (no space between -p and the password)
mysql -uroot -pPass-9999
# Allow root to connect remotely. In MySQL 8.0, GRANT can no longer create
# accounts or set passwords, so create the remote user first:
CREATE USER 'root'@'%' IDENTIFIED BY 'Pass-9999';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
# Reload the grant tables
FLUSH PRIVILEGES;
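With remote access granted, you can verify the connection from another node. The hostname `master` matches the JDBC URL used in hive-site.xml; substitute your MySQL host. This is a sketch and assumes the mysql client is installed on the connecting machine.

```shell
# Verify that root can connect remotely (hostname "master" is the MySQL host;
# adjust it for your cluster).
if command -v mysql >/dev/null 2>&1; then
  mysql -h master -uroot -pPass-9999 -e "SELECT VERSION();"
else
  echo "mysql client not on PATH"
fi
```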
MySQL now stores the Hive metadata in a set of tables. The main tables and their meanings:
SELECT * FROM `VERSION`;
SELECT * FROM `DBS`;
SELECT * FROM `TBLS`;
- VERSION table: Hive schema version information
- DBS table: metadata for Hive databases
- TBLS table: metadata for Hive tables and views
2 Hive Installation
The package used here is apache-hive-3.1.2-bin.tar.gz; the download address is given in the article on Hadoop cluster setup.
# Extract to /usr/local
tar -zxvf apache-hive-3.1.2-bin.tar.gz -C /usr/local
# Rename the directory
mv /usr/local/apache-hive-3.1.2-bin /usr/local/hive-3.1.2
# Configure environment variables
vi /etc/profile
# Append the following at the end of the file
export HIVE_HOME=/usr/local/hive-3.1.2
export HIVE_CONF_DIR=$HIVE_HOME/conf
export PATH=$HIVE_HOME/bin:$PATH
# Make the environment variables take effect immediately
source /etc/profile
# Change to the conf directory
cd /usr/local/hive-3.1.2/conf
# Copy the template to hive-site.xml
cp hive-default.xml.template hive-site.xml
# Clear the contents of hive-site.xml and replace them with the following
<configuration>
<property><!-- JDBC connection URL: store the metadata in MySQL and create the hive database if it does not exist. A bare '&' is illegal in XML, so the URL parameters are joined with '&amp;'. -->
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://master:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
</property>
<property><!-- JDBC driver class (matches the Connector/J 5.1 jar added below) -->
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property><!-- Database user name -->
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
<description>Username to use against metastore database</description>
</property>
<property><!-- password -->
<name>javax.jdo.option.ConnectionPassword</name>
<value>Pass-9999</value>
<description>password to use against metastore database</description>
</property>
<property><!-- Scratch directory for execution plans and intermediate output of the map/reduce stages (a local path, not HDFS) -->
<name>hive.exec.local.scratchdir</name>
<value>/tmp/hive</value>
</property>
<property><!--Hive Query the directory where the log is located ,HDFS route -->
<name>hive.querylog.location</name>
<value>/tmp/logs</value>
</property>
<property><!-- Default warehouse location for managed tables (an HDFS path) -->
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
</property>
<property><!-- Use a local metastore embedded in the Hive process. Deprecated since Hive 0.10: whether hive.metastore.uris is set is what actually decides local vs. remote. -->
<name>hive.metastore.local</name>
<value>true</value>
</property>
<property><!-- Directory for HiveServer2 operation logs -->
<name>hive.server2.logging.operation.log.location</name>
<value>/tmp/logs</value>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>/tmp/hive/${hive.session.id}_resources</value>
</property>
</configuration>
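One XML pitfall worth checking: inside hive-site.xml a bare `&` is illegal, so the JDBC URL parameters must be joined with `&amp;`, or Hive fails to parse the file at startup. A small helper to catch this; the function name `check_hive_site` is just for illustration.

```shell
# check_hive_site FILE — flag a bare '&' before useSSL in the JDBC URL;
# inside XML it must be written as '&amp;'.
check_hive_site() {
  if [ ! -f "$1" ]; then
    echo "missing: $1"
  elif grep -q '&useSSL' "$1"; then
    echo "bare & found in $1: write it as &amp;"
  else
    echo "OK"
  fi
}
check_hive_site /usr/local/hive-3.1.2/conf/hive-site.xml
```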
# Modify Hive's startup configuration
# Change to the bin directory and edit the startup script (hive-config.sh)
cd /usr/local/hive-3.1.2/bin
vi hive-config.sh
# Add the following (the paths match the JDK and Hadoop installed earlier)
export HADOOP_HEAPSIZE=${HADOOP_HEAPSIZE:-256}
export JAVA_HOME=/usr/local/jdk1.8.0_261
export HADOOP_HOME=/usr/local/hadoop-3.2.1
export HIVE_HOME=/usr/local/hive-3.1.2
# Add the MySQL JDBC driver
# Copy the driver jar (downloaded separately from the MySQL website) into lib:
cp mysql-connector-java-5.1.49-bin.jar /usr/local/hive-3.1.2/lib/
# Initialize and start Hive
# cd to the bin directory and initialize the metastore; this creates the Hive metadata tables in MySQL
cd /usr/local/hive-3.1.2/bin
schematool -initSchema -dbType mysql
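If initialization succeeded, the metastore tables now exist in MySQL; the VERSION table mentioned above records the schema version. A quick check, assuming the mysql client and the credentials set up in section 1:

```shell
# Check the metastore schema version written by schematool
# (assumes the MySQL root credentials from section 1).
if command -v mysql >/dev/null 2>&1; then
  mysql -uroot -pPass-9999 -e "SELECT SCHEMA_VERSION FROM hive.VERSION;"
else
  echo "mysql client not on PATH"
fi
```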
# Start Hive by simply typing hive
hive
# View databases and tables
show databases;
show tables;
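The commands above can also be run non-interactively with `hive -e`, which makes a quick smoke test scriptable. The database name `demo` is an arbitrary example:

```shell
# Smoke test: create and drop an example database ("demo" is an arbitrary name).
if command -v hive >/dev/null 2>&1; then
  hive -e "CREATE DATABASE IF NOT EXISTS demo; SHOW DATABASES; DROP DATABASE demo;"
else
  echo "hive not on PATH"
fi
```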