Hive: basic concepts, installation, and deployment (simple and clear, at a glance!)

Homo sapiens 2021-01-22 18:08:50
Tags: hive, basic concepts, installation, deployment


After a few days of studying MapReduce, we have finally arrived at the Hive stage. This blog will introduce you to Hive, a component of Hadoop! Before we start, let's review the components of the Hadoop ecosystem with a familiar picture!

As we can clearly see, Hive's logo is a bee with an elephant's head! Why an elephant's head? Xiaojun will leave you in suspense here; the answer is in the picture. Now let's officially begin learning Hive!

Hive Basic concepts

1.1、Introduction to Hive

What is Hive?

Hive is a data warehouse tool based on Hadoop. It can map structured data files to database tables and provides SQL-like query functionality (HQL). In essence, it converts SQL into MapReduce jobs for computation, with HDFS providing the underlying data storage. Hive can be understood as a tool that turns SQL into MapReduce jobs for us.
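For instance, a familiar-looking aggregate query is all it takes; Hive compiles it into one or more MapReduce jobs behind the scenes. A minimal sketch (the `employee` table and its columns are hypothetical):

```sql
-- A SQL-like (HQL) query; Hive translates the GROUP BY
-- into map and reduce phases automatically.
SELECT dept, COUNT(*) AS cnt
FROM employee
GROUP BY dept;
```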

Why use Hive?

Problems faced when using Hadoop directly:

  • The learning cost for personnel is too high
  • Project cycles are too short
  • Developing complex query logic in MapReduce is too difficult

Why use Hive:

  • The operation interface uses SQL-like syntax, enabling rapid development.
  • It avoids writing MapReduce, reducing developers' learning costs, and its functionality is easy to extend.

Characteristics of Hive

  • Scalable: the Hive cluster can be scaled out freely, generally without restarting the service.
  • Extensible: Hive supports user-defined functions, so users can implement their own functions according to their needs.
  • Fault-tolerant: Hive has good fault tolerance; SQL execution can still complete even when a node has problems.

1.2、Hive architecture

Architecture diagram

Basic components

  • User interfaces: include the CLI, JDBC/ODBC, and WebGUI. The CLI (command line interface) is the shell command line; JDBC/ODBC is Hive's Java implementation, similar to the JDBC of traditional databases; WebGUI accesses Hive through a browser.
  • Metadata store: usually kept in a relational database such as MySQL or Derby. Hive's metadata includes table names, the columns and partitions of tables and their properties, table properties (such as whether a table is external), the directory where table data is stored, and so on.
  • Interpreter, compiler, optimizer, executor: complete the lexical analysis, syntax analysis, compilation, optimization, and query plan generation for HQL statements. The generated query plan is stored in HDFS and then executed by MapReduce calls.
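You can watch the compiler and optimizer at work with EXPLAIN, which prints the generated plan (a graph of MapReduce stages) instead of running the query. A small sketch (the `employee` table is hypothetical):

```sql
-- Shows the query plan Hive's compiler and optimizer produce,
-- without actually executing the query.
EXPLAIN
SELECT dept, COUNT(*)
FROM employee
GROUP BY dept;
```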

1.3、The relationship between Hive and Hadoop

Hive uses HDFS to store data and MapReduce to query and analyze it.

1.4、Hive compared with traditional databases

Hive is for offline analysis of massive data.

  1. Data format. Hive does not define a specific data format; the format can be specified by the user. A user-defined data format needs three properties: the column separator (usually a space, "\t", or "\x001"), the row separator ("\n"), and the method for reading file data (Hive has three default file formats: TextFile, SequenceFile, and RCFile).
  2. When loading data, Hive does not need to convert from the user's data format to a Hive-defined data format.
  3. Hive does not modify the data itself during loading, and does not even scan it; it simply copies or moves the data into the corresponding HDFS directory.
  4. Hive does not support rewriting or adding to data; all content is determined at load time.
  5. Hive does not build indexes on any keys during loading. To access specific values that meet a condition, Hive must brute-force scan the entire dataset, so access latency is high. Because of this high latency, Hive is not suitable for online data queries.
  6. Hive is built on top of Hadoop, so Hive's scalability is consistent with Hadoop's.
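The three properties in point 1 map directly onto DDL clauses, and points 2 and 3 are visible in how LOAD DATA behaves. A minimal sketch (the `logs` table and the file path are hypothetical):

```sql
CREATE TABLE logs (
  id  INT,
  msg STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'   -- column separator
LINES TERMINATED BY '\n'    -- row separator
STORED AS TEXTFILE;         -- how file data is read

-- Loading just moves the file into the table's HDFS directory;
-- no format conversion, no scan of the data (points 2 and 3).
LOAD DATA INPATH '/data/logs.tsv' INTO TABLE logs;
```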

Summary: Hive has the appearance of a SQL database, but the application scenarios are totally different; Hive is only suitable for statistical analysis of batch data.

1.5、Hive Data storage

1、All Hive data is stored in HDFS. There is no special data storage format (Text, SequenceFile, ParquetFile, ORC, RCFile, and other formats are supported).

2、You only need to tell Hive the column and row separators in the data, and Hive can parse it.

3、Hive includes the following data models: DB, Table, External Table, Partition, and Bucket.

  • db: appears in HDFS as a folder under the ${hive.metastore.warehouse.dir} directory
  • table: appears in HDFS as a folder under the db directory
  • external table: similar to table, but its data can be stored at any specified path
  • partition: appears in HDFS as a subdirectory under the table directory
  • bucket: appears in HDFS as multiple files under the same table directory, split by hash
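A sketch of how these models map onto HDFS paths (the database, table, and column names are hypothetical, and the paths in the comments assume the default warehouse layout):

```sql
CREATE DATABASE mydb;
-- HDFS: ${hive.metastore.warehouse.dir}/mydb.db/

CREATE TABLE mydb.orders (id INT, amount DOUBLE)
PARTITIONED BY (dt STRING)
CLUSTERED BY (id) INTO 4 BUCKETS;
-- table:     .../mydb.db/orders/
-- partition: .../mydb.db/orders/dt=2021-01-22/
-- buckets:   4 files per partition, split by hash(id)
```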

1.6、Hive installation and deployment

Here we choose the first machine as the one on which to install Hive.

Friendly reminder: Hive's functionality is built on MapReduce and HDFS, so the prerequisite for Hive to work is that MapReduce and HDFS work normally!

1.6.1 Installation

1.6.1.1、Using Hive directly with the embedded Derby metastore:

1、Unpack Hive

cd /export/softwares
tar -zxvf hive-1.1.0-cdh5.14.0.tar.gz -C ../servers/

2、Start bin/hive directly

cd ../servers/
cd hive-1.1.0-cdh5.14.0/
bin/hive
hive> create database mytest;

When we create a database with Hive on one node and then run Hive on another node, we find it does not exist! The data on each node is not consistent!

Shortcoming: after installing Hive in multiple places, each Hive keeps its own set of metadata; everyone's databases and tables are not unified.

Because Hive's metadata cannot be unified this way, this setup is basically "useless"! Therefore, we need the following approach to share metadata across nodes!!!

1.6.1.2、Using MySQL to share Hive metadata:

MySQL database installation

  1. Install the MySQL packages online:
yum install mysql mysql-server mysql-devel
  2. Start the MySQL service and enable it at boot:
/etc/init.d/mysqld start
chkconfig mysqld on
  3. Enter MySQL and grant privileges:
use mysql;

Configure remote connections:

grant all privileges on *.* to 'root'@'%' identified by '123456' with grant option;

Refresh: flush privileges;

  4. Set the username and password for the root user to connect to MySQL:
update user set password=password('123456') where user='root';

Refresh: flush privileges;

Modify Hive's configuration files

  • Modify hive-env.sh

Add our Hadoop environment variables:

cd /export/servers/hive-1.1.0-cdh5.14.0/conf
cp hive-env.sh.template hive-env.sh
vim hive-env.sh

The specific modifications in the file are as follows:

HADOOP_HOME=/export/servers/hadoop-2.6.0-cdh5.14.0
# Hive Configuration Directory can be controlled by:
export HIVE_CONF_DIR=/export/servers/hive-1.1.0-cdh5.14.0/conf

  • Modify hive-site.xml
cd /export/servers/hive-1.1.0-cdh5.14.0/conf
vim hive-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://node01:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>123456</value>
</property>
<property>
<name>hive.cli.print.current.db</name>
<value>true</value>
</property>
<property>
<name>hive.cli.print.header</name>
<value>true</value>
</property>
<property>
<name>hive.server2.thrift.bind.host</name>
<value>node01</value>
</property>
<!--
<property>
<name>hive.metastore.uris</name>
<value>thrift://node03.hadoop.com:9083</value>
</property>
-->
</configuration>

In the configuration file, node01 is the hostname alias of this node; if yours is different, please modify it accordingly!

Upload the MySQL JDBC driver jar

Then comes the most important step: upload the MySQL JDBC driver jar into Hive's lib directory.

cd /export/servers/hive-1.1.0-cdh5.14.0/lib

Upload mysql-connector-java-5.1.38.jar to this directory.
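Once the driver is in place, you can verify that metadata is really shared. A quick check, assuming the configuration above: create a database from one node and list databases from another node that uses the same hive-site.xml; both should see it, because the metadata now lives in the MySQL `hive` database rather than a node-local Derby store (the second hostname is hypothetical).

```sql
-- on node01
hive> create database mytest;

-- on another node, e.g. node02, with the same hive-site.xml
hive> show databases;
-- mytest should now appear, since metadata is read from MySQL
```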

After a successful start, the effect is as shown:

That's all for this share. Don't forget to like and follow Xiaojun; in the future, Xiaojun will share more Hive tutorials, so stay tuned ヾ(๑╹◡╹)ノ"


Copyright notice
This article was created by [Homo sapiens]; please include the original link when reposting. Thanks.
https://javamana.com/2021/01/20210122180032575c.html
