Hive basic operation (updating continuously)

Homo sapiens 2021-01-22 18:08:19


In this post, Xiaojun shares the basics of operating Hive!

Basic operation of database

Create database

 create database [if not exists] myhive;

Note: the storage location of Hive tables is determined by the hive.metastore.warehouse.dir property in hive-site.xml, whose common default value is /user/hive/warehouse.
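As a sketch, the corresponding property entry in hive-site.xml looks like this (the value shown is the common default):

```xml
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
  <description>Default HDFS location for Hive table data</description>
</property>
```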

Create a database and specify its HDFS storage location:

create database myhive2 location '/myhive2';

Delete database

drop database myhive2;

This command can only delete an empty database; if the database contains tables, it raises an error!

Force database deletion

drop database myhive cascade;

This deletes the database together with all of its tables. Do not execute it casually; it is a dangerous operation.

View databases

show databases;

View details

# View basic database information
desc database myhive2;
# View more detailed database information
desc database extended myhive2;

Database switching

use myhive;    -- myhive is the database name

Modify the database

The core metadata of a database, including its name and location, cannot be changed, but the alter database command can be used to modify some of its properties.

# Modify the creation date of the database
alter database myhive2 set dbproperties('createtime'='20880611');

Hive field types when creating a table
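Hive's common primitive types include tinyint, smallint, int, bigint, float, double, decimal, string, boolean, timestamp and date, plus the complex types array, map and struct. A hypothetical table using several of them (the table and column names here are made up for illustration):

```sql
create table demo_types (
  id      int,
  name    string,
  score   double,
  active  boolean,
  born    date,
  tags    array<string>,          -- complex type: list of strings
  props   map<string, string>     -- complex type: key/value pairs
)
row format delimited fields terminated by '\t';
```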

Basic operations on data tables

Create a basic data table (internal table):

create table tableName( field1 type1, field2 type2 ) row format delimited fields terminated by 'char';

The fields terminated by clause specifies the field-to-field separator in the data, e.g. '\t', ',', '|' or another character.
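For example, a minimal internal table whose data files are tab-separated (the student table here is a hypothetical example):

```sql
create table student (
  s_id   string,
  s_name string
)
row format delimited fields terminated by '\t';
```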

Create external data tables :

create external table tableName( field1 type1, field2 type2 ) location 'path';

When creating an external table, you need to specify the storage path of the data, which is done with the LOCATION clause.
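A minimal sketch, assuming the data files live under /hivedatas/techer on HDFS (a hypothetical path, chosen to match the techer example used later in this post):

```sql
create external table techer (
  t_id   string,
  t_name string
)
row format delimited fields terminated by '\t'
location '/hivedatas/techer';
```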

Load data from the local file system into the table: load data local inpath 'file path' into table tableName;

Load data and overwrite the existing data: load data local inpath 'file path' overwrite into table tableName;

Load data from the HDFS file system into the table: load data inpath '/hivedatas/techer.csv' into table techer;

The difference between internal and external tables: dropping an internal table removes both the table's metadata and its data; dropping an external table removes only the metadata, while the data itself is kept.
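A quick sketch of the difference, assuming a hypothetical internal table t_in and external table t_ext:

```sql
drop table t_in;   -- internal table: metadata and HDFS data files are both removed
drop table t_ext;  -- external table: only metadata is removed; data files stay on HDFS
```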

Create a partition table :

A common enterprise partitioning rule: partition by day (one partition per day).

Partition table syntax: create table score (s_id string, c_id string, s_score int) partitioned by (month string) row format delimited fields terminated by '\t';

Create a table with multiple partitions: create table score2 (s_id string, c_id string, s_score int) partitioned by (year string, month string, day string) row format delimited fields terminated by '\t';

Load data into the partition table: load data local inpath '/export/servers/hivedatas/score.csv' into table score partition (month='201806');

Load data into a multi-partitioned table: load data local inpath '/export/servers/hivedatas/score.csv' into table score2 partition (year='2018', month='06', day='01');

View partitions: show partitions score;

Add a partition: alter table score add partition(month='201805');

Add multiple partitions at the same time: alter table score add partition(month='201804') partition(month='201803'); Note: after adding a partition, you can see a new folder under the table's directory in the HDFS file system.

Delete a partition: alter table score drop partition(month='201806'); Special note: the partition field must not be one of the table's existing columns!

Purpose: partitioning divides the data into regions, so queries do not need to scan irrelevant data, which speeds them up.
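For instance, with the score table partitioned by month as above, filtering on the partition column lets Hive read only the matching partition's directory instead of the whole table:

```sql
-- Only the month=201806 directory under the score table is scanned
select s_id, s_score
from score
where month = '201806';
```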

Create a bucket table :

Bucketing adds a further level of structure on top of the existing table structure.

Data is divided into buckets according to a specified field; put simply, rows are split by that field, so the data can be divided into multiple files according to it.

Enable Hive's bucket table feature: set hive.enforce.bucketing=true;

Set the number of buckets (reducers): set mapreduce.job.reduces=3;

Create the bucket table: create table course (c_id string, c_name string, t_id string) clustered by (c_id) into 3 buckets row format delimited fields terminated by '\t';

Important note: data can be loaded into a bucket table only via insert overwrite; hdfs dfs -put and load data cannot load it. So you must first create a normal table, then use insert overwrite with a query to load the normal table's data into the bucket table.

Create a normal table: create table course_common (c_id string, c_name string, t_id string) row format delimited fields terminated by '\t';

Load data into the normal table: load data local inpath '/export/servers/hivedatas/course.csv' into table course_common;

Load data into the bucket table via insert overwrite: insert overwrite table course select * from course_common cluster by (c_id);

Special note: the bucketing field must be an existing column of the table.
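One practical payoff of bucketing is efficient sampling: Hive's tablesample clause can read just some of the buckets instead of the whole table. Using the course table bucketed on c_id above:

```sql
-- Read only the 1st of the 3 buckets of course (bucketed on c_id)
select * from course tablesample(bucket 1 out of 3 on c_id);
```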

Bucketing logic: Hive hashes the value of the bucket field and takes the hash modulo the number of buckets; the remainder determines which bucket a row lands in.

That's all for this round of Hive basics. Xiaojun will be back with more content for you soon, so stay tuned! ε≡٩(๑>₃<)۶


Copyright notice
This article was written by [Homo sapiens]; please include the original link when reposting. Thanks.
https://javamana.com/2021/01/20210122180032364d.html
