MaterializeMySQL engine: the expressway from MySQL to ClickHouse

Huawei Cloud Developer Community, 2021-01-21 11:07:17

Abstract: the principle and practice of data synchronization from MySQL to ClickHouse.


Anyone familiar with MySQL knows that its master-slave data synchronization mechanism is mature and well established. What is remarkable is that ClickHouse, a popular big data analysis engine in recent years, can also be mounted as a MySQL replica, acting as a "coprocessor" for MySQL that provides efficient data analysis capabilities for OLAP scenarios. Earlier solutions were fairly direct: a third-party plugin converted all the operations executed on MySQL and then replayed them on the ClickHouse side to achieve data synchronization. Finally, in the second half of 2020, Yandex released the MaterializeMySQL engine to the ClickHouse community, supporting full and incremental real-time data synchronization from MySQL. The MaterializeMySQL engine currently supports MySQL 5.6/5.7/8.0 and is compatible with Delete/Update statements and most commonly used DDL operations.

Basic concepts

  • MySQL & ClickHouse

MySQL generally refers to the complete MySQL RDBMS, an open-source relational database management system currently owned by Oracle. With its continuously improving feature set and active open-source community, MySQL has attracted more and more enterprise and individual users.

ClickHouse is a distributed columnar database for OLAP scenarios, open-sourced by Yandex. ClickHouse offers real-time queries, a complete DBMS, and efficient data compression, and supports batch updates and high availability. In addition, ClickHouse is compatible with SQL syntax and works out of the box, among other advantages.

  • Row Store & Column Store

MySQL storage uses a row store: the data in a table is stored contiguously on the storage medium with the row as the logical storage unit. This layout suits random insert, delete, update, and point-query operations and is friendly to row-by-row access. But if a query only targets a few attributes of each row, a row store still has to traverse entire rows and filter out the target attributes, so when a table has many columns query efficiency is usually low. Optimizations such as indexes and caches can improve efficiency in OLTP scenarios, but they fall short in OLAP scenarios over massive data.

ClickHouse uses a column store: the data in a table is stored contiguously on the storage medium with the column as the logical storage unit. This layout lends itself to SIMD (Single Instruction, Multiple Data) processing, so query efficiency improves significantly, especially for tables with many attributes. In a column store, physically adjacent data usually share the same type, so the format is naturally suited to data compression and can achieve very high compression ratios.


Usage method

  • Deploy Master-MySQL

Enable BinLog in ROW mode.
Enable GTID mode: this solves the position-synchronization problem during MySQL master-slave switchover (a BinLog reset invalidates file positions).
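The original configuration snippet is missing; a minimal my.cnf sketch covering the two settings above (values such as the server id are illustrative):

```ini
[mysqld]
server-id                = 1
log-bin                  = mysql-bin   ; enable the binary log
binlog_format            = ROW         ; MaterializeMySQL requires ROW-format events
gtid_mode                = ON          ; GTID mode survives master-slave switchover
enforce_gtid_consistency = ON          ; required when gtid_mode is ON
```

Restart mysqld after changing these options so they take effect.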


  • Deploy Slave-ClickHouse

Build and install ClickHouse from the master branch.
GCC 10.2.0, CMake 3.15, and ninja 1.9.0 or later are recommended.

  • Create the database and table on Master-MySQL
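The original screenshot is missing; a sketch of what the MySQL-side statements might look like. Apart from the primary key runoob_id, which the article references later, the database, table, and column names are assumptions:

```sql
-- Executed on the MySQL master (names are illustrative)
CREATE DATABASE demo;

CREATE TABLE demo.runoob_tbl (
    runoob_id       INT UNSIGNED NOT NULL AUTO_INCREMENT,  -- primary key
    runoob_title    VARCHAR(100) NOT NULL,
    submission_date DATE,
    PRIMARY KEY (runoob_id)
);

INSERT INTO demo.runoob_tbl (runoob_title, submission_date)
VALUES ('MySQL-Tutorial', '2021-01-01');
```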


  • Create the MaterializeMySQL database on Slave-ClickHouse
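The original example is missing; a sketch of the ClickHouse-side statement, assuming the MySQL instance above is reachable at 192.168.1.1:3306 (address and credentials are placeholders):

```sql
-- MaterializeMySQL was experimental at the time and must be enabled first
SET allow_experimental_database_materialize_mysql = 1;

-- ENGINE = MaterializeMySQL('host:port', 'mysql database', 'user', 'password')
CREATE DATABASE demo_ch
ENGINE = MaterializeMySQL('192.168.1.1:3306', 'demo', 'mysql_user', 'mysql_password');
```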


Now you can see in ClickHouse the data that has been synchronized from MySQL:
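The screenshot of the result is missing; a sketch of how to verify the synchronization, using the table names assumed in the earlier examples:

```sql
USE demo_ch;
SHOW TABLES;                  -- the MySQL tables appear automatically
SELECT * FROM runoob_tbl;     -- rows inserted on the MySQL side show up here
```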


Working principle

  • BinLog Event

BinLog events in MySQL mainly fall into the following categories:


After a transaction is committed, MySQL converts the executed SQL into BinLog events and persists them to the BinLog file.

ClickHouse achieves data synchronization by consuming the BinLog, a process that mainly involves three aspects:

1. DDL compatibility: since ClickHouse and MySQL define data types differently, DDL statements need to be converted accordingly.

2. Delete/Update support: a _version field is introduced to carry version information.

3. Query filtering: a _sign field is introduced to mark whether a row is valid.

  • DDL operation

Compare a DDL statement executed on MySQL with the corresponding DDL statement executed on the ClickHouse side:


You can see that:

1. During DDL conversion, 2 hidden fields are added by default: _sign (-1 for delete, 1 for write) and _version (data version)
2. The table engine is set to ReplacingMergeTree by default, with _version as the column version
3. The primary key field runoob_id of the original DDL becomes the ClickHouse sorting key and partition key
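The screenshot of the converted statement is missing; based on the three points above, a sketch of what the generated ClickHouse-side DDL would roughly look like (the column list follows the MySQL table assumed earlier, and the partition expression is an assumption derived from the primary key):

```sql
-- Sketch of the DDL that MaterializeMySQL generates (names are illustrative)
CREATE TABLE demo_ch.runoob_tbl (
    runoob_id       UInt32,
    runoob_title    String,
    submission_date Nullable(Date),
    _sign    Int8   MATERIALIZED 1,       -- hidden field: -1 = delete, 1 = write
    _version UInt64 MATERIALIZED 1        -- hidden field: data version
)
ENGINE = ReplacingMergeTree(_version)     -- _version acts as the column version
PARTITION BY intDiv(runoob_id, 4294967)   -- partition key derived from the primary key
ORDER BY (runoob_id);                     -- primary key becomes the sorting key
```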

Many more DDL cases are handled, such as adding columns, indexes, and so on; the corresponding code lives under the Parsers/MySQL directory.

  • Delete/Update operation



You can see that ClickHouse also synchronizes update operations in real time.

  • Delete:


You can see that deleting the row with id 2 merely adds a record with _sign == -1; the row is not physically deleted.
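A sketch of how this can be observed, using the table assumed in the earlier examples. A plain SELECT goes through the engine's automatic validity filter on _sign, while selecting the hidden columns explicitly should reveal the delete marker:

```sql
-- Normal query: rows marked deleted are filtered out
SELECT * FROM demo_ch.runoob_tbl;

-- Inspect the hidden fields to see the _sign == -1 marker row itself
SELECT *, _sign, _version FROM demo_ch.runoob_tbl;
```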

  • Log playback

In MySQL master-slave replication, the slave node converts BinLog events into the corresponding SQL statements and simulates the master's writes. Similarly, traditional third-party plugins follow the BinLog consumption scheme of MySQL replication: they convert each event into a ClickHouse-compatible SQL statement and then execute (replay) it on ClickHouse. The whole execution path is long, and the performance overhead is usually high. In contrast, the MaterializeMySQL engine provides an internal data parsing and write-back scheme that hides the complex pipeline of third-party plugins: during playback, BinLog events are converted into the underlying Block structure and written directly into the underlying storage engine, which is close to physical replication. This scheme is comparable to replaying BinLog events directly into InnoDB pages.

Synchronization strategies

  • Position Sync

Versions before v20.9.1 used position-based synchronization: each time ClickHouse consumed a batch of BinLog events, it recorded the position of the last event into a .metadata file:


So when ClickHouse restarts, it sends the pair {'mysql-bin.000003', 355005999} to the MySQL server via the replication protocol, and MySQL starts sending data from that position:


Existing problems :

If the MySQL server is a cluster serving through a VIP, and the host of the MaterializeMySQL database points to that VIP, then after a master-slave switchover within the cluster the {Binlog File, Binlog Position} pair is not necessarily accurate, because the BinLog can be reset.


To solve this problem, GTID synchronization mode went live after the v20.9.1 release, replacing position-based synchronization.

  • GTID Sync

GTID mode assigns a globally unique ID and sequence number to each event, so ClickHouse only needs to tell MySQL this GTID set. The .metadata file therefore becomes:


where 0857c24e-4755-11eb-888c-00155dfbdec7 is the UUID of the host that generated the events, and 1-783 is the range of events that have been synchronized.
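Put together, the GTID set recorded in .metadata follows MySQL's standard `source_uuid:interval` form:

```
0857c24e-4755-11eb-888c-00155dfbdec7:1-783
```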

So the process becomes :


Source code analysis

  • Overview

In the latest source code (v20.13.1.1), ClickHouse has refactored the source related to the DatabaseMaterializeMySQL engine and adapted it to GTID synchronization mode. The entry point of the whole ClickHouse project, the main function, is in /ClickHouse/programs/main.cpp; the main program dispatches tasks to the subprograms under the ClickHouse/programs directory. This analysis focuses on the workflow of the MaterializeMySQL engine on the server side.

  • Source directory

The main source paths related to MaterializeMySQL:


  • The main process of the server

ClickHouse uses the POCO network library to handle network requests. The processing logic for client connections lives in the handler methods of ClickHouse/src/Server/*Handler.cpp. Taking TCP as an example, leaving out the handshake, context initialization, and exception handling code, the main logic can be abstracted as:


  • Data synchronization preprocessing

The SQL sent by the client is processed in the executeQuery function, whose main logic simplifies to:


There are three main points:

1. Parse the SQL statement and generate an AST
2. The InterpreterFactory factory class generates an interpreter based on the AST
3. Call the interpreter's execute() method

Following up on the third point, let's see what InterpreterCreateQuery::execute() does:


The comment here is clear: this mainly executes CREATE or ATTACH DATABASE. Continue into the createDatabase() function:


At this point, the task has effectively been dispatched to DatabaseMaterializeMySQL for handling. Next, follow the loadStoredObjects function:


Then follow the execution function bound in startSynchronization():


  • Full synchronization

MaterializeMySQLSyncThread::prepareSynchronized is responsible for the DDL and full synchronization; the main flow simplifies to:


As a MySQL replica, ClickHouse performs a series of preprocessing steps on the MySQL side in the MaterializeMetadata constructor:

1. Flush and close the open tables, while placing a read lock on them and starting a transaction
2. TablesCreateQuery obtains the MySQL-side table creation statements via SHOW CREATE TABLE
3. Release the table lock once the table creation statements have been obtained

Continuing down, execution reaches the metadata.transaction() function, whose call passes in an anonymous function as an argument. Following it, you will find that the anonymous function is eventually executed, namely the cleanOutdatedTables and dumpDataForTables functions. Let's look at dumpDataForTables:


Continuing to trace tryToExecuteQuery, it calls into the executeQueryImpl() function. This function was mentioned above, but this time the context has changed, so the generated interpreter changes, and this is where the DDL conversion and dump table operations take place:


Here InterpreterFactory returns an InterpreterExternalDDLQuery. Step into its execute function to see what it does:


Next, look at what getIdentifierName(arguments[1])).execute() does:


A further look at how InterpreterImpl::getRewrittenQueries converts the DDL:


After the DDL conversion, the new DDL statement is executed and the table is created; then control returns to dumpDataForTables:


  • Incremental synchronization

Remember the execution function bound in startSynchronization()? The full synchronization analysis took place in prepareSynchronized(); what about incremental updates?


You can see that the while statement contains a listener for binlog_event, which watches for BinLog changes on the MySQL side. Once the MySQL side performs a relevant operation, its BinLog is updated and triggers a binlog_event; incremental updates are mainly completed here.



The MaterializeMySQL engine is a major official ClickHouse feature of 2020. Because this feature is badly needed in production environments and has only recently gone live, the whole module is iterating rapidly, and many capabilities remain to be perfected, for example checking the status of the replication process and verifying data consistency. If you are interested, refer to the 2021-Roadmap on GitHub, which tracks the community's near-term plans. Corrections to any misunderstandings in the above are welcome.



This article is shared from the Huawei cloud community post "The expressway from MySQL to ClickHouse: the MaterializeMySQL engine", original author: FavonianKong.


This article was created by the Huawei cloud developer community; please include a link to the original when reposting.
