MaterializeMySQL engine: the expressway from MySQL to ClickHouse

Huawei Cloud Developer Community, 2021-01-21 12:15:02


Abstract: Principles and practice of data synchronization from MySQL to ClickHouse

Introduction

Those familiar with MySQL know that its master-slave data synchronization mechanism is very mature. What is remarkable is that ClickHouse, a big data analysis engine that has become popular in recent years, can also be mounted as a MySQL slave, acting as a MySQL "coprocessor" that provides efficient data analysis capabilities for OLAP scenarios. Earlier solutions were rather blunt: a third-party plug-in transformed all the operations executed on MySQL and then replayed them one by one on the ClickHouse side to achieve data synchronization. Finally, in the second half of 2020, Yandex released the MaterializeMySQL engine in the ClickHouse community, supporting full and incremental real-time data synchronization from MySQL. The MaterializeMySQL engine currently supports MySQL 5.6/5.7/8.0 and is compatible with Delete/Update statements and most commonly used DDL operations.

Basic concepts

  • MySQL & ClickHouse

MySQL generally refers to the complete MySQL RDBMS, an open-source relational database management system currently owned by Oracle. With continuously improving features and an active open-source community, MySQL has attracted more and more enterprise and individual users.

ClickHouse is a distributed columnar database for OLAP scenarios, open-sourced by Yandex. ClickHouse offers real-time queries, a complete DBMS, and efficient data compression, and supports batch updates and high availability. In addition, ClickHouse is compatible with SQL syntax and has many other advantages, such as working out of the box.

  • Row Store & Column Store

MySQL uses row storage: the data in a table is stored contiguously on the storage medium with the row as the logical unit. This layout suits random insert, delete, update, and point-lookup operations, and is friendly to row-by-row access. However, if a query only touches a few attributes of each row, the row layout still has to traverse all rows and then filter out the target attributes; when a table has many columns, query efficiency is usually low. Optimizations such as indexes and caches improve efficiency in OLTP scenarios, but they fall short for OLAP scenarios over massive data.

ClickHouse uses column storage: the data in a table is stored contiguously on the storage medium with the column as the logical unit. This layout lends itself to SIMD (Single Instruction, Multiple Data) processing of data, and query efficiency improves markedly, especially when tables have many columns. Because physically adjacent values in a column usually share the same data type, column storage is also naturally suited to data compression and can achieve very high compression ratios.
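As a rough illustration (the table and columns below are hypothetical, not part of this article's example), an analytical query typically touches only a couple of columns of a wide table; a column store reads just those columns from disk, while a row store has to scan entire rows:

-- Hypothetical wide table user_events with dozens of attributes per row;
-- an OLAP-style query that only needs two of them:
SELECT toYYYYMM(event_date) AS month, avg(duration_ms) AS avg_duration
FROM user_events
GROUP BY month
ORDER BY month;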

Usage method

  • Deploy the MySQL master
    Enable BinLog in ROW mode
    Enable GTID mode: this solves the master-slave switchover problem of position-based synchronization (a BinLog reset invalidates the recorded position)
# Key settings in my.cnf
gtid_mode=ON
enforce_gtid_consistency=1
binlog_format=ROW
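After restarting MySQL, you can quickly verify that these settings have taken effect (the expected values are shown as comments):

SHOW VARIABLES LIKE 'gtid_mode';                 -- expect ON
SHOW VARIABLES LIKE 'enforce_gtid_consistency';  -- expect ON
SHOW VARIABLES LIKE 'binlog_format';             -- expect ROW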
  • Deploy the ClickHouse slave
    Get the ClickHouse master-branch code, then compile and install it
    GCC 10.2.0, CMake 3.15, and ninja 1.9.0 or above are recommended
  • Create the database and table on the MySQL master
CREATE DATABASE master_db;
use master_db;
CREATE TABLE IF NOT EXISTS `runoob_tbl`(
`runoob_id` INT UNSIGNED AUTO_INCREMENT,
`runoob_title` VARCHAR(100) NOT NULL,
`runoob_author` VARCHAR(40) NOT NULL,
`submission_date` DATE,
PRIMARY KEY ( `runoob_id` )
)ENGINE=InnoDB DEFAULT CHARSET=utf8;
# Insert a few rows of data
INSERT INTO runoob_tbl (runoob_title, runoob_author, submission_date) VALUES ("MySQL-learning", "Bob", NOW());
INSERT INTO runoob_tbl (runoob_title, runoob_author, submission_date) VALUES ("MySQL-learning", "Tim", NOW());
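The example below connects as root; in practice you may prefer a dedicated MySQL account for the synchronization. A hedged sketch (the account name and password are placeholders; the privilege list is roughly what such replication needs):

CREATE USER 'ch_sync'@'%' IDENTIFIED BY 'StrongPassword!';
GRANT SELECT, RELOAD, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'ch_sync'@'%';
FLUSH PRIVILEGES;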
  • Create the MaterializeMySQL database on the ClickHouse slave
# Enable the experimental MaterializeMySQL synchronization
SET allow_experimental_database_materialize_mysql=1;
# Create the slave database; the parameters are ('mysqld service address', 'database to synchronize', 'authorized account', 'password')
CREATE DATABASE slave_db ENGINE = MaterializeMySQL('192.168.6.39:3306', 'master_db', 'root', '3306123456');

Now you can see that ClickHouse already contains the data synchronized from MySQL:

DESKTOP:) select * from runoob_tbl;
SELECT *
FROM runoob_tbl
Query id: 6e2b5f3b-0910-4d29-9192-1b985484d7e3
┌─runoob_id─┬─runoob_title───┬─runoob_author─┬─submission_date─┐
│ 1 │ MySQL-learning │ Bob │ 2021-01-06 │
└───────────┴────────────────┴───────────────┴─────────────────┘
┌─runoob_id─┬─runoob_title───┬─runoob_author─┬─submission_date─┐
│ 2 │ MySQL-learning │ Tim │ 2021-01-06 │
└───────────┴────────────────┴───────────────┴─────────────────┘
2 rows in set. Elapsed: 0.056 sec.

Working principle

  • BinLog Event

BinLog Events in MySQL mainly fall into the following categories:

1. MYSQL_QUERY_EVENT -- DDL
2. MYSQL_WRITE_ROWS_EVENT -- insert
3. MYSQL_UPDATE_ROWS_EVENT -- update
4. MYSQL_DELETE_ROWS_EVENT -- delete

After a transaction is committed, MySQL processes the executed SQL into BinLog Events and persists them to the BinLog file.
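You can peek at these events on the MySQL side (the binlog file name below is the one that appears later in this article; use whatever file your server is currently writing):

SHOW BINLOG EVENTS IN 'mysql-bin.000003' LIMIT 10;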

ClickHouse achieves data synchronization by consuming the BinLog, and the process mainly has to address three aspects:

  1. DDL compatibility: ClickHouse and MySQL define data types differently, so DDL statements need to be converted accordingly
  2. Delete/Update support: a _version field is introduced to track version information
  3. Query filtering: a _sign field is introduced to mark whether a row is still valid
  • DDL operation

Compare the DDL statement on the MySQL side with the DDL statement executed on the ClickHouse side:

mysql> show create table runoob_tbl\G;
*************************** 1. row ***************************
Table: runoob_tbl
Create Table: CREATE TABLE `runoob_tbl` (
`runoob_id` int unsigned NOT NULL AUTO_INCREMENT,
`runoob_title` varchar(100) NOT NULL,
`runoob_author` varchar(40) NOT NULL,
`submission_date` date DEFAULT NULL,
PRIMARY KEY (`runoob_id`)
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
---------------------------------------------------------------
cat /metadata/slave_db/runoob_tbl.sql
ATTACH TABLE _ UUID '14dbff59-930e-4aa8-9f20-ccfddaf78077'
(
`runoob_id` UInt32,
`runoob_title` String,
`runoob_author` String,
`submission_date` Nullable(Date),
`_sign` Int8 MATERIALIZED 1,
`_version` UInt64 MATERIALIZED 1
)
ENGINE = ReplacingMergeTree(_version)
PARTITION BY intDiv(runoob_id, 4294967)
ORDER BY tuple(runoob_id)
SETTINGS index_granularity = 8192

You can see that:

1. During the DDL conversion, two hidden fields are added by default: _sign (-1 for delete, 1 for write) and _version (data version)
2. The table engine is set to ReplacingMergeTree by default, with _version as the version column
3. The primary key field runoob_id from the original DDL becomes the ClickHouse sorting key and partition key

There is much more DDL handling, such as adding columns, indexes, and so on; the corresponding code is under the Parsers/MySQL directory.
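As a quick illustration of that DDL handling (a hedged example with a hypothetical column name; whether a particular statement is supported depends on the engine version), adding a column on the MySQL side is converted and replayed on the ClickHouse table automatically:

# MySQL side; the new column should then appear in slave_db.runoob_tbl without any manual action on ClickHouse
ALTER TABLE runoob_tbl ADD COLUMN runoob_views INT NOT NULL DEFAULT 0;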

  • Delete/Update operation

Update:

# MySQL side:
UPDATE runoob_tbl set runoob_author='Mike' where runoob_id=2;
mysql> select * from runoob_tbl;
+-----------+----------------+---------------+-----------------+
| runoob_id | runoob_title | runoob_author | submission_date |
+-----------+----------------+---------------+-----------------+
| 1 | MySQL-learning | Bob | 2021-01-06 |
| 2 | MySQL-learning | Mike | 2021-01-06 |
+-----------+----------------+---------------+-----------------+
2 rows in set (0.00 sec)
----------------------------------------------------------------
# ClickHouse side:
DESKTOP:) select *, _sign, _version from runoob_tbl order by runoob_id;
SELECT
*,
_sign,
_version
FROM runoob_tbl
ORDER BY runoob_id ASC
Query id: c5f4db0a-eff6-4b49-a429-b55230c26301
┌─runoob_id─┬─runoob_title───┬─runoob_author─┬─submission_date─┬─_sign─┬─_version─┐
│ 1 │ MySQL-learning │ Bob │ 2021-01-06 │ 1 │ 2 │
│ 2 │ MySQL-learning │ Mike │ 2021-01-06 │ 1 │ 4 │
│ 2 │ MySQL-learning │ Tim │ 2021-01-06 │ 1 │ 3 │
└───────────┴────────────────┴───────────────┴─────────────────┴───────┴──────────┘
3 rows in set. Elapsed: 0.003 sec.

You can see that the update operation is also synchronized to ClickHouse in real time. Note that the old row (Tim) is still present with a lower _version; ReplacingMergeTree keeps only the row with the highest _version per sorting key when parts are merged or when the table is queried with FINAL.

  • Delete:
# MySQL side
mysql> DELETE from runoob_tbl where runoob_id=2;
mysql> select * from runoob_tbl;
+-----------+----------------+---------------+-----------------+
| runoob_id | runoob_title | runoob_author | submission_date |
+-----------+----------------+---------------+-----------------+
| 1 | MySQL-learning | Bob | 2021-01-06 |
+-----------+----------------+---------------+-----------------+
1 row in set (0.00 sec)
----------------------------------------------------------------
# ClickHouse side
DESKTOP:) select *, _sign, _version from runoob_tbl order by runoob_id;
SELECT
*,
_sign,
_version
FROM runoob_tbl
ORDER BY runoob_id ASC
Query id: e9cb0574-fcd5-4336-afa3-05f0eb035d97
┌─runoob_id─┬─runoob_title───┬─runoob_author─┬─submission_date─┬─_sign─┬─_version─┐
│ 1 │ MySQL-learning │ Bob │ 2021-01-06 │ 1 │ 2 │
└───────────┴────────────────┴───────────────┴─────────────────┴───────┴──────────┘
┌─runoob_id─┬─runoob_title───┬─runoob_author─┬─submission_date─┬─_sign─┬─_version─┐
│ 2 │ MySQL-learning │ Mike │ 2021-01-06 │ -1 │ 5 │
└───────────┴────────────────┴───────────────┴─────────────────┴───────┴──────────┘
┌─runoob_id─┬─runoob_title───┬─runoob_author─┬─submission_date─┬─_sign─┬─_version─┐
│ 2 │ MySQL-learning │ Mike │ 2021-01-06 │ 1 │ 4 │
│ 2 │ MySQL-learning │ Tim │ 2021-01-06 │ 1 │ 3 │
└───────────┴────────────────┴───────────────┴─────────────────┴───────┴──────────┘
4 rows in set. Elapsed: 0.002 sec.

You can see that deleting the row with id 2 merely adds a record with _sign == -1; the data is not physically deleted.
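To read only the rows that are currently valid from such a table, you can collapse versions with FINAL and filter out the delete markers; a manual query sketch over the synced table:

DESKTOP:) SELECT * FROM runoob_tbl FINAL WHERE _sign > 0 ORDER BY runoob_id;

FINAL makes ReplacingMergeTree keep only the highest _version per sorting key, and the _sign filter then drops rows whose latest version is a delete marker, so only the row with runoob_id = 1 remains.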

  • Log playback

In MySQL master-slave synchronization, the slave node converts BinLog Events into the corresponding SQL statements and replays them, simulating the master's writes. Similarly, traditional third-party plug-ins follow the BinLog consumption scheme of MySQL's master-slave mode: they convert each Event into a SQL statement that ClickHouse can accept and then execute (replay) it on ClickHouse. The whole execution chain is long, however, and the performance overhead is usually significant. In contrast, the MaterializeMySQL engine provides an internal data parsing and write-back scheme that hides the complex chain of a third-party plug-in: during replay, BinLog Events are converted into the underlying Block structure and written directly into the underlying storage engine, which is close to physical replication. This can be compared to replaying BinLog Events directly into InnoDB Pages.

Synchronization strategies

  • Position-based synchronization

Versions before v20.9.1 were based on binlog position synchronization: each time ClickHouse consumes a batch of BinLog Events, it records the Event's position information into a .metadata file:

[FavonianKong@Wsl[20:42:37]slave_db]
$ cat ./.metadata
Version: 2
Binlog File: mysql-bin.000003
Binlog Position:355005999
Data Version: 5

So when ClickHouse restarts, it sends the tuple {'mysql-bin.000003', 355005999} to the MySQL server over the replication protocol, and MySQL starts sending data from that position:

s1> ClickHouse sends the position information {'mysql-bin.000003', 355005999} to MySQL
s2> MySQL finds the local mysql-bin.000003 file, seeks to offset 355005999, reads the next Event, and sends it to ClickHouse
s3> ClickHouse receives the binlog event and completes the synchronization operation
s4> ClickHouse updates the position in .metadata
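On the MySQL side, the counterpart of this recorded position (the current binlog file and offset, plus the executed GTID set if GTID is enabled) can be inspected with:

SHOW MASTER STATUS;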

Existing problems:

If the MySQL server is a cluster serving through a VIP, and the MaterializeMySQL database was created with its host pointing to that VIP, then after a master-slave switchover in the cluster the {Binlog File, Binlog Position} tuple is not necessarily accurate, because a BinLog reset may have been performed:

s1> ClickHouse sends {'mysql-bin.000003', 355005999} to the cluster's new MySQL master
s2> The new master has no local mysql-bin.000003 file, because it executed a reset master operation and its binlog file is mysql-bin.000001
s3> Replication fails

To solve this problem, the GTID synchronization mode went live after the v20.9.1 release, replacing position-based synchronization.

  • GTID Sync

GTID mode assigns a globally unique ID and sequence number to each event, so ClickHouse only needs to tell MySQL the GTID set it has already processed. The .metadata file therefore becomes:

[FavonianKong@Wsl[21:30:19]slave_db]
Version: 2
Binlog File: mysql-bin.000003
Executed GTID: 0857c24e-4755-11eb-888c-00155dfbdec7:1-783
Binlog Position:355005999
Data Version: 5

where 0857c24e-4755-11eb-888c-00155dfbdec7 is the UUID of the host that generated the events, and 1-783 is the range of events that have been synchronized.

So the process becomes:

s1> ClickHouse sends GTID:0857c24e-4755-11eb-888c-00155dfbdec7:1-783 to MySQL
s2> MySQL locates the corresponding local position based on the GTID, reads the next Event, and sends it to ClickHouse
s3> ClickHouse receives the BinLog Event and completes the synchronization operation
s4> ClickHouse updates the GTID information in .metadata
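On the MySQL side, the executed GTID set that this range is matched against can be inspected with:

SELECT @@GLOBAL.gtid_executed;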

Source code analysis

  • Overview

In the latest source code (v20.13.1.1), ClickHouse has officially refactored the source code related to the DatabaseMaterializeMySQL engine and adapted it to the GTID synchronization mode. The entry point of the whole ClickHouse project, the main function, is in the /ClickHouse/programs/main.cpp file; the main program dispatches tasks to the subprograms under the ClickHouse/programs directory. This analysis focuses on the workflow of the MaterializeMySQL engine on the server side.

  • Source directory

The main source paths related to MaterializeMySQL are:

ClickHouse/src/Databases/MySQL   // MaterializeMySQL database engine implementation
ClickHouse/src/Storages/         // table engine implementations
ClickHouse/src/Core/MySQL*       // replication-related code
ClickHouse/src/Interpreters/     // interpreter implementations; SQL rewriting is also handled here
ClickHouse/src/Parsers/MySQL     // parser implementation; DDL parsing and related handling live here
  • The main process of the server

ClickHouse uses the POCO network library to handle network requests. The processing logic for client connections is in the handler methods of ClickHouse/src/Server/*Handler.cpp. Taking TCP as an example, with the handshake, context initialization, and exception handling code stripped away, the main logic can be abstracted as:

// ClickHouse/src/Server/TCPHandler.cpp
TCPHandler.runImpl()
{
...
while(true) {
...
if (!receivePacket()) // line 184
continue;
/// Processing Query // line 260
state.io = executeQuery(state.query, *query_context, ...);
...
}
}
  • Data synchronization preprocessing

The SQL sent by the client is processed in the executeQuery function; the main logic, simplified, is as follows:

// ClickHouse/src/Interpreters/executeQuery.cpp
static std::tuple executeQueryImpl(...)
{
...
// line 354, the parser is configurable
ast = parseQuery(...);
...
// line 503, generate the interpreter from the syntax tree
auto interpreter = InterpreterFactory::get(ast, context, ...);
...
// line 525, the interpreter executes and returns the result
res = interpreter->execute();
...
}

There are three main steps:

1. Parse the SQL statement and generate the syntax tree (AST)
2. The InterpreterFactory factory class generates an interpreter based on the AST
3. interpreter->execute()

Following up on the third point, let's see what InterpreterCreateQuery::execute() does:

// ClickHouse/src/Interpreters/InterpreterCreateQuery.cpp
BlockIO InterpreterCreateQuery::execute()
{
...
// CREATE | ATTACH DATABASE
if (!create.database.empty() && create.table.empty())
// line 1133, when using MaterializeMySQL, this branch is taken to create the database
return createDatabase(create);
}

The comment here is clear: this mainly executes CREATE or ATTACH DATABASE. Let's continue into the createDatabase() function:

// ClickHouse/src/Interpreters/InterpreterCreateQuery.cpp
BlockIO InterpreterCreateQuery::createDatabase(ASTCreateQuery & create)
{
...
// line 208, based on the ASTCreateQuery parameters, obtain the database object from the DatabaseFactory factory
// for details see the DatabasePtr DatabaseFactory::getImpl() function
DatabasePtr database = DatabaseFactory::get(create, metadata_path, ...);
...
// line 253, polymorphic call; when MaterializeMySQL is used,
// the get function above returns a DatabaseMaterializeMySQL
database->loadStoredObjects(context, ...);
}

At this point the task has effectively been dispatched to DatabaseMaterializeMySQL. Next, follow the loadStoredObjects function:

//ClickHouse/src/Databases/MySQL/DatabaseMaterializeMySQL.cpp
template <typename Base>
void DatabaseMaterializeMySQL::loadStoredObjects(Context & context, ...)
{
Base::loadStoredObjects(context, has_force_restore_data_flag, force_attach);
try
{
// line 87, the materialize synchronization thread is started here
 materialize_thread.startSynchronization();
started_up = true;
}
catch (...)
...
}

Follow the execution function bound by startSynchronization():

// ClickHouse/src/Databases/MySQL/MaterializeMySQLSyncThread.cpp
void MaterializeMySQLSyncThread::synchronization()
{
...
// full synchronization is performed in prepareSynchronized()
if (std::optional<MaterializeMetadata> metadata = prepareSynchronized())
{
while (!isCancelled())
{
UInt64 max_flush_time = settings->max_flush_data_time;
BinlogEventPtr binlog_event = client.readOneBinlogEvent(...);
{
// incremental sync: listen for binlog events
if (binlog_event)
onEvent(buffers, binlog_event, *metadata);
}
}
}
...
}
  • Full synchronization

MaterializeMySQLSyncThread::prepareSynchronized is responsible for DDL and full synchronization; the main process, simplified, is as follows:

// ClickHouse/src/Databases/MySQL/MaterializeMySQLSyncThread.cpp
std::optional<MaterializeMetadata> MaterializeMySQLSyncThread::prepareSynchronized()
{
while (!isCancelled())
{
...
try
{
// the constructor obtains the MySQL state and the table creation statements on the MySQL side
 MaterializeMetadata metadata(connection, ...);
// line 345, DDL-related conversion
metadata.transaction(position, [&]()
{
cleanOutdatedTables(database_name, global_context);
dumpDataForTables(connection, metadata, global_context, ...);
});
return metadata;
}
...
}
}

Acting as a MySQL slave node, ClickHouse performs a series of preprocessing steps on the MySQL side in the MaterializeMetadata constructor (a rough sketch of the corresponding SQL follows the list below):

1. Close open tables, put a read lock on them, and start a transaction
2. TablesCreateQuery obtains the table creation statements on the MySQL side via SHOW CREATE TABLE
3. Release the table lock once the table creation statements have been obtained
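On the MySQL side these steps roughly correspond to statements like the following (a sketch only; the exact statements the engine issues may differ):

FLUSH TABLES WITH READ LOCK;              -- close open tables and take a read lock
SHOW CREATE TABLE master_db.runoob_tbl;   -- capture the table creation statement
UNLOCK TABLES;                            -- release the lock afterwards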

Continuing on, execution reaches the metadata.transaction() function, which is passed an anonymous function as an argument. Following that call, you will find the anonymous function is eventually executed, namely the cleanOutdatedTables and dumpDataForTables functions. Let's look at dumpDataForTables:

// ClickHouse/src/Databases/MySQL/MaterializeMySQLSyncThread.cpp
static inline void dumpDataForTables(...)
{
...
// line 293, the table creation statement is executed here
 tryToExecuteQuery(..., query_context, database_name, comment);
}

Continuing to trace tryToExecuteQuery, it ends up calling the executeQueryImpl() function mentioned above; this time, however, the context has changed, so a different interpreter is generated, and this is where the DDL conversion and table dump take place:

// ClickHouse/src/Interpreters/executeQuery.cpp
static std::tuple executeQueryImpl(...)
{
...
// line 354, the parser is configurable
ast = parseQuery(...);
...
// line 503, the context information differs from before, so the generated interpreter is also different
auto interpreter = InterpreterFactory::get(ast, context, ...);
...
// line 525, the interpreter executes and returns the result
res = interpreter->execute();
...
}

Here InterpreterFactory returns an InterpreterExternalDDLQuery. Step into its execute function to see what it does:

// ClickHouse/src/Interpreters/InterpreterExternalDDLQuery.cpp
BlockIO InterpreterExternalDDLQuery::execute()
{
...
if (external_ddl_query.from->name == "MySQL")
{
#ifdef USE_MYSQL
...
// line 61, when DDL is executed during the full copy, this branch is taken
else if (...->as())
return MySQLInterpreter::InterpreterMySQLCreateQuery(
external_ddl_query.external_ddl, context, getIdentifierName(arguments[0]),
getIdentifierName(arguments[1])).execute();
#endif
}
...
return BlockIO();
}

Next, look at what the execute() call does:

// ClickHouse/src/Interpreters/MySQL/InterpretersMySQLDDLQuery.h
class InterpreterMySQLDDLQuery : public IInterpreter
{
public:
...
BlockIO execute() override
{
...
// line 68, rewrite the DDL statements obtained from MySQL
ASTs rewritten_queries = InterpreterImpl::getRewrittenQueries(
query, context, mapped_to_database, mysql_database);
// line 70, execute the rewritten DDL statements here
for (const auto & rewritten_query : rewritten_queries)
executeQuery(..., queryToString(rewritten_query), ...);
return BlockIO{};
}
...
}

Let's look further at how InterpreterImpl::getRewrittenQueries transforms the DDL:

// ClickHouse/src/Interpreters/MySQL/InterpretersMySQLDDLQuery.cpp
ASTs InterpreterCreateImpl::getRewrittenQueries(...)
{
...
// check whether a primary key exists; if not, throw an exception directly
if (primary_keys.empty())
throw Exception("cannot be materialized, no primary keys.", ...);
...
// add the _sign and _version columns
auto sign_column_name = getUniqueColumnName(columns_name_and_type, "_sign");
auto version_column_name = getUniqueColumnName(columns_name_and_type, "_version");
// here the table engine is quietly changed to ReplacingMergeTree
storage->set(storage->engine, makeASTFunction("ReplacingMergeTree", ...));
...
return ASTs{rewritten_query};
}

After the DDL conversion, the new DDL statement is executed and the table is created; then control returns to dumpDataForTables:

// ClickHouse/src/Databases/MySQL/MaterializeMySQLSyncThread.cpp
static inline void dumpDataForTables(...)
{
...
// line 293, the table creation statement is executed here
 tryToExecuteQuery(..., query_context, database_name, comment);
...
// line 29, the dumped data is read through a MySQLBlockInputStream
 MySQLBlockInputStream input(connection, ...);
}
  • Incremental synchronization

Remember the execution function bound by startSynchronization()? Full synchronization is handled in prepareSynchronized(), so where do incremental updates happen?

// ClickHouse/src/Databases/MySQL/MaterializeMySQLSyncThread.cpp
void MaterializeMySQLSyncThread::synchronization()
{
...
// full synchronization is performed in prepareSynchronized()
if (std::optional<MaterializeMetadata> metadata = prepareSynchronized())
{
while (!isCancelled())
{
UInt64 max_flush_time = settings->max_flush_data_time;
BinlogEventPtr binlog_event = client.readOneBinlogEvent(...);
{
// incremental sync: listen for binlog events
if (binlog_event)
onEvent(buffers, binlog_event, *metadata);
}
}
}
...
}

You can see that inside the while loop there is a listener for binlog_event that watches for BinLog changes on the MySQL side. Once the MySQL side performs a relevant operation, its BinLog is updated, which produces a binlog_event; incremental updates are mainly handled here.

// ClickHouse/src/Databases/MySQL/MaterializeMySQLSyncThread.cpp
void MaterializeMySQLSyncThread::onEvent(Buffers & buffers, const BinlogEventPtr & receive_event, MaterializeMetadata & metadata)
{
// Incremental synchronization is implemented by listening for binlog events; four kinds of events are currently supported:
// MYSQL_WRITE_ROWS_EVENT, MYSQL_UPDATE_ROWS_EVENT, MYSQL_DELETE_ROWS_EVENT and MYSQL_QUERY_EVENT
// For the specific handling, see the corresponding onHandle functions; they are not analyzed in detail here
if (receive_event->type() == MYSQL_WRITE_ROWS_EVENT){...}
else if (receive_event->type() == MYSQL_UPDATE_ROWS_EVENT){...}
else if (receive_event->type() == MYSQL_DELETE_ROWS_EVENT){...}
else if (receive_event->type() == MYSQL_QUERY_EVENT){...}
else {/* MYSQL_UNHANDLED_EVENT*/}
}

Summary

The MaterializeMySQL engine is one of ClickHouse's major official features for 2020. Because the feature is badly needed in production environments and has only recently gone live, the whole module is iterating rapidly, and many capabilities remain to be perfected, for example, checking the status of the replication process and verifying data consistency. If you are interested, refer to the 2021-Roadmap on GitHub, which is updated with the community's recent plans. Corrections to any misunderstandings in the above are welcome.

References

ClickHouse community documentation

ClickHouse community source code

MySQL real-time replication and implementation

MaterializeMySQL engine analysis

This article is shared from the Huawei Cloud community post "The expressway from MySQL to ClickHouse: the MaterializeMySQL engine", original author: FavonianKong.

 


