Author | Kamil Charłampowicz
Translator | The King
Planning | Tina
How do you use Kafka to successfully migrate a SQL database with more than 2 billion records?
One of our customers ran into a MySQL problem: they had a large table with more than 2 billion records, and it kept growing. Without replacing the infrastructure, they risked running out of disk space, which could ultimately have destroyed the whole application. A table this large brought other problems as well: poor query performance, poor schema design, and so many records that there was no simple way to analyze the data. We wanted a solution that would solve these problems without introducing a costly maintenance window that would take the application offline and leave customers unable to use the system. In this article I describe our solution, but I also want to stress that it is not a recommendation: different situations call for different solutions. Still, someone may be able to draw some valuable insights from ours.
Is a cloud solution the answer?
After evaluating several alternatives, we decided to move the data to the cloud, and we chose Google BigQuery. We chose it because our customer preferred Google's cloud solutions, their data was structured and suitable for analysis, and they did not need low latency, so BigQuery seemed like a perfect fit. After testing, we were confident that BigQuery was a good enough solution to meet the customer's needs, letting them use analytics tools to analyze the data within seconds. However, as you may already know, running a large number of queries against BigQuery can get expensive, so we wanted to avoid querying it directly from the application and use BigQuery only as an analytics and backup tool.
Streaming data to the cloud
When it comes to streaming data, there are many ways to do it; we chose a very simple approach. We used Kafka, because we already use it extensively in our projects, so there was no need to introduce another solution. Kafka gave us another advantage: we could push all the data to Kafka, keep it there for a while, and then transfer it to the destination without adding a lot of load to the MySQL cluster. If the BigQuery introduction failed (for example, if executing the required queries turned out to be too expensive or too difficult), this approach would give us a fallback. It was an important decision that brought us a lot of benefits at a small cost.
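As an illustration of using Kafka as a temporary buffer, here is a minimal sketch that creates a topic with an extended retention period, so records can sit in Kafka before being delivered to the destination. The topic name, partition count, replication factor, and the seven-day retention are assumptions made for the example, not values from the project.

```python
from kafka.admin import KafkaAdminClient, NewTopic

# Create a buffer topic whose messages are retained for seven days (assumed value),
# so the downstream consumer can be paused or replaced without losing data.
admin = KafkaAdminClient(bootstrap_servers="localhost:9092")
topic = NewTopic(
    name="mysql-records",            # hypothetical topic name
    num_partitions=6,
    replication_factor=3,
    topic_configs={"retention.ms": str(7 * 24 * 60 * 60 * 1000)},
)
admin.create_topics([topic])
```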
Streaming data from MySQL to Kafka
For streaming data from MySQL to Kafka, you might think of Debezium (https://debezium.io) or Kafka Connect. Both are good choices, but in our case there was no way to use them. The MySQL server version was too old for Debezium to support, and upgrading MySQL was not an option. We also could not use Kafka Connect, because the table lacked an auto-increment column, so Kafka Connect could not guarantee that no data would be lost in transit. We knew we could use timestamps instead, but that approach might lose some data, because the timestamp precision Kafka Connect uses when querying the data is lower than the precision defined in the table column. Of course, both solutions are good; if using them in your project causes no conflicts, I recommend them for streaming data from a database to Kafka. In our case, we needed to develop a simple Kafka producer responsible for querying the data and streaming it to Kafka without losing anything, plus another consumer responsible for sending the data to BigQuery, as shown in the figure below.
Stream data to BigQuery
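Below is a minimal sketch of what such a producer/consumer pair might look like in Python, using the kafka-python, PyMySQL, and google-cloud-bigquery client libraries. The table, topic, and column names (big_table, mysql-records, the id paging key, the BigQuery destination table) are illustrative assumptions, not the customer's actual schema; in particular, the real table had no auto-increment column, so the actual producer's checkpointing logic would have to differ from the simple key-based paging shown here.

```python
import json
import pymysql
from kafka import KafkaProducer

# Hypothetical connection details; the table is paged by an assumed unique key "id".
conn = pymysql.connect(host="mysql-host", user="app", password="secret",
                       database="appdb", cursorclass=pymysql.cursors.DictCursor)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda record: json.dumps(record, default=str).encode("utf-8"),
)

BATCH_SIZE = 10_000
last_id = 0  # resume point; persist it externally so a restart loses nothing

while True:
    with conn.cursor() as cursor:
        # Page through the table in key order so every row is read exactly once.
        cursor.execute(
            "SELECT * FROM big_table WHERE id > %s ORDER BY id LIMIT %s",
            (last_id, BATCH_SIZE),
        )
        rows = cursor.fetchall()
    if not rows:
        break
    for row in rows:
        producer.send("mysql-records", value=row)
    producer.flush()            # wait for acknowledgement before advancing the cursor
    last_id = rows[-1]["id"]

producer.close()
```

On the other side, a consumer could read the same topic and stream rows into BigQuery in small batches, committing Kafka offsets only after the rows are safely stored:

```python
import json
from kafka import KafkaConsumer
from google.cloud import bigquery

# Hypothetical topic, consumer group, and destination table names.
consumer = KafkaConsumer(
    "mysql-records",
    bootstrap_servers="localhost:9092",
    group_id="bigquery-sink",
    enable_auto_commit=False,
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)
bq = bigquery.Client()
TABLE_ID = "my-project.analytics.big_table"  # assumed destination table

batch = []
for message in consumer:
    batch.append(message.value)
    if len(batch) >= 500:
        # Stream the batch into BigQuery; errors are reported per row.
        errors = bq.insert_rows_json(TABLE_ID, batch)
        if errors:
            raise RuntimeError(f"BigQuery insert failed: {errors}")
        consumer.commit()   # only commit offsets once the rows are stored
        batch = []
```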
Reclaiming storage space through partitioning
We streamed all the data to Kafka (filtering it to reduce the load) and then on to BigQuery. That solved the query performance problem and let us analyze large amounts of data within seconds, but the space problem remained. We wanted to design a solution that solved the current problem and would also be easy to work with in the future. We prepared a new schema for the table, using a sequence ID as the primary key and partitioning the data by month. With the large table partitioned, we could back up old partitions and drop them once they were no longer needed, reclaiming some space. So we created a new table with the new schema and used the data from Kafka to fill the new partitioned table. Once all the records had been migrated, we deployed a new version of the application that inserted into the new table, and we dropped the old table to reclaim space. Of course, migrating the old data into the new table requires enough free space; in our case, though, we continuously backed up and dropped old partitions during the migration to make sure there was enough room for the new data. A sketch of such a partitioned table follows the figure below.
Streaming data into the partitioned table
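As a rough illustration of the idea, the sketch below creates a monthly-partitioned MySQL table keyed by a sequence ID. The table name, columns, and partition boundaries are assumptions made for the example, not the customer's real schema; MySQL requires the partitioning column to appear in every unique key, which is why the primary key is composite.

```python
import pymysql

# Hypothetical DDL for the new monthly-partitioned table.
CREATE_PARTITIONED_TABLE = """
CREATE TABLE big_table_partitioned (
    id BIGINT NOT NULL AUTO_INCREMENT,       -- sequence ID
    created_at DATETIME NOT NULL,
    payload JSON,
    PRIMARY KEY (id, created_at)             -- partition column must be in the key
)
PARTITION BY RANGE (TO_DAYS(created_at)) (
    PARTITION p2020_12 VALUES LESS THAN (TO_DAYS('2021-01-01')),
    PARTITION p2021_01 VALUES LESS THAN (TO_DAYS('2021-02-01')),
    PARTITION p2021_02 VALUES LESS THAN (TO_DAYS('2021-03-01')),
    PARTITION p_future VALUES LESS THAN MAXVALUE
)
"""

conn = pymysql.connect(host="mysql-host", user="app", password="secret", database="appdb")
with conn.cursor() as cursor:
    cursor.execute(CREATE_PARTITIONED_TABLE)

# Old partitions can later be backed up and dropped to reclaim space, for example:
#   ALTER TABLE big_table_partitioned DROP PARTITION p2020_12;
```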
Reclaiming storage space by cleaning up the data
After streaming the data into BigQuery, we could easily analyze the whole dataset and test some new ideas, such as reducing the space the table takes up in the database. One idea was to check how the different types of data were distributed in the table. It turned out that almost 90% of the data was not needed at all, so we decided to clean it up. I developed a new Kafka consumer that filters out the unwanted records and inserts the records we want to keep into another table. We call it the cleaned table, as shown below (a sketch of such a filtering consumer follows the figure).
After the cleanup, records of types A and B have been filtered out:
Streaming data into the new table
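A minimal sketch of such a filtering consumer might look like the following. The record_type column, the A/B type values, and the cleaned_table destination are hypothetical; the actual filtering criteria were specific to the customer's data.

```python
import json
import pymysql
from kafka import KafkaConsumer

UNWANTED_TYPES = {"A", "B"}   # assumed labels for the record types being dropped

consumer = KafkaConsumer(
    "mysql-records",
    bootstrap_servers="localhost:9092",
    group_id="cleanup-sink",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)
conn = pymysql.connect(host="mysql-host", user="app", password="secret",
                       database="appdb", autocommit=True)

for message in consumer:
    record = message.value
    if record.get("record_type") in UNWANTED_TYPES:
        continue  # drop records we no longer need to keep
    with conn.cursor() as cursor:
        cursor.execute(
            "INSERT INTO cleaned_table (id, created_at, payload) VALUES (%s, %s, %s)",
            (record["id"], record["created_at"], json.dumps(record.get("payload"))),
        )
```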
After cleaning up the data, we updated the application to read from the new cleaned table. The application keeps writing to the partitioned table, and Kafka keeps pushing data from that table into the cleaned table. As you can see, the solution above addressed the problems our customer was facing. Thanks to partitioning, storage space is no longer an issue, and cleaning up the data and adding indexes solved some of the application's query performance problems. Finally, we stream all the data to the cloud, so our customer can easily analyze the full dataset. Because we only use BigQuery for specific analytical queries, while the queries coming from the users' other applications are still handled by the MySQL server, the cost stays low. Another important point is that all of this was done with no downtime, so the customer was not affected.
Overall, we used Kafka to stream the data into BigQuery. Because all the data was pushed to Kafka, we had enough headroom to develop other solutions, so we could solve our customer's important problems without worrying about making mistakes.
This article is from the WeChat official account InfoQ (infoqchina). Author: Kamil
Original publication date: 2021-01-14