How to successfully migrate more than 2 billion records in a SQL database with Kafka?

Deep learning and python 2021-01-20 21:59:45


Author | Kamil Charłampowicz

Translator | The king

Planning | Tina

How do you use Kafka to migrate more than 2 billion records out of a SQL database? One of our clients ran into a MySQL problem: they had a large table with more than 2 billion records, and it was still growing. Without replacing the infrastructure, they risked running out of disk space, which could ultimately have brought down the entire application. A table that large comes with other problems too: poor query performance, a bad schema design, and so many records that there was no simple way to analyze the data. We wanted a solution that would solve these problems without introducing an expensive maintenance window that would take the application down and lock customers out of the system. In this article I'll walk through our solution, with one caveat: this is not a recommendation. Different situations call for different solutions, but perhaps someone can draw some useful insights from ours.

Is a cloud solution the antidote?

After evaluating several alternatives, we decided to move the data to the cloud, and we chose Google BigQuery. We picked it because our client preferred Google's cloud solutions, the data was structured and well suited to analysis, and low latency wasn't required, so BigQuery seemed like a perfect fit. After testing, we were confident that BigQuery was a good enough solution to meet the client's needs: it would let them use their analysis tools and run analyses over the data in seconds. However, as you may already know, running a lot of queries against BigQuery can get expensive, so we wanted to avoid querying it directly from the application and use BigQuery only as an analysis and backup tool.

Streaming data to the cloud

When it comes to streaming data, there are many ways to do it, and we chose a very simple one. We used Kafka because we already use it extensively in our projects, so there was no need to introduce another solution. Kafka gave us an additional advantage: we could push all the data into Kafka, keep it there for a while, and then deliver it to its destination without putting a lot of load on the MySQL cluster. This approach also gave us a fallback in case the BigQuery rollout failed (for example, if running the queries turned out to be too expensive or too difficult). It was an important decision that brought us a lot of benefits at very little cost.
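The article doesn't show any configuration, but the "keep it there for a while" part boils down to topic retention. Below is a minimal Java sketch, using Kafka's AdminClient, of how such a buffering topic could be created; the topic name, partition count, replication factor, and two-week retention are illustrative assumptions, not values from the original setup.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.List;
import java.util.Map;
import java.util.Properties;

public class CreateBufferTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Hypothetical topic buffering the migrated rows.
            NewTopic topic = new NewTopic("mysql-migration", 12, (short) 3)
                    // Retain records for 14 days so downstream consumers can be
                    // replayed or re-pointed without re-reading MySQL.
                    .configs(Map.of("retention.ms",
                            String.valueOf(14L * 24 * 60 * 60 * 1000)));

            admin.createTopics(List.of(topic)).all().get();
        }
    }
}
```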

Streaming data from MySQL to Kafka

For streaming data from MySQL to Kafka, you might think of Debezium (https://debezium.io) or Kafka Connect. Both are good choices, but in our case there was no way to use them. Our MySQL server version was too old for Debezium to support, and upgrading MySQL was not an option. We couldn't use Kafka Connect either: the table had no autoincrement column, so Kafka Connect couldn't guarantee that no data would be lost in transit. We knew we could use timestamps instead, but that approach might lose some data, because the timestamp precision Kafka Connect used when querying was lower than the precision defined in the table column. Of course, both solutions are good ones, and if neither conflict applies to your project, I recommend using them to stream data from the database into Kafka. In our case, we needed to develop a simple Kafka producer that queried the data, made sure none of it was lost, and streamed it into Kafka, plus another consumer that sent the data on to BigQuery, as shown in the figure below (hypothetical sketches of both follow it).

Stream data to BigQuery
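Neither the producer nor the consumer appears in the article, so the sketches below only illustrate the idea; they are not the author's code. The first assumes a hypothetical source table big_table with a unique id column used as the message key, and relies on MySQL Connector/J's streaming mode (a fetch size of Integer.MIN_VALUE) so the 2-billion-row result set is never held in memory at once; acks=all plus idempotence covers the "make sure you don't lose data" requirement.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class TableToKafkaProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // "Make sure you don't lose data": wait for all replicas and retry safely.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props);
             Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/appdb", "user", "password");
             Statement stmt = conn.createStatement()) {

            // MySQL Connector/J streams the result set row by row when the
            // fetch size is Integer.MIN_VALUE, instead of buffering it all.
            stmt.setFetchSize(Integer.MIN_VALUE);

            try (ResultSet rs = stmt.executeQuery("SELECT id, payload FROM big_table")) {
                while (rs.next()) {
                    String key = rs.getString("id");
                    String value = rs.getString("payload");
                    producer.send(new ProducerRecord<>("mysql-migration", key, value),
                            (metadata, e) -> {
                                if (e != null) {
                                    // In the real producer you would retry or halt here.
                                    System.err.println("Send failed for id " + key + ": " + e);
                                }
                            });
                }
            }
            producer.flush();
        }
    }
}
```

The BigQuery-side consumer could follow the usual poll-and-insert pattern, here using the google-cloud-bigquery streaming-insert API; the dataset, table, and column names are again assumptions.

```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.InsertAllRequest;
import com.google.cloud.bigquery.InsertAllResponse;
import com.google.cloud.bigquery.TableId;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

public class KafkaToBigQuery {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "bigquery-sink");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
        TableId table = TableId.of("analytics", "big_table"); // hypothetical dataset and table

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("mysql-migration"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                if (records.isEmpty()) continue;

                // Batch one poll's worth of records into a single streaming insert.
                InsertAllRequest.Builder batch = InsertAllRequest.newBuilder(table);
                for (ConsumerRecord<String, String> record : records) {
                    Map<String, Object> row = new HashMap<>();
                    row.put("id", record.key());
                    row.put("payload", record.value());
                    batch.addRow(row);
                }
                InsertAllResponse response = bigquery.insertAll(batch.build());
                if (response.hasErrors()) {
                    System.err.println("Rows rejected: " + response.getInsertErrors());
                } else {
                    // Commit offsets only after BigQuery accepted the batch,
                    // so a crash replays the batch instead of losing it.
                    consumer.commitSync();
                }
            }
        }
    }
}
```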

Reclaim storage space by partitioning

We streamed all the data into Kafka (filtering records to reduce the load) and from there into BigQuery. That solved the query performance problem and let us analyze large amounts of data in seconds, but the space problem remained. We wanted a design that would solve the immediate problem and stay easy to work with in the future. We prepared a new schema for the table, using a sequence ID as the primary key and partitioning the data by month. With the large table partitioned, we could back up old partitions and drop them when they were no longer needed, reclaiming some space. So we created a new table with the new schema and used the data from Kafka to fill the new partitioned table. Once all the records were migrated, we deployed a new version of the application that inserted into the new table, then dropped the old table to reclaim the space. Of course, migrating the old data into the new table requires enough free space; in our case, though, we continuously backed up and dropped old partitions during the migration to make sure there was room for the new data (a DDL sketch follows the figure below).

Streaming data into the partitioned table
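The article doesn't include the DDL, but monthly range partitioning in MySQL looks roughly like the sketch below. The table and column names are hypothetical, and note one detail the article glosses over: MySQL requires the partitioning column to be part of every unique key, so the sketch uses a composite primary key of the sequence ID plus the date column.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class PartitionMaintenance {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/appdb", "user", "password");
             Statement stmt = conn.createStatement()) {

            // New schema: sequence ID in the primary key, partitioned by month.
            stmt.execute(
                "CREATE TABLE big_table_partitioned (" +
                "  id BIGINT NOT NULL," +
                "  created_at DATETIME NOT NULL," +
                "  payload TEXT," +
                "  PRIMARY KEY (id, created_at)" +   // partition column must be in the PK
                ") PARTITION BY RANGE (TO_DAYS(created_at)) (" +
                "  PARTITION p202011 VALUES LESS THAN (TO_DAYS('2020-12-01'))," +
                "  PARTITION p202012 VALUES LESS THAN (TO_DAYS('2021-01-01'))," +
                "  PARTITION p202101 VALUES LESS THAN (TO_DAYS('2021-02-01'))," +
                "  PARTITION pmax    VALUES LESS THAN MAXVALUE" +
                ")");

            // Once an old partition has been backed up (e.g. it already lives
            // in BigQuery), dropping it reclaims its disk space immediately.
            stmt.execute("ALTER TABLE big_table_partitioned DROP PARTITION p202011");
        }
    }
}
```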

Reclaim storage space by cleaning up the data

Once the data was streaming into BigQuery, we could easily analyze the whole dataset and test some new ideas, such as reducing the space the table occupied in the database. One idea was to check how the different types of records were distributed across the table. It turned out that almost 90% of the data didn't need to be there at all, so we decided to clean it up. I developed a new Kafka consumer that filters out the unwanted records and inserts the ones worth keeping into another table. We called it the cleaned-up table, as shown below.

After the cleanup, records of types A and B have been filtered out:

Streaming data into the new table
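As with the other components, the cleanup consumer isn't shown in the article, so this is only a minimal sketch of the idea. The record types "A" and "B" come from the figure above, while the topic name, the cleaned-up table's schema, and the assumption that the type is encoded as a "type|payload" prefix in the message value are all hypothetical.

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import java.util.Set;

public class CleanupConsumer {
    // The types the analysis showed we do not need to keep.
    private static final Set<String> UNWANTED_TYPES = Set.of("A", "B");

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "cleanup");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/appdb", "user", "password");
             PreparedStatement insert = conn.prepareStatement(
                     "INSERT INTO big_table_cleaned (id, type, payload) VALUES (?, ?, ?)")) {

            consumer.subscribe(List.of("mysql-migration"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // Assumed encoding: "type|payload" in the message value.
                    String value = record.value();
                    int sep = value.indexOf('|');
                    if (sep < 0) continue;
                    String type = value.substring(0, sep);
                    if (UNWANTED_TYPES.contains(type)) {
                        continue; // roughly 90% of the records are dropped here
                    }
                    insert.setString(1, record.key());
                    insert.setString(2, type);
                    insert.setString(3, value.substring(sep + 1));
                    insert.addBatch();
                }
                insert.executeBatch(); // write the surviving records to the cleaned-up table
            }
        }
    }
}
```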

After cleaning up the data, we updated the application to read from the new cleaned-up table. We kept writing to the partitioned table, and Kafka kept pushing data from it into the cleaned-up table. As you can see, the solution above solved the problems our client was facing. Thanks to partitioning, storage space is no longer an issue, and the data cleanup plus indexing resolved some of the application's query performance problems. Finally, we streamed all the data to the cloud, so our client can easily analyze all of it. Because we use BigQuery only for specific analytical queries, while queries from the users' other applications are still handled by the MySQL server, the costs stay low. Another important point: all of this was done without any downtime, so the client was never affected.

Summary

Overall, we used Kafka to stream the data into BigQuery. Because all the data was pushed into Kafka first, we had the headroom to develop the other pieces of the solution, which let us solve the important problems for our client without worrying about making mistakes.

This article is from the InfoQ WeChat official account (infoqchina). Author: Kamil Charłampowicz.


Originally published: 2021-01-14


