Eight diagrams to analyze data consistency between Redis and MySQL

InfoQ 2020-11-09 22:19:06


## Preface

*Original official account: bigsai*

For a web application, growth in users and traffic drives the evolution of a project's technology and architecture. Roughly:

1. When page concurrency and traffic are low, MySQL alone is enough to support the business logic. A cache is not really needed; at most you cache static pages.
2. When page concurrency rises noticeably, the database comes under pressure, and some rarely-updated data is queried repeatedly, or some queries are slow, it is time to consider caching: keep frequently-hit objects in Redis as key-value pairs, so that on a cache hit you skip the inefficient DB entirely and read from the much faster Redis.
3. Beyond that, static page caching, CDN acceleration, and even load balancing can raise a system's concurrency. They are not covered here.

![image-20201106173835116](https://static001.geekbang.org/infoq/85/85ed5751ed1bc4ec6e44e5413343d1f6.png)

## Caching ideas are everywhere

Let's start with an algorithm problem to understand what caching means.

**Problem 1:**

- Given a number n (n < 20), compute `n!`.

**Analysis 1:**

- Consider only the algorithm; ignore numeric overflow.

We know that `n! = n * (n-1) * (n-2) * ... * 1 = n * (n-1)!`, so a recursive function solves it:

```java
static long jiecheng(int n) {
    if (n == 1 || n == 0) return 1;
    return n * jiecheng(n - 1);
}
```

This way, each query costs `n` multiplications.

**Problem 2:**

- Given t groups of input (t could be in the hundreds), each containing one xi (xi < 20), compute `xi!`.

**Analysis 2:**

- With plain recursion, t inputs of size xi cost in total:

![formula](https://static001.geekbang.org/infoq/c3/c323b51171ffe5cc8b0d86ab2de5df4b.png)

A large xi or a large t both cause a heavy load; the time complexity is **O(n²)**.

- So change the approach: build a lookup table. Precomputed tables are a staple of ACM-style contests, used for multi-query input/output, storing graph-search results, and path storage. For the factorial, we just fill an array from front to back (each entry built from the previous one), after which every query is answered by a direct array access. The idea is clear:

```java
import java.util.Scanner;

public class Test {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        int t = sc.nextInt();
        long[] jiecheng = new long[21];
        jiecheng[0] = 1;
        for (int i = 1; i < 21; i++) {
            jiecheng[i] = jiecheng[i - 1] * i;
        }
        for (int i = 0; i < t; i++) {
            int x = sc.nextInt();
            System.out.println(jiecheng[x]); // answer each query with a table lookup
        }
    }
}
```

The array here is a cache, and caching in a web project follows the same idea: keep hot data in `memory`. We know that most relational databases `read and write from hard disk`, so their efficiency and resources are limited, while Redis is memory-based; the difference in read/write speed is dramatic.
When concurrency is high enough that the relational database hits its performance bottleneck, you can strategically move frequently accessed data into Redis to improve the system's throughput and concurrency.

For common websites and scenarios, a relational database can be slow in two places:

- Read/write disk IO performance is poor
- A single result may take a large amount of computation to produce

So a cache reduces both the number of disk IOs and the amount of recomputation in the relational database. Redis reads quickly for two reasons:

- It is memory-based, so reads and writes are faster
- It locates a result directly by hashing the key, with no computation needed

So for any website of reasonable size, caching is pretty much `necessary`, and Redis is undoubtedly one of the best choices.

![image-20201106180929673](https://static001.geekbang.org/infoq/16/16fd3d3966013f30c25e113f88a82358.png)

## Something to be aware of

Used improperly, a cache causes many problems, so several details need to be carefully considered and designed.
The hardest of them, data consistency, is analyzed separately below.

### Whether to use a cache at all

Don't add a cache just to have one; caching does not suit every scenario. If the data **demands strict consistency**, or **changes frequently but is rarely queried**, or there is simply no concurrency and the queries are trivial, a cache is unnecessary. It may even waste resources and make the project cumbersome and hard to maintain, and using Redis as a cache will sooner or later raise consistency questions that have to be thought through.

### Whether the cache design is reasonable

When designing the cache you will likely run into multi-table queries. If a cached key-value pair is backed by a multi-table query, think it over: split it into several entries, or store it as one? If there are many possible combinations but only a few of them appear often, caching those combined results directly is also fine. The design should follow the project's business needs; there is no absolute standard.

### Choosing an expiration strategy

- The cache holds relatively hot, frequently used data, and Redis resources are limited too, so a reasonable policy is needed to expire and evict entries. From the `operating systems` course we know the page replacement algorithms used in caching: first-in-first-out (**FIFO**), least recently used (**LRU**), the optimal algorithm (**OPT**), and least frequently used (**LFU**). Designing Redis caching can borrow from them. Time-based FIFO is the easiest to implement, and Redis supports an expiration time on any `key`.
- The expiration time itself should match the system: with better hardware it can be somewhat longer, but both extremes hurt. Too short and the cache hit rate stays low; too long and a lot of cold data sits in Redis without ever being released.

## Data consistency issues *

Data consistency was already touched on above: if the consistency requirements are high, caching is not recommended. Now let's sort out how cached data is handled. With Redis caching, consistency problems come up all the time.
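The per-key expiration described above can be pictured with a tiny in-memory TTL map. This is purely illustrative: Redis implements expiration natively via `EXPIRE`/`SETEX`, and the `TtlCache` class here is a made-up stand-in, not Redis client code.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A toy picture of per-key expiration; Redis does this natively (EXPIRE/SETEX).
public class TtlCache {
    private static class Entry {
        final String value;
        final long deadlineMillis; // absolute time after which the entry is dead
        Entry(String value, long deadlineMillis) {
            this.value = value;
            this.deadlineMillis = deadlineMillis;
        }
    }

    private final Map<String, Entry> store = new ConcurrentHashMap<>();

    // Like SETEX: store a value together with its time-to-live.
    public void setex(String key, long ttlMillis, String value) {
        store.put(key, new Entry(value, System.currentTimeMillis() + ttlMillis));
    }

    // Lazy expiration: an expired entry is dropped the next time it is read.
    public String get(String key) {
        Entry e = store.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() > e.deadlineMillis) {
            store.remove(key);
            return null;
        }
        return e.value;
    }
}
```

Real Redis also expires keys actively in the background; this sketch shows only the lazy-on-read half of the idea.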
For a cache, the operations break down as follows:

### Read

`read`: read from Redis; if Redis does not have the value, fetch it from MySQL and update the Redis cache. The flowchart below describes this common scenario, and it is not controversial:

![image-20201106184713215](https://static001.geekbang.org/infoq/d2/d292899310d79d72393e1b70ef4c3adb.png)

### Write 1: update the database first, then update the cache (common, low concurrency)

![image-20201106184914749](https://static001.geekbang.org/infoq/75/75c10073f2ab054943a313b8e6df7f5b.png)

Update the database record, then update the Redis cache. This is the straightforward approach: the cache is derived from the database and follows it.

But there can be problems. If the cache update fails (a crash, for instance), the database and Redis fall out of sync: **the DB has the new data while the cache holds the old data**.

### Write 2: delete the cache first, then write the database (low-concurrency optimization)

![image-20201106184958339](https://static001.geekbang.org/infoq/01/014d75d59cd837b025f16a81466f3421.png)

**What it solves**

This effectively avoids Write 1's failure mode of the Redis write failing. Delete the cache, then update: ideally the next read misses Redis and fetches the latest value from MySQL into the cache.
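The read path, Write 1, and Write 2 above can be sketched with in-memory maps standing in for Redis and MySQL. This is a minimal single-process illustration; `CacheAside` and the map-based stores are assumptions made for the sketch, not real Redis/MySQL access code.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// In-memory stand-ins for Redis (cache) and MySQL (db); illustrative only.
public class CacheAside {
    static final Map<String, String> cache = new ConcurrentHashMap<>(); // "Redis"
    static final Map<String, String> db = new ConcurrentHashMap<>();    // "MySQL"

    // Read: try the cache first; on a miss, load from the db and backfill the cache.
    static Optional<String> read(String key) {
        String v = cache.get(key);
        if (v == null) {
            v = db.get(key);
            if (v != null) cache.put(key, v); // populate the cache for the next reader
        }
        return Optional.ofNullable(v);
    }

    // Write 1: update the db first, then update the cache.
    // If the process dies between the two steps, the cache keeps stale data.
    static void writeThenUpdateCache(String key, String value) {
        db.put(key, value);
        cache.put(key, value);
    }

    // Write 2: delete the cache first, then update the db.
    // The next read misses and reloads the fresh value.
    static void deleteThenWrite(String key, String value) {
        cache.remove(key);
        db.put(key, value);
    }
}
```

Note that nothing here prevents the race discussed next; the sketch only fixes the ordering of the two steps.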
However, this only works in low-concurrency scenarios; it does not hold up under high concurrency.

**The problem**

Write 2 does seem to fix the `failed Redis write` problem and looks like the better scheme, but under high concurrency it has its own issue. In Write 1 we discussed how a successful DB update plus a failed cache update leaves dirty data. The ideal behind deleting the cache is that the `next thread` to read will repopulate it with the fresh value. The question is: what if that **next thread arrives too early**?

![image-20201106191042265](https://static001.geekbang.org/infoq/8e/8efd5b578bdb7e9a6d74c4518c2871f2.png)

Since you cannot know which thread runs first and which runs slow, the interleaving shown above leaves the Redis cache inconsistent with MySQL. You could of course `lock` the key, but a lock is such a heavyweight thing, with such an impact on concurrency, that it should be avoided whenever possible. The situation above still ends with **the cache holding old data and the DB holding new data**, and if the cache never expires, the inconsistency persists.

### Write 3: delayed double delete

![image-20201106191310072](https://static001.geekbang.org/infoq/e9/e9ffc29b44cc93e241aee35b077bc0c9.png)

This is the delayed double delete strategy. It mitigates the Write 2 case in which a reading thread slips in while MySQL is being updated and leaves Redis inconsistent with MySQL. The procedure is: **delete the cache → update the database → after a delay (a few hundred ms, asynchronously) delete the cache again**. Even if the Write 2 race happens during the update and causes inconsistency, the delayed second delete (the delay depends on the business, usually a few hundred ms) resolves it quickly.

But the scheme has holes: the second delete can fail, and in a write-heavy, read-heavy, highly concurrent system MySQL still takes a lot of traffic, and so on. You can of course bring in a message queue (MQ) to handle these asynchronously. In practice it is very hard to make a scheme truly foolproof, which is why even seasoned engineers get grilled over some slip in their design. Being a novice myself, I won't embarrass myself further here; everyone is welcome to share their own solutions.

### Write 4: operate on the cache directly, persist to SQL periodically (suits high concurrency)

When `a pile of concurrent writes` lands at once, even the earlier schemes with asynchronous message queues struggle to give users a smooth experience, and large batches of SQL put real pressure on the system. So another option is to operate on the cache directly and write the cache back to SQL periodically.
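A minimal sketch of this cache-first, write-behind idea follows. The class names, the dirty-key set, and the explicit flush call are illustrative assumptions; a real system would flush on a schedule or by draining a message queue.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Cache-first writes with periodic persistence ("write-behind"); illustrative only.
public class WriteBehind {
    static final Map<String, String> cache = new ConcurrentHashMap<>(); // "Redis": source of truth
    static final Map<String, String> db = new ConcurrentHashMap<>();    // "MySQL": periodic backup
    static final Map<String, String> dirty = new ConcurrentHashMap<>(); // changed since last flush

    // Writes touch only the cache; the database is not on the request path.
    static void write(String key, String value) {
        cache.put(key, value);
        dirty.put(key, value);
    }

    static String read(String key) {
        return cache.get(key);
    }

    // In a real system this would run on a timer or drain a queue;
    // here it is called explicitly for the sketch.
    static void flushToDb() {
        db.putAll(dirty);
        dirty.clear();
    }
}
```

The trade-off is visible in the sketch: between flushes the database lags the cache, so a crash loses the unflushed writes; what you gain is that user-facing requests never wait on SQL.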
Because Redis, a memory-based key-value store, is far faster at this than a traditional relational database.

![image-20201106192531468](https://static001.geekbang.org/infoq/11/119a03bf70d0129245b3fe7fae14f146.png)

This fits business designs under high concurrency: now Redis holds the authoritative data and MySQL plays the auxiliary role, receiving periodic inserts (rather like a backup database). Of course, such high-concurrency businesses often place different ordering requirements on `reads` and `writes`, so `message queues` and `locks` may be needed to keep data and ordering correct in the face of the uncertainty and instability of high concurrency and multithreading, improving the business's reliability.

All in all, the higher the `concurrency` and the stricter the `data consistency requirements`, the more there is to `consider and balance` in the consistency design, and the more complex it becomes. The above is my own study of Redis data consistency, plus some divergent exploration. If anything is explained poorly or incorrectly, corrections are welcome!

Finally, if you found this helpful, please like, share, and follow. Welcome to my original official account: "**bigsai**". There you can not only learn knowledge and practical material; I have also prepared plenty of advanced resources for you. Reply with the password "bigsai" to get them!

![](https://static001.geekbang.org/infoq/ed/eded3adec06b5165335f9a0c9b897a1e.jpeg)
Copyright notice
This article was created by [InfoQ]; please include a link to the original when reposting. Thanks.
