I Am Redis
Hello, I am Redis. The man who brought me into this world is called Antirez.

Speaking of my birth, it has quite a lot to do with the relational database MySQL.

Before I came into this world, MySQL was having a hard time. The Internet kept growing faster, MySQL held more and more data, and user requests skyrocketed, with every request turning into a read or write operation against it. MySQL could barely cope. The worst were the "Double 11" and "618" national shopping sprees; those were days of pure suffering for MySQL.

As MySQL told me later, more than half of the user requests were in fact reads, and often repeats of the same query, wasting a great deal of time on disk I/O.
Later, someone thought: could we learn from the CPU and add a cache in front of the database? And so I was born!

Shortly after my birth, I became good friends with MySQL; the two of us were often found side by side on back-end servers.

When an application queries data from MySQL, it registers the result with me. The next time it needs that data, it simply asks me instead of going back to MySQL.
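The read path just described is the classic cache-aside pattern. Here is a minimal sketch in Python, using a plain dict to stand in for Redis and a stub `query_mysql` function (both names are hypothetical, for illustration only):

```python
# Cache-aside read path: check the cache first, fall back to the
# database on a miss, then populate the cache for next time.
cache = {}  # stands in for Redis

def query_mysql(key):
    # Stub for a real database query.
    return f"row-for-{key}"

def get(key):
    if key in cache:          # cache hit: no database round trip
        return cache[key]
    value = query_mysql(key)  # cache miss: ask the database
    cache[key] = value        # register the result with the cache
    return value

print(get("user:42"))  # miss: goes to the database, then cached
print(get("user:42"))  # hit: served straight from memory
```

The second call never touches `query_mysql`, which is exactly the load reduction the story describes.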
For ease of use, I support several data structures: String, List, Hash, Set, and Sorted Set.
Because everything registered with me lives in memory, there is no snail-slow disk I/O involved, so a lookup through me saves a lot of time compared with asking MySQL.

Don't underestimate this simple change: it takes a huge load off MySQL! As the program runs, I cache more and more data, and I intercept a large share of the user requests, leaving MySQL with plenty of breathing room!

With me involved, the performance of the service improved a great deal, all because I took so many bullets for the database.
But I soon realized things were not looking good. The data I cache sits in memory, and even on a server, memory is a very limited resource. I couldn't go on like this; I had to find a way, or I'd be doomed sooner or later.
Soon, I came up with a way: set a timeout on cached content. The applications decide the exact duration; all I have to do is delete expired content promptly to free up space.

Now that there is a timeout, when should I do the cleanup?

The simplest approach is periodic deletion. I decided to run a cleanup once every 100 ms, ten times per second!

But during a cleanup I can't delete every expired key in one breath. I hold a lot of data, and a full scan could take who knows how long; it would seriously delay my handling of new requests!

With time tight and the task heavy, I settle for picking a random sample of keys to clean up each round, which is enough to relieve the memory pressure.
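The sampled, periodic cleanup can be sketched like this (the names and the 20-key sample size are illustrative assumptions; real Redis tunes its sampling internally):

```python
import random
import time

store = {}    # key -> value
expires = {}  # key -> absolute expiry timestamp

def set_with_ttl(key, value, ttl):
    store[key] = value
    expires[key] = time.monotonic() + ttl

def active_expire_cycle(sample_size=20):
    # Run periodically (e.g. every 100 ms): inspect a random sample
    # of keys that carry a TTL and delete the ones already expired,
    # instead of scanning the entire keyspace in one go.
    now = time.monotonic()
    keys = random.sample(list(expires), min(sample_size, len(expires)))
    for key in keys:
        if expires[key] <= now:
            del store[key]
            del expires[key]
```

Each cycle does a bounded amount of work, so serving new requests is never blocked by a full scan.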
After a while, though, I noticed that some keys were just lucky: my random algorithm never picked them, and they survived round after round. That won't do; those long-expired entries keep occupying precious memory!

I can't tolerate sand in my eyes! So on top of periodic deletion, I added another move:

Whenever a key that escaped my random selection is hit by a query and I discover it has expired, I show no mercy and delete it on the spot.

Because this is passively triggered, and never happens without a query, it is called lazy deletion.
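Lazy deletion piggybacks on the read path: every lookup first checks whether the key's TTL has passed. A self-contained sketch in the same illustrative style (names are hypothetical):

```python
import time

store = {}
expires = {}  # key -> absolute expiry timestamp

def set_with_ttl(key, value, ttl):
    store[key] = value
    expires[key] = time.monotonic() + ttl

def get(key):
    # Lazy deletion: only when a key is actually queried do we check
    # its TTL; an expired key is removed on the spot and the lookup
    # behaves as if the key never existed.
    if key in expires and expires[key] <= time.monotonic():
        del store[key]
        del expires[key]
        return None
    return store.get(key)
```

Together with the periodic sweep, this catches the expired keys that the random sampling keeps missing, as long as someone eventually queries them.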
But some keys still managed to both dodge my random selection and never get queried; they remained at large! And all the while, the available memory kept shrinking.

Worse, even if I could delete every expired entry, what if the expiration times are simply set too long? Memory could fill up before anything expires, and I'd be doomed just the same. I had to find another way.

After thinking long and hard, I finally pulled out a big trick: memory eviction policies. This time I wanted to solve the problem once and for all!

I provide eight policies to choose from; they decide what I do when memory runs out:
noeviction: return an error and do not evict any key

allkeys-lru: evict the least recently used key, chosen from all keys, using the LRU algorithm

volatile-lru: evict the least recently used key, chosen only from keys with an expiration time set, using the LRU algorithm

allkeys-random: evict a random key, chosen from all keys

volatile-random: evict a random key, chosen from keys with an expiration time set

volatile-ttl: evict the key with the shortest remaining time to live, chosen from keys with an expiration time set

volatile-lfu: evict the least frequently used key, chosen from keys with an expiration time set

allkeys-lfu: evict the least frequently used key, chosen from all keys
With this set of combos in hand, I no longer worry about expired data filling up my space!
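The intuition behind `allkeys-lru`, the most commonly chosen policy, can be sketched with an `OrderedDict` that evicts the least recently used key once a capacity limit is hit (a toy model: capacity is counted in entries rather than bytes, and the class name is made up for illustration):

```python
from collections import OrderedDict

class LRUCache:
    """Toy model of allkeys-lru: when the cache is full, evict the
    key that was used least recently."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # least recently used key comes first

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the LRU entry

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")     # "a" becomes the most recently used key
cache.put("c", 3)  # cache is full: "b" is evicted, not "a"
```

In a real deployment you would pick the policy in the server configuration, e.g. `maxmemory 100mb` together with `maxmemory-policy allkeys-lru`.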
My life was pretty comfortable, but MySQL wasn't so lucky. Sometimes annoying requests arrive asking for data that simply doesn't exist: MySQL does the work and finds nothing, and because the data doesn't exist, I can't cache it either. So the very same request makes MySQL toil in vain every single time, and my value as a cache never shows. This is what people call cache penetration.
After this went on for a while, MySQL couldn't take it anymore: "Hey, brother, can you find a way to block these queries that you already know will come up empty?"

That's when I thought of another good friend of mine: the Bloom filter.

This friend has no other talent, but it excels at quickly telling you whether a piece of data exists in a very large data set. (A quiet word of caution: this friend of mine is slightly unreliable. When it tells you something exists, don't take that at face value; it might not. But when it tells you something does not exist, then it definitely does not.)

If you're curious about this friend, have a look at "Plain-Language Bloom Filters (BloomFilter)".

I introduced this friend to the application. Now requests for nonexistent data never bother MySQL at all, which neatly solves the cache penetration problem.
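A minimal Bloom filter can be sketched with a bit array and a handful of hash functions (the sizes below are arbitrary choices for illustration). Note the asymmetry the story describes: `might_contain` can return false positives, but never false negatives:

```python
import hashlib

class BloomFilter:
    def __init__(self, size=1024, num_hashes=3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive num_hashes bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # False => definitely absent; True => probably present.
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("user:42")
print(bf.might_contain("user:42"))  # True: no false negatives
```

The application checks the filter first: if it answers "definitely absent", the request is rejected without touching the cache or MySQL.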
After that came a stretch of peace, until that day...

One day, MySQL was slacking off when a flood of requests suddenly hit him, catching him completely off guard.

After scrambling through it, MySQL came to me, fuming: "Brother, what happened? Why did it suddenly get so fierce?"

I checked my logs and explained: "Boss, I'm really sorry. A hot piece of data just reached its expiration time and I deleted it. Unfortunately, a burst of requests for exactly that data arrived right afterward, and since I had already deleted it, they all landed on you."

"What were you thinking? Be more careful next time!" MySQL left, visibly annoyed.

I didn't pay much attention to this little incident and soon forgot about it, never imagining that a few days later I would cause an even bigger mess.
On that day, a flood of requests poured into MySQL, far larger than last time, and MySQL went down several times under the strain!

It took a long time for that wave of traffic to pass, and only then did MySQL get a breather.

"Bro, what was the reason this time?" MySQL asked, too exhausted to stand.

"This time it was even more unfortunate than before: a large batch of keys expired at almost the same moment, and then came a surge of requests for that data. That's why it hit harder than last time."

MySQL frowned at that: "Then you have to find a way. If I get tormented like this every few days, who could stand it?"

"Honestly, there's nothing I can do by myself; the expiration times aren't set by me. How about we go talk to the application and ask it to spread the cache expiration times out more evenly? At the very least, don't let a big batch of data expire all at once."

"Go. Let's go together."

Later, we had a word with the application. It now adds a random offset to each key's expiration time and marks hot data as never expiring, which eased the problem a great deal. Oh, and we gave the two problems their own names: cache breakdown and cache avalanche.
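The "spread the expirations out" fix amounts to adding random jitter to each TTL, so that keys written at the same moment don't all expire at the same moment (the base TTL and jitter window below are arbitrary example values):

```python
import random

def ttl_with_jitter(base_ttl, max_jitter):
    # Avoid a cache avalanche: instead of every key expiring exactly
    # base_ttl seconds from now, each key gets base_ttl plus a random
    # offset, spreading expirations over a window of max_jitter seconds.
    return base_ttl + random.uniform(0, max_jitter)

# Example: a nominal 1-hour TTL, jittered by up to 10 minutes.
ttl = ttl_with_jitter(3600, 600)
```

For the handful of truly hot keys, the complementary fix from the story is simpler still: set no expiration at all and refresh them out of band.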
At last, we had a comfortable life again...

Then one day, while I was hard at work, something went wrong and my whole process crashed.

When I restarted, all the data I had cached was gone, and the storm of requests once again slammed into MySQL.

Ah, if only I could remember what I had cached before the crash...

To see what happens next, stay tuned for the sequel...