Redis maximum memory size
Redis is an in-memory key-value database. Because a machine's memory is finite, you can configure the maximum amount of memory Redis is allowed to use.
1. Configuring via the configuration file
Add the following setting to the redis.conf file under the Redis installation directory to set the memory limit:
# Set the maximum memory Redis may use to 100 MB
maxmemory 100mb
The configuration file does not have to be the redis.conf under the installation directory; when starting the Redis server you can pass a parameter specifying the path to the configuration file to use.
2. Modifying via command
Redis also supports changing the memory limit dynamically at runtime with commands:
# Set the maximum memory Redis may use to 100 MB
127.0.0.1:6379> config set maxmemory 100mb
# Get the configured maximum memory size
127.0.0.1:6379> config get maxmemory
If you do not set a maximum memory size, or set it to 0, memory usage is unlimited on 64-bit operating systems; on 32-bit operating systems the implicit limit is 3 GB.
Redis memory eviction
Since Redis's maximum memory can be configured, that configured memory can run out. When memory is exhausted and we keep writing data to Redis, what happens now that no memory is available?
In fact, Redis defines several eviction policies to handle this situation:
noeviction (the default): stop serving write requests and return an error (DEL and a few other special requests are exempt)
allkeys-lru: evict among all keys using the LRU algorithm
volatile-lru: evict among keys with an expiration set, using the LRU algorithm
allkeys-random: evict random keys among all keys
volatile-random: evict random keys among those with an expiration set
volatile-ttl: among keys with an expiration set, evict by expiration time; keys that expire sooner are evicted first
With the volatile-lru, volatile-random, and volatile-ttl policies, if no key is eligible for eviction, Redis returns an error just like noeviction.
How to get and set the memory eviction policy
Get the current eviction policy:
127.0.0.1:6379> config get maxmemory-policy
Set the eviction policy via the configuration file (edit redis.conf):
maxmemory-policy allkeys-lru
Change the eviction policy by command:
127.0.0.1:6379> config set maxmemory-policy allkeys-lru
What is LRU?
As mentioned above, when Redis's available memory is used up, it can evict data with the LRU algorithm. So what is the LRU algorithm?
LRU (Least Recently Used) is a cache replacement algorithm. When memory is used as a cache, the cache size is usually fixed. When the cache is full and new data keeps arriving, some old data must be evicted to free memory for the new data. LRU can be used for this. Its core idea: if a piece of data has not been used recently, the chance it will be used in the future is small, so it can be evicted.
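A strict LRU cache can be sketched in a few lines of Python. This is an illustration of the general algorithm only, not Redis's implementation:

```python
from collections import OrderedDict

class LRUCache:
    """A fixed-capacity cache that evicts the least recently used entry."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # keys ordered oldest -> newest access

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" is now the most recently used
cache.put("c", 3)      # cache full: evicts "b", the least recently used
print(cache.get("b"))  # None
print(cache.get("a"))  # 1
```

Note that every access must update the recency order, which is why a strict LRU needs an extra data structure (here an ordered dict, commonly a doubly linked list plus a hash map).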
How LRU is implemented in Redis
The approximate LRU algorithm
Redis uses an approximate LRU algorithm, which is somewhat different from a strict LRU. The approximate algorithm evicts data via random sampling: each eviction randomly samples 5 keys (by default) and evicts the least recently used key among them.
You can change the sample size with the maxmemory-samples parameter, for example: maxmemory-samples 10
The larger maxmemory-samples is, the closer the eviction results are to strict LRU.
To implement approximate LRU, Redis adds an extra 24-bit field to each key that stores the timestamp of the key's last access.
Redis 3.0's optimization of approximate LRU
Redis 3.0 optimizes the approximate LRU algorithm. The new algorithm maintains a candidate pool (of size 16) whose entries are sorted by access time. The first batch of sampled keys is placed into the pool; after that, a randomly sampled key enters the pool only if its access time is earlier than that of at least one key already in the pool (i.e., it has been idle longer), until the pool is full. When the pool is full and a new key must be inserted, the entry with the latest access time (the most recently accessed) is removed from the pool.
At eviction time, the key with the earliest last-access time (the one that has gone unaccessed the longest) is taken directly from the pool and evicted.
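The candidate-pool refinement can be sketched like this. Names, pool handling, and the entry condition are simplified assumptions for illustration, not Redis's actual implementation:

```python
import random

def refill_pool(pool, last_access, pool_size=16, samples=5):
    """One sampling round of the candidate-pool approximate LRU (sketch).

    pool is a list of (access_time, key) sorted ascending (oldest first).
    A sampled key enters while the pool has room; once full, it enters
    only if it is older than some pool entry, displacing the most
    recently accessed candidate, so the pool retains the best victims."""
    in_pool = {k for _, k in pool}
    keys = random.sample(list(last_access), min(samples, len(last_access)))
    for key in keys:
        if key in in_pool:
            continue
        t = last_access[key]
        if len(pool) < pool_size:
            pool.append((t, key))
            in_pool.add(key)
        elif t < pool[-1][0]:            # older than the newest pool entry
            _, dropped = pool.pop()       # drop the most recently accessed
            in_pool.discard(dropped)
            pool.append((t, key))
            in_pool.add(key)
        pool.sort()                       # keep ordered by access time

def evict_from_pool(pool, store):
    """Evict the pool entry that has been idle the longest."""
    _, victim = pool.pop(0)
    store.pop(victim, None)
    return victim

# Demo: 20 keys, k19 has the oldest access time.
store = {f"k{i}": i for i in range(20)}
last_access = {f"k{i}": 1000 - i for i in range(20)}
pool = []
refill_pool(pool, last_access, samples=5)
victim = evict_from_pool(pool, store)
print(victim, len(store))
```

Because the pool persists between eviction rounds, good eviction candidates found in earlier samples are not thrown away, which is the main accuracy gain over pure per-round sampling.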
Comparing the LRU algorithms
We can compare the accuracy of the LRU algorithms with an experiment: first add n items of data to Redis so that it runs out of available memory, then add another n/2 new items. At that point some data must be evicted; under strict LRU, the evicted data should be exactly the first n/2 items added. This produces the following comparison of the LRU algorithms (image source):
Three colors are visible in the picture:
Light gray: data that was evicted
Gray: old data that was not evicted
Green: newly added data
As you can see, Redis 3.0 with a sample size of 10 produces the graph closest to strict LRU, and with the same sample size of 5, Redis 3.0 also performs better than Redis 2.8.
The LFU algorithm
LFU is a new eviction strategy added in Redis 4.0. Its full name is Least Frequently Used, and its core idea is to evict keys based on how frequently they have been accessed recently: keys that are rarely accessed are evicted first, and keys that are accessed often are kept.
LFU better reflects how hot a key really is. Under LRU, a key that has not been accessed for a long time but happens to receive one access is then considered hot data and will not be evicted, while some keys likely to be accessed in the future may be evicted instead. This does not happen under LFU, because a single access does not make a key hot.
LFU comes in two policies:
volatile-lfu: evict using the LFU algorithm among keys with an expiration set
allkeys-lfu: evict using the LFU algorithm among all keys
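The frequency-based idea can be sketched with an exact counter. Note this is a toy illustration: Redis's real LFU uses a small probabilistic counter with time decay per key, not an exact count:

```python
from collections import Counter

class LFUCacheSketch:
    """Toy LFU eviction: evict the key with the lowest access count."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}
        self.freq = Counter()  # per-key access count

    def get(self, key):
        if key not in self.data:
            return None
        self.freq[key] += 1
        return self.data[key]

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            victim = min(self.data, key=lambda k: self.freq[k])
            del self.data[victim]
            del self.freq[victim]
        self.data[key] = value
        self.freq[key] += 1

cache = LFUCacheSketch(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")
cache.get("a")         # "a" is accessed often, so it is hot
cache.put("c", 3)      # evicts "b", the least frequently used
print(cache.get("b"))  # None
print(cache.get("a"))  # 1
```

Contrast this with the LRU sketch earlier: here a single access to "b" would not have saved it, because "a" still has the higher access count.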
These two policies are set and used just like the policies above, but note that they can only be set on Redis 4.0 and later; setting them on versions below 4.0 will produce an error.
One last question, as some readers may have noticed: I did not explain why Redis uses an approximate LRU algorithm rather than an exact one. Feel free to share your answer in the comments, and let's discuss.