# A Detailed Explanation of Redis's Approximate LRU Algorithm

Help me to the Internet cafe · 2022-05-14 14:49:26 · Reads: 990


In 《What if the Redis Data Cache Is Full?》 we learned that once the Redis cache is full, Redis can delete data through an eviction policy to make room for new data.

The eviction policies are as follows:

Keys with an expiration time set

`volatile-ttl`, `volatile-random`, `volatile-lru`, and `volatile-lfu`: these four policies only evict data that has an expiration time set.

All keys

`allkeys-lru`, `allkeys-random`, and `allkeys-lfu`: these three policies evict key-value pairs when memory is insufficient, regardless of whether they have an expiration time.

In other words, even data that has not yet expired may be deleted. Conversely, data whose expiration time has passed is deleted even if the eviction policy never selects it.

`volatile-ttl` and `volatile-random` are straightforward. The interesting ones are `volatile-lru` and `volatile-lfu`, which involve the LRU and LFU algorithms.
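As a quick illustration, the eviction policy is selected with the `maxmemory-policy` directive in `redis.conf`, and eviction only kicks in once `maxmemory` is reached (the 100mb limit below is just an example value):

```
# Cap memory usage; once the limit is hit, evict using approximate LRU over all keys
maxmemory 100mb
maxmemory-policy allkeys-lru
```

The same settings can also be applied at runtime with `CONFIG SET maxmemory-policy allkeys-lru`.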

Today, brother Ma will walk you through Redis's LRU algorithm…

## The approximate LRU Algorithm

What is the LRU algorithm?

`LRU` stands for `Least Recently Used`. As the name suggests, it evicts the data that has gone unused for the longest time.

The core idea: 「If data was accessed recently, the probability that it will be accessed again in the future is also higher」.

We organize all the data into a linked list:

• MRU: the head of the linked list, holding the most recently used data;
• LRU: the tail of the linked list, holding the least recently used data.

As you can see, in LRU both inserts and updates of data happen at the head of the linked list, while deletions happen at the tail.

When data is accessed, it is moved to the MRU end, and the data ahead of it shifts back by one position.

Can a singly linked list be used?

With a singly linked list, deleting a node takes O(n) to traverse the list and find its predecessor. So we choose a doubly linked list, where deletion can be done in O(1).
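A minimal sketch of why a doubly linked list gives O(1) deletion (the `DNode` and `Unlink` names here are purely illustrative):

```java
// Minimal sketch: unlinking a node from a doubly linked list in O(1).
class DNode {
    int value;
    DNode prev, next;
    DNode(int value) { this.value = value; }
}

class Unlink {
    // With a direct reference to the node, no traversal is needed:
    // just rewire the two neighboring pointers.
    static void remove(DNode node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

    public static void main(String[] args) {
        DNode a = new DNode(1), b = new DNode(2), c = new DNode(3);
        a.next = b; b.prev = a; b.next = c; c.prev = b;
        remove(b);                        // O(1), no matter how long the list is
        System.out.println(a.next.value); // 3
    }
}
```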

Does Redis use this LRU algorithm to manage all cached data?

No, it doesn't, because the LRU algorithm needs a linked list to manage all the data, which would cause a lot of extra space overhead.

Besides, accesses to a large number of nodes would cause frequent movement of linked list nodes, reducing Redis's performance.

So Redis simplifies the algorithm. Redis's LRU algorithm is not a true LRU: Redis samples a small number of keys and evicts the least recently accessed key among the sampled ones.

That means Redis cannot guarantee it evicts the key in the database that has gone unaccessed the longest.

An important property of Redis's LRU algorithm is that the sample size can be changed to tune its accuracy, bringing it close to a true LRU while avoiding the memory cost, because only a small number of samples are taken each time rather than all the data.

The configuration is as follows (the default value is 5; larger values are more accurate but cost more CPU):

```
maxmemory-samples 50
```

### How It Works

As you may remember, the data structure `redisObject` has an `lru` field, used to record the timestamp of each datum's last access.

```c
typedef struct redisObject {
    unsigned type:4;
    unsigned encoding:4;
    /* LRU time (relative to global lru_clock) or
     * LFU data (least significant 8 bits frequency
     * and most significant 16 bits access time).
     */
    unsigned lru:LRU_BITS;
    int refcount;
    void *ptr;
} robj;
```
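The comment in the struct describes how, in LFU mode, the 24-bit `lru` field is split: the low 8 bits hold a frequency counter and the high 16 bits an access timestamp. A small sketch of that bit packing (illustrative only; the `LfuFieldDemo` class is not Redis source code):

```java
/**
 * Sketch of packing a 24-bit field the way Redis's LFU mode does:
 * high 16 bits = access time, low 8 bits = frequency counter.
 */
class LfuFieldDemo {
    // Combine a 16-bit access time and an 8-bit frequency into one int.
    static int pack(int accessTime16, int freq8) {
        return ((accessTime16 & 0xFFFF) << 8) | (freq8 & 0xFF);
    }

    // Extract the high 16 bits (access time).
    static int accessTime(int lru) { return (lru >>> 8) & 0xFFFF; }

    // Extract the low 8 bits (frequency).
    static int frequency(int lru)  { return lru & 0xFF; }

    public static void main(String[] args) {
        int lru = pack(1234, 5);
        System.out.println(accessTime(lru)); // 1234
        System.out.println(frequency(lru));  // 5
    }
}
```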

The first time Redis evicts data, it randomly selects N entries, puts them into a candidate pool, and evicts the entry whose `lru` field value is smallest.

When data needs to be evicted again, new entries are selected into the candidate pool created the first time, but with a selection criterion: an entry's `lru` value must be smaller than the smallest `lru` value already in the pool.

When the number of new entries in the candidate pool reaches the `maxmemory-samples` setting, the entry with the smallest `lru` in the pool is evicted.

This avoids maintaining a huge linked list, and accessing data no longer requires moving list nodes, which greatly improves performance.
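The sampling idea can be sketched as follows. This is a simplified illustration, not Redis's actual implementation; the `SampledLruEvictor` class and its method names are made up, and a logical clock stands in for the real `lru` timestamp:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

/** Simplified sketch of sampled ("approximate LRU") eviction. */
class SampledLruEvictor {
    private final Map<String, Long> lastAccess = new HashMap<>(); // key -> logical access time
    private final Random random = new Random();
    private long clock = 0;

    /** Record an access; larger clock values mean more recently used. */
    void touch(String key) { lastAccess.put(key, clock++); }

    int size() { return lastAccess.size(); }

    /** Randomly sample up to `samples` keys and return the least recently accessed one. */
    String pickVictim(int samples) {
        List<String> keys = new ArrayList<>(lastAccess.keySet());
        String victim = null;
        long oldest = Long.MAX_VALUE;
        for (int i = 0; i < samples && !keys.isEmpty(); i++) {
            String candidate = keys.get(random.nextInt(keys.size()));
            long t = lastAccess.get(candidate);
            if (t < oldest) { oldest = t; victim = candidate; }
        }
        return victim;
    }

    /** Evict the sampled victim instead of the global LRU key. */
    void evict(int samples) {
        String victim = pickVictim(samples);
        if (victim != null) lastAccess.remove(victim);
    }
}
```

With a larger `samples` value the victim approaches the true global LRU key, which mirrors the effect of raising `maxmemory-samples`.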

## Implementing an LRU Cache in Java

We can build on Java's `LinkedHashMap`, either by composition or inheritance. 「Brother Ma」 uses composition here.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCache<K, V> {
    private final Map<K, V> map;
    private final int cacheSize;

    public LRUCache(int cacheSize) {
        this.cacheSize = cacheSize;
        // accessOrder = true: iteration order is access order, least recently used first
        map = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > LRUCache.this.cacheSize;
            }
        };
    }

    public V get(K key) { return map.get(key); }

    public void put(K key, V value) { map.put(key, value); }
}
```

The key is `LinkedHashMap`'s three-argument constructor: setting the constructor parameter `accessOrder` to `true` makes `LinkedHashMap` maintain its entries in access order internally.

In addition, `removeEldestEntry()` needs to be overridden. When this method returns `true`, the entry that has gone unaccessed the longest is removed, which implements the eviction.
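A quick demonstration of this behavior (the `LinkedHashMapLruDemo` class name is just for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Demonstrates LinkedHashMap's access-order eviction. */
class LinkedHashMapLruDemo {
    public static void main(String[] args) {
        final int cacheSize = 2;
        Map<String, Integer> cache = new LinkedHashMap<String, Integer>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Integer> eldest) {
                return size() > cacheSize;
            }
        };
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");          // "a" becomes the most recently used entry
        cache.put("c", 3);       // over capacity: evicts "b", the least recently used
        System.out.println(cache.keySet()); // [a, c]
    }
}
```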

### Implementing It Yourself

The code below is adapted from LeetCode 146. LRU Cache, with comments inline.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * The least recently used element sits at the head of the list;
 * the tail holds the element that was just added or accessed.
 */
class LRUCache {
    class Node {
        int key, value;
        Node pre, next;

        Node(int key, int value) {
            this.key = key;
            this.value = value;
            pre = this;
            next = this;
        }
    }

    private final int capacity;             // capacity of the LRU cache
    private final Node dummy;               // sentinel node: dummy.next is the head of the list, dummy.pre is the tail
    private final Map<Integer, Node> cache; // key -> Node mapping, where Node is a doubly linked list node

    public LRUCache(int capacity) {
        this.capacity = capacity;
        dummy = new Node(0, 0);
        cache = new ConcurrentHashMap<>();
    }

    public int get(int key) {
        Node node = cache.get(key);
        if (node == null) return -1;
        // Move the accessed node to the tail (most recently used position)
        remove(node);
        add(node);
        return node.value;
    }

    public void put(int key, int value) {
        Node node = cache.get(key);
        if (node == null) {
            if (cache.size() >= capacity) {
                // Evict the least recently used node, i.e. the head of the list
                cache.remove(dummy.next.key);
                remove(dummy.next);
            }
        } else {
            cache.remove(node.key);
            remove(node);
        }
        node = new Node(key, value);
        cache.put(key, node);
        add(node);
    }

    /**
     * Append a new node at the tail of the linked list
     *
     * @param node the new node
     */
    private void add(Node node) {
        dummy.pre.next = node;
        node.pre = dummy.pre;
        node.next = dummy;
        dummy.pre = node;
    }

    /**
     * Unlink the node from the doubly linked list
     *
     * @param node the node to delete
     */
    private void remove(Node node) {
        node.pre.next = node.next;
        node.next.pre = node.pre;
    }
}
```