Redis Source Code Analysis: Data Expiration

xindoo 2021-01-24 12:28:15


I measured the access-time distribution of some of our online Redis data: roughly 90% of requests only touch data from the last 15 minutes, 99% touch data from the last hour, and fewer than one in a thousand requests reach data older than one day. Yet we kept that data for two days (nearly 500 GB of in-memory data), which, counting primaries and standbys, consumed more than 120 Redis instances of 8 GB each (the arithmetic checks out: 500 GB / 8 GB ≈ 63 instances, doubled for primary plus standby ≈ 125). Simply shortening the expiration time from 2 days to 1 day frees up more than 60 instances, roughly half, with almost no impact on the original business.

Of course, Redis already implements automatic cleanup of expired data, so all I had to do was change the TTL set when the data is written. But suppose Redis had no expiration mechanism: what would we do? Rolling our own would be troublesome; think it through carefully and there are a lot of details to get right. That is why I consider data expiration a very important feature of a caching system: besides saving us work, it can save a lot of cost. Let's take a look at how Redis implements it.
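
In our case the change is a one-liner at the write path. A minimal sketch using the hiredis C client (the key name, value, and connection details here are made up for illustration; only the TTL argument matters):

#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    /* Connect to a local Redis instance (hypothetical address). */
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) return 1;

    /* Write with a 1-day TTL (86400 s) instead of the old 2-day TTL. */
    redisReply *r = redisCommand(c, "SET %s %s EX %d",
                                 "user:1234:events", "...", 86400);
    if (r) freeReplyObject(r);
    redisFree(c);
    return 0;
}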

Real-time cleanup

As everyone knows, the core of Redis runs in a single thread: it basically finishes one request before handling the next. But handling a request is not all the main loop does; Redis performs plenty of other housekeeping along the way, and data expiration is part of it.

Whenever Redis reads or writes a key, it first calls expireIfNeeded() to check whether the key has expired; if it has, the deletion is executed.

int expireIfNeeded(redisDb *db, robj *key) {
    if (!keyIsExpired(db,key)) return 0;

    /* If we are running in the context of a slave, return 1 directly:
     * key expiry on a slave is controlled by the master, which will
     * send the slave an explicit delete command.
     *
     * Returning 0 means the data does not need to be cleaned up;
     * returning 1 means the key is considered expired this time. */
    if (server.masterhost != NULL) return 1;
    if (checkClientPauseTimeoutAndReturnIfPaused()) return 1;

    /* Delete the key */
    server.stat_expiredkeys++;
    propagateExpire(db,key,server.lazyfree_lazy_expire);
    notifyKeyspaceEvent(NOTIFY_EXPIRED,
        "expired",key,db->id);
    int retval = server.lazyfree_lazy_expire ? dbAsyncDelete(db,key) :
                                               dbSyncDelete(db,key);
    if (retval) signalModifiedKey(NULL,db,key);
    return retval;
}

Judging whether a key has expired is also straightforward: Redis stores the absolute expiration timestamp of every key that has a TTL in the dictEntry of db->expires, so the check only needs to compare that timestamp against the current time.
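
A slightly abridged sketch of that check, following the same source tree (the special handling for loading, Lua scripts, and replication is omitted here):

long long getExpire(redisDb *db, robj *key) {
    dictEntry *de;
    /* No expire? Return -1 as soon as possible. */
    if (dictSize(db->expires) == 0 ||
        (de = dictFind(db->expires,key->ptr)) == NULL) return -1;
    /* The value of the entry is the absolute expire time in ms. */
    return dictGetSignedIntegerVal(de);
}

int keyIsExpired(redisDb *db, robj *key) {
    mstime_t when = getExpire(db,key);
    if (when < 0) return 0;   /* No expire set for this key. */
    /* ... (loading / scripting special cases omitted) ... */
    return mstime() > when;   /* Expired if 'now' is past 'when'. */
}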

Now let's focus on this line of code:

int retval = server.lazyfree_lazy_expire ? dbAsyncDelete(db,key) :
                                           dbSyncDelete(db,key);

lazyfree_lazy_expire is one of Redis's configuration options: it enables lazy deletion of expired keys and is off by default. Obviously, when it is turned on the delete is performed asynchronously. Let's now look at Redis's lazy deletion in detail.
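
For reference, in redis.conf the option is spelled with dashes; a minimal example of turning it on (the CONFIG SET line is the standard runtime equivalent, shown for convenience):

# redis.conf (off by default)
lazyfree-lazy-expire yes

# or switch it at runtime from redis-cli
CONFIG SET lazyfree-lazy-expire yes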

Lazy deletion

What is lazy deletion? Essentially, it hands the actual freeing of data to a separate thread, asynchronously. Why is it needed? As everyone knows, Redis's core runs in a single thread, so any particularly slow step directly hurts Redis's performance; delete a hash key several GB in size, for instance, and the instance will simply stall. In such cases the deletion must be pushed to a new thread to avoid blocking Redis's main thread. That said, in practice dbAsyncDelete() is not entirely asynchronous. Let's look directly at the code.

#define LAZYFREE_THRESHOLD 64

int dbAsyncDelete(redisDb *db, robj *key) {
    /* Delete the key from db->expires: only the pointer is removed,
     * the actual value is not freed. */
    if (dictSize(db->expires) > 0) dictDelete(db->expires,key->ptr);

    /* If the value is composed of a few allocations, to free in a lazy way
     * is actually just slower... So under a certain limit we just free
     * the object synchronously. */

    /* Unlink the key from the dictionary (it is not really deleted yet,
     * it just can no longer be found). If the removed dictEntry is not
     * NULL, run the release logic below. */
    dictEntry *de = dictUnlink(db->dict,key->ptr);
    if (de) {
        robj *val = dictGetVal(de);
        /* Tells the module that the key has been unlinked from the database. */
        moduleNotifyKeyUnlink(key,val);

        /* Lazy free is not completely asynchronous: Redis first estimates
         * the amount of work required for the release operation, and if
         * the impact is small it deletes directly in the main thread. */
        size_t free_effort = lazyfreeGetFreeEffort(key,val);

        /* If releasing the object requires a lot of work, hand it to an
         * asynchronous thread. However, if the object is shared
         * (refcount > 1) it cannot be freed directly. This rarely
         * happens, but it is possible: the Redis core may call
         * incrRefCount to protect the object and then call dbDelete.
         * In that case we just fall through to dictFreeUnlinkedEntry,
         * which is equivalent to calling decrRefCount. */
        if (free_effort > LAZYFREE_THRESHOLD && val->refcount == 1) {
            atomicIncr(lazyfree_objects,1);
            bioCreateLazyFreeJob(lazyfreeFreeObject,1, val);
            dictSetVal(db->dict,de,NULL);
        }
    }

    /* Release the memory occupied by the key-value pair. If lazy free
     * was used, val is already NULL here, so only the key's memory
     * needs to be freed. */
    if (de) {
        dictFreeUnlinkedEntry(db->dict,de);
        if (server.cluster_enabled) slotToKeyDel(key->ptr);
        return 1;
    } else {
        return 0;
    }
}

  1. First, delete the key from db->expires. (db->expires holds every key that has an expiration time, to make expiry processing convenient.)
  2. Then unlink the data node from the db dictionary; the data is still in memory, it just can no longer be looked up.
  3. Next comes the actual cleanup. Redis does not unconditionally push the work to an asynchronous thread; it first calls lazyfreeGetFreeEffort() to estimate how much the cleanup would cost (see the sketch after this list). If the impact is small, the value is freed directly in the main thread; only if the impact is large is the deletion job submitted to an asynchronous thread.
  4. Finally, release the memory occupied by the key and the value. With asynchronous deletion, val is already NULL at this point, so only the key's memory needs to be freed.
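
An abridged sketch of how that effort is estimated: for aggregate types it is roughly the number of allocations making up the value, and everything else counts as a single allocation. The exact cases vary across Redis versions (streams and module types are omitted here); this follows the 6.x source in spirit:

size_t lazyfreeGetFreeEffort(robj *key, robj *obj) {
    if (obj->type == OBJ_LIST) {
        quicklist *ql = obj->ptr;
        return ql->len;                    /* Number of quicklist nodes. */
    } else if (obj->type == OBJ_SET && obj->encoding == OBJ_ENCODING_HT) {
        dict *ht = obj->ptr;
        return dictSize(ht);               /* Number of set members. */
    } else if (obj->type == OBJ_ZSET && obj->encoding == OBJ_ENCODING_SKIPLIST) {
        zset *zs = obj->ptr;
        return zs->zsl->length;            /* Number of sorted-set entries. */
    } else if (obj->type == OBJ_HASH && obj->encoding == OBJ_ENCODING_HT) {
        dict *ht = obj->ptr;
        return dictSize(ht);               /* Number of hash fields. */
    } else {
        return 1; /* Everything else is a single allocation. */
    }
}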

Why is asynchronous deletion not completely asynchronous in step 3? I think the answer can be glimpsed from bioCreateLazyFreeJob(), the function through which asynchronous tasks are submitted.

void bioCreateLazyFreeJob(lazy_free_fn free_fn, int arg_count, ...) {
    va_list valist;
    /* Allocate memory for the job structure and all required
     * arguments */
    struct bio_job *job = zmalloc(sizeof(*job) + sizeof(void *) * (arg_count));
    job->free_fn = free_fn;

    va_start(valist, arg_count);
    for (int i = 0; i < arg_count; i++) {
        job->free_args[i] = va_arg(valist, void *);
    }
    va_end(valist);
    bioSubmitJob(BIO_LAZY_FREE, job);
}

void bioSubmitJob(int type, struct bio_job *job) {
    job->time = time(NULL);
    /* Multiple threads touch the queue, so take the lock, then append
     * the job to the tail of the queue. */
    pthread_mutex_lock(&bio_mutex[type]);
    listAddNodeTail(bio_jobs[type],job);
    bio_pending[type]++;
    pthread_cond_signal(&bio_newjob_cond[type]);
    pthread_mutex_unlock(&bio_mutex[type]);
}

Now it makes sense: submitting an asynchronous task means taking a mutex and appending to a shared queue. If the cost of that locking and submission exceeds the cost of freeing the object directly, synchronous deletion actually performs better than asynchronous deletion, which is exactly why small objects are freed in the main thread.
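
On the other side of that queue, a background I/O thread pops jobs off and runs them. A heavily abridged sketch of the consumer loop, based on bio.c in the 6.x source (error handling and the other job types are omitted):

void *bioProcessBackgroundJobs(void *arg) {
    unsigned long type = (unsigned long) arg;
    pthread_mutex_lock(&bio_mutex[type]);
    while(1) {
        /* Sleep until a new job is signalled by bioSubmitJob(). */
        if (listLength(bio_jobs[type]) == 0) {
            pthread_cond_wait(&bio_newjob_cond[type],&bio_mutex[type]);
            continue;
        }
        /* Take the first job off the queue. */
        listNode *ln = listFirst(bio_jobs[type]);
        struct bio_job *job = ln->value;
        /* Release the lock while the (possibly slow) free runs. */
        pthread_mutex_unlock(&bio_mutex[type]);

        if (type == BIO_LAZY_FREE)
            job->free_fn(job->free_args);   /* e.g. lazyfreeFreeObject */

        zfree(job);
        pthread_mutex_lock(&bio_mutex[type]);
        listDelNode(bio_jobs[type],ln);
        bio_pending[type]--;
    }
}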

Periodic deletion by sampling

One question remains: if a piece of data is never read or written again after it is stored, real-time cleanup will never reach it, and it would occupy memory forever. For this case Redis also implements a periodic deletion policy. Again, because the Redis core runs in a single thread, it cannot afford to stop for long to do one specific chore, so each round of periodic cleanup only does a little bit of work.
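
Where is this cycle triggered? Roughly in two places; a simplified sketch of the call sites in server.c (the guarding conditions are abridged here): a fast cycle runs in beforeSleep() before each event-loop iteration, and a slow cycle runs from databasesCron(), which serverCron() drives server.hz times per second.

/* In beforeSleep(), before re-entering the event loop: */
if (server.active_expire_enabled && server.masterhost == NULL)
    activeExpireCycle(ACTIVE_EXPIRE_CYCLE_FAST);

/* In databasesCron(), called from serverCron() server.hz times/sec: */
if (server.active_expire_enabled && server.masterhost == NULL)
    activeExpireCycle(ACTIVE_EXPIRE_CYCLE_SLOW);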

/* There are two cleaning modes: fast cleanup and slow cleanup. */
void activeExpireCycle(int type) {
    /* Adjust the running parameters according to the configured expire
     * effort. The default effort is 1, and the maximum configurable effort
     * is 10. */
    unsigned long
    effort = server.active_expire_effort-1, /* Rescale from 0 to 9. */
    config_keys_per_loop = ACTIVE_EXPIRE_CYCLE_KEYS_PER_LOOP +
                           ACTIVE_EXPIRE_CYCLE_KEYS_PER_LOOP/4*effort, /* Keys sampled per loop. */
    config_cycle_fast_duration = ACTIVE_EXPIRE_CYCLE_FAST_DURATION +
                                 ACTIVE_EXPIRE_CYCLE_FAST_DURATION/4*effort, /* Duration of each fast cycle. */
    config_cycle_slow_time_perc = ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC +
                                  2*effort, /* Max share of CPU cycles used. */
    config_cycle_acceptable_stale = ACTIVE_EXPIRE_CYCLE_ACCEPTABLE_STALE-
                                    effort; /* Acceptable percentage of expired data. */

    /* This function has some global state in order to continue the work
     * incrementally across calls. */
    static unsigned int current_db = 0; /* Last DB tested. */
    static int timelimit_exit = 0;      /* Time limit hit in previous call? */
    static long long last_fast_cycle = 0; /* When last fast cycle ran. */

    int j, iteration = 0;
    int dbs_per_call = CRON_DBS_PER_CALL;
    long long start = ustime(), timelimit, elapsed;

    /* When clients are paused the dataset should be static not just from the
     * POV of clients not being able to write, but also from the POV of
     * expires and evictions of keys not being performed. */
    if (checkClientPauseTimeoutAndReturnIfPaused()) return;

    /* Fast cleanup. */
    if (type == ACTIVE_EXPIRE_CYCLE_FAST) {
        /* Don't start a fast cycle if the previous cycle did not exit
         * for time limit, unless the percentage of estimated stale keys is
         * too high. Also never repeat a fast cycle for the same period
         * as the fast cycle total duration itself. */
        /* If the last run did not trigger timelimit_exit, skip this run. */
        if (!timelimit_exit &&
            server.stat_expired_stale_perc < config_cycle_acceptable_stale)
            return;
        /* Never run two fast cycles within two fast-cycle durations. */
        if (start < last_fast_cycle + (long long)config_cycle_fast_duration*2)
            return;
        last_fast_cycle = start;
    }

    /* We usually should test CRON_DBS_PER_CALL per iteration, with
     * two exceptions:
     *
     * 1) Don't test more DBs than we have.
     * 2) If last time we hit the time limit, we want to scan all DBs
     * in this iteration, as there is work to do in some DB and we don't want
     * expired keys to use memory for too much time. */
    if (dbs_per_call > server.dbnum || timelimit_exit)
        dbs_per_call = server.dbnum;

    /* We can use at max 'config_cycle_slow_time_perc' percentage of CPU
     * time per iteration. Since this function gets called with a frequency of
     * server.hz times per second, the following is the max amount of
     * microseconds we can spend in this function.
     * config_cycle_slow_time_perc is the configured share of CPU cycles the
     * cleanup may use; here it is converted into a concrete time budget. */
    timelimit = config_cycle_slow_time_perc*1000000/server.hz/100;
    timelimit_exit = 0;
    if (timelimit <= 0) timelimit = 1;

    if (type == ACTIVE_EXPIRE_CYCLE_FAST)
        timelimit = config_cycle_fast_duration; /* in microseconds. */

    /* Accumulate some global stats as we expire keys, to have some idea
     * about the number of keys that are already logically expired, but still
     * existing inside the database. */
    long total_sampled = 0;
    long total_expired = 0;

    for (j = 0; j < dbs_per_call && timelimit_exit == 0; j++) {
        /* Expired and checked in a single loop. */
        unsigned long expired, sampled;

        redisDb *db = server.db+(current_db % server.dbnum);

        /* Increment the DB now so we are sure if we run out of time
         * in the current DB we'll restart from the next. This allows to
         * distribute the time evenly across DBs. */
        current_db++;

        /* Continue to expire if at the end of the cycle there are still
         * a big percentage of keys to expire, compared to the number of keys
         * we scanned. The percentage, stored in config_cycle_acceptable_stale
         * is not fixed, but depends on the Redis configured "expire effort". */
        do {
            unsigned long num, slots;
            long long now, ttl_sum;
            int ttl_samples;
            iteration++;

            /* If there is nothing to clean up, stop right away. */
            if ((num = dictSize(db->expires)) == 0) {
                db->avg_ttl = 0;
                break;
            }
            slots = dictSlots(db->expires);
            now = mstime();

            /* If the slot fill rate is below 1%, sampling would be too
             * expensive; skip this db and wait for a better opportunity. */
            if (num && slots > DICT_HT_INITIAL_SIZE &&
                (num*100/slots < 1)) break;

            /* Track the keys sampled and the keys expired in this round. */
            expired = 0;
            sampled = 0;
            ttl_sum = 0;
            ttl_samples = 0;

            /* Sample at most config_keys_per_loop keys per round. */
            if (num > config_keys_per_loop)
                num = config_keys_per_loop;

            /* For performance reasons we access the underlying hash table
             * implementation directly. The code duplicates parts of dict.c,
             * but that code has hardly changed in over a decade.
             *
             * Note: many buckets of the hash table are empty, so the
             * termination condition must also account for the number of
             * buckets scanned. Scanning empty buckets is very fast, though,
             * since it is a linear scan through CPU cache lines, so we can
             * afford to scan quite a few more buckets than keys. */
            long max_buckets = num*20;
            long checked_buckets = 0;

            /* Bounded both by the number of sampled keys and the number
             * of buckets checked. */
            while (sampled < num && checked_buckets < max_buckets) {
                for (int table = 0; table < 2; table++) {
                    if (table == 1 && !dictIsRehashing(db->expires)) break;

                    unsigned long idx = db->expires_cursor;
                    idx &= db->expires->ht[table].sizemask;
                    dictEntry *de = db->expires->ht[table].table[idx];
                    long long ttl;

                    /* Traverse every entry in the current bucket. */
                    checked_buckets++;
                    while(de) {
                        /* Get the next entry now since this entry may get
                         * deleted. */
                        dictEntry *e = de;
                        de = de->next;

                        ttl = dictGetSignedIntegerVal(e)-now;
                        if (activeExpireCycleTryExpire(db,e,now)) expired++;
                        if (ttl > 0) {
                            /* We want the average TTL of keys yet
                             * not expired. */
                            ttl_sum += ttl;
                            ttl_samples++;
                        }
                        sampled++;
                    }
                }
                db->expires_cursor++;
            }
            total_expired += expired;
            total_sampled += sampled;

            /* Update the TTL statistics. */
            if (ttl_samples) {
                long long avg_ttl = ttl_sum/ttl_samples;

                /* Do a simple running average with a few samples.
                 * We just use the current estimate with a weight of 2%
                 * and the previous estimate with a weight of 98%. */
                if (db->avg_ttl == 0) db->avg_ttl = avg_ttl;
                db->avg_ttl = (db->avg_ttl/50)*49 + (avg_ttl/50);
            }

            /* We can't block forever here even if there are many keys to
             * expire. So after a given amount of milliseconds return to the
             * caller waiting for the other active expire cycle.
             * We must not stay here cleaning indefinitely: if the time
             * budget is exceeded, end this cleanup cycle. */
            if ((iteration & 0xf) == 0) { /* check once every 16 iterations. */
                elapsed = ustime()-start;
                if (elapsed > timelimit) {
                    timelimit_exit = 1;
                    server.stat_expired_time_cap_reached_count++;
                    break;
                }
            }
            /* If more than config_cycle_acceptable_stale percent of the
             * sampled keys turned out to be expired, there is probably a
             * lot more to clean up, so run another sample-and-clean round. */
        } while (sampled == 0 ||
                 (expired*100/sampled) > config_cycle_acceptable_stale);
    }

    elapsed = ustime()-start;
    server.stat_expire_cycle_time_used += elapsed;
    latencyAddSampleIfNeeded("expire-cycle",elapsed/1000);

    /* Update our estimate of keys existing but yet to be expired.
     * Running average with this sample accounting for 5%. */
    double current_perc;
    if (total_sampled) {
        current_perc = (double)total_expired/total_sampled;
    } else
        current_perc = 0;
    server.stat_expired_stale_perc = (current_perc*0.05)+
                                     (server.stat_expired_stale_perc*0.95);
}

The code is a bit long, so here is a summary of the flow; see the comments in the code for details.

  1. First, several parameters are computed from the configuration (or defaults). These directly or indirectly determine the termination conditions:
  • config_keys_per_loop: the number of keys sampled per round
  • config_cycle_fast_duration: the duration of each cleanup in fast mode
  • config_cycle_slow_time_perc: the maximum share of CPU cycles (i.e. maximum CPU utilization) each cleanup may consume in slow mode
  • config_cycle_acceptable_stale: the acceptable percentage of expired data; if the share of expired keys in a sample falls below this threshold, the cleanup ends
  2. From these parameters the concrete termination values are computed (the maximum sample count and the time limit).
  3. All dbs are traversed and cleaned.
  4. Each db is cleaned in a loop, bounded by the termination conditions.
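
To make the time budget concrete, a quick worked example with what I believe are the 6.x defaults (ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC = 25 and effort = 1, so config_cycle_slow_time_perc = 25): with server.hz = 10, timelimit = 25 * 1000000 / 10 / 100 = 25,000 microseconds. Each slow cycle may therefore spend at most 25 ms, ten times per second, for an overall cap of roughly 25% of one CPU. A fast cycle is instead capped at ACTIVE_EXPIRE_CYCLE_FAST_DURATION (1,000 µs by default, scaled up by effort).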

Summary

Redis's data-expiration strategy is relatively simple and not much code, yet performance considerations show up everywhere in it. Of course, Redis only provides the mechanism; to use it well you still have to tune the expiration time to your concrete business needs and actual data, as in the example at the beginning of this article.

This article is part of my series of blog posts analyzing the Redis source code. There is also a companion Redis repository with Chinese annotations; if you want to study Redis in depth, stars and follows are welcome.
Redis with Chinese annotations: https://github.com/xindoo/Redis
Redis source analysis column: https://zxs.io/s/1h
If you found this article useful, likes, bookmarks, and shares are appreciated.
Copyright notice
This article was created by [xindoo]. Please include a link to the original when reposting. Thanks.
https://javamana.com/2021/01/20210124122712409h.html
