About Redis Configuration
Description of Redis persistence
Redis supports data persistence by default. When Redis holds data, it periodically saves that data to disk; when the Redis server restarts, it reads the persistence file specified in the configuration file and restores the data into memory.
1. RDB mode is Redis's default persistence strategy.
2. RDB mode records a snapshot of Redis's in-memory data. Each new snapshot overwrites the previous one, so RDB persistence files are small and persistence is efficient.
3. RDB mode persists periodically, so data written between snapshots may be lost.
- `save`: persists immediately and synchronously; other Redis operations block until the snapshot completes.
- `bgsave`: persists asynchronously in the background; because the operation is asynchronous, the RDB file is not guaranteed to be up to date.
Configuration:
1. Persistence file name (`dbfilename`).
2. Persistence file location (`dir`):
   - `dir ./` for a relative path.
   - `dir /usr/local/src/redis` for an absolute path.
3. RDB persistence strategy (`save` rules).
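A minimal sketch of the RDB-related directives in redis.conf; the `save` thresholds shown here are the traditional defaults, shown for illustration:

```conf
# RDB persistence settings (sketch)
dbfilename dump.rdb          # persistence file name
dir /usr/local/src/redis     # persistence file location (absolute path)
save 900 1                   # snapshot if at least 1 key changed within 900 s
save 300 10                  # snapshot if at least 10 keys changed within 300 s
save 60 10000                # snapshot if at least 10000 keys changed within 60 s
```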
1. AOF mode is off by default and must be enabled manually.
2. AOF mode operates asynchronously and records every write operation the user performs, so it can prevent data loss.
3. Because AOF records the full sequence of write operations, its persistence file is relatively large and data recovery takes longer; the persistence file needs to be rewritten (compacted) periodically.
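A minimal sketch of the AOF-related directives in redis.conf; the rewrite thresholds shown are common defaults, included for illustration:

```conf
# AOF persistence settings (sketch)
appendonly yes                     # enable AOF (off by default)
appendfilename "appendonly.aof"
appendfsync everysec               # fsync once per second, a common middle ground
auto-aof-rewrite-percentage 100    # rewrite when the file doubles in size
auto-aof-rewrite-min-size 64mb     # but only once it exceeds 64 MB
```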
How to choose:
1. If data loss is not acceptable, use AOF mode.
2. If efficiency is the priority and losing a small amount of data is acceptable, use RDB mode.
3. If you need both efficiency and data safety, configure a Redis master/replica setup: the master uses RDB mode and the replica uses AOF mode.
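The replica side of such a setup can be sketched as follows; the master address is a hypothetical value, and `replicaof` is the Redis 5+ spelling (older versions use `slaveof`):

```conf
# on the replica (sketch; master address is hypothetical)
replicaof 192.168.1.10 6379   # follow the master, which persists via RDB
appendonly yes                # the replica uses AOF mode
```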
About the Redis Memory Policy
Explanation of the memory policy
Redis stores its data in memory. If data keeps being written without bound, memory will inevitably overflow. Two common remedies:
- Add a timeout (TTL) to Redis keys wherever possible.
- Use an eviction algorithm to discard old data.
LRU algorithm
Characteristic: one of the most effective memory-eviction algorithms in practice.
LRU stands for Least Recently Used. It is a widely used data-replacement algorithm that evicts the data that has gone unused for the longest time. The algorithm attaches an access field to each entry recording the time t elapsed since the entry was last accessed; when an entry must be evicted, the one with the largest t, i.e. the least recently used entry, is chosen.
Dimension: time t
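The idea above can be sketched in Python with an ordered map; note that this is an exact LRU for illustration, whereas real Redis implements an approximate LRU that samples keys rather than tracking a full ordering:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal exact-LRU sketch: evicts the least recently used key
    once capacity is exceeded."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)   # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used entry
```

For example, with capacity 2, inserting `a` and `b`, reading `a`, then inserting `c` evicts `b`, because `b` is the entry that has gone unused the longest.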
LFU (Least Frequently Used) is a page-replacement algorithm that evicts the page with the lowest reference count, on the assumption that frequently used pages accumulate many references. However, some pages are used heavily at first and never again, yet would stay in memory for a long time. To compensate, the use counter can be shifted right one bit at regular intervals, forming an exponentially decaying average use count.
Dimension: number of uses
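The decaying use counter described above can be sketched as follows; the class and method names are illustrative, not Redis internals:

```python
class LFUDecayCounter:
    """Sketch of an LFU counter with periodic decay: each entry's count
    is shifted right one bit at intervals, so old popularity fades."""

    def __init__(self):
        self.counts = {}

    def touch(self, key):
        # called on every access to the key
        self.counts[key] = self.counts.get(key, 0) + 1

    def decay(self):
        # periodic decay: halve every counter by shifting right one bit
        for key in self.counts:
            self.counts[key] >>= 1

    def victim(self):
        # the eviction candidate is the key with the lowest use count
        return min(self.counts, key=self.counts.get)
```

A key touched heavily once and never again sees its count decay toward zero, so it does not linger in memory indefinitely.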
Random algorithm: evicts data at random.
TTL algorithm: evicts the keys closest to their expiration time first.
Redis memory data optimization policies
1. volatile-lru: apply the LRU algorithm to keys that have a timeout set.
2. allkeys-lru: apply the LRU algorithm to all keys.
3. volatile-lfu: apply the LFU algorithm to keys that have a timeout set.
4. allkeys-lfu: apply the LFU algorithm to all keys.
5. volatile-random: randomly evict keys that have a timeout set.
6. allkeys-random: randomly evict among all keys.
7. volatile-ttl: among keys with a timeout set, evict those with the shortest remaining TTL first.
8. noeviction: evict nothing; report an error when memory overflows. This is the default.
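The policies above are selected in redis.conf via `maxmemory-policy`; a minimal sketch, where the memory cap is an illustrative value:

```conf
# memory eviction settings (sketch)
maxmemory 256mb               # cap Redis memory usage (illustrative value)
maxmemory-policy allkeys-lru  # evict any key by approximate LRU
```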
About Redis Cache Problems
Problem description: if a massive number of users send requests at the same time and the Redis server then fails, the whole system may crash.
Approximate throughput:
- Tomcat server: 150-250 requests/second, around 1,000/second with JVM tuning
- Nginx: 30,000-50,000 requests/second
- Redis: about 112,000 reads/second and 86,000 writes/second, roughly 100,000/second on average
Problem description: under high concurrency, users request data that does not exist in the database, so every request bypasses the cache and hits the database; this is called cache penetration.
Solution: apply IP rate limiting, either in Nginx or via an API gateway in a microservice architecture.
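The Nginx-based IP rate limiting mentioned above can be sketched with the `limit_req` module; the zone name, rate, and upstream name here are illustrative assumptions:

```conf
# nginx.conf sketch: per-IP rate limiting
limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;

server {
    location / {
        limit_req zone=per_ip burst=20;  # queue short bursts, reject the rest
        proxy_pass http://backend;       # hypothetical upstream name
    }
}
```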
Problem description: under high concurrency, data that used to be cached in memory becomes invalid for some special reason (the key expired, or the data was accidentally deleted), so the Redis cache misses and a large number of user requests hit the database directly; this is known as cache breakdown.
1. When setting timeouts, do not give keys the same expiration time.
2. Set up multi-level caching.
Problem description: under high concurrency, a large amount of cached data expires at once, so the Redis hit rate drops sharply and user requests access the database (the server) directly, causing it to crash; this is called a cache avalanche.
1. Do not set the same timeout for many keys; add a random offset to each expiration time.
2. Set up multi-level caching.
3. Improve the Redis cache hit rate by adjusting the memory-eviction strategy, for example using LRU.
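Point 1 above can be sketched as a small helper that adds random jitter to a base TTL; the function name and values are illustrative:

```python
import random

def randomized_ttl(base_seconds, jitter_seconds):
    """Spread expirations by adding random jitter to a base TTL,
    so many keys written together do not all expire at the same moment."""
    return base_seconds + random.randint(0, jitter_seconds)
```

Each key then expires at a slightly different time, so a burst of writes does not produce a matching burst of expirations and database traffic.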