A detailed guide to Lettuce, an advanced Redis client

ThrowableDoge 2021-02-23 16:16:16


Preface

Lettuce is a Java driver for Redis. I first ran into it while debugging some low-level issues with RedisTemplate and discovered that the driver behind spring-data-redis had been switched to Lettuce. The name literally means lettuce, the vegetable you eat, which is exactly what its logo depicts.

Since it has been adopted by the Spring ecosystem, Lettuce must have something going for it, so I took the time to read its official documentation, put together test samples and wrote this article. The versions used here are Lettuce 5.1.8.RELEASE, SpringBoot 2.1.8.RELEASE and JDK 8/11. Fair warning: the article took two weeks of on-and-off work to finish and runs to more than 40,000 characters...

A brief introduction to Lettuce

Lettuce is a high-performance Redis driver written in Java. It integrates Project Reactor underneath to provide native reactive programming, uses Netty with non-blocking I/O as its communication layer, and since the 5.x line embraces the asynchronous programming features of JDK 1.8. On top of that high-performance core it offers a very rich and easy-to-use API. The new features of the 5.1 release include:

  • Support for the new Redis commands ZPOPMIN, ZPOPMAX, BZPOPMIN and BZPOPMAX.
  • Tracing of Redis command execution via the Brave module.
  • Support for Redis Streams.
  • Asynchronous master/replica connections.
  • Asynchronous connection pooling.
  • A new at-most-once command execution mode (no automatic reconnection).
  • Global command timeout settings (also effective for asynchronous and reactive commands).
  • ... and more.

Note that the Redis server must be at least version 2.6; the newer the better, since API compatibility improves with newer versions.

You only need to introduce a single dependency to start using Lettuce:

  • Maven
<dependency>
<groupId>io.lettuce</groupId>
<artifactId>lettuce-core</artifactId>
<version>5.1.8.RELEASE</version>
</dependency>
  • Gradle
dependencies {
compile 'io.lettuce:lettuce-core:5.1.8.RELEASE'
}

Connecting to Redis

Standalone, sentinel and cluster modes all need a unified way to describe the connection details; in Lettuce that unified representation is RedisURI. There are three ways to construct a RedisURI instance:

  • A URI string:
RedisURI uri = RedisURI.create("redis://localhost/");
  • The builder (RedisURI.Builder):
RedisURI uri = RedisURI.builder().withHost("localhost").withPort(6379).build();
  • Directly through a constructor:
RedisURI uri = new RedisURI("localhost", 6379, 60, TimeUnit.SECONDS);

Custom URI string syntax

  • Standalone (prefix redis://)
 Format: redis://[password@]host[:port][/databaseNumber][?timeout=timeout[d|h|m|s|ms|us|ns]]
Complete: redis://mypassword@127.0.0.1:6379/0?timeout=10s
Simple: redis://localhost
  • Standalone with SSL (prefix rediss:// — note the extra s)
 Format: rediss://[password@]host[:port][/databaseNumber][?timeout=timeout[d|h|m|s|ms|us|ns]]
Complete: rediss://mypassword@127.0.0.1:6379/0?timeout=10s
Simple: rediss://localhost
  • Standalone over Unix Domain Sockets (prefix redis-socket://)
 Format: redis-socket://path[?[timeout=timeout[d|h|m|s|ms|us|ns]][&_database=database_]]
Complete: redis-socket:///tmp/redis?timeout=10s&_database=0
  • Sentinel (prefix redis-sentinel://)
 Format: redis-sentinel://[password@]host[:port][,host2[:port2]][/databaseNumber][?timeout=timeout[d|h|m|s|ms|us|ns]]#sentinelMasterId
Complete: redis-sentinel://mypassword@127.0.0.1:6379,127.0.0.1:6380/0?timeout=10s#mymaster

Timeout units:

  • d: days
  • h: hours
  • m: minutes
  • s: seconds
  • ms: milliseconds
  • us: microseconds
  • ns: nanoseconds

I recommend using the builder that RedisURI provides: hand-written URI strings look simple but are easy to get wrong. Since I have no use cases for SSL or Unix Domain Sockets, those two connection methods are not demonstrated further below.
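For reference, here is a hedged sketch of what the SSL and sentinel URI strings above look like when expressed through the builder; the host, port, password and master-id values are placeholders only, not taken from a real deployment.

// Builder equivalents of the rediss:// and redis-sentinel:// URI strings above (placeholder values)
RedisURI sslUri = RedisURI.builder()
        .withHost("localhost")
        .withPort(6379)
        .withSsl(true)                          // rediss://
        .withPassword("mypassword")
        .withTimeout(Duration.ofSeconds(10))
        .build();
RedisURI sentinelUri = RedisURI.Builder
        .sentinel("127.0.0.1", 26379, "mymaster")   // redis-sentinel://...#mymaster
        .withPassword("mypassword")
        .build();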

Basic use

Lettuce usage revolves around four components:

  • RedisURI: the connection information.
  • RedisClient: the Redis client; cluster connections use a dedicated RedisClusterClient.
  • Connection: the Redis connection, mainly subclasses of StatefulConnection or StatefulRedisConnection. The concrete type depends on how you connect (standalone, sentinel, cluster, pub/sub and so on) and is the part that matters most.
  • RedisCommands: the command API, covering essentially all commands of the corresponding Redis release in synchronous (sync), asynchronous (async) and reactive (reactive) flavours. As a user you will mostly work with the RedisCommands family of interfaces.

A basic example is as follows :

@Test
public void testSetGet() throws Exception {
RedisURI redisUri = RedisURI.builder() // <1> Create connection information for a stand-alone connection 
.withHost("localhost")
.withPort(6379)
.withTimeout(Duration.of(10, ChronoUnit.SECONDS))
.build();
RedisClient redisClient = RedisClient.create(redisUri); // <2> Create client 
StatefulRedisConnection<String, String> connection = redisClient.connect(); // <3> Create thread safe connections 
RedisCommands<String, String> redisCommands = connection.sync(); // <4> Create synchronization command 
SetArgs setArgs = SetArgs.Builder.nx().ex(5);
String result = redisCommands.set("name", "throwable", setArgs);
Assertions.assertThat(result).isEqualToIgnoringCase("OK");
result = redisCommands.get("name");
Assertions.assertThat(result).isEqualTo("throwable");
// ... Other operating 
connection.close(); // <5> Close the connection 
redisClient.shutdown(); // <6> Close client 
}

Notes:

  • <5>: A connection is normally closed only when the application shuts down. A single Redis driver instance in one application does not need many connections (one connection instance is generally enough; use a connection pool if you really need several). Redis currently handles commands on a single thread anyway, so in theory multi-threaded calls over multiple client connections bring no extra benefit.
  • <6>: Likewise, shut the client down just before the application stops. Where possible follow the open-first-close-last principle: shut the client down after the connection has been closed.

API

Lettuce exposes three flavours of API:

  • Synchronous (sync): RedisCommands.
  • Asynchronous (async): RedisAsyncCommands.
  • Reactive (reactive): RedisReactiveCommands.

First prepare a standalone Redis connection for the examples that follow:

private static StatefulRedisConnection<String, String> CONNECTION;
private static RedisClient CLIENT;
@BeforeClass
public static void beforeClass() {
RedisURI redisUri = RedisURI.builder()
.withHost("localhost")
.withPort(6379)
.withTimeout(Duration.of(10, ChronoUnit.SECONDS))
.build();
CLIENT = RedisClient.create(redisUri);
CONNECTION = CLIENT.connect();
}
@AfterClass
public static void afterClass() throws Exception {
CONNECTION.close();
CLIENT.shutdown();
}

The concrete command API implementations can be obtained directly from the StatefulRedisConnection instance; see its interface definition:

public interface StatefulRedisConnection<K, V> extends StatefulConnection<K, V> {
boolean isMulti();
RedisCommands<K, V> sync();
RedisAsyncCommands<K, V> async();
RedisReactiveCommands<K, V> reactive();
}

It is worth noting that when no RedisCodec is specified, the StatefulRedisConnection created by RedisClient is the generic StatefulRedisConnection<String, String>, i.e. both the KEY and the VALUE of every command API are Strings. That covers most use cases; if necessary you can plug in your own RedisCodec<K, V>.
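For instance, a minimal custom codec that keeps String keys but stores values as raw byte arrays might look roughly like the sketch below; the class name is made up, error handling is omitted, and RedisCodec lives in io.lettuce.core.codec (ByteBuffer and StandardCharsets come from java.nio).

// Minimal RedisCodec sketch: String keys, byte[] values (illustrative only)
public class StringByteArrayCodec implements RedisCodec<String, byte[]> {

    @Override
    public String decodeKey(ByteBuffer bytes) {
        return StandardCharsets.UTF_8.decode(bytes).toString();
    }

    @Override
    public byte[] decodeValue(ByteBuffer bytes) {
        byte[] value = new byte[bytes.remaining()];
        bytes.get(value);
        return value;
    }

    @Override
    public ByteBuffer encodeKey(String key) {
        return StandardCharsets.UTF_8.encode(key);
    }

    @Override
    public ByteBuffer encodeValue(byte[] value) {
        return ByteBuffer.wrap(value);
    }
}

// Usage: pass the codec when opening the connection
// StatefulRedisConnection<String, byte[]> connection = redisClient.connect(new StringByteArrayCodec());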

Synchronous API

First build a RedisCommands instance:

private static RedisCommands<String, String> COMMAND;
@BeforeClass
public static void beforeClass() {
COMMAND = CONNECTION.sync();
}

Basic use :

@Test
public void testSyncPing() throws Exception {
String pong = COMMAND.ping();
Assertions.assertThat(pong).isEqualToIgnoringCase("PONG");
}
@Test
public void testSyncSetAndGet() throws Exception {
SetArgs setArgs = SetArgs.Builder.nx().ex(5);
COMMAND.set("name", "throwable", setArgs);
String value = COMMAND.get("name");
log.info("Get value: {}", value);
}
// Get value: throwable

With the synchronous API every command call returns its result immediately. If you are familiar with Jedis, RedisCommands will feel very similar.

Asynchronous API

First build a RedisAsyncCommands instance:

private static RedisAsyncCommands<String, String> ASYNC_COMMAND;
@BeforeClass
public static void beforeClass() {
ASYNC_COMMAND = CONNECTION.async();
}

Basic use :

@Test
public void testAsyncPing() throws Exception {
RedisFuture<String> redisFuture = ASYNC_COMMAND.ping();
log.info("Ping result:{}", redisFuture.get());
}
// Ping result:PONG

Every RedisAsyncCommands method returns a RedisFuture, whose interface is defined as follows:

public interface RedisFuture<V> extends CompletionStage<V>, Future<V> {
String getError();
boolean await(long timeout, TimeUnit unit) throws InterruptedException;
}

In other words, a RedisFuture can seamlessly use the methods of Future or of the CompletableFuture/CompletionStage API introduced in JDK 1.8. For example:

@Test
public void testAsyncSetAndGet1() throws Exception {
SetArgs setArgs = SetArgs.Builder.nx().ex(5);
RedisFuture<String> future = ASYNC_COMMAND.set("name", "throwable", setArgs);
// CompletableFuture#thenAccept()
future.thenAccept(value -> log.info("Set Command return :{}", value));
// Future#get()
future.get();
}
// Set Command return :OK
@Test
public void testAsyncSetAndGet2() throws Exception {
SetArgs setArgs = SetArgs.Builder.nx().ex(5);
CompletableFuture<Void> result =
(CompletableFuture<Void>) ASYNC_COMMAND.set("name", "throwable", setArgs)
.thenAcceptBoth(ASYNC_COMMAND.get("name"),
(s, g) -> {
log.info("Set Command return :{}", s);
log.info("Get Command return :{}", g);
});
result.get();
}
// Set Command return :OK
// Get Command return :throwable

If you are comfortable with CompletableFuture and functional programming, you can combine multiple RedisFutures into fairly complex operations.
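For instance, two independent reads can be joined once both complete. A small sketch reusing the ASYNC_COMMAND instance above; the keys are made up for illustration.

@Test
public void testAsyncCombine() throws Exception {
    // Both GETs are issued independently; thenCombine joins the results once both have completed
    RedisFuture<String> firstName = ASYNC_COMMAND.get("first-name");
    RedisFuture<String> lastName = ASYNC_COMMAND.get("last-name");
    CompletableFuture<String> fullName = firstName
            .thenCombine(lastName, (first, last) -> first + " " + last)
            .toCompletableFuture();
    log.info("Full name:{}", fullName.get());
}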

Reactive API

The reactive programming framework Lettuce builds on is Project Reactor; if you have no reactive programming experience, it is worth reading up on Project Reactor first.
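If those types are new to you, the short version is: a Mono emits at most one element, a Flux emits zero to many, and nothing runs until something subscribes. A tiny, Redis-free sketch (Mono and Flux come from reactor.core.publisher):

// Plain Project Reactor, no Redis involved: values are only emitted once subscribe() is called
Mono.just("PONG").subscribe(v -> log.info("Mono value:{}", v));                    // 0..1 elements
Flux.just("bread", "meat", "fish").subscribe(v -> log.info("Flux value:{}", v));  // 0..N elements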

Build a RedisReactiveCommands instance:

private static RedisReactiveCommands<String, String> REACTIVE_COMMAND;
@BeforeClass
public static void beforeClass() {
REACTIVE_COMMAND = CONNECTION.reactive();
}

Following Project Reactor conventions, a RedisReactiveCommands method returns a Mono when the result contains zero or one element, and a Flux when the result contains zero to N (N > 0) elements. For example:

@Test
public void testReactivePing() throws Exception {
Mono<String> ping = REACTIVE_COMMAND.ping();
ping.subscribe(v -> log.info("Ping result:{}", v));
Thread.sleep(1000);
}
// Ping result:PONG
@Test
public void testReactiveSetAndGet() throws Exception {
SetArgs setArgs = SetArgs.Builder.nx().ex(5);
REACTIVE_COMMAND.set("name", "throwable", setArgs).block();
REACTIVE_COMMAND.get("name").subscribe(value -> log.info("Get Command return :{}", value));
Thread.sleep(1000);
}
// Get Command return :throwable
@Test
public void testReactiveSet() throws Exception {
REACTIVE_COMMAND.sadd("food", "bread", "meat", "fish").block();
Flux<String> flux = REACTIVE_COMMAND.smembers("food");
flux.subscribe(log::info);
REACTIVE_COMMAND.srem("food", "bread", "meat", "fish").block();
Thread.sleep(1000);
}
// meat
// bread
// fish

A more involved example, covering transactions, function composition and so on:

@Test
public void testReactiveFunctional() throws Exception {
REACTIVE_COMMAND.multi().doOnSuccess(r -> {
REACTIVE_COMMAND.set("counter", "1").doOnNext(log::info).subscribe();
REACTIVE_COMMAND.incr("counter").doOnNext(c -> log.info(String.valueOf(c))).subscribe();
}).flatMap(s -> REACTIVE_COMMAND.exec())
.doOnNext(transactionResult -> log.info("Discarded:{}", transactionResult.wasDiscarded()))
.subscribe();
Thread.sleep(1000);
}
// OK
// 2
// Discarded:false

This test opens a transaction, first sets counter to 1, then increments counter by 1.

Publish and subscribe

In non-cluster mode, publish/subscribe relies on the dedicated connection type StatefulRedisPubSubConnection; in cluster mode it relies on StatefulRedisClusterPubSubConnection. They are obtained from the RedisClient#connectPubSub() and RedisClusterClient#connectPubSub() families of methods.

  • Non-cluster mode:
// The client can be a standalone, plain master/replica, sentinel or any other non-cluster client
RedisClient client = ...
StatefulRedisPubSubConnection<String, String> connection = client.connectPubSub();
connection.addListener(new RedisPubSubListener<String, String>() { ... });
// Synchronization command 
RedisPubSubCommands<String, String> sync = connection.sync();
sync.subscribe("channel");
// Asynchronous command 
RedisPubSubAsyncCommands<String, String> async = connection.async();
RedisFuture<Void> future = async.subscribe("channel");
// Reactive command 
RedisPubSubReactiveCommands<String, String> reactive = connection.reactive();
reactive.subscribe("channel").subscribe();
reactive.observeChannels().doOnNext(patternMessage -> {...}).subscribe()
  • Cluster mode:
// Usage is basically the same as in non-cluster mode
RedisClusterClient clusterClient = ...
StatefulRedisClusterPubSubConnection<String, String> connection = clusterClient.connectPubSub();
connection.addListener(new RedisPubSubListener<String, String>() { ... });
RedisPubSubCommands<String, String> sync = connection.sync();
sync.subscribe("channel");
// ...

Here is a standalone, synchronous-command example of Redis keyspace notifications (Redis Keyspace Notifications):

@Test
public void testSyncKeyspaceNotification() throws Exception {
RedisURI redisUri = RedisURI.builder()
.withHost("localhost")
.withPort(6379)
// Note: this must be database 0, matching the __keyevent@0__ channel subscribed below
.withDatabase(0)
.withTimeout(Duration.of(10, ChronoUnit.SECONDS))
.build();
RedisClient redisClient = RedisClient.create(redisUri);
StatefulRedisConnection<String, String> redisConnection = redisClient.connect();
RedisCommands<String, String> redisCommands = redisConnection.sync();
// Only listen for key-expired events
redisCommands.configSet("notify-keyspace-events", "Ex");
StatefulRedisPubSubConnection<String, String> connection = redisClient.connectPubSub();
connection.addListener(new RedisPubSubAdapter<>() {
@Override
public void psubscribed(String pattern, long count) {
log.info("pattern:{},count:{}", pattern, count);
}
@Override
public void message(String pattern, String channel, String message) {
log.info("pattern:{},channel:{},message:{}", pattern, channel, message);
}
});
RedisPubSubCommands<String, String> commands = connection.sync();
commands.psubscribe("__keyevent@0__:expired");
redisCommands.setex("name", 2, "throwable");
Thread.sleep(10000);
redisConnection.close();
connection.close();
redisClient.shutdown();
}
// pattern:__keyevent@0__:expired,count:1
// pattern:__keyevent@0__:expired,channel:__keyevent@0__:expired,message:name

In practice the RedisPubSubListener implementation should be pulled out into its own class; try to avoid anonymous inner classes.
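For instance, a small standalone listener could look roughly like this; the class name is made up, and extending RedisPubSubAdapter means only the callbacks you care about need overriding.

// A named listener instead of an anonymous inner class (illustrative name; assumes an Slf4j logger, e.g. Lombok @Slf4j)
public class KeyExpiredListener extends RedisPubSubAdapter<String, String> {

    @Override
    public void psubscribed(String pattern, long count) {
        log.info("pattern:{},count:{}", pattern, count);
    }

    @Override
    public void message(String pattern, String channel, String message) {
        log.info("Expired key:{}", message);
    }
}

// registration then becomes: connection.addListener(new KeyExpiredListener());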

Transaction and batch command execution

The transaction-related commands are WATCH, UNWATCH, MULTI, EXEC and DISCARD, all of which have corresponding methods in the RedisCommands family of interfaces. For example:

// Synchronous mode 
@Test
public void testSyncMulti() throws Exception {
COMMAND.multi();
COMMAND.setex("name-1", 2, "throwable");
COMMAND.setex("name-2", 2, "doge");
TransactionResult result = COMMAND.exec();
int index = 0;
for (Object r : result) {
log.info("Result-{}:{}", index, r);
index++;
}
}
// Result-0:OK
// Result-1:OK

Redis's Pipeline mechanism can be understood as packing multiple commands into a single request to the Redis server, which then packs all the responses and returns them in one go, saving network resources (mainly by reducing the number of network round trips). Redis does not prescribe how a client must implement pipelining, and there is no dedicated command for it. Jedis uses blocking I/O (BIO) underneath, so its pipeline buffers the commands to be sent on the client side and finally fires one huge command batch synchronously, then receives and parses one huge response batch. In Lettuce, pipelining is transparent to the user: because the underlying communication framework is Netty, Lettuce barely needs to intervene in network-level optimisation. Put another way, Netty gives Lettuce a Redis pipeline at the transport level for free. That said, the asynchronous API also provides manual flush control:

@Test
public void testAsyncManualFlush() {
// Cancel automatic flush
ASYNC_COMMAND.setAutoFlushCommands(false);
List<RedisFuture<?>> redisFutures = Lists.newArrayList();
int count = 5000;
for (int i = 0; i < count; i++) {
String key = "key-" + (i + 1);
String value = "value-" + (i + 1);
redisFutures.add(ASYNC_COMMAND.set(key, value));
redisFutures.add(ASYNC_COMMAND.expire(key, 2));
}
long start = System.currentTimeMillis();
ASYNC_COMMAND.flushCommands();
boolean result = LettuceFutures.awaitAll(10, TimeUnit.SECONDS, redisFutures.toArray(new RedisFuture[0]));
Assertions.assertThat(result).isTrue();
log.info("Lettuce cost:{} ms", System.currentTimeMillis() - start);
}
// Lettuce cost:1302 ms

That is the theory according to the documentation; reality is less kind. Comparing against the Pipeline API that Jedis provides, the Jedis pipeline turned out to take far less time to execute:

@Test
public void testJedisPipeline() throws Exception {
Jedis jedis = new Jedis();
Pipeline pipeline = jedis.pipelined();
int count = 5000;
for (int i = 0; i < count; i++) {
String key = "key-" + (i + 1);
String value = "value-" + (i + 1);
pipeline.set(key, value);
pipeline.expire(key, 2);
}
long start = System.currentTimeMillis();
pipeline.syncAndReturnAll();
log.info("Jedis cost:{} ms", System.currentTimeMillis() - start);
}
// Jedis cost:9 ms

My guess is that Lettuce does not merge all the commands into one batch underneath (it may even send them one by one); packet capture would be needed to confirm. From this point of view, if you really have scenarios that fire a large number of Redis commands at once, the Jedis Pipeline may be the better fit.

Note: by the same reasoning, RedisTemplate's executePipelined() method is a "fake" pipeline when Lettuce is the driver; keep this in mind when using RedisTemplate.

Lua Script execution

The synchronous interface Lettuce provides for Redis's Lua commands looks like this:

public interface RedisScriptingCommands<K, V> {
<T> T eval(String var1, ScriptOutputType var2, K... var3);
<T> T eval(String var1, ScriptOutputType var2, K[] var3, V... var4);
<T> T evalsha(String var1, ScriptOutputType var2, K... var3);
<T> T evalsha(String var1, ScriptOutputType var2, K[] var3, V... var4);
List<Boolean> scriptExists(String... var1);
String scriptFlush();
String scriptKill();
String scriptLoad(V var1);
String digest(V var1);
}

The asynchronous and reactive interfaces define similar methods and differ only in return types. The methods you will use most are eval(), evalsha() and scriptLoad(). A simple example:

private static RedisCommands<String, String> COMMANDS;
private static String RAW_LUA = "local key = KEYS[1]\n" +
"local value = ARGV[1]\n" +
"local timeout = ARGV[2]\n" +
"redis.call('SETEX', key, tonumber(timeout), value)\n" +
"local result = redis.call('GET', key)\n" +
"return result;";
private static AtomicReference<String> LUA_SHA = new AtomicReference<>();
@Test
public void testLua() throws Exception {
LUA_SHA.compareAndSet(null, COMMANDS.scriptLoad(RAW_LUA));
String[] keys = new String[]{"name"};
String[] args = new String[]{"throwable", "5000"};
String result = COMMANDS.evalsha(LUA_SHA.get(), ScriptOutputType.VALUE, keys, args);
log.info("Get value:{}", result);
}
// Get value:throwable
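One detail this example glosses over: if Redis restarts or the script cache is flushed, EVALSHA fails with a NOSCRIPT error. Below is a hedged sketch of the usual fallback, catching Lettuce's RedisCommandExecutionException and re-running the raw script with eval(); the exception type and message check are my assumption about how the error surfaces, so verify against your version.

// Assumed NOSCRIPT handling: fall back to EVAL and re-load the script if the cache was flushed
String result;
try {
    result = COMMANDS.evalsha(LUA_SHA.get(), ScriptOutputType.VALUE, keys, args);
} catch (RedisCommandExecutionException e) {
    if (e.getMessage() != null && e.getMessage().startsWith("NOSCRIPT")) {
        LUA_SHA.set(COMMANDS.scriptLoad(RAW_LUA));
        result = COMMANDS.eval(RAW_LUA, ScriptOutputType.VALUE, keys, args);
    } else {
        throw e;
    }
}
log.info("Get value:{}", result);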

High availability and sharding

To make Redis highly available you will generally use master/replica replication (called plain master/replica mode here, i.e. replication only, with manual failover), sentinels, or a cluster. Plain master/replica mode can run on its own or together with sentinels; the sentinels merely add automatic failover and master promotion. Both plain master/replica and sentinel setups go through MasterSlave, which takes a RedisClient, a codec and one or more RedisURIs and returns the corresponding connection instance.

Note: if the MasterSlave method you call takes a single RedisURI, Lettuce runs its topology discovery mechanism and fetches the master/replica node information automatically; if you pass in a collection of RedisURIs, then for plain master/replica mode all node information is static and will neither be discovered nor refreshed.

The topology discovery rules are as follows:

  • For plain master/replica (Master/Replica) mode it does not matter whether the RedisURI points at the master or a replica: a one-off topology lookup collects all node information, which is then kept in a static cache and never refreshed.
  • For sentinel mode, all sentinel instances are subscribed to and their pub/sub messages trigger the topology refresh mechanism, which updates the cached node information. In other words, sentinel connections discover nodes dynamically by nature; static configuration is not supported.

The topology discovery API is TopologyProvider; refer to its concrete implementations if you need to understand how it works.

For cluster (Cluster) mode, Lettuce provides a separate, independent set of APIs.

In addition, when a Lettuce connection spans more than a single Redis node, the connection instance lets you set a read preference (ReadFrom). The options are:

  • MASTER: read only from the master node.
  • MASTER_PREFERRED: prefer reading from the master node.
  • SLAVE_PREFERRED: prefer reading from replica (slave) nodes.
  • SLAVE: read only from replica (slave) nodes.
  • NEAREST: read from the most recently connected Redis instance.

Plain master/replica mode

Suppose three Redis servers form the following chained master/replica relationship:

  • Node one: localhost:6379, role Master.
  • Node two: localhost:6380, role Replica, replicating node one.
  • Node three: localhost:6381, role Replica, replicating node two.

To connect with dynamic discovery of the master/replica topology:

@Test
public void testDynamicReplica() throws Exception {
// Only one node's connection information is needed; it does not have to be the master, a replica works too
RedisURI uri = RedisURI.builder().withHost("localhost").withPort(6379).build();
RedisClient redisClient = RedisClient.create(uri);
StatefulRedisMasterSlaveConnection<String, String> connection = MasterSlave.connect(redisClient, new Utf8StringCodec(), uri);
// Only read data from replica nodes
connection.setReadFrom(ReadFrom.SLAVE);
// Carry out other Redis command 
connection.close();
redisClient.shutdown();
}

If you would rather specify the master/replica connection properties statically, build the connection like this:

@Test
public void testStaticReplica() throws Exception {
List<RedisURI> uris = new ArrayList<>();
RedisURI uri1 = RedisURI.builder().withHost("localhost").withPort(6379).build();
RedisURI uri2 = RedisURI.builder().withHost("localhost").withPort(6380).build();
RedisURI uri3 = RedisURI.builder().withHost("localhost").withPort(6381).build();
uris.add(uri1);
uris.add(uri2);
uris.add(uri3);
RedisClient redisClient = RedisClient.create();
StatefulRedisMasterSlaveConnection<String, String> connection = MasterSlave.connect(redisClient,
new Utf8StringCodec(), uris);
// Read data only from the master node 
connection.setReadFrom(ReadFrom.MASTER);
// Carry out other Redis command 
connection.close();
redisClient.shutdown();
}

Sentinel mode

Because Lettuce ships with sentinel topology discovery, you only need a RedisURI pointing at one sentinel node:

@Test
public void testDynamicSentinel() throws Exception {
RedisURI redisUri = RedisURI.builder()
.withPassword(" Your password ")
.withSentinel("localhost", 26379)
.withSentinelMasterId(" sentry Master Of ID")
.build();
RedisClient redisClient = RedisClient.create();
StatefulRedisMasterSlaveConnection<String, String> connection = MasterSlave.connect(redisClient, new Utf8StringCodec(), redisUri);
// Only reading from replica nodes is allowed
connection.setReadFrom(ReadFrom.SLAVE);
RedisCommands<String, String> command = connection.sync();
SetArgs setArgs = SetArgs.Builder.nx().ex(5);
command.set("name", "throwable", setArgs);
String value = command.get("name");
log.info("Get value:{}", value);
}
// Get value:throwable

Cluster mode

Since I am not that familiar with Redis cluster mode, and the cluster-mode APIs come with more usage restrictions, this is only a brief tour of how to use them. A few features first:

The following APIs provide cross-slot (Slot) calls:

  • RedisAdvancedClusterCommands.
  • RedisAdvancedClusterAsyncCommands.
  • RedisAdvancedClusterReactiveCommands.

Static node selection:

  • masters: run the command on all master nodes.
  • slaves: run the command on all replica nodes (effectively read-only mode).
  • all nodes: run the command on all nodes.

Dynamic refresh of the cluster topology view:

  • Manual refresh, by calling RedisClusterClient#reloadPartitions().
  • Periodic refresh in the background.
  • Adaptive refresh, triggered automatically by disconnects and MOVED/ASK redirections.

See the official documentation for how to build a Redis cluster. Assume the following cluster is already running (192.168.56.200 is my virtual machine host):

  • 192.168.56.200:7001 => master node, slots 0-5460.
  • 192.168.56.200:7002 => master node, slots 5461-10922.
  • 192.168.56.200:7003 => master node, slots 10923-16383.
  • 192.168.56.200:7004 => replica of 7001.
  • 192.168.56.200:7005 => replica of 7002.
  • 192.168.56.200:7006 => replica of 7003.

Simple cluster connection and usage are as follows :

@Test
public void testSyncCluster(){
RedisURI uri = RedisURI.builder().withHost("192.168.56.200").build();
RedisClusterClient redisClusterClient = RedisClusterClient.create(uri);
StatefulRedisClusterConnection<String, String> connection = redisClusterClient.connect();
RedisAdvancedClusterCommands<String, String> commands = connection.sync();
commands.setex("name",10, "throwable");
String value = commands.get("name");
log.info("Get value:{}", value);
}
// Get value:throwable

Node selection :

@Test
public void testSyncNodeSelection() {
RedisURI uri = RedisURI.builder().withHost("192.168.56.200").withPort(7001).build();
RedisClusterClient redisClusterClient = RedisClusterClient.create(uri);
StatefulRedisClusterConnection<String, String> connection = redisClusterClient.connect();
RedisAdvancedClusterCommands<String, String> commands = connection.sync();
// commands.all(); // All nodes 
// commands.masters(); // Master node 
// Read-only replica nodes
NodeSelection<String, String> replicas = commands.slaves();
NodeSelectionCommands<String, String> nodeSelectionCommands = replicas.commands();
// For demonstration only; in production the KEYS * command should normally be disabled
Executions<List<String>> keys = nodeSelectionCommands.keys("*");
keys.forEach(key -> log.info("key: {}", key));
connection.close();
redisClusterClient.shutdown();
}

Refresh the cluster topology view periodically (every ten minutes here; choose the interval yourself, just not too frequent):

@Test
public void testPeriodicClusterTopology() throws Exception {
RedisURI uri = RedisURI.builder().withHost("192.168.56.200").withPort(7001).build();
RedisClusterClient redisClusterClient = RedisClusterClient.create(uri);
ClusterTopologyRefreshOptions options = ClusterTopologyRefreshOptions
.builder()
.enablePeriodicRefresh(Duration.of(10, ChronoUnit.MINUTES))
.build();
redisClusterClient.setOptions(ClusterClientOptions.builder().topologyRefreshOptions(options).build());
StatefulRedisClusterConnection<String, String> connection = redisClusterClient.connect();
RedisAdvancedClusterCommands<String, String> commands = connection.sync();
commands.setex("name", 10, "throwable");
String value = commands.get("name");
log.info("Get value:{}", value);
Thread.sleep(Integer.MAX_VALUE);
connection.close();
redisClusterClient.shutdown();
}

Update cluster topology view adaptively :

@Test
public void testAdaptiveClusterTopology() throws Exception {
RedisURI uri = RedisURI.builder().withHost("192.168.56.200").withPort(7001).build();
RedisClusterClient redisClusterClient = RedisClusterClient.create(uri);
ClusterTopologyRefreshOptions options = ClusterTopologyRefreshOptions.builder()
.enableAdaptiveRefreshTrigger(
ClusterTopologyRefreshOptions.RefreshTrigger.MOVED_REDIRECT,
ClusterTopologyRefreshOptions.RefreshTrigger.PERSISTENT_RECONNECTS
)
.adaptiveRefreshTriggersTimeout(Duration.of(30, ChronoUnit.SECONDS))
.build();
redisClusterClient.setOptions(ClusterClientOptions.builder().topologyRefreshOptions(options).build());
StatefulRedisClusterConnection<String, String> connection = redisClusterClient.connect();
RedisAdvancedClusterCommands<String, String> commands = connection.sync();
commands.setex("name", 10, "throwable");
String value = commands.get("name");
log.info("Get value:{}", value);
Thread.sleep(Integer.MAX_VALUE);
connection.close();
redisClusterClient.shutdown();
}

Dynamic commands and custom commands

Custom commands are still limited to the finite set of Redis commands, but you choose the KEY, ARGV, command type, codec and output type yourself, relying on the dispatch() method:

// Implement PING as a custom command
@Test
public void testCustomPing() throws Exception {
RedisURI redisUri = RedisURI.builder()
.withHost("localhost")
.withPort(6379)
.withTimeout(Duration.of(10, ChronoUnit.SECONDS))
.build();
RedisClient redisClient = RedisClient.create(redisUri);
StatefulRedisConnection<String, String> connect = redisClient.connect();
RedisCommands<String, String> sync = connect.sync();
RedisCodec<String, String> codec = StringCodec.UTF8;
String result = sync.dispatch(CommandType.PING, new StatusOutput<>(codec));
log.info("PING:{}", result);
connect.close();
redisClient.shutdown();
}
// PING:PONG
// Implement SETEX as a custom command
@Test
public void testCustomSet() throws Exception {
RedisURI redisUri = RedisURI.builder()
.withHost("localhost")
.withPort(6379)
.withTimeout(Duration.of(10, ChronoUnit.SECONDS))
.build();
RedisClient redisClient = RedisClient.create(redisUri);
StatefulRedisConnection<String, String> connect = redisClient.connect();
RedisCommands<String, String> sync = connect.sync();
RedisCodec<String, String> codec = StringCodec.UTF8;
sync.dispatch(CommandType.SETEX, new StatusOutput<>(codec),
new CommandArgs<>(codec).addKey("name").add(5).addValue("throwable"));
String result = sync.get("name");
log.info("Get value:{}", result);
connect.close();
redisClient.shutdown();
}
// Get value:throwable

Dynamic commands are likewise built on the finite set of Redis commands, but use annotations and dynamic proxies to compose more complex command signatures. The relevant annotations live under the io.lettuce.core.dynamic.annotation package. A quick example:

public interface CustomCommand extends Commands {
// SET [key] [value]
@Command("SET ?0 ?1")
String setKey(String key, String value);
// SET [key] [value]
@Command("SET :key :value")
String setKeyNamed(@Param("key") String key, @Param("value") String value);
// MGET [key1] [key2]
@Command("MGET ?0 ?1")
List<String> mGet(String key1, String key2);
/**
* Method name as command
*/
@CommandNaming(strategy = CommandNaming.Strategy.METHOD_NAME)
String mSet(String key1, String value1, String key2, String value2);
}
@Test
public void testCustomDynamicSet() throws Exception {
RedisURI redisUri = RedisURI.builder()
.withHost("localhost")
.withPort(6379)
.withTimeout(Duration.of(10, ChronoUnit.SECONDS))
.build();
RedisClient redisClient = RedisClient.create(redisUri);
StatefulRedisConnection<String, String> connect = redisClient.connect();
RedisCommandFactory commandFactory = new RedisCommandFactory(connect);
CustomCommand commands = commandFactory.getCommands(CustomCommand.class);
commands.setKey("name", "throwable");
commands.setKeyNamed("throwable", "doge");
log.info("MGET ===> " + commands.mGet("name", "throwable"));
commands.mSet("key1", "value1","key2", "value2");
log.info("MGET ===> " + commands.mGet("key1", "key2"));
connect.close();
redisClient.shutdown();
}
// MGET ===> [throwable, doge]
// MGET ===> [value1, value2]

Advanced features

Lettuce has many advanced features; here are just the two I consider most commonly used:

  • Configuring client resources.
  • Using a connection pool.

Other features are covered in the official documentation.

Configuring client resources

The client resource settings relate to Lettuce's performance, concurrency and event handling. Most of the client resource configuration concerns thread pools or thread groups (EventLoopGroups and EventExecutorGroup), which are the basic building blocks of the connectors. In general, client resources should be shared between multiple Redis clients and must be shut down by you when no longer needed. In my view, client resources are essentially configuration for Netty. Note: unless you are intimately familiar with these knobs, or have spent a long time testing and tuning them, changing the defaults on gut feeling alone is a good way to step into a hole.

The client resource interface is ClientResources, The implementation class is DefaultClientResources.

Building a DefaultClientResources instance:

//  Default 
ClientResources resources = DefaultClientResources.create();
//  Builders 
ClientResources resources = DefaultClientResources.builder()
.ioThreadPoolSize(4)
.computationThreadPoolSize(4)
.build();

Usage:

ClientResources resources = DefaultClientResources.create();
// Non cluster 
RedisClient client = RedisClient.create(resources, uri);
// Cluster
RedisClusterClient clusterClient = RedisClusterClient.create(resources, uris);
// ......
client.shutdown();
clusterClient.shutdown();
// close resource 
resources.shutdown();

Basic configuration of client resources :

Property | Description | Default
ioThreadPoolSize | Number of I/O threads | Runtime.getRuntime().availableProcessors()
computationThreadPoolSize | Number of computation (task) threads | Runtime.getRuntime().availableProcessors()

Advanced configuration of client resources :

Property | Description | Default
eventLoopGroupProvider | Provider of the EventLoopGroup | -
eventExecutorGroupProvider | Provider of the EventExecutorGroup | -
eventBus | Event bus | DefaultEventBus
commandLatencyCollectorOptions | Command latency collector options | DefaultCommandLatencyCollectorOptions
commandLatencyCollector | Command latency collector | DefaultCommandLatencyCollector
commandLatencyPublisherOptions | Command latency publisher options | DefaultEventPublisherOptions
dnsResolver | DNS resolver | provided by the JDK or Netty
reconnectDelay | Reconnect delay policy | Delay.exponential()
nettyCustomizer | Netty customizer | -
tracing | Tracing | -

Options of the non-cluster client RedisClient:

The non-cluster RedisClient exposes its configuration options through its own setOptions() method:

RedisClient client = RedisClient.create(uri);
client.setOptions(ClientOptions.builder()
.autoReconnect(false)
.pingBeforeActivateConnection(true)
.build());

Configuration options of the non-cluster client:

Property | Description | Default
pingBeforeActivateConnection | Execute PING before activating the connection | false
autoReconnect | Reconnect automatically | true
cancelCommandsOnReconnectFailure | Cancel queued commands when reconnection fails | false
suspendReconnectOnProtocolFailure | Suspend reconnection after protocol-level failures | false
requestQueueSize | Request queue capacity | 2147483647 (Integer#MAX_VALUE)
disconnectedBehavior | Behaviour while the connection is lost | DEFAULT
sslOptions | SSL options | -
socketOptions | Socket options | 10-second connect timeout, no keep-alive, no TCP noDelay
timeoutOptions | Command timeout options | -
publishOnScheduler | Scheduler on which reactive signals are published | uses the I/O threads
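As a concrete illustration of the socketOptions and timeoutOptions entries above (timeoutOptions being the global command timeout listed among the 5.1 features), here is a hedged sketch; the durations are arbitrary placeholders.

RedisClient client = RedisClient.create(uri);
client.setOptions(ClientOptions.builder()
        .pingBeforeActivateConnection(true)
        .socketOptions(SocketOptions.builder()
                .connectTimeout(Duration.ofSeconds(5))   // placeholder value
                .keepAlive(true)
                .build())
        .timeoutOptions(TimeoutOptions.enabled(Duration.ofSeconds(2)))  // global command timeout, placeholder value
        .build());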

Options of the cluster client RedisClusterClient:

The cluster client RedisClusterClient likewise exposes its configuration options through its own setOptions() method:

RedisClusterClient client = RedisClusterClient.create(uri);
ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.builder()
.enablePeriodicRefresh(refreshPeriod(10, TimeUnit.MINUTES))
.enableAllAdaptiveRefreshTriggers()
.build();
client.setOptions(ClusterClientOptions.builder()
.topologyRefreshOptions(topologyRefreshOptions)
.build());

Configuration options of the cluster client:

Property | Description | Default
enablePeriodicRefresh | Enable periodic refresh of the cluster topology view | false
refreshPeriod | Refresh period of the cluster topology view | 60 seconds
enableAdaptiveRefreshTrigger | RefreshTriggers that adaptively refresh the topology view | -
adaptiveRefreshTriggersTimeout | Timeout between adaptive refresh triggers | 30 seconds
refreshTriggersReconnectAttempts | Reconnect attempts before an adaptive refresh is triggered | 5
dynamicRefreshSources | Discover topology from dynamically found nodes | true
closeStaleConnections | Allow stale connections to be closed | true
maxRedirects | Maximum number of cluster redirects to follow | 5
validateClusterNodeMembership | Validate cluster node membership | true

Using a connection pool

First introduce the connection pool dependency, commons-pool2:

<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-pool2</artifactId>
<version>2.7.0</version>
</dependency>

The basic use is as follows :

@Test
public void testUseConnectionPool() throws Exception {
RedisURI redisUri = RedisURI.builder()
.withHost("localhost")
.withPort(6379)
.withTimeout(Duration.of(10, ChronoUnit.SECONDS))
.build();
RedisClient redisClient = RedisClient.create(redisUri);
GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
GenericObjectPool<StatefulRedisConnection<String, String>> pool
= ConnectionPoolSupport.createGenericObjectPool(redisClient::connect, poolConfig);
try (StatefulRedisConnection<String, String> connection = pool.borrowObject()) {
RedisCommands<String, String> command = connection.sync();
SetArgs setArgs = SetArgs.Builder.nx().ex(5);
command.set("name", "throwable", setArgs);
String n = command.get("name");
log.info("Get value:{}", n);
}
pool.close();
redisClient.shutdown();
}

Here, pooling of synchronous connections goes through ConnectionPoolSupport; pooling of asynchronous connections goes through AsyncConnectionPoolSupport (available from Lettuce 5.1 onwards).
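For completeness, a hedged sketch of the asynchronous variant, following the AsyncConnectionPoolSupport/BoundedAsyncPool API introduced in Lettuce 5.1; the method names are taken from the official documentation, so double-check them against the exact version you use.

// Non-blocking connection pool (Lettuce 5.1+); treat this as a sketch, not a drop-in implementation
BoundedAsyncPool<StatefulRedisConnection<String, String>> pool = AsyncConnectionPoolSupport
        .createBoundedObjectPool(() -> redisClient.connectAsync(StringCodec.UTF8, redisUri),
                BoundedPoolConfig.create());

CompletableFuture<String> value = pool.acquire().thenCompose(connection -> {
    RedisAsyncCommands<String, String> async = connection.async();
    // release the connection back to the pool once the command completes
    return async.get("name").whenComplete((v, e) -> pool.release(connection));
});

log.info("Get value:{}", value.get());
pool.closeAsync();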

A few common examples of progressive deletion

Progressively deleting the field/value pairs of a big Hash:

@Test
public void testDelBigHashKey() throws Exception {
// SCAN parameters
ScanArgs scanArgs = ScanArgs.Builder.limit(2);
// working cursor
ScanCursor cursor = ScanCursor.INITIAL;
// target KEY
String key = "BIG_HASH_KEY";
prepareHashTestData(key);
log.info(" Start progressive deletion Hash The elements of ...");
int counter = 0;
do {
MapScanCursor<String, String> result = COMMAND.hscan(key, cursor, scanArgs);
// Reset TEMP The cursor 
cursor = ScanCursor.of(result.getCursor());
cursor.setFinished(result.isFinished());
// HDEL removes fields, so take the keys of the scanned map, not the values
Collection<String> fields = result.getMap().keySet();
if (!fields.isEmpty()) {
COMMAND.hdel(key, fields.toArray(new String[0]));
}
counter++;
} while (!(ScanCursor.FINISHED.getCursor().equals(cursor.getCursor()) && ScanCursor.FINISHED.isFinished() == cursor.isFinished()));
log.info(" Progressive deletion Hash The element of is over , The number of iterations :{} ...", counter);
}
private void prepareHashTestData(String key) throws Exception {
COMMAND.hset(key, "1", "1");
COMMAND.hset(key, "2", "2");
COMMAND.hset(key, "3", "3");
COMMAND.hset(key, "4", "4");
COMMAND.hset(key, "5", "5");
}

Progressively deleting the members of a big Set:

@Test
public void testDelBigSetKey() throws Exception {
String key = "BIG_SET_KEY";
prepareSetTestData(key);
// SCAN parameters
ScanArgs scanArgs = ScanArgs.Builder.limit(2);
// working cursor
ScanCursor cursor = ScanCursor.INITIAL;
log.info(" Start progressive deletion Set The elements of ...");
int counter = 0;
do {
ValueScanCursor<String> result = COMMAND.sscan(key, cursor, scanArgs);
// Reset TEMP The cursor 
cursor = ScanCursor.of(result.getCursor());
cursor.setFinished(result.isFinished());
List<String> values = result.getValues();
if (!values.isEmpty()) {
COMMAND.srem(key, values.toArray(new String[0]));
}
counter++;
} while (!(ScanCursor.FINISHED.getCursor().equals(cursor.getCursor()) && ScanCursor.FINISHED.isFinished() == cursor.isFinished()));
log.info(" Progressive deletion Set The element of is over , The number of iterations :{} ...", counter);
}
private void prepareSetTestData(String key) throws Exception {
COMMAND.sadd(key, "1", "2", "3", "4", "5");
}

Progressively deleting the members of a big Sorted Set (ZSet):

@Test
public void testDelBigZSetKey() throws Exception {
// SCAN parameters
ScanArgs scanArgs = ScanArgs.Builder.limit(2);
// working cursor
ScanCursor cursor = ScanCursor.INITIAL;
// target KEY
String key = "BIG_ZSET_KEY";
prepareZSetTestData(key);
log.info(" Start progressive deletion ZSet The elements of ...");
int counter = 0;
do {
ScoredValueScanCursor<String> result = COMMAND.zscan(key, cursor, scanArgs);
// Reset TEMP The cursor 
cursor = ScanCursor.of(result.getCursor());
cursor.setFinished(result.isFinished());
List<ScoredValue<String>> scoredValues = result.getValues();
if (!scoredValues.isEmpty()) {
COMMAND.zrem(key, scoredValues.stream().map(ScoredValue<String>::getValue).toArray(String[]::new));
}
counter++;
} while (!(ScanCursor.FINISHED.getCursor().equals(cursor.getCursor()) && ScanCursor.FINISHED.isFinished() == cursor.isFinished()));
log.info(" Progressive deletion ZSet The element of is over , The number of iterations :{} ...", counter);
}
private void prepareZSetTestData(String key) throws Exception {
COMMAND.zadd(key, 0, "1");
COMMAND.zadd(key, 0, "2");
COMMAND.zadd(key, 0, "3");
COMMAND.zadd(key, 0, "4");
COMMAND.zadd(key, 0, "5");
}

Using Lettuce in SpringBoot

Personally I find the API wrapping in spring-data-redis rather clunky: heavyweight to use and not very flexible. Building on the earlier examples and code, here is how to configure and integrate Lettuce in a SpringBoot scaffold project. First the dependencies:

<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-dependencies</artifactId>
<version>2.1.8.RELEASE</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>io.lettuce</groupId>
<artifactId>lettuce-core</artifactId>
<version>5.1.8.RELEASE</version>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<version>1.18.10</version>
<scope>provided</scope>
</dependency>
</dependencies>

In general each application should use a single Redis client instance and a single connection instance. The scaffold below covers four scenarios: standalone, plain master/replica, sentinel and cluster. Client resources use the default implementation, and for the Redis connection properties only Host, Port and Password really matter; the rest can be ignored for now. Following the convention-over-configuration principle, first define a set of property classes (some of the configuration could be shared, but to keep the relationships between the classes obvious, the property classes and configuration methods are split up here):

@Data
@ConfigurationProperties(prefix = "lettuce")
public class LettuceProperties {
private LettuceSingleProperties single;
private LettuceReplicaProperties replica;
private LettuceSentinelProperties sentinel;
private LettuceClusterProperties cluster;
}
@Data
public class LettuceSingleProperties {
private String host;
private Integer port;
private String password;
}
@EqualsAndHashCode(callSuper = true)
@Data
public class LettuceReplicaProperties extends LettuceSingleProperties {
}
@EqualsAndHashCode(callSuper = true)
@Data
public class LettuceSentinelProperties extends LettuceSingleProperties {
private String masterId;
}
@EqualsAndHashCode(callSuper = true)
@Data
public class LettuceClusterProperties extends LettuceSingleProperties {
}

The configuration class looks like this. @ConditionalOnProperty is used to isolate the beans; it is rare for one application to talk to more than one kind of Redis deployment:

@RequiredArgsConstructor
@Configuration
@ConditionalOnClass(name = "io.lettuce.core.RedisURI")
@EnableConfigurationProperties(value = LettuceProperties.class)
public class LettuceAutoConfiguration {
private final LettuceProperties lettuceProperties;
@Bean(destroyMethod = "shutdown")
public ClientResources clientResources() {
return DefaultClientResources.create();
}
@Bean
@ConditionalOnProperty(name = "lettuce.single.host")
public RedisURI singleRedisUri() {
LettuceSingleProperties singleProperties = lettuceProperties.getSingle();
return RedisURI.builder()
.withHost(singleProperties.getHost())
.withPort(singleProperties.getPort())
.withPassword(singleProperties.getPassword())
.build();
}
@Bean(destroyMethod = "shutdown")
@ConditionalOnProperty(name = "lettuce.single.host")
public RedisClient singleRedisClient(ClientResources clientResources, @Qualifier("singleRedisUri") RedisURI redisUri) {
return RedisClient.create(clientResources, redisUri);
}
@Bean(destroyMethod = "close")
@ConditionalOnProperty(name = "lettuce.single.host")
public StatefulRedisConnection<String, String> singleRedisConnection(@Qualifier("singleRedisClient") RedisClient singleRedisClient) {
return singleRedisClient.connect();
}
@Bean
@ConditionalOnProperty(name = "lettuce.replica.host")
public RedisURI replicaRedisUri() {
LettuceReplicaProperties replicaProperties = lettuceProperties.getReplica();
return RedisURI.builder()
.withHost(replicaProperties.getHost())
.withPort(replicaProperties.getPort())
.withPassword(replicaProperties.getPassword())
.build();
}
@Bean(destroyMethod = "shutdown")
@ConditionalOnProperty(name = "lettuce.replica.host")
public RedisClient replicaRedisClient(ClientResources clientResources, @Qualifier("replicaRedisUri") RedisURI redisUri) {
return RedisClient.create(clientResources, redisUri);
}
@Bean(destroyMethod = "close")
@ConditionalOnProperty(name = "lettuce.replica.host")
public StatefulRedisMasterSlaveConnection<String, String> replicaRedisConnection(@Qualifier("replicaRedisClient") RedisClient replicaRedisClient,
@Qualifier("replicaRedisUri") RedisURI redisUri) {
return MasterSlave.connect(replicaRedisClient, new Utf8StringCodec(), redisUri);
}
@Bean
@ConditionalOnProperty(name = "lettuce.sentinel.host")
public RedisURI sentinelRedisUri() {
LettuceSentinelProperties sentinelProperties = lettuceProperties.getSentinel();
return RedisURI.builder()
.withPassword(sentinelProperties.getPassword())
.withSentinel(sentinelProperties.getHost(), sentinelProperties.getPort())
.withSentinelMasterId(sentinelProperties.getMasterId())
.build();
}
@Bean(destroyMethod = "shutdown")
@ConditionalOnProperty(name = "lettuce.sentinel.host")
public RedisClient sentinelRedisClient(ClientResources clientResources, @Qualifier("sentinelRedisUri") RedisURI redisUri) {
return RedisClient.create(clientResources, redisUri);
}
@Bean(destroyMethod = "close")
@ConditionalOnProperty(name = "lettuce.sentinel.host")
public StatefulRedisMasterSlaveConnection<String, String> sentinelRedisConnection(@Qualifier("sentinelRedisClient") RedisClient sentinelRedisClient,
@Qualifier("sentinelRedisUri") RedisURI redisUri) {
return MasterSlave.connect(sentinelRedisClient, new Utf8StringCodec(), redisUri);
}
@Bean
@ConditionalOnProperty(name = "lettuce.cluster.host")
public RedisURI clusterRedisUri() {
LettuceClusterProperties clusterProperties = lettuceProperties.getCluster();
return RedisURI.builder()
.withHost(clusterProperties.getHost())
.withPort(clusterProperties.getPort())
.withPassword(clusterProperties.getPassword())
.build();
}
@Bean(destroyMethod = "shutdown")
@ConditionalOnProperty(name = "lettuce.cluster.host")
public RedisClusterClient redisClusterClient(ClientResources clientResources, @Qualifier("clusterRedisUri") RedisURI redisUri) {
return RedisClusterClient.create(clientResources, redisUri);
}
@Bean(destroyMethod = "close")
@ConditionalOnProperty(name = "lettuce.cluster")
public StatefulRedisClusterConnection<String, String> clusterConnection(RedisClusterClient clusterClient) {
return clusterClient.connect();
}
}

Finally, so that the IDE recognises these configuration properties, add a spring-configuration-metadata.json file under the /META-INF folder with the following content:

{
"properties": [
{
"name": "lettuce.single",
"type": "club.throwable.spring.lettuce.LettuceSingleProperties",
"description": " Single machine configuration ",
"sourceType": "club.throwable.spring.lettuce.LettuceProperties"
},
{
"name": "lettuce.replica",
"type": "club.throwable.spring.lettuce.LettuceReplicaProperties",
"description": " Master slave configuration ",
"sourceType": "club.throwable.spring.lettuce.LettuceProperties"
},
{
"name": "lettuce.sentinel",
"type": "club.throwable.spring.lettuce.LettuceSentinelProperties",
"description": " Sentinel configuration ",
"sourceType": "club.throwable.spring.lettuce.LettuceProperties"
},
{
"name": "lettuce.single",
"type": "club.throwable.spring.lettuce.LettuceClusterProperties",
"description": " Cluster configuration ",
"sourceType": "club.throwable.spring.lettuce.LettuceProperties"
}
]
}

If you want even friendlier IDE support, you can also add /META-INF/additional-spring-configuration-metadata.json with more detailed definitions. Simple usage:

@Slf4j
@Component
public class RedisCommandLineRunner implements CommandLineRunner {
@Autowired
@Qualifier("singleRedisConnection")
private StatefulRedisConnection<String, String> connection;
@Override
public void run(String... args) throws Exception {
RedisCommands<String, String> redisCommands = connection.sync();
redisCommands.setex("name", 5, "throwable");
log.info("Get value:{}", redisCommands.get("name"));
}
}
// Get value:throwable

Summary

This article walks through Lettuce based on its official documentation, with examples for the main features and configuration; some features and configuration details had to be left out for space. Lettuce has been adopted by spring-data-redis as the official Redis client driver, which speaks for itself: its API design is sensible, extensible and flexible. My personal suggestion is to wrap Lettuce with your own configuration in SpringBoot applications; RedisTemplate is heavyweight and hides some of Lettuce's advanced features and flexible APIs.

Copyright notice
This article was written by ThrowableDoge. Please include a link to the original when reposting. Thanks.
https://javamana.com/2021/02/20210223161509449o.html
