Redis read/write splitting with Lettuce

ゞ 浴缸里的玫瑰 2022-12-20 11:16

The problem

Most Redis workloads are read-heavy and write-light, yet in master/replica, sentinel, and cluster deployments the replicas often sit idle as mere backups. To get the most out of what you pay for, the replicas should serve reads and take load off the master. The previous post covered read/write splitting with Jedis; since Spring Boot now uses Lettuce as its default Redis client, this post covers read/write splitting with Lettuce.

Read/write splitting

Master/replica read/write splitting

First set up a master/replica deployment with one master and three replicas. Normally the only configuration needed is:

```yaml
spring:
  redis:
    host: redisMastHost
    port: 6379
    lettuce:
      pool:
        max-active: 512
        max-idle: 256
        min-idle: 256
        max-wait: -1
```

With this in place you can inject redisTemplate and read and write data, but by default every read and write goes to the master. To set readFrom, you have to define the connection factory yourself. Two approaches follow.

Approach 1 (for non-AWS deployments)

Only the master needs to be configured; replica information is discovered automatically from the master:

```java
@Configuration
class WriteToMasterReadFromReplicaConfiguration {

    @Bean
    public LettuceConnectionFactory redisConnectionFactory() {
        LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
                .readFrom(ReadFrom.SLAVE_PREFERRED)
                .build();
        RedisStandaloneConfiguration serverConfig = new RedisStandaloneConfiguration("server", 6379);
        return new LettuceConnectionFactory(serverConfig, clientConfig);
    }
}
```

Approach 2 (cloud-hosted Redis, e.g. AWS)

Here is a demo:

```java
import io.lettuce.core.ReadFrom;
import io.lettuce.core.models.role.RedisNodeDescription;
import org.apache.commons.lang3.StringUtils;
import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisStaticMasterReplicaConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.connection.lettuce.LettucePoolingClientConfiguration;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.StringRedisSerializer;

import java.time.Duration;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import java.util.stream.Stream;

@Configuration
public class RedisConfig {

    @Value("${spring.redis1.master}")
    private String master;
    @Value("${spring.redis1.slaves:}")
    private String slaves;
    @Value("${spring.redis1.port}")
    private int port;
    @Value("${spring.redis1.timeout:200}")
    private long timeout;
    @Value("${spring.redis1.lettuce.pool.max-idle:256}")
    private int maxIdle;
    @Value("${spring.redis1.lettuce.pool.min-idle:256}")
    private int minIdle;
    @Value("${spring.redis1.lettuce.pool.max-active:512}")
    private int maxActive;
    @Value("${spring.redis1.lettuce.pool.max-wait:-1}")
    private long maxWait;

    private static final Logger logger = LoggerFactory.getLogger(RedisConfig.class);

    private final AtomicInteger index = new AtomicInteger(-1);

    @Bean(value = "lettuceConnectionFactory1")
    LettuceConnectionFactory lettuceConnectionFactory1(GenericObjectPoolConfig genericObjectPoolConfig) {
        RedisStaticMasterReplicaConfiguration configuration =
                new RedisStaticMasterReplicaConfiguration(this.master, this.port);
        if (StringUtils.isNotBlank(slaves)) {
            String[] slaveHosts = slaves.split(",");
            for (int i = 0; i < slaveHosts.length; i++) {
                configuration.addNode(slaveHosts[i], this.port);
            }
        }
        LettuceClientConfiguration clientConfig = LettucePoolingClientConfiguration.builder()
                .readFrom(ReadFrom.SLAVE)
                .commandTimeout(Duration.ofMillis(timeout))
                .poolConfig(genericObjectPoolConfig)
                .build();
        return new LettuceConnectionFactory(configuration, clientConfig);
    }

    /**
     * GenericObjectPoolConfig connection pool settings
     */
    @Bean
    public GenericObjectPoolConfig genericObjectPoolConfig() {
        GenericObjectPoolConfig genericObjectPoolConfig = new GenericObjectPoolConfig();
        genericObjectPoolConfig.setMaxIdle(maxIdle);
        genericObjectPoolConfig.setMinIdle(minIdle);
        genericObjectPoolConfig.setMaxTotal(maxActive);
        genericObjectPoolConfig.setMaxWaitMillis(maxWait);
        return genericObjectPoolConfig;
    }

    @Bean(name = "redisTemplate1")
    public RedisTemplate redisTemplate(@Qualifier("lettuceConnectionFactory1") LettuceConnectionFactory connectionFactory) {
        RedisTemplate<String, String> template = new RedisTemplate<>();
        template.setConnectionFactory(connectionFactory);
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new StringRedisSerializer());
        template.setHashKeySerializer(new StringRedisSerializer());
        template.setHashValueSerializer(new StringRedisSerializer());
        logger.info("redis connected successfully");
        return template;
    }
}
```
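The `spring.redis1.slaves` property consumed above is a comma-separated host list that shares one port. A minimal standalone sketch of that parsing (plain Java, hypothetical host names; unlike the original it also trims whitespace around each host):

```java
import java.util.ArrayList;
import java.util.List;

public class SlaveListDemo {

    // Mirrors the parsing in lettuceConnectionFactory1: a blank
    // spring.redis1.slaves property means "no extra nodes"; otherwise
    // each comma-separated host becomes one replica node on the shared port.
    static List<String> parseSlaves(String slaves, int port) {
        List<String> nodes = new ArrayList<>();
        if (slaves != null && !slaves.trim().isEmpty()) {
            for (String host : slaves.split(",")) {
                nodes.add(host.trim() + ":" + port);
            }
        }
        return nodes;
    }

    public static void main(String[] args) {
        // Hypothetical hosts, e.g. spring.redis1.slaves=replica-1,replica-2
        System.out.println(parseSlaves("replica-1,replica-2", 6379));
        System.out.println(parseSlaves("", 6379));
    }
}
```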

The key piece of this code is the readFrom setting. Lettuce offers five options:

  • MASTER
  • MASTER_PREFERRED
  • SLAVE_PREFERRED
  • SLAVE
  • NEAREST

(In recent Lettuce versions, SLAVE has been renamed to ReadFrom.REPLICA.)

With readFrom set to SLAVE, all reads go to the replicas. There is a catch, however: every read landed on the last replica, and the other replicas received no traffic at all. Tracing through the source shows that the node list order is fixed, and each getConnection call picks the last node. The per-node command metrics looked like this:

(figure: command metrics showing all reads hitting a single replica)

The fix is to supply a custom ReadFrom, as follows:

```java
LettuceClientConfiguration clientConfig = LettucePoolingClientConfiguration.builder()
        .readFrom(new ReadFrom() {
            @Override
            public List<RedisNodeDescription> select(Nodes nodes) {
                List<RedisNodeDescription> allNodes = nodes.getNodes();
                // Rotate which node is tried first so reads spread across replicas
                int ind = Math.abs(index.incrementAndGet() % allNodes.size());
                RedisNodeDescription selected = allNodes.get(ind);
                logger.info("Selected node {} with uri {}", ind, selected.getUri());
                List<RedisNodeDescription> remaining = IntStream.range(0, allNodes.size())
                        .filter(i -> i != ind)
                        .mapToObj(allNodes::get)
                        .collect(Collectors.toList());
                return Stream.concat(Stream.of(selected), remaining.stream())
                        .collect(Collectors.toList());
            }
        })
        .commandTimeout(Duration.ofMillis(timeout))
        .poolConfig(genericObjectPoolConfig)
        .build();
return new LettuceConnectionFactory(configuration, clientConfig);
```

This manually rotates reads through the replicas in order. After the change, the call distribution looked like this (not perfectly balanced, since other applications also connect to this Redis):

(figure: command metrics after the fix, with reads spread across replicas)
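The rotation in the custom ReadFrom above is plain Java and can be sanity-checked without Lettuce or a Redis server. This sketch uses strings in place of RedisNodeDescription and passes the counter in explicitly (replica names are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinDemo {

    // Same ordering logic as the custom ReadFrom: advance a shared counter,
    // put the node it lands on first, and keep the rest as fallbacks.
    static List<String> select(List<String> nodes, AtomicInteger counter) {
        int ind = Math.abs(counter.incrementAndGet() % nodes.size());
        List<String> ordered = new ArrayList<>();
        ordered.add(nodes.get(ind));
        for (int i = 0; i < nodes.size(); i++) {
            if (i != ind) {
                ordered.add(nodes.get(i));
            }
        }
        return ordered;
    }

    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(-1);
        List<String> replicas = List.of("replica-0", "replica-1", "replica-2");
        for (int call = 0; call < 4; call++) {
            // The head of the list rotates: replica-0, replica-1, replica-2, replica-0
            System.out.println(select(replicas, counter));
        }
    }
}
```

Lettuce only requires that select return the nodes in preference order; keeping the non-selected nodes in the list preserves failover if the preferred replica is down.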

Sentinel mode

For this one, a simple demo:

```java
@Configuration
@ComponentScan("com.redis")
public class RedisConfig {

    @Bean
    public LettuceConnectionFactory redisConnectionFactory() {
        // return new LettuceConnectionFactory(new RedisStandaloneConfiguration("192.168.80.130", 6379));
        RedisSentinelConfiguration sentinelConfig = new RedisSentinelConfiguration()
                .master("mymaster")
                // sentinel addresses
                .sentinel("192.168.80.130", 26379)
                .sentinel("192.168.80.130", 26380)
                .sentinel("192.168.80.130", 26381);
        LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
                .readFrom(ReadFrom.SLAVE_PREFERRED)
                .build();
        return new LettuceConnectionFactory(sentinelConfig, clientConfig);
    }

    @Bean
    public RedisTemplate redisTemplate(RedisConnectionFactory redisConnectionFactory) {
        RedisTemplate redisTemplate = new RedisTemplate();
        redisTemplate.setConnectionFactory(redisConnectionFactory);
        // Serialization can be customized, e.g. store objects as JSON.
        // Object --> serialize --> byte stream --> stored on redis-server
        redisTemplate.setKeySerializer(new StringRedisSerializer());
        redisTemplate.setValueSerializer(new JdkSerializationRedisSerializer());
        return redisTemplate;
    }
}
```

Cluster mode

Cluster mode is simpler; the demo below can be used directly:

```java
import io.lettuce.core.ReadFrom;
import io.lettuce.core.resource.ClientResources;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisClusterConfiguration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.connection.RedisNode;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.connection.lettuce.LettucePoolingClientConfiguration;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.StringRedisSerializer;

import java.time.Duration;
import java.util.HashSet;
import java.util.Set;

@Slf4j
@Configuration
public class Redis2Config {

    @Value("${spring.redis2.cluster.nodes: com:9736}")
    public String REDIS_HOST;
    @Value("${spring.redis2.cluster.port:9736}")
    public int REDIS_PORT;
    @Value("${spring.redis2.cluster.type:}")
    public String REDIS_TYPE;
    @Value("${spring.redis2.cluster.read-from:master}")
    public String READ_FROM;
    @Value("${spring.redis2.cluster.max-redirects:1}")
    public int REDIS_MAX_REDIRECTS;
    @Value("${spring.redis2.cluster.share-native-connection:true}")
    public boolean REDIS_SHARE_NATIVE_CONNECTION;
    @Value("${spring.redis2.cluster.validate-connection:false}")
    public boolean VALIDATE_CONNECTION;
    @Value("${spring.redis2.cluster.shutdown-timeout:100}")
    public long SHUTDOWN_TIMEOUT;

    @Bean(value = "myRedisConnectionFactory")
    public RedisConnectionFactory connectionFactory(ClientResources clientResources) {
        RedisClusterConfiguration clusterConfiguration = new RedisClusterConfiguration();
        if (StringUtils.isNotEmpty(REDIS_HOST)) {
            String[] serverArray = REDIS_HOST.split(",");
            Set<RedisNode> nodes = new HashSet<>();
            for (String ipPort : serverArray) {
                String[] ipAndPort = ipPort.split(":");
                nodes.add(new RedisNode(ipAndPort[0].trim(), Integer.valueOf(ipAndPort[1])));
            }
            clusterConfiguration.setClusterNodes(nodes);
        }
        if (REDIS_MAX_REDIRECTS > 0) {
            clusterConfiguration.setMaxRedirects(REDIS_MAX_REDIRECTS);
        }
        LettucePoolingClientConfiguration.LettucePoolingClientConfigurationBuilder clientConfigurationBuilder =
                LettucePoolingClientConfiguration.builder()
                        .clientResources(clientResources)
                        .shutdownTimeout(Duration.ofMillis(SHUTDOWN_TIMEOUT));
        if (READ_FROM.equals("slave")) {
            clientConfigurationBuilder.readFrom(ReadFrom.SLAVE_PREFERRED);
        } else if (READ_FROM.equals("nearest")) {
            clientConfigurationBuilder.readFrom(ReadFrom.NEAREST);
        } else if (READ_FROM.equals("master")) {
            clientConfigurationBuilder.readFrom(ReadFrom.MASTER_PREFERRED);
        }
        LettuceConnectionFactory lettuceConnectionFactory =
                new LettuceConnectionFactory(clusterConfiguration, clientConfigurationBuilder.build());
        lettuceConnectionFactory.afterPropertiesSet();
        return lettuceConnectionFactory;
    }

    @Bean(name = "myRedisTemplate")
    public RedisTemplate myRedisTemplate(@Qualifier("myRedisConnectionFactory") RedisConnectionFactory connectionFactory) {
        RedisTemplate template = new RedisTemplate();
        template.setConnectionFactory(connectionFactory);
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new StringRedisSerializer());
        return template;
    }
}
```

That said, reading from replicas is not recommended in cluster mode, because in production the loss of a single shard can make the whole cluster unavailable. If you need more read capacity, consider giving each shard several replicas and then configuring read/write splitting.
