Redis Distributed Cluster

To improve availability and performance in the face of ever-growing concurrent traffic, Redis evolved from a single instance into a cluster. Its clustering capabilities went through master-slave replication, then Sentinel mode, and finally Redis Cluster.

Master-slave replication is straightforward and similar to MySQL replication: one master, multiple slaves, with the master responsible for synchronizing data to the slaves. Concretely, a slave sends a SYNC command to the master, the master generates an RDB file and sends it to the slave.

Replication comes in two flavors, full resynchronization and partial resynchronization, both driven by the PSYNC command (earlier versions used SYNC, which was inefficient when resuming after a disconnect). Full resynchronization is easy to understand: a slave connecting for the first time copies all of the master's data.

Partial resynchronization usually happens after a disconnect: using the replication offset, only the data written after the disconnection needs to be synchronized.
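
To make the full-versus-partial decision concrete, here is a minimal sketch of how a master could decide whether a slave's PSYNC request can be served from the replication backlog. The struct and field names (master_state, replid, backlog_start and so on) are illustrative assumptions, not the actual Redis internals.

#include <stdio.h>
#include <string.h>

/* Illustrative only: a master tracking its replication ID,
 * global offset and an in-memory backlog window. */
typedef struct {
    char replid[41];          /* current replication ID */
    long long master_offset;  /* offset of the latest write */
    long long backlog_start;  /* oldest offset still kept in the backlog */
} master_state;

/* A slave asks: PSYNC <replid> <offset>. Return 1 if a partial resync
 * (sending only the missing byte range) is possible, 0 if a full
 * resync (RDB transfer) is required. */
int can_partial_resync(const master_state *m,
                       const char *slave_replid, long long slave_offset) {
    if (strcmp(m->replid, slave_replid) != 0) return 0;  /* different history */
    if (slave_offset < m->backlog_start) return 0;       /* backlog already dropped */
    if (slave_offset > m->master_offset) return 0;       /* offset from the future */
    return 1;
}

int main(void) {
    master_state m = { "3f1c", 5000, 1000 };
    printf("partial possible: %d\n", can_partial_resync(&m, "3f1c", 4200)); /* 1 */
    printf("partial possible: %d\n", can_partial_resync(&m, "3f1c", 100));  /* 0 */
    return 0;
}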

A more detailed introduction to master-slave replication is in my earlier post: Redis主从复制的配置和实现原理

The biggest drawback of plain master-slave replication is that failures require manual intervention: if the master goes down, you have to edit the configuration and rearrange the master and slaves by hand. To address this, Redis later introduced Sentinel mode. It builds on master-slave replication by adding sentinel nodes that monitor the data nodes: a faulty data node is removed automatically, and if the master dies a new master is elected. The functionality is somewhat similar to what keepalived provides.

For more about Sentinel, see: Redis哨兵模式

Redis Cluster is now the preferred option and offers essentially all the benefits of a distributed setup. Sentinel is good, but since it is still built on master-slave replication it cannot relieve write pressure. Cluster achieves real load balancing: keys are scattered across different slots, spreading both read and write load.

Below is an architecture diagram (three masters, three slaves):

The whole Redis Cluster is decentralized: every node stores metadata about the other nodes, and each data shard can have one master with multiple slaves. Nodes communicate with each other via the Gossip protocol; for more background see my earlier article: 分布式一致性算法

Redis Cluster uses several important categories of messages (a simplified sketch of the gossip payload they carry follows the list):

  • Meet: a node already in the cluster sends a handshake request to a new node so that it joins the existing cluster.
  • Ping: every second, a node sends ping messages to other nodes in the cluster; each message carries the addresses, slots, state and last communication time of some of the nodes the sender knows about.
  • Pong: on receiving a ping, a node replies with a pong message that likewise carries information about nodes it knows.
  • Fail: once a node decides another node is down, it broadcasts that node's failure to the whole cluster; the other nodes mark it as offline when they receive the message.
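
The sketch below is a hypothetical, simplified version of the gossip section embedded in ping/pong packets, just to show the kind of per-node metadata being exchanged; it is not the actual clusterMsgDataGossip definition from the Redis source.

#include <stdint.h>

/* Hypothetical, simplified gossip entry: each ping/pong packet embeds a
 * handful of these, one per node the sender wants to gossip about. */
typedef struct {
    char     nodename[40];   /* node ID */
    uint32_t ping_sent;      /* when the sender last pinged that node */
    uint32_t pong_received;  /* when the sender last heard back */
    char     ip[46];         /* last known IP */
    uint16_t port;           /* client port */
    uint16_t cport;          /* cluster bus port */
    uint16_t flags;          /* master/slave, PFAIL/FAIL, ... */
} gossip_entry;

/* Hypothetical message header: type (MEET/PING/PONG/FAIL/...), sender,
 * and a variable number of gossip entries. */
typedef struct {
    uint16_t type;
    uint16_t count;          /* number of gossip entries that follow */
    char     sender[40];
    gossip_entry gossip[1];  /* in reality 'count' entries */
} gossip_msg;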

The defining characteristic of Gossip is random, redundant transmission that eventually converges to a consistent view.

The cluster cron runs every 100 ms and sends pings, not to every node in the cluster but to randomly chosen nodes. A single clusterSendPing call packs information about the current node plus a subset of other nodes; the code below is only an excerpt.

/* Send a PING or PONG packet to the specified node, making sure to add enough
 * gossip information. */
void clusterSendPing(clusterLink *link, int type) {
    unsigned char *buf;
    clusterMsg *hdr;
    int gossipcount = 0; /* Number of gossip sections added so far. */
    int wanted; /* Number of gossip sections we want to append if possible. */
    int totlen; /* Total packet length. */
    /* freshnodes is the max number of nodes we can hope to append at all:
     * nodes available minus two (ourself and the node we are sending the
     * message to). However practically there may be less valid nodes since
     * nodes in handshake state, disconnected, are not considered. */
    int freshnodes = dictSize(server.cluster->nodes)-2;

    /* How many gossip sections we want to add? 1/10 of the number of nodes
     * and anyway at least 3. Why 1/10?
     *
     * If we have N masters, with N/10 entries, and we consider that in
     * node_timeout we exchange with each other node at least 4 packets
     * (we ping in the worst case in node_timeout/2 time, and we also
     * receive two pings from the host), we have a total of 8 packets
     * in the node_timeout*2 failure reports validity time. So we have
     * that, for a single PFAIL node, we can expect to receive the following
     * number of failure reports (in the specified window of time):
     *
     * PROB * GOSSIP_ENTRIES_PER_PACKET * TOTAL_PACKETS:
     *
     * PROB = probability of being featured in a single gossip entry,
     *        which is 1 / NUM_OF_NODES.
     * ENTRIES = 10.
     * TOTAL_PACKETS = 2 * 4 * NUM_OF_MASTERS.
     *
     * If we assume we have just masters (so num of nodes and num of masters
     * is the same), with 1/10 we always get over the majority, and specifically
     * 80% of the number of nodes, to account for many masters failing at the
     * same time.
     *
     * Since we have non-voting slaves that lower the probability of an entry
     * to feature our node, we set the number of entries per packet as
     * 10% of the total nodes we have. */
    wanted = floor(dictSize(server.cluster->nodes)/10);
    if (wanted < 3) wanted = 3;
    if (wanted > freshnodes) wanted = freshnodes;

    /* Include all the nodes in PFAIL state, so that failure reports are
     * faster to propagate to go from PFAIL to FAIL state. */
    int pfail_wanted = server.cluster->stats_pfail_nodes;

    /* Compute the maximum totlen to allocate our buffer. We'll fix the totlen
     * later according to the number of gossip sections we really were able
     * to put inside the packet. */
    totlen = sizeof(clusterMsg)-sizeof(union clusterMsgData);
    totlen += (sizeof(clusterMsgDataGossip)*(wanted+pfail_wanted));

    /* ... omitted in this excerpt: allocating 'buf', filling in the header
     * 'hdr', and the loop that picks random nodes (plus the PFAIL nodes) and
     * appends their gossip sections, incrementing gossipcount ... */

    /* Ready to send... fix the totlen field and queue the message in the
     * output buffer. */
    totlen = sizeof(clusterMsg)-sizeof(union clusterMsgData);
    totlen += (sizeof(clusterMsgDataGossip)*gossipcount);
    hdr->count = htons(gossipcount);
    hdr->totlen = htonl(totlen);
    clusterSendMessage(link,buf,totlen);
    zfree(buf);
}

Even so, the above cannot fully guarantee that every node hears about every other node. So, once per second, the cluster also randomly samples five nodes, picks the one from which it has not received a pong for the longest time, and pings it.

In addition, a node walks through all the other nodes it knows about and pings any node whose last pong was received more than cluster_node_timeout/2 ago, so that no node goes unchecked for too long.
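
A minimal sketch of that selection logic, assuming a simplified node_t record with only an id and a pong_received timestamp; it is illustrative, not the actual clusterCron code.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Illustrative node record: when we last heard a pong from it. */
typedef struct { int id; long long pong_received; } node_t;

static long long now_ms(void) { return (long long)time(NULL) * 1000; }

/* Once per second: sample 5 random nodes, return the one whose pong is oldest. */
int pick_random_stalest(node_t *nodes, int n) {
    int best = rand() % n;
    for (int i = 1; i < 5; i++) {
        int j = rand() % n;
        if (nodes[j].pong_received < nodes[best].pong_received) best = j;
    }
    return best;
}

/* Every iteration: also ping any node not heard from for node_timeout/2. */
void ping_stale_nodes(node_t *nodes, int n, long long node_timeout) {
    for (int i = 0; i < n; i++) {
        if (now_ms() - nodes[i].pong_received > node_timeout / 2)
            printf("would ping node %d (stale)\n", nodes[i].id);
    }
}

int main(void) {
    srand((unsigned)time(NULL));
    node_t nodes[] = { {1, now_ms() - 100}, {2, now_ms() - 9000}, {3, now_ms() - 300} };
    printf("random pick: node %d\n", nodes[pick_random_stalest(nodes, 3)].id);
    ping_stale_nodes(nodes, 3, 5000);  /* cluster-node-timeout = 5000 ms */
    return 0;
}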

Node Failure

If a node pings another node and gets no reply within a certain time, it marks the peer as possibly failed (PFAIL, Possible Fail). When more than half of the master nodes in the cluster have marked a node as PFAIL, some node promotes that mark to FAIL and broadcasts a message telling the other nodes that the failed node is offline.
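
A tiny illustrative sketch of that majority check; the function and parameter names are made up for this example, and the real logic in cluster.c does considerably more bookkeeping.

#include <stdio.h>

/* Illustrative: decide whether enough masters have reported a node as PFAIL
 * to promote it to FAIL. 'failure_reports' counts other masters that flagged
 * the node; the local node's own PFAIL view counts as one more. */
int should_mark_fail(int cluster_masters, int failure_reports, int locally_pfail) {
    int needed = cluster_masters / 2 + 1;           /* strict majority */
    int total  = failure_reports + (locally_pfail ? 1 : 0);
    return total >= needed;
}

int main(void) {
    /* 3 masters: 2 reports (including our own) reach the majority of 2. */
    printf("%d\n", should_mark_fail(3, 1, 1));  /* prints 1 */
    printf("%d\n", should_mark_fail(3, 1, 0));  /* prints 0 */
    return 0;
}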

When a slave then notices that its master has been marked as failed, it waits a random delay and broadcasts a message asking the other nodes to vote for it, hoping to become the new master. The election is modeled on the Raft algorithm: in each round, a master that has not yet voted in the current epoch votes for the requesting slave; as soon as some slave collects N/2+1 votes it is promoted to master, and if no slave reaches N/2+1 votes in a round, a new round of voting is started.
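
To make the one-vote-per-epoch idea concrete, here is a minimal, illustrative sketch of vote granting and counting; the lastVoteEpoch name mirrors the clusterState field shown further below, everything else is simplified.

#include <stdio.h>
#include <stdint.h>

/* Illustrative voter state: a master only votes once per election epoch. */
typedef struct { uint64_t lastVoteEpoch; } voter_t;

/* Master side: grant the vote only if we haven't voted in this epoch yet. */
int grant_vote(voter_t *m, uint64_t requestEpoch) {
    if (requestEpoch <= m->lastVoteEpoch) return 0;  /* already voted */
    m->lastVoteEpoch = requestEpoch;
    return 1;
}

/* Candidate side: promoted once it holds a strict majority of master votes. */
int has_won(int votes, int num_masters) {
    return votes >= num_masters / 2 + 1;
}

int main(void) {
    voter_t masters[3] = { {0}, {0}, {0} };
    int votes = 0;
    for (int i = 0; i < 3; i++) votes += grant_vote(&masters[i], 1);
    printf("votes=%d won=%d\n", votes, has_won(votes, 3));  /* votes=3 won=1 */
    return 0;
}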

Once a slave is promoted, all the slots previously assigned to its old master are assigned to it, and it broadcasts a message announcing that it is the new master.

Cluster shards the data, but it uses neither consistent hashing nor a plain hash modulo the number of nodes. Instead it uses hash slots: a cluster has 16384 slots, and these slots are spread evenly across the cluster's master nodes. When a new node joins, the existing nodes hand part of their slots over to the new machine.
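
A key is mapped to a slot by hashing it with CRC16 and taking the result modulo 16384. The sketch below uses the XMODEM variant of CRC16, which is the family Redis Cluster uses; treat it as an illustration rather than a byte-for-byte copy of Redis' crc16.c.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Bitwise CRC16, XMODEM/CCITT variant: polynomial 0x1021, initial value 0. */
static uint16_t crc16(const char *buf, size_t len) {
    uint16_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)((unsigned char)buf[i]) << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Map a key to one of the 16384 slots. Note: the real implementation also
 * honors "hash tags": if the key contains {...}, only the part inside the
 * braces is hashed, so related keys can be forced onto the same slot. */
static int key_slot(const char *key) {
    return crc16(key, strlen(key)) % 16384;
}

int main(void) {
    printf("slot(test1) = %d\n", key_slot("test1"));
    printf("slot(test2) = %d\n", key_slot("test2"));
    return 0;
}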

Every node in the cluster maintains two important data structures: clusterNode, which records the state of a single node, and clusterState, which holds the state of the whole cluster.

typedef struct clusterNode {
    mstime_t ctime; /* Node object creation time. */
    char name[CLUSTER_NAMELEN]; /* Node name, hex string, sha1-size */
    int flags;      /* CLUSTER_NODE_... */
    uint64_t configEpoch; /* Last configEpoch observed for this node */
    unsigned char slots[CLUSTER_SLOTS/8]; /* slots handled by this node */
    int numslots;   /* Number of slots handled by this node */
    int numslaves;  /* Number of slave nodes, if this is a master */
    struct clusterNode **slaves; /* pointers to slave nodes */
    struct clusterNode *slaveof; /* pointer to the master node. Note that it
                                    may be NULL even if the node is a slave
                                    if we don't have the master node in our
                                    tables. */
    mstime_t ping_sent;      /* Unix time we sent latest ping */
    mstime_t pong_received;  /* Unix time we received the pong */
    mstime_t data_received;  /* Unix time we received any data */
    mstime_t fail_time;      /* Unix time when FAIL flag was set */
    mstime_t voted_time;     /* Last time we voted for a slave of this master */
    mstime_t repl_offset_time;  /* Unix time we received offset for this node */
    mstime_t orphaned_time;     /* Starting time of orphaned master condition */
    long long repl_offset;      /* Last known repl offset for this node. */
    char ip[NET_IP_STR_LEN];  /* Latest known IP address of this node */
    int port;                   /* Latest known clients port of this node */
    int cport;                  /* Latest known cluster port of this node. */
    clusterLink *link;          /* TCP/IP link with this node */
    list *fail_reports;         /* List of nodes signaling this as failing */
} clusterNode;

The slots array in clusterNode is a bitmap of the slots this node is responsible for; its length is 16384/8 = 2048 bytes.
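
A minimal sketch of how such a per-node slot bitmap can be read and written; the helpers here are illustrative, and the real ones in cluster.c have different names and extra bookkeeping.

#include <stdio.h>
#include <string.h>

#define CLUSTER_SLOTS 16384

/* One bit per slot: slot N lives in byte N/8, bit N%8. */
static int slot_bit_get(const unsigned char *slots, int slot) {
    return (slots[slot / 8] >> (slot % 8)) & 1;
}

static void slot_bit_set(unsigned char *slots, int slot) {
    slots[slot / 8] |= (unsigned char)(1 << (slot % 8));
}

int main(void) {
    unsigned char slots[CLUSTER_SLOTS / 8];  /* 2048 bytes */
    memset(slots, 0, sizeof(slots));
    for (int s = 0; s <= 5460; s++) slot_bit_set(slots, s);  /* e.g. owns 0-5460 */
    printf("owns 100:   %d\n", slot_bit_get(slots, 100));    /* 1 */
    printf("owns 10923: %d\n", slot_bit_get(slots, 10923));  /* 0 */
    return 0;
}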

Now look at the clusterState structure:

typedef struct clusterState {
    clusterNode *myself;  /* This node */
    uint64_t currentEpoch;
    int state;            /* CLUSTER_OK, CLUSTER_FAIL, ... */
    int size;             /* Num of master nodes with at least one slot */
    dict *nodes;          /* Hash table of name -> clusterNode structures */
    dict *nodes_black_list; /* Nodes we don't re-add for a few seconds. */
    clusterNode *migrating_slots_to[CLUSTER_SLOTS];
    clusterNode *importing_slots_from[CLUSTER_SLOTS];
    clusterNode *slots[CLUSTER_SLOTS];
    uint64_t slots_keys_count[CLUSTER_SLOTS];
    rax *slots_to_keys;
    /* The following fields are used to take the slave state on elections. */
    mstime_t failover_auth_time; /* Time of previous or next election. */
    int failover_auth_count;    /* Number of votes received so far. */
    int failover_auth_sent;     /* True if we already asked for votes. */
    int failover_auth_rank;     /* This slave rank for current auth request. */
    uint64_t failover_auth_epoch; /* Epoch of the current election. */
    int cant_failover_reason;   /* Why a slave is currently not able to
                                   failover. See the CANT_FAILOVER_* macros. */
    /* Manual failover state in common. */
    mstime_t mf_end;            /* Manual failover time limit (ms unixtime).
                                   It is zero if there is no MF in progress. */
    /* Manual failover state of master. */
    clusterNode *mf_slave;      /* Slave performing the manual failover. */
    /* Manual failover state of slave. */
    long long mf_master_offset; /* Master offset the slave needs to start MF
                                   or zero if still not received. */
    int mf_can_start;           /* If non-zero signal that the manual failover
                                   can start requesting masters vote. */
    /* The following fields are used by masters to take state on elections. */
    uint64_t lastVoteEpoch;     /* Epoch of the last vote granted. */
    int todo_before_sleep; /* Things to do in clusterBeforeSleep(). */
    /* Messages received and sent by type. */
    long long stats_bus_messages_sent[CLUSTERMSG_TYPE_COUNT];
    long long stats_bus_messages_received[CLUSTERMSG_TYPE_COUNT];
    long long stats_pfail_nodes;    /* Number of nodes in PFAIL status,
                                       excluding nodes without address. */
} clusterState;

The slots array in clusterState stores, for every slot, the node that owns it; note the difference from clusterNode, whose slots bitmap only records the slot assignment of that single node. To find out which node a given slot lives on, we simply index into clusterState's slots array, which is O(1).
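
In other words, looking up the owner of a slot is a single array access. A sketch with heavily stripped-down stand-ins for the two structures:

#include <stdio.h>

#define CLUSTER_SLOTS 16384

/* Stripped-down stand-ins for clusterNode and clusterState. */
typedef struct { char name[41]; } node_t;
typedef struct { node_t *slots[CLUSTER_SLOTS]; } state_t;

/* O(1): which node serves this slot? */
static node_t *node_for_slot(state_t *st, int slot) {
    return st->slots[slot];
}

int main(void) {
    static state_t st;                   /* zero-initialized */
    static node_t m1 = { "master-1" };
    for (int s = 0; s <= 5460; s++) st.slots[s] = &m1;
    node_t *owner = node_for_slot(&st, 4768);
    printf("slot 4768 -> %s\n", owner ? owner->name : "(unassigned)");
    return 0;
}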

Consistency

I think consistency here has two aspects. The first is consistency of the cluster metadata: nodes exchange it over Gossip, which can only achieve eventual consistency. The second is consistency of the data itself. Master-slave replication is asynchronous: when a client issues a write, it is applied on one node and the server replies immediately; the write is propagated to the slaves only afterwards, so strong consistency cannot be guaranteed. On top of that, during a network partition, if a master ends up alone in its own partition for long enough, the data written to it during that window will eventually be lost.

As for Cluster's drawbacks, I think the biggest is message redundancy: every node may send duplicate information. That said, I find this perfectly acceptable; nothing can be perfect in every respect and trade-offs are unavoidable, and this redundancy is exactly what guarantees eventual consistency across all the cluster's nodes.

Now let's put this into practice and build a redis-cluster ourselves, with the three-master, three-slave architecture shown above, using Docker.

The setup is simple: start six containers, each configured with a different port; everything else is identical.

Suppose the six instances use ports 5555 through 5560 (in the files below the sixth instance actually uses 5510).

port 5555
cluster-enabled yes 
cluster-config-file nodes.conf 
cluster-node-timeout 5000 
appendonly yes 
daemonize no 
bind 0.0.0.0
protected-mode no 
pidfile /tmp/redis/redis-cluster/55555/redis.pid

All of the parameters above are needed. Note that if protected-mode is set to no, the bind line can be omitted. cluster-enabled turns on cluster mode. cluster-config-file is generated automatically and holds this node's cluster configuration. cluster-node-timeout is a critical parameter whose value directly affects how efficiently the cluster communicates; looking at the source you will see it used throughout inter-node communication as the maximum one-way timeout.

OK, with the configuration done, we write the docker-compose file, which simply declares the services. Note that the naming below is not strictly accurate: at this point we have not designated which node is a master and which is a slave; Redis will decide that for us.

The docker-compose file:


version: '3'

networks:
  redis-cluster-net:
    external:
      name: redis-cluster-net

services:
  redis-master-1:
    image: redis
    restart: always
    container_name: redis-master-1
    command: [ "redis-server", "/home/redis/cluster/redis.conf" ]
    networks:
      redis-cluster-net:
        ipv4_address: 192.168.200.11
    volumes:
      - /tmp/redis/redis-cluster/55555/data:/data
      - /tmp/redis/redis-cluster/55555/redis.conf:/home/redis/cluster/redis.conf
    ports:
      - 5555:5555       # service port
      - 15555:15555     # cluster bus port


  redis-master-2:
    image: redis
    restart: always
    container_name: redis-master-2
    command: [ "redis-server", "/home/redis/cluster/redis.conf" ]
    networks:
      redis-cluster-net:
        ipv4_address: 192.168.200.12
    volumes:
      - /tmp/redis/redis-cluster/55557/data:/data
      - /tmp/redis/redis-cluster/55557/redis.conf:/home/redis/cluster/redis.conf
    ports:
      - 5557:5557       # service port
      - 15557:15557     # cluster bus port



  redis-master-3:
    image: redis
    restart: always
    container_name: redis-master-3
    command: [ "redis-server", "/home/redis/cluster/redis.conf" ]
    networks:
      redis-cluster-net:
        ipv4_address: 192.168.200.13
    volumes:
      - /tmp/redis/redis-cluster/55559/data:/data
      - /tmp/redis/redis-cluster/55559/redis.conf:/home/redis/cluster/redis.conf

    ports:
      - 5559:5559       # service port
      - 15559:15559     # cluster bus port

  redis-slave-1:
    image: redis
    restart: always
    container_name: redis-slave-1
    command: [ "redis-server", "/home/redis/cluster/redis.conf" ]
    networks:
      redis-cluster-net:
        ipv4_address: 192.168.200.14
    volumes:
      - /tmp/redis/redis-cluster/55556/data:/data
      - /tmp/redis/redis-cluster/55556/redis.conf:/home/redis/cluster/redis.conf

    ports:
      - 5556:5556       # service port
      - 15556:15556     # cluster bus port

  redis-slave-2:
    image: redis
    restart: always
    container_name: redis-slave-2
    command: [ "redis-server", "/home/redis/cluster/redis.conf" ]
    networks:
      redis-cluster-net:
        ipv4_address: 192.168.200.15
    volumes:
      - /tmp/redis/redis-cluster/55558/data:/data
      - /tmp/redis/redis-cluster/55558/redis.conf:/home/redis/cluster/redis.conf

    ports:
      - 5558:5558       # service port
      - 15558:15558     # cluster bus port

  redis-slave-3:
    image: redis
    restart: always
    container_name: redis-slave-3
    command: [ "redis-server", "/home/redis/cluster/redis.conf" ]
    networks:
      redis-cluster-net:
        ipv4_address: 192.168.200.16
    volumes:
      - /tmp/redis/redis-cluster/55510/data:/data
      - /tmp/redis/redis-cluster/55510/redis.conf:/home/redis/cluster/redis.conf
    ports:
      - 5510:5510       # service port
      - 15510:15510     # cluster bus port

Start everything with docker-compose up -d. Once the containers are up, the six Redis nodes are still completely independent; the cluster has to be created manually, which can be done directly with redis-cli:

/tmp/redis/redis-cluster$ redis-cli --cluster create 10.231.23.240:5555 10.231.23.240:5556 10.231.23.240:5557 10.231.23.240:5558 10.231.23.240:5559 10.231.23.240:5510 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 10.231.23.240:5559 to 10.231.23.240:5555
Adding replica 10.231.23.240:5510 to 10.231.23.240:5556
Adding replica 10.231.23.240:5558 to 10.231.23.240:5557
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 69cf0edf78f88458535c05446deaad8bce0123eb 10.231.23.240:5555
   slots:[0-5460] (5461 slots) master
M: ef9eaf11a9664a023c2a686186b4d69c44415bb3 10.231.23.240:5556
   slots:[5461-10922] (5462 slots) master
M: 17debd2a9e30af742956428d959d87ad3b2f6fd9 10.231.23.240:5557
   slots:[10923-16383] (5461 slots) master
S: b5983582f3c04ff7639e09b405fb8ef82c6cbb5d 10.231.23.240:5558
   replicates 69cf0edf78f88458535c05446deaad8bce0123eb
S: 326176c22211e64b94697ddc12748e15ffd98402 10.231.23.240:5559
   replicates ef9eaf11a9664a023c2a686186b4d69c44415bb3
S: 738f0fe576d1433302c967d954fc09a92ccbba46 10.231.23.240:5510
   replicates 17debd2a9e30af742956428d959d87ad3b2f6fd9
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join

>>> Performing Cluster Check (using node 10.231.23.240:5555)
M: 69cf0edf78f88458535c05446deaad8bce0123eb 10.231.23.240:5555
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 738f0fe576d1433302c967d954fc09a92ccbba46 192.168.200.1:5510
   slots: (0 slots) slave
   replicates 17debd2a9e30af742956428d959d87ad3b2f6fd9
S: b5983582f3c04ff7639e09b405fb8ef82c6cbb5d 192.168.200.1:5558
   slots: (0 slots) slave
   replicates 69cf0edf78f88458535c05446deaad8bce0123eb
S: 326176c22211e64b94697ddc12748e15ffd98402 192.168.200.1:5559
   slots: (0 slots) slave
   replicates ef9eaf11a9664a023c2a686186b4d69c44415bb3
M: 17debd2a9e30af742956428d959d87ad3b2f6fd9 192.168.200.1:5557
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: ef9eaf11a9664a023c2a686186b4d69c44415bb3 192.168.200.1:5556
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

The output above makes it clear: three masters were created and each was assigned a slot range; in addition, three slaves were added to replicate them.

With the cluster created, check the cluster info:

10.231.23.240:5555> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:1872
cluster_stats_messages_pong_sent:1877
cluster_stats_messages_sent:3749
cluster_stats_messages_ping_received:1872
cluster_stats_messages_pong_received:1872
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:3749

List the cluster nodes:

10.231.23.240:5555> cluster nodes
738f0fe576d1433302c967d954fc09a92ccbba46 192.168.200.1:5510@15510 slave 17debd2a9e30af742956428d959d87ad3b2f6fd9 0 1605753009829 3 connected
b5983582f3c04ff7639e09b405fb8ef82c6cbb5d 192.168.200.1:5558@15558 slave 69cf0edf78f88458535c05446deaad8bce0123eb 0 1605753009527 1 connected
326176c22211e64b94697ddc12748e15ffd98402 192.168.200.1:5559@15559 slave ef9eaf11a9664a023c2a686186b4d69c44415bb3 0 1605753010834 2 connected
17debd2a9e30af742956428d959d87ad3b2f6fd9 192.168.200.1:5557@15557 master - 0 1605753009000 3 connected 10923-16383
ef9eaf11a9664a023c2a686186b4d69c44415bb3 192.168.200.1:5556@15556 master - 0 1605753010029 2 connected 5461-10922
69cf0edf78f88458535c05446deaad8bce0123eb 192.168.200.11:5555@15555 myself,master - 0 1605753010000 1 connected 0-5460

Test writing some data:

127.0.0.1:5555> set test1 haibo
OK
127.0.0.1:5555> set test2 haibo
-> Redirected to slot [8899] located at 192.168.200.1:5557
OK
192.168.200.1:5557> get test2
"haibo"
192.168.200.1:5557> get test1
-> Redirected to slot [4768] located at 192.168.200.1:5555
"haibo"
192.168.200.1:5555> get test2
-> Redirected to slot [8899] located at 192.168.200.1:5557
"haibo"
192.168.200.1:5557> 

The three-master, three-slave Redis Cluster is now running normally. Next we add two more nodes to it, one master and one slave. As before, start two more containers with Docker, on ports 5561 and 5562.

First add 5561 to the cluster as a master node:

redis-cli --cluster add-node 10.231.23.240:5561 10.231.23.240:5555
>>> Adding node 10.231.23.240:5561 to cluster 10.231.23.240:5555
>>> Performing Cluster Check (using node 10.231.23.240:5555)
M: 69cf0edf78f88458535c05446deaad8bce0123eb 10.231.23.240:5555
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 738f0fe576d1433302c967d954fc09a92ccbba46 192.168.200.1:5510
   slots: (0 slots) slave
   replicates 17debd2a9e30af742956428d959d87ad3b2f6fd9
S: b5983582f3c04ff7639e09b405fb8ef82c6cbb5d 192.168.200.1:5558
   slots: (0 slots) slave
   replicates 69cf0edf78f88458535c05446deaad8bce0123eb
S: 326176c22211e64b94697ddc12748e15ffd98402 192.168.200.1:5559
   slots: (0 slots) slave
   replicates ef9eaf11a9664a023c2a686186b4d69c44415bb3
M: 17debd2a9e30af742956428d959d87ad3b2f6fd9 192.168.200.1:5557
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: ef9eaf11a9664a023c2a686186b4d69c44415bb3 192.168.200.1:5556
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...

After joining, no slots have been moved yet; the newly added node's slot set is still empty.

Now perform a reshard:

redis-cli --cluster reshard 10.231.23.240:5561 --cluster-from 69cf0edf78f88458535c05446deaad8bce0123eb,ef9eaf11a9664a023c2a686186b4d69c44415bb3,17debd2a9e30af742956428d959d87ad3b2f6fd9 --cluster-to 10cc93e09a91e0e9cfd749d58e4ae1220f5f412c  --cluster-slots 4000

After resharding, we can see that the new node has been assigned slots:

192.168.200.1:5556> cluster nodes
17debd2a9e30af742956428d959d87ad3b2f6fd9 192.168.200.1:5557@15557 master - 0 1605756921113 3 connected 12256-16383
738f0fe576d1433302c967d954fc09a92ccbba46 192.168.200.1:5510@15510 slave 17debd2a9e30af742956428d959d87ad3b2f6fd9 0 1605756921615 3 connected
b5983582f3c04ff7639e09b405fb8ef82c6cbb5d 192.168.200.1:5558@15558 slave 69cf0edf78f88458535c05446deaad8bce0123eb 0 1605756920000 1 connected
326176c22211e64b94697ddc12748e15ffd98402 192.168.200.1:5559@15559 slave ef9eaf11a9664a023c2a686186b4d69c44415bb3 0 1605756921514 2 connected
10cc93e09a91e0e9cfd749d58e4ae1220f5f412c 192.168.200.1:5561@15561 master - 0 1605756920109 7 connected 0-1332 5461-6794 10923-12255
ef9eaf11a9664a023c2a686186b4d69c44415bb3 192.168.200.14:5556@15556 myself,master - 0 1605756920000 2 connected 6795-10922
69cf0edf78f88458535c05446deaad8bce0123eb 192.168.200.1:5555@15555 master - 0 1605756920000 1 connected 1333-5460

