Redis.conf 迈不过友情╰ 2022-06-12 14:38

\# Redis configuration file example.
\#
\# Note that in order to read the configuration file, Redis must be
\# started with the file path as first argument:
\#
\# ./redis-server /path/to/redis.conf

\# Note on units: when memory size is needed, it is possible to specify
\# it in the usual form of 1k 5GB 4M and so forth:
\#
\# 1k => 1000 bytes
\# 1kb => 1024 bytes
\# 1m => 1000000 bytes
\# 1mb => 1024\*1024 bytes
\# 1g => 1000000000 bytes
\# 1gb => 1024\*1024\*1024 bytes
\#
\# units are case insensitive so 1GB 1Gb 1gB are all the same.

\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\# INCLUDES \#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#

\# Include one or more other config files here. This is useful if you
\# have a standard template that goes to all Redis servers but also need
\# to customize a few per-server settings. Include files can include
\# other files, so use this wisely.
\#
\# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
\# from admin or Redis Sentinel. Since Redis always uses the last processed
\# line as the value of a configuration directive, you'd better put includes
\# at the beginning of this file to avoid overwriting config changes at runtime.
\#
\# If instead you are interested in using includes to override configuration
\# options, it is better to use include as the last line.
\#
\# include /path/to/local.conf
\# include /path/to/other.conf

\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\# GENERAL \#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#

\# By default Redis does not run as a daemon. Use 'yes' if you need it.
\# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
**daemonize no  \# whether to run in the background**

\# When running daemonized, Redis writes a pid file in /var/run/redis.pid by
\# default. You can specify a custom pid file location here.
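The unit rules above (decimal suffixes without `b`, binary suffixes with `b`, case-insensitive) can be sketched in a few lines; `parse_memory` is a hypothetical helper for illustration, not a Redis API:

```python
# Sketch: parse redis.conf-style memory sizes (1k => 1000, 1kb => 1024, ...).
# Assumption: parse_memory is a hypothetical helper mirroring the table above.
UNITS = {
    "": 1,
    "k": 1000, "kb": 1024,
    "m": 1000**2, "mb": 1024**2,
    "g": 1000**3, "gb": 1024**3,
}

def parse_memory(value: str) -> int:
    value = value.strip().lower()          # units are case insensitive
    digits = "".join(c for c in value if c.isdigit())
    suffix = value[len(digits):]
    return int(digits) * UNITS[suffix]

print(parse_memory("1GB"))  # 1073741824
print(parse_memory("5gb") == parse_memory("5Gb") == parse_memory("5gB"))  # True
```

Note that `1m` (1000000) and `1mb` (1048576) deliberately differ, exactly as in the table above.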
pidfile /var/run/redis.pid  **\# path and filename of the pid file**

\# Accept connections on the specified port, default is 6379.
\# If port 0 is specified Redis will not listen on a TCP socket.
port 6379  **\# listening port**

\# TCP listen() backlog.
\#
\# In high requests-per-second environments you need a high backlog in order
\# to avoid slow client connection issues. Note that the Linux kernel
\# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
\# make sure to raise both the value of somaxconn and tcp\_max\_syn\_backlog
\# in order to get the desired effect.
tcp-backlog 511

\# By default Redis listens for connections from all the network interfaces
\# available on the server. It is possible to listen to just one or multiple
\# interfaces using the "bind" configuration directive, followed by one or
\# more IP addresses.
\#
\# Examples:
\#
\# bind 192.168.1.100 10.0.0.1  **\# the IPs (interfaces) to listen on / allow access from**
\# bind 127.0.0.1

\# Specify the path for the Unix socket that will be used to listen for
\# incoming connections. There is no default, so Redis will not listen
\# on a unix socket when not specified.
\#
\# unixsocket /tmp/redis.sock
\# unixsocketperm 700

\# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0

\# TCP keepalive.
\#
\# If non-zero, use SO\_KEEPALIVE to send TCP ACKs to clients in absence
\# of communication. This is useful for two reasons:
\#
\# 1) Detect dead peers.
\# 2) Keep the connection alive from the point of view of network
\#    equipment in the middle.
\#
\# On Linux, the specified value (in seconds) is the period used to send ACKs.
\# Note that to close the connection double that time is needed.
\# On other kernels the period depends on the kernel configuration.
\#
\# A reasonable value for this option is 60 seconds.
tcp-keepalive 0

\# Specify the server verbosity level.
\# This can be one of:
\# debug (a lot of information, useful for development/testing)
\# verbose (many rarely useful info, but not a mess like the debug level)
\# notice (moderately verbose, what you want in production probably)
\# warning (only very important / critical messages are logged)
**\# Log level; the available levels are:**
**\#   debug: very detailed information, suitable for development and testing**
**\#   verbose: lots of rarely useful information, but cleaner than debug**
**\#   notice: a good fit for production**
**\#   warning: warnings only**
loglevel notice

\# Specify the log file name. Also the empty string can be used to force
\# Redis to log on the standard output. Note that if you use standard
\# output for logging but daemonize, logs will be sent to /dev/null
**\# Path and name of the log file**
logfile ""

\# To enable logging to the system logger, just set 'syslog-enabled' to yes,
\# and optionally update the other syslog parameters to suit your needs.
**\# 'syslog-enabled' set to yes sends logs to the system logger; the default is no.**
\# syslog-enabled no

\# Specify the syslog identity.
**\# The syslog identifier; this option has no effect if 'syslog-enabled' is no.**
\# syslog-ident redis

\# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
**\# The syslog facility; must be USER or LOCAL0 through LOCAL7.**
\# syslog-facility local0

\# Set the number of databases. The default database is DB 0, you can select
\# a different one on a per-connection basis using SELECT <dbid> where
\# dbid is a number between 0 and 'databases'-1
**\# Number of databases. The default database is DB 0; choose another per connection with SELECT <dbid>, where dbid is a number in \[0, 'databases'-1\].**
databases 16

\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\# SNAPSHOTTING \#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#
\#
\# Save the DB on disk:
\#
\#   save <seconds> <changes>
\#
\#   Will save the DB if both the given number of seconds and the given
\#   number of write operations against the DB occurred.
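The rule above (a save point fires only when both the time threshold and the change threshold are met, for any configured point) can be sketched like this; `should_save` is a hypothetical helper, not Redis code:

```python
# Sketch: evaluate redis.conf save points ("save <seconds> <changes>").
# Assumption: should_save is a hypothetical helper mirroring the rule above;
# the save points used are the stock example values.
SAVE_POINTS = [(900, 1), (300, 10), (60, 10000)]

def should_save(elapsed_seconds: int, changed_keys: int) -> bool:
    # A snapshot is triggered if ANY save point has BOTH thresholds reached.
    return any(elapsed_seconds >= secs and changed_keys >= changes
               for secs, changes in SAVE_POINTS)

print(should_save(901, 1))    # True  (900 sec elapsed and 1 key changed)
print(should_save(61, 9999))  # False (9999 changes is short of the 60-sec rule)
```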
\#
\#   In the example below the behaviour will be to save:
\#   after 900 sec (15 min) if at least 1 key changed
\#   after 300 sec (5 min) if at least 10 keys changed
\#   after 60 sec if at least 10000 keys changed
\#
\#   Note: you can disable saving completely by commenting out all "save" lines.
\#
\#   It is also possible to remove all the previously configured save
\#   points by adding a save directive with a single empty string argument
\#   like in the following example:
\#
\#   save ""
**\# Note: if you don't want Redis to save data automatically, comment the save lines out!**
save 900 1  **\# save every 900 s as long as there is at least 1 change**
save 300 10
save 60 10000

\# By default Redis will stop accepting writes if RDB snapshots are enabled
\# (at least one save point) and the latest background save failed.
\# This will make the user aware (in a hard way) that data is not persisting
\# on disk properly, otherwise chances are that no one will notice and some
\# disaster will happen.
\#
\# If the background saving process starts working again Redis will
\# automatically allow writes again.
\#
\# However if you have set up proper monitoring of the Redis server
\# and persistence, you may want to disable this feature so that Redis will
\# continue to work as usual even if there are problems with disk,
\# permissions, and so forth.
\#\#\# snapshot persistence options \#\#\#\#\#\#\#\#\#
stop-writes-on-bgsave-error yes

\# Compress string objects using LZF when dumping .rdb databases?
\# By default that's set to 'yes' as it's almost always a win.
\# If you want to save some CPU in the saving child set it to 'no' but
\# the dataset will likely be bigger if you have compressible values or keys.
**\# Whether to compress data when storing it. The default is yes.**
rdbcompression yes

\# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
\# This makes the format more resistant to corruption but there is a performance
\# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
\# for maximum performance.
\#
\# RDB files created with checksum disabled have a checksum of zero that will
\# tell the loading code to skip the check.
**\# Whether to enable checksum verification**
rdbchecksum yes

\# The filename where to dump the DB
**\# Name of the data file**
dbfilename dump.rdb
\#\#\# snapshot persistence options \#\#\#\#\#\#\#\#\#

\# The working directory.
\#
\# The DB will be written inside this directory, with the filename specified
\# above using the 'dbfilename' configuration directive.
\#
\# The Append Only File will also be created inside this directory.
\#
\# Note that you must specify a directory here, not a file name.
**\# Data is persisted into the file named by 'dbfilename' under this directory.**
**\# Note that this must be a directory, not a file name.**
dir ./

\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\# REPLICATION \#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#

\# Master-Slave replication. Use slaveof to make a Redis instance a copy of
\# another Redis server. A few things to understand ASAP about Redis replication.
\#
\# 1) Redis replication is asynchronous, but you can configure a master to
\#    stop accepting writes if it appears to be not connected with at least
\#    a given number of slaves.
\# 2) Redis slaves are able to perform a partial resynchronization with the
\#    master if the replication link is lost for a relatively small amount of
\#    time. You may want to configure the replication backlog size (see the next
\#    sections of this file) with a sensible value depending on your needs.
\# 3) Replication is automatic and does not need user intervention. After a
\#    network partition slaves automatically try to reconnect to masters
\#    and resynchronize with them.
\#
**\# In short: use slaveof to make a Redis instance a copy (hot standby) of another Redis server.**
**\# Note: this configuration applies only to the current slave, so each slave can be configured with its own save intervals, its own listening port, and so on.**
**\# Directive format:**
**\#   slaveof <masterip> <masterport>  \# the master's IP and port**

\# If the master is password protected (using the "requirepass" configuration
\# directive below) it is possible to tell the slave to authenticate before
\# starting the replication synchronization process, otherwise the master will
\# refuse the slave request.
\#
**\# If the master is password protected, the slave must pass the password check before syncing, otherwise the master refuses the slave's request.**
\# masterauth <master-password>  **\# password for accessing the master**

\# When a slave loses its connection with the master, or when the replication
\# is still in progress, the slave can act in two different ways:
\#
\# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
\#    still reply to client requests, possibly with out of date data, or the
\#    data set may just be empty if this is the first synchronization.
\#
\# 2) if slave-serve-stale-data is set to 'no' the slave will reply with
\#    an error "SYNC with master in progress" to all kinds of commands
\#    except INFO and SLAVEOF.
\#
slave-serve-stale-data yes

\# You can configure a slave instance to accept writes or not. Writing against
\# a slave instance may be useful to store some ephemeral data (because data
\# written on a slave will be easily deleted after resync with the master) but
\# may also cause problems if clients are writing to it because of a
\# misconfiguration.
\#
\# Since Redis 2.6 by default slaves are read-only.
\#
\# Note: read only slaves are not designed to be exposed to untrusted clients
\# on the internet. It's just a protection layer against misuse of the instance.
\# Still a read only slave exports by default all the administrative commands
\# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
\# the security of read only slaves using 'rename-command' to shadow all the
\# administrative / dangerous commands.
**\# A slave only allows reads, not writes.**
slave-read-only yes

\# Replication SYNC strategy: disk or socket.
\#
\# -------------------------------------------------------
\# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
\# -------------------------------------------------------
\#
\# New slaves and reconnecting slaves that are not able to continue the replication
\# process just receiving differences, need to do what is called a "full
\# synchronization". An RDB file is transmitted from the master to the slaves.
\# The transmission can happen in two different ways:
\#
\# 1) Disk-backed: The Redis master creates a new process that writes the RDB
\#    file on disk. Later the file is transferred by the parent
\#    process to the slaves incrementally.
\# 2) Diskless: The Redis master creates a new process that directly writes the
\#    RDB file to slave sockets, without touching the disk at all.
\#
\# With disk-backed replication, while the RDB file is generated, more slaves
\# can be queued and served with the RDB file as soon as the current child producing
\# the RDB file finishes its work. With diskless replication instead once
\# the transfer starts, new slaves arriving will be queued and a new transfer
\# will start when the current one terminates.
\#
\# When diskless replication is used, the master waits a configurable amount of
\# time (in seconds) before starting the transfer in the hope that multiple slaves
\# will arrive and the transfer can be parallelized.
\#
\# With slow disks and fast (large bandwidth) networks, diskless replication
\# works better.
repl-diskless-sync no

\# When diskless replication is enabled, it is possible to configure the delay
\# the server waits in order to spawn the child that transfers the RDB via socket
\# to the slaves.
\#
\# This is important since once the transfer starts, it is not possible to serve
\# new slaves arriving, which will be queued for the next RDB transfer, so the server
\# waits a delay in order to let more slaves arrive.
\#
\# The delay is specified in seconds, and by default is 5 seconds. To disable
\# it entirely just set it to 0 seconds and the transfer will start ASAP.
repl-diskless-sync-delay 5

\# Slaves send PINGs to the server at a predefined interval. It's possible to change
\# this interval with the repl\_ping\_slave\_period option. The default value is 10
\# seconds.
\#
**\# Slaves ping the master at a fixed interval; change it with repl-ping-slave-period, default 10 seconds.**
\# repl-ping-slave-period 10

\# The following option sets the replication timeout for:
\#
\# 1) Bulk transfer I/O during SYNC, from the point of view of slave.
\# 2) Master timeout from the point of view of slaves (data, pings).
\# 3) Slave timeout from the point of view of masters (REPLCONF ACK pings).
\#
\# It is important to make sure that this value is greater than the value
\# specified for repl-ping-slave-period otherwise a timeout will be detected
\# every time there is low traffic between the master and the slave.
\#
**\# Timeout for bulk data transfer from the master, or for ping replies; default 60 seconds.**
**\# Make sure repl-timeout is greater than repl-ping-slave-period.**
\# repl-timeout 60

\# Disable TCP\_NODELAY on the slave socket after SYNC?
\#
\# If you select "yes" Redis will use a smaller number of TCP packets and
\# less bandwidth to send data to slaves. But this can add a delay for
\# the data to appear on the slave side, up to 40 milliseconds with
\# Linux kernels using a default configuration.
\#
\# If you select "no" the delay for data to appear on the slave side will
\# be reduced but more bandwidth will be used for replication.
\#
\# By default we optimize for low latency, but in very high traffic conditions
\# or when the master and slaves are many hops away, turning this to "yes" may
\# be a good idea.
**\# In short: "yes" trades up to ~40 ms of extra slave-side delay (on a default Linux kernel) for fewer TCP packets and less bandwidth; "no" minimizes the delay at the cost of more replication bandwidth.**
repl-disable-tcp-nodelay no

\# Set the replication backlog size. The backlog is a buffer that accumulates
\# slave data when slaves are disconnected for some time, so that when a slave
\# wants to reconnect again, often a full resync is not needed, but a partial
\# resync is enough, just passing the portion of data the slave missed while
\# disconnected.
\#
\# The bigger the replication backlog, the longer the slave can be
\# disconnected and later still be able to perform a partial resynchronization.
\#
\# The backlog is only allocated once there is at least a slave connected.
\#
\# repl-backlog-size 1mb

\# After a master has no longer connected slaves for some time, the backlog
\# will be freed. The following option configures the amount of seconds that
\# need to elapse, starting from the time the last slave disconnected, for
\# the backlog buffer to be freed.
\#
\# A value of 0 means to never release the backlog.
\#
**\# Seconds after the last slave disconnects before the backlog is freed; 0 means never free it.**
\# repl-backlog-ttl 3600

\# The slave priority is an integer number published by Redis in the INFO output.
\# It is used by Redis Sentinel in order to select a slave to promote into a
\# master if the master is no longer working correctly.
\#
\# A slave with a low priority number is considered better for promotion, so
\# for instance if there are three slaves with priority 10, 100, 25 Sentinel will
\# pick the one with priority 10, that is the lowest.
\#
\# However a special priority of 0 marks the slave as not able to perform the
\# role of master, so a slave with priority of 0 will never be selected by
\# Redis Sentinel for promotion.
\#
\# By default the priority is 100.
**\# If the master stops working correctly, the slave with the lowest priority value is promoted to master; a priority of 0 means the slave can never be promoted.**
slave-priority 100

\# It is possible for a master to stop accepting writes if there are fewer than
\# N slaves connected, having a lag less or equal than M seconds.
\#
\# The N slaves need to be in "online" state.
\#
\# The lag in seconds, which must be <= the specified value, is calculated from
\# the last ping received from the slave, which is usually sent every second.
\#
\# This option does not GUARANTEE that N replicas will accept the write, but
\# will limit the window of exposure for lost writes in case not enough slaves
\# are available, to the specified number of seconds.
\#
\# For example to require at least 3 slaves with a lag <= 10 seconds use:
\#
\# min-slaves-to-write 3
\# min-slaves-max-lag 10
\#
\# Setting one or the other to 0 disables the feature.
\#
\# By default min-slaves-to-write is set to 0 (feature disabled) and
\# min-slaves-max-lag is set to 10.

\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\# SECURITY \#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#

\# Require clients to issue AUTH <PASSWORD> before processing any other
\# commands. This might be useful in environments in which you do not trust
\# others with access to the host running redis-server.
\#
\# This should stay commented out for backward compatibility and because most
\# people do not need auth (e.g. they run their own servers).
\#
\# Warning: since Redis is pretty fast an outside user can try up to
\# 150k passwords per second against a good box.
\# This means that you should
\# use a very strong password otherwise it will be very easy to break.
**\# Warning: because Redis is so fast, an outside attacker on a good box can attempt roughly 150k passwords per second, so pick a very, very strong password to resist brute force.**
**\# This requires clients to issue AUTH <PASSWORD> before executing any other command.**
\# requirepass foobared

\# Command renaming.
\#
\# It is possible to change the name of dangerous commands in a shared
\# environment. For instance the CONFIG command may be renamed into something
\# hard to guess so that it will still be available for internal-use tools
\# but not available for general clients.
\#
\# Example:
\#
\# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
\#
\# It is also possible to completely kill a command by renaming it into
\# an empty string:
\#
\# rename-command CONFIG ""
\#
\# Please note that changing the name of commands that are logged into the
\# AOF file or transmitted to slaves may cause problems.
**\# (AOF = append-only file)**

\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\# LIMITS \#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#

\# Set the max number of connected clients at the same time. By default
\# this limit is set to 10000 clients, however if the Redis server is not
\# able to configure the process file limit to allow for the specified limit
\# the max number of allowed clients is set to the current file limit
\# minus 32 (as Redis reserves a few file descriptors for internal uses).
\#
\# Once the limit is reached Redis will close all the new connections sending
\# an error 'max number of clients reached'.
\#
**\# Maximum number of simultaneous client connections; 0 means no explicit limit (bounded only by the file descriptors the Redis process may open). When the limit is reached, new connections are refused with the error 'max number of clients reached'.**
\# maxclients 10000

\# Don't use more memory than the specified amount of bytes.
\# When the memory limit is reached Redis will try to remove keys
\# according to the eviction policy selected (see maxmemory-policy).
**\# Maximum memory that may be used. When the limit is reached, Redis tries to free keys per the eviction policy, e.g. releasing keys that are about to expire while protecting keys that still have a long life ahead.**

\# If Redis can't remove keys according to the policy, or if the policy is
\# set to 'noeviction', Redis will start to reply with errors to commands
\# that would use more memory, like SET, LPUSH, and so on, and will continue
\# to reply to read-only commands like GET.
**\# If even that is not enough, Redis reports errors on writes, but read queries such as GET are still answered.**
\#
\# This option is usually useful when using Redis as an LRU cache, or to set
\# a hard memory limit for an instance (using the 'noeviction' policy).
\#
\# WARNING: If you have slaves attached to an instance with maxmemory on,
\# the size of the output buffers needed to feed the slaves are subtracted
\# from the used memory count, so that network problems / resyncs will
\# not trigger a loop where keys are evicted, and in turn the output
\# buffer of slaves is full with DELs of keys evicted triggering the deletion
\# of more keys, and so forth until the database is completely emptied.
**\# Warning: if you want to treat Redis as a real DB, do not set <maxmemory>; set it only when Redis serves as a cache or a 'state' server, like a memcached-style cache. Used as a real database, memory becomes a very large cost.**
\#
\# In short... if you have slaves attached it is suggested that you set a lower
\# limit for maxmemory so that there is some free RAM on the system for slave
\# output buffers (but this is not needed if the policy is 'noeviction').
\#
**\# Note: the old vm mechanism kept keys in memory while values could be stored in the swap area; it no longer applies to recent versions.**
\# maxmemory <bytes>

\# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
\# is reached.
\# You can select among the following behaviors:
\#
\# volatile-lru -> remove the key with an expire set using an LRU algorithm
\# allkeys-lru -> remove any key according to the LRU algorithm
\# volatile-random -> remove a random key with an expire set
\# allkeys-random -> remove a random key, any key
\# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
\# noeviction -> don't expire at all, just return an error on write operations
**\# In other words: volatile-\* policies only consider keys with an expire set; allkeys-\* consider every key; noeviction evicts nothing and makes writes fail with an error.**
\#
\# Note: with any of the above policies, Redis will return an error on write
\# operations, when there are no suitable keys for eviction.
\#
\# At the date of writing these commands are: set setnx setex append
\# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
\# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
\# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
\# getset mset msetnx exec sort
\#
**\# The default policy:**
\#
\# maxmemory-policy noeviction

\# LRU and minimal TTL algorithms are not precise algorithms but approximated
\# algorithms (in order to save memory), so you can tune them for speed or
\# accuracy. By default Redis will check five keys and pick the one that was
\# used least recently; you can change the sample size using the following
\# configuration directive.
\#
\# The default of 5 produces good enough results. 10 approximates very closely
\# true LRU but costs a bit more CPU. 3 is very fast but not very accurate.
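The approximated LRU just described can be pictured with a small sketch: sample a few keys at random and evict the least recently used among the sample. This is illustrative code under simplified assumptions (a plain `last_access` timestamp map), not Redis's actual implementation:

```python
import random
import time

# Sketch of sampled (approximate) LRU eviction, as described above.
# Assumption: last_access maps key -> last-access timestamp; not Redis internals.
def pick_eviction_victim(last_access: dict, samples: int = 5) -> str:
    # Check `samples` random keys and pick the one used least recently.
    candidates = random.sample(list(last_access), min(samples, len(last_access)))
    return min(candidates, key=last_access.get)

now = time.time()
last_access = {"a": now - 300, "b": now - 10, "c": now - 60}
# With samples >= number of keys this degenerates to exact LRU: "a" is oldest.
print(pick_eviction_victim(last_access, samples=3))  # a
```

With a smaller sample the result is only probably the oldest key, which is exactly the speed/accuracy trade-off the directive below tunes.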
\#
**\# For evicting keys, the LRU and minimal-TTL algorithms are approximate rather than exact (to save memory), so Redis checks a sample of keys; the sample size can be changed here.**
\# maxmemory-samples 5

\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\# APPEND ONLY MODE \#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#

\# By default Redis asynchronously dumps the dataset on disk. This mode is
\# good enough in many applications, but an issue with the Redis process or
\# a power outage may result in a few minutes of writes being lost (depending on
\# the configured save points).
\#
\# The Append Only File is an alternative persistence mode that provides
\# much better durability. For instance using the default data fsync policy
\# (see later in the config file) Redis can lose just one second of writes in a
\# dramatic event like a server power outage, or a single write if something
\# goes wrong with the Redis process itself, but the operating system is
\# still running correctly.
\#
\# AOF and RDB persistence can be enabled at the same time without problems.
\# If the AOF is enabled on startup Redis will load the AOF, that is the file
\# with the better durability guarantees.
\#
\# Please check http://redis.io/topics/persistence for more information.
**\# By default Redis dumps the dataset to disk asynchronously in the background, but that snapshot is expensive and cannot run very often, so a power cut or a pulled plug can lose a fairly wide window of recent writes.**
**\# Append-only mode is the more robust alternative: every received write is appended to appendonly.aof, and on restart Redis replays that file to rebuild its state. Because the file keeps growing, Redis also provides the BGREWRITEAOF command to compact it.**
**\# You can enable AOF and the asynchronous dumps at the same time; when both files exist, Redis rebuilds the dataset from appendonly.aof and ignores dump.rdb, since the AOF has the better durability guarantees.**
appendonly no

\# The name of the append only file (default: "appendonly.aof")
appendfilename "appendonly.aof"

\# The fsync() call tells the Operating System to actually write data on disk
\# instead of waiting for more data in the output buffer. Some OS will really flush
\# data on disk, some other OS will just try to do it ASAP.
\#
\# Redis supports three different modes:
\#
\# no: don't fsync, just let the OS flush the data when it wants. Faster.
\# always: fsync after every write to the append only log. Slow, Safest.
\# everysec: fsync only one time every second. Compromise.
\#
\# The default is "everysec", as that's usually the right compromise between
\# speed and data safety. It's up to you to understand if you can relax this to
\# "no" that will let the operating system flush the output buffer when
\# it wants, for better performances (but if you can live with the idea of
\# some data loss consider the default persistence mode that's snapshotting),
\# or on the contrary, use "always" that's very slow but a bit safer than
\# everysec.
\#
\# For more details please check the following article:
\# http://antirez.com/post/redis-persistence-demystified.html
\#
\# If unsure, use "everysec".
**\# appendfsync always  \# do not use on SSDs: it is the setting most damaging to them, cutting an SSD's lifespan from years down to months.**
appendfsync everysec  **\# recommended: the best balance of safety and write performance**
\# appendfsync no  **\# not recommended**

\# When the AOF fsync policy is set to always or everysec, and a background
\# saving process (a background save or AOF log background rewriting) is
\# performing a lot of I/O against the disk, in some Linux configurations
\# Redis may block too long on the fsync() call. Note that there is no fix for
\# this currently, as even performing fsync in a different thread will block
\# our synchronous write(2) call.
\#
\# In order to mitigate this problem it's possible to use the following option
\# that will prevent fsync() from being called in the main process while a
\# BGSAVE or BGREWRITEAOF is in progress.
\#
\# This means that while another child is saving, the durability of Redis is
\# the same as "appendfsync none". In practical terms, this means that it is
\# possible to lose up to 30 seconds of log in the worst scenario (with the
\# default Linux settings).
\#
\# If you have latency problems turn this to "yes". Otherwise leave it as
\# "no" that is the safest pick from the point of view of durability.
**\# In short: with fsync policy always or everysec, heavy background-save or AOF-rewrite I/O can make Redis block too long on fsync() on some Linux configurations; no-appendfsync-on-rewrite mitigates this.**
no-appendfsync-on-rewrite no

\# Automatic rewrite of the append only file.
\# Redis is able to automatically rewrite the log file implicitly calling
\# BGREWRITEAOF when the AOF log size grows by the specified percentage.
\#
\# This is how it works: Redis remembers the size of the AOF file after the
\# latest rewrite (if no rewrite has happened since the restart, the size of
\# the AOF at startup is used).
\#
\# This base size is compared to the current size. If the current size is
\# bigger than the specified percentage, the rewrite is triggered. Also
\# you need to specify a minimal size for the AOF file to be rewritten, this
\# is useful to avoid rewriting the AOF file even if the percentage increase
\# is reached but it is still pretty small.
\#
\# Specify a percentage of zero in order to disable the automatic AOF
\# rewrite feature.
**\# The two directives below drive the automatic BGREWRITEAOF. With the values below and AOF persistence enabled, Redis runs BGREWRITEAOF once the AOF file is larger than 64 MB and has at least doubled (grown 100%) since the last rewrite.**
**\# Setting auto-aof-rewrite-percentage to 0 disables the AOF rewrite feature.**
auto-aof-rewrite-percentage 100  **\# may be any integer**
auto-aof-rewrite-min-size 64mb

\# An AOF file may be found to be truncated at the end during the Redis
\# startup process, when the AOF data gets loaded back into memory.
\# This may happen when the system where Redis is running
\# crashes, especially when an ext4 filesystem is mounted without the
\# data=ordered option (however this can't happen when Redis itself
\# crashes or aborts but the operating system still works correctly).
\#
\# Redis can either exit with an error when this happens, or load as much
\# data as possible (the default now) and start if the AOF file is found
\# to be truncated at the end. The following option controls this behavior.
\#
\# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
\# the Redis server starts emitting a log to inform the user of the event.
\# Otherwise if the option is set to no, the server aborts with an error
\# and refuses to start. When the option is set to no, the user is required
\# to fix the AOF file using the "redis-check-aof" utility before restarting
\# the server.
\#
\# Note that if the AOF file is found to be corrupted in the middle
\# the server will still exit with an error. This option only applies when
\# Redis tries to read more data from the AOF file but not enough bytes
\# are found.
aof-load-truncated yes

\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\# LUA SCRIPTING \#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#

\# Max execution time of a Lua script in milliseconds.
\#
\# If the maximum execution time is reached Redis will log that a script is
\# still in execution after the maximum allowed time and will start to
\# reply to queries with an error.
\#
\# When a long running script exceeds the maximum execution time only the
\# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
\# used to stop a script that has not yet called write commands. The second
\# is the only way to shut down the server in case a write command was
\# already issued by the script but the user doesn't want to wait for the natural
\# termination of the script.
\#
\# Set it to 0 or a negative value for unlimited execution without warnings.
**\# Maximum execution time for a Lua script: 5000 ms (5 seconds); 0 or a negative value means unlimited execution time.**
lua-time-limit 5000

\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\# REDIS CLUSTER \#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#
\#
\# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
\# WARNING EXPERIMENTAL: Redis Cluster is considered to be stable code, however
\# in order to mark it as "mature" we need to wait for a non trivial percentage
\# of users to deploy it in production.
\# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ \# \# Normal Redis instances can't be part of a Redis Cluster; only nodes that are \# started as cluster nodes can. In order to start a Redis instance as a \# cluster node enable the cluster support uncommenting the following: \# \# cluster-enabled yes \# Every cluster node has a cluster configuration file. This file is not \# intended to be edited by hand. It is created and updated by Redis nodes. \# Every Redis Cluster node requires a different cluster configuration file. \# Make sure that instances running in the same system do not have \# overlapping cluster configuration file names. \# \# cluster-config-file nodes-6379.conf \# Cluster node timeout is the amount of milliseconds a node must be unreachable \# for it to be considered in failure state. \# Most other internal time limits are multiple of the node timeout. \# \# cluster-node-timeout 15000 \# A slave of a failing master will avoid to start a failover if its data \# looks too old. \# \# There is no simple way for a slave to actually have a exact measure of \# its "data age", so the following two checks are performed: \# \# 1) If there are multiple slaves able to failover, they exchange messages \# in order to try to give an advantage to the slave with the best \# replication offset (more data from the master processed). \# Slaves will try to get their rank by offset, and apply to the start \# of the failover a delay proportional to their rank. \# \# 2) Every single slave computes the time of the last interaction with \# its master. This can be the last ping or command received (if the master \# is still in the "connected" state), or the time that elapsed since the \# disconnection with the master (if the replication link is currently down). \# If the last interaction is too old, the slave will not try to failover \# at all. \# \# The point "2" can be tuned by user. 
Specifically a slave will not perform \# the failover if, since the last interaction with the master, the time \# elapsed is greater than: \# \# (node-timeout \* slave-validity-factor) + repl-ping-slave-period \# \# So for example if node-timeout is 30 seconds, and the slave-validity-factor \# is 10, and assuming a default repl-ping-slave-period of 10 seconds, the \# slave will not try to failover if it was not able to talk with the master \# for longer than 310 seconds. \# \# A large slave-validity-factor may allow slaves with too old data to failover \# a master, while a too small value may prevent the cluster from being able to \# elect a slave at all. \# \# For maximum availability, it is possible to set the slave-validity-factor \# to a value of 0, which means, that slaves will always try to failover the \# master regardless of the last time they interacted with the master. \# (However they'll always try to apply a delay proportional to their \# offset rank). \# \# Zero is the only value able to guarantee that when all the partitions heal \# the cluster will always be able to continue. \# \# cluster-slave-validity-factor 10 \# Cluster slaves are able to migrate to orphaned masters, that are masters \# that are left without working slaves. This improves the cluster ability \# to resist to failures as otherwise an orphaned master can't be failed over \# in case of failure if it has no working slaves. \# \# Slaves migrate to orphaned masters only if there are still at least a \# given number of other working slaves for their old master. This number \# is the "migration barrier". A migration barrier of 1 means that a slave \# will migrate only if there is at least 1 other working slave for its master \# and so forth. It usually reflects the number of slaves you want for every \# master in your cluster. \# \# Default is 1 (slaves migrate only if their masters remain with at least \# one slave). To disable migration just set it to a very large value. 
\# A value of 0 can be set but is useful only for debugging and dangerous \# in production. \# \# cluster-migration-barrier 1 \# By default Redis Cluster nodes stop accepting queries if they detect there \# is at least an hash slot uncovered (no available node is serving it). \# This way if the cluster is partially down (for example a range of hash slots \# are no longer covered) all the cluster becomes, eventually, unavailable. \# It automatically returns available as soon as all the slots are covered again. \# \# However sometimes you want the subset of the cluster which is working, \# to continue to accept queries for the part of the key space that is still \# covered. In order to do so, just set the cluster-require-full-coverage \# option to no. \# \# cluster-require-full-coverage yes \# In order to setup your cluster make sure to read the documentation \# available at http://redis.io web site. \#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\# SLOW LOG \#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\# \# The Redis Slow Log is a system to log queries that exceeded a specified \# execution time. The execution time does not include the I/O operations \# like talking with the client, sending the reply and so forth, \# but just the time needed to actually execute the command (this is the only \# stage of command execution where the thread is blocked and can not serve \# other requests in the meantime). \# \# You can configure the slow log with two parameters: one tells Redis \# what is the execution time, in microseconds, to exceed in order for the \# command to get logged, and the other parameter is the length of the \# slow log. When a new command is logged the oldest one is removed from the \# queue of logged commands. \# The following time is expressed in microseconds, so 1000000 is equivalent \# to one second. Note that a negative number disables the slow log, while \# a value of zero forces the logging of every command. 
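The slow-log semantics described above (a microsecond threshold plus a fixed-length queue that evicts its oldest entry) can be sketched in Python. `SlowLog` is a toy illustration of the rule, not Redis's actual implementation:

```python
from collections import deque

class SlowLog:
    """Toy model of the Redis slow log: commands whose execution time
    reaches the threshold (in microseconds) are kept in a fixed-length
    queue; when the queue is full the oldest entry is dropped."""

    def __init__(self, slower_than_us=10000, max_len=128):
        self.slower_than_us = slower_than_us
        self.entries = deque(maxlen=max_len)  # oldest evicted automatically

    def record(self, command, duration_us):
        # A negative threshold disables the slow log entirely;
        # a threshold of 0 logs every command.
        if self.slower_than_us < 0:
            return
        if duration_us >= self.slower_than_us:
            self.entries.append((command, duration_us))

log = SlowLog(slower_than_us=10000, max_len=128)
log.record("GET key", 50)       # fast: not logged
log.record("KEYS *", 250000)    # slow: logged
print(len(log.entries))         # -> 1
```

The `deque(maxlen=...)` mirrors slowlog-max-len: appends past the limit silently drop the oldest record, so memory use stays bounded (the real server additionally supports SLOWLOG RESET to free it all at once).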
**\# The Redis slow log records queries that exceed a specified execution time. The execution time does not include I/O such as talking with the client or sending the reply, only the time spent actually executing the command.** **\# You can configure two parameters: one is the slow-query threshold, in microseconds; the other is the length of the slow log, which behaves like a queue.** **\# A negative number disables the slow log, while 0 causes every command to be logged.** slowlog-log-slower-than 10000 \# There is no limit to this length. Just be aware that it will consume memory. \# You can reclaim memory used by the slow log with SLOWLOG RESET. **\# There is no limit on the log length, but note that it consumes memory.** **\# Memory used by the slow log can be reclaimed with SLOWLOG RESET.** **\# The default of 128 is recommended; once the log exceeds 128 entries, the record that entered the queue first is evicted.** **\# Leaving this unbounded would consume too much memory, so do set it. Use the SLOWLOG RESET command to reclaim the memory used by the slow log.** slowlog-max-len 128 \#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\# LATENCY MONITOR \#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\# \# The Redis latency monitoring subsystem samples different operations \# at runtime in order to collect data related to possible sources of \# latency of a Redis instance. \# \# Via the LATENCY command this information is available to the user that can \# print graphs and obtain reports. \# \# The system only logs operations that were performed in a time equal or \# greater than the amount of milliseconds specified via the \# latency-monitor-threshold configuration directive. When its value is set \# to zero, the latency monitor is turned off. \# \# By default latency monitoring is disabled since it is mostly not needed \# if you don't have latency issues, and collecting data has a performance \# impact, that while very small, can be measured under big load. Latency \# monitoring can easily be enabled at runtime using the command \# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed. latency-monitor-threshold 0 \#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\# EVENT NOTIFICATION \#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\# \# Redis can notify Pub/Sub clients about events happening in the key space.
\# This feature is documented at http://redis.io/topics/notifications \# \# For instance if keyspace events notification is enabled, and a client \# performs a DEL operation on key "foo" stored in the Database 0, two \# messages will be published via Pub/Sub: \# \# PUBLISH \_\_keyspace@0\_\_:foo del \# PUBLISH \_\_keyevent@0\_\_:del foo \# \# It is possible to select the events that Redis will notify among a set \# of classes. Every class is identified by a single character: \# \# K Keyspace events, published with \_\_keyspace@<db>\_\_ prefix. \# E Keyevent events, published with \_\_keyevent@<db>\_\_ prefix. \# g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ... \# $ String commands \# l List commands \# s Set commands \# h Hash commands \# z Sorted set commands \# x Expired events (events generated every time a key expires) \# e Evicted events (events generated when a key is evicted for maxmemory) \# A Alias for g$lshzxe, so that the "AKE" string means all the events. **\#** **\# When events occur, Redis can notify Pub/Sub clients.** **\# You can select which event types Redis notifies from the table below. Each event type is identified by a single character:** **\# K Keyspace events, published with the \_\_keyspace@<db>\_\_ prefix** **\# E Keyevent events, published with the \_\_keyevent@<db>\_\_ prefix** **\# g Generic commands (non type-specific), like DEL, EXPIRE, RENAME, ...** **\# $ String commands** **\# l List commands** **\# s Set commands** **\# h Hash commands** **\# z Sorted set commands** **\# x Expired events (generated every time a key expires)** **\# e Evicted events (generated when a key is evicted for maxmemory)** **\# A Alias for g$lshzxe, so "AKE" means all events** \# The "notify-keyspace-events" takes as argument a string that is composed \# of zero or multiple characters. The empty string means that notifications \# are disabled.
**\# Takes as argument a string composed of zero or more characters. The empty string means notifications are disabled.** \# \# Example: to enable list and generic events, from the point of view of the \# event name, use: \# \# notify-keyspace-events Elg \# \# Example 2: to get the stream of the expired keys subscribing to channel \# name \_\_keyevent@0\_\_:expired use: \# \# notify-keyspace-events Ex \# \# By default all notifications are disabled because most users don't need \# this feature and the feature has some overhead. Note that if you don't \# specify at least one of K or E, no events will be delivered. **\# Example: enable list and generic events:** **\# notify-keyspace-events Elg** **\# By default all notifications are disabled, because most users don't need this feature and it carries some overhead.** **\# Note that if you don't specify at least one of K or E, no events will be delivered.** notify-keyspace-events "" \#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\# ADVANCED CONFIG \#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\# Common memory optimizations and parameters Redis performance depends entirely on memory, so we need to know how to control and save it. The first and most important point: do not enable Redis's VM (virtual memory) option. It was intended as a persistence strategy for swapping data between memory and disk when the data set exceeds physical memory, but its memory-management cost is very high, so keep it off: check that vm-enabled is set to no in your redis.conf. Second, it is best to set the maxmemory option in redis.conf. It tells Redis to start rejecting subsequent write requests once the given amount of physical memory is in use. This protects Redis from consuming so much physical memory that the system swaps, which would severely hurt performance or even crash the server. Redis also provides a group of parameters per data type to control memory use. A Redis Hash value is internally a HashMap; if that map has relatively few members, Redis stores it in a compact, one-dimensional linear format that saves the memory overhead of many pointers. This is controlled by the following two entries in redis.conf: hash-max-zipmap-entries 64 hash-max-zipmap-value 512 hash-max-zipmap-entries means: when the value's internal map has no more than this many members, the linear compact format is used. The default is 64, so a value with 64 or fewer members uses linear compact storage; above that it automatically converts to a real HashMap. hash-max-zipmap-value means: the linear compact format is used to save space only while every member value in the map is no longer than this many bytes. If either threshold is exceeded, the value converts to a real HashMap and stops saving memory. Does that mean bigger settings are always better? Certainly not: the HashMap's advantage is O(1) lookup and update, whereas abandoning hashing for one-dimensional storage makes them O(n). With few members the difference is negligible; otherwise performance suffers badly. So weigh this setting carefully; fundamentally it is a trade-off between time cost and space cost. Similar parameters include: list-max-ziplist-entries 512 Meaning: a list with at most this many nodes uses the pointer-free compact storage format. list-max-ziplist-value 64 Meaning: list nodes whose values are smaller than this many bytes use the compact storage format. set-max-intset-entries 512 Meaning: a set whose contents are all numeric and which has at most this many nodes uses the compact storage format.
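The threshold rule described above can be made concrete with a small Python sketch. This is an illustration of the decision rule only, not Redis's internal code; the default values follow the hash-max-zipmap-entries 64 / hash-max-zipmap-value 512 example in the text:

```python
def hash_uses_compact_encoding(entries, max_entries=64, max_value_len=512):
    """Sketch of the compact-encoding rule: a hash keeps the
    pointer-free linear (zipmap/ziplist) encoding only while BOTH
    thresholds hold; exceeding either converts it to a real HashMap.
    `entries` is a dict of field name -> string value."""
    if len(entries) > max_entries:
        return False  # too many members
    if any(len(v) > max_value_len for v in entries.values()):
        return False  # some member value is too long
    return True

small = {f"f{i}": "x" * 10 for i in range(10)}
big = {f"f{i}": "x" * 10 for i in range(100)}   # too many fields
print(hash_uses_compact_encoding(small))  # -> True
print(hash_uses_compact_encoding(big))    # -> False
```

The same and-of-two-limits shape applies to list-max-ziplist-entries/value and zset-max-ziplist-entries/value; set-max-intset-entries adds the extra condition that every member be an integer.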
Redis's internal implementation does not do much special optimization of memory allocation, so a degree of memory fragmentation exists, though in most cases it will not become a Redis performance bottleneck. However, if most of the data stored in Redis is numeric, Redis uses a shared-integer scheme to avoid allocation overhead: at startup it allocates a pool of integer objects for the values 1 through n, and when a stored value happens to fall in that range, the object is taken directly from the pool and shared via reference counting. When the system stores large numbers of numeric values, this saves memory and improves performance to a degree. The pool size n is set by the macro REDIS\_SHARED\_INTEGERS in the source code, 10000 by default; modify it to suit your needs and recompile. \# Hashes are encoded using a memory efficient data structure when they have a \# small number of entries, and the biggest entry does not exceed a given \# threshold. These thresholds can be configured using the following directives. **Default 512. When a map's element count reaches the maximum but its largest element has not reached the value threshold, the hash is stored with a special encoding that uses memory more efficiently. This parameter works together with the next one to set the two thresholds; this one sets the element count.** **\# A Redis Hash value is internally a HashMap, with two different implementations:** **\# when the hash has few members, Redis saves memory by storing it compactly, like a one-dimensional array, rather than as a real HashMap; the value's redisObject encoding is then zipmap,** **\# and when the member count grows it automatically converts to a real HashMap, with encoding ht.** **hash-max-ziplist-entries 512** **Default 64: sets the maximum length of an element value in the map.** hash-max-ziplist-value 64 \# Similarly to hashes, small lists are also encoded in a special way in order \# to save a lot of space. The special representation is only used when \# you are under the following limits: **\# Like hashes, many small lists are encoded in a special way to save space.** **\# List nodes whose values are smaller than the given number of bytes use the compact storage format.** **\# Default 512:** **list-max-ziplist-entries 512** **Default 64:** list-max-ziplist-value 64 \# Sets have a special encoding in just one case: when a set is composed \# of just strings that happen to be integers in radix 10 in the range \# of 64 bit signed integers. \# The following configuration setting sets the limit in the size of the \# set in order to use this special memory saving encoding. **When a set contains only numeric data and the number of integer elements does not exceed the specified value, a special encoding is used.** **\# A set whose contents are all numeric and whose node count is below the limit uses the compact storage format.** **\# Default 512:** set-max-intset-entries 512 \# Similarly to hashes and lists, sorted sets are also specially encoded in \# order to save a lot of space.
This encoding is only used when the length and \# elements of a sorted set are below the following limits: **\# Like hashes and lists, sorted sets within the specified limits are stored with a special encoding to save space.** **\# Sorted-set nodes whose values are smaller than the given number of bytes use the compact storage format.** **Default 128:** **zset-max-ziplist-entries 128** **Default 64:** zset-max-ziplist-value 64 \# HyperLogLog sparse representation bytes limit. The limit includes the \# 16 bytes header. When an HyperLogLog using the sparse representation crosses \# this limit, it is converted into the dense representation. \# \# A value greater than 16000 is totally useless, since at that point the \# dense representation is more memory efficient. \# \# The suggested value is ~ 3000 in order to have the benefits of \# the space efficient encoding without slowing down too much PFADD, \# which is O(N) with the sparse encoding. The value can be raised to \# ~ 10000 when CPU is not a concern, but space is, and the data set is \# composed of many HyperLogLogs with cardinality in the 0 - 15000 range. hll-sparse-max-bytes 3000 \# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in \# order to help rehashing the main Redis hash table (the one mapping top-level \# keys to values). The hash table implementation Redis uses (see dict.c) \# performs a lazy rehashing: the more operation you run into a hash table \# that is rehashing, the more rehashing "steps" are performed, so if the \# server is idle the rehashing is never complete and some more memory is used \# by the hash table. \# \# The default is to use this millisecond 10 times every second in order to \# actively rehash the main dictionaries, freeing memory when possible. \# \# If unsure: \# use "activerehashing no" if you have hard latency requirements and it is \# not a good thing in your environment that Redis can reply from time to time \# to queries with 2 milliseconds delay. \# \# use "activerehashing yes" if you don't have such hard requirements but \# want to free memory asap when possible.
**Default yes; controls whether the main hash table is rebuilt automatically. Active rehashing uses 1 millisecond of CPU time out of every 100 milliseconds to reorganize Redis's hash table. Rehashing is done lazily: the more operations hit a hash table that is being rehashed, the more rehashing steps are performed, so if the server stays idle the rehashing never completes and the hash table keeps using extra memory. If you have strict real-time requirements and cannot accept Redis occasionally answering requests with a 2 millisecond delay, set activerehashing to no; otherwise set it to yes to save memory.** **\# Redis spends 1 millisecond out of every 100 milliseconds of CPU time rehashing the main hash table, which can reduce memory use.** **\# If your use case has very strict real-time requirements and Redis occasionally delaying requests by 2 milliseconds is unacceptable, set this to no.** **\# Without such strict real-time requirements, set it to yes so that memory can be freed as quickly as possible.** activerehashing yes \# The client output buffer limits can be used to force disconnection of clients \# that are not reading data from the server fast enough for some reason (a \# common reason is that a Pub/Sub client can't consume messages as fast as the \# publisher can produce them). \# \# The limit can be set differently for the three different classes of clients: \# \# normal -> normal clients including MONITOR clients \# slave -> slave clients \# pubsub -> clients subscribed to at least one pubsub channel or pattern \# \# The syntax of every client-output-buffer-limit directive is the following: \# \# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds> \# \# A client is immediately disconnected once the hard limit is reached, or if \# the soft limit is reached and remains reached for the specified number of \# seconds (continuously). \# So for instance if the hard limit is 32 megabytes and the soft limit is \# 16 megabytes / 10 seconds, the client will get disconnected immediately \# if the size of the output buffers reach 32 megabytes, but will also get \# disconnected if the client reaches 16 megabytes and continuously overcomes \# the limit for 10 seconds. \# \# By default normal clients are not limited because they don't receive data \# without asking (in a push way), but just after a request, so only \# asynchronous clients may create a scenario where data is requested faster \# than it can read.
\# \# Instead there is a default limit for pubsub and slave clients, since \# subscribers and slaves receive data in a push fashion. \# \# Both the hard or the soft limit can be disabled by setting them to zero. **\# Client output buffer limits can be used to force disconnection of clients that, for some reason, are not reading data from the server fast enough** **\# (a common reason is a Pub/Sub client that cannot consume messages as fast as the publisher produces them).** **\# Limits can be set separately for three classes of clients:** **\# normal -> normal clients, including MONITOR clients** **\# slave -> slave clients** **\# pubsub -> clients subscribed to at least one pubsub channel or pattern** **\# The syntax of every client-output-buffer-limit directive is:** **\# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>** **\# A client is disconnected immediately once the hard limit is reached, or when the soft limit is reached and stays exceeded for the specified number of seconds (continuously).** **\# For example, with a hard limit of 32 megabytes and a soft limit of 16 megabytes / 10 seconds, a client is disconnected immediately if its output buffer reaches 32 megabytes, and is also disconnected if it reaches 16 megabytes and continuously exceeds that limit for 10 seconds.** **\# By default normal clients are not limited, because they only receive data after a request rather than in a push fashion;** **\# only asynchronous clients can end up requesting data faster than they can read it.** **\# Set both the hard and the soft limit to 0 to disable this feature.** client-output-buffer-limit normal 0 0 0 client-output-buffer-limit slave 256mb 64mb 60 client-output-buffer-limit pubsub 32mb 8mb 60 \# Redis calls an internal function to perform many background tasks, like \# closing connections of clients in timeout, purging expired keys that are \# never requested, and so forth. \# \# Not all tasks are performed with the same frequency, but Redis checks for \# tasks to perform according to the specified "hz" value. \# \# By default "hz" is set to 10. Raising the value will use more CPU when \# Redis is idle, but at the same time will make Redis more responsive when \# there are many keys expiring at the same time, and timeouts may be \# handled with more precision. \# \# The range is between 1 and 500, however a value over 100 is usually not \# a good idea. Most users should use the default of 10 and raise this up to \# 100 only in environments where very low latency is required. hz 10 \# When a child rewrites the AOF file, if the following option is enabled \# the file will be fsync-ed every 32 MB of data generated.
This is useful \# in order to commit the file to the disk more incrementally and avoid \# big latency spikes. **\# Redis calls internal functions to perform many background tasks, such as closing timed-out client connections, purging expired keys that are never requested, and so on.** **\# Not all tasks are performed at the same frequency, but Redis checks which tasks to perform according to the specified "hz" value.** **\# By default "hz" is set to 10.** **\# Raising the value uses more CPU while Redis is idle, but makes Redis more responsive when many keys expire at the same time, and lets timeouts be handled with more precision.** **\# The range is 1 to 500, but a value above 100 is usually not a good idea.** **\# Most users should use the preset value of 10, raising it up to 100 only where very low latency is required.** **\# When a child process rewrites the AOF file, if the option below is enabled, the file is fsync-ed every 32 MB of data generated. This helps commit the file to disk more incrementally and avoids big latency spikes.** aof-rewrite-incremental-fsync yes
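The hard/soft output-buffer rule described earlier can be sketched in Python. This is a toy model of the disconnect decision, not Redis's implementation; `should_disconnect` and its state handling are illustrative only:

```python
def should_disconnect(buf_bytes, now, soft_since, hard, soft, soft_seconds):
    """Sketch of the client-output-buffer-limit rule: disconnect
    immediately at the hard limit, or when the soft limit has been
    exceeded continuously for soft_seconds. A limit of 0 disables
    that check. `soft_since` is the time the soft limit was first
    exceeded (None if it currently is not).
    Returns (disconnect, new_soft_since)."""
    if hard and buf_bytes >= hard:
        return True, soft_since                # hard limit: drop now
    if soft and buf_bytes >= soft:
        if soft_since is None:
            return False, now                  # start the soft-limit clock
        if now - soft_since >= soft_seconds:
            return True, soft_since            # exceeded continuously
        return False, soft_since
    return False, None                         # below soft limit: reset clock

MB = 1024 * 1024
# slave-class defaults from the config above: hard 256mb, soft 64mb / 60s
print(should_disconnect(300 * MB, 0, None, 256 * MB, 64 * MB, 60)[0])   # -> True
d, since = should_disconnect(100 * MB, 0, None, 256 * MB, 64 * MB, 60)  # clock starts
print(d)                                                                # -> False
print(should_disconnect(100 * MB, 61, since, 256 * MB, 64 * MB, 60)[0]) # -> True
```

Note how `client-output-buffer-limit normal 0 0 0` falls out naturally: with both limits zero, neither branch fires and the client is never disconnected.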