Compare commits

66 commits (SHA1 only; the author and date columns of the table were not captured):

ca1630a6b4, 9b85bd0e38, c15f54e274, fbb5a8acb3, 393ff81e69, ace6ed2941, dacf3fb30c, ebf7bd2d82, ca9cde0487, df3aa1aa92, 4d9166443b, 57de3c2a03, 63596e951d, fb1ac64251, 0635178c2e, a2a5d78fbd, 36152f570d, 20bfcb657e, 34cc6d151e, 7fda4b77c1, bbbe798629, 28edb79c61, a856f2607f, 4b8f578a2f, bf635bbfab, a13bb36159, a559ac0d4f, 9d1d1af5bf, dcdc9a1ba0, 95f4580ccf, aa013991f7, 75228f3272, ce210cdee3, a14f11cec9, 0891036e8c, 4eda710da4, 9afc183a1b, 8be48e16e5, ec36627f91, 37b3a07b9f, 86183a4a97, 1644a9bf1b, 55cc0dccfd, b9a468dc97, 3a5239dd18, 7ad5f0f794, 681f5e37d0, 3207be46ef, 996e02fd4d, 45661645ed, bd9d2ed57e, dc1de3935c, 6da3706900, db2b856211, 203816964f, d1cae0638d, ac8bfd1e15, 345ff2c20f, 1569b57bd7, 017ae6461b, aeedfae544, 022a71e4e7, b3aba17748, ae20d7681c, 9927ab93ed, 9b6d726e76
README.md (30 lines changed)

    @@ -1,6 +1,6 @@
    -# Predixy
    +# Predixy [中文版](https://github.com/joyieldInc/predixy/blob/master/README_CN.md)
     
    -**Predixy** is a high performance and full features proxy for redis sentinel and redis cluster
    +**Predixy** is a high performance and fully featured proxy for redis sentinel and redis cluster
     
     ## Features
     
    @@ -11,6 +11,7 @@
     + Supports Redis Cluster.
     + Supports redis blocking commands, e.g. blpop, brpop, brpoplpush.
    ++ Supports the scan command, even across multiple redis instances.
     + Multi-key command support: mset/msetnx/mget/del/unlink/touch/exists.
     + Multi-database support, i.e. the redis select command is available.
     + Supports redis transactions, limited to a single redis group under Redis Sentinel.
     + Supports redis scripts: script load, eval, evalsha.
    @@ -24,7 +25,7 @@
     
     ## Build
     
    -Predixy can be compiled and used on Linux, OSX, BSD, Windows([Cygwin](http://www.cygwin.com/)).
    +Predixy can be compiled and used on Linux, OSX, BSD, Windows([Cygwin](http://www.cygwin.com/)). Requires a C++11 compiler.
     
     It is as simple as:
     
    @@ -47,10 +48,16 @@ For examples:
         $ make MT=false
         $ make debug MT=false TS=true
     
     ## Install
     
     Just copy src/predixy to the install path
     
         $ cp src/predixy /path/to/bin
     
     ## Configuration
     
     See the files below:
    -+ predixy.conf, basic config.
    ++ predixy.conf, basic config, references the config files below.
     + cluster.conf, Redis Cluster backend config.
     + sentinel.conf, Redis Sentinel backend config.
     + auth.conf, authority control config.
    @@ -59,7 +66,7 @@ See below files:
     
     ## Running
     
    -    $ ./predixy ../conf/predixy.conf
    +    $ src/predixy conf/predixy.conf
     
     With the default predixy.conf, Predixy will listen at 0.0.0.0:7617 and
     proxy to Redis Cluster 127.0.0.1:6379.
    @@ -70,7 +77,7 @@ So you will see a mass of log output, but you can still test it with redis-cli.
     
     More command line arguments:
     
    -    $ ./predixy -h
    +    $ src/predixy -h
     
     ## Stats
     
    @@ -99,7 +106,7 @@ A latency monitor example:
         <=  1000    67788      77  99.68%
         >   1000   601012     325 100.00%
         T     60  6032643  100001
    -The last line is the total summary, 624 is the average latency (us)
    +The last line is the total summary, 60 is the average latency (us)
     
     Show latency monitors by server address and latency name
     
    @@ -110,6 +117,13 @@ Reset all stats and latency monitors, requires admin permission.
     
         redis> CONFIG ResetStat
     
    +## Benchmark
    +
    +predixy is fast. How fast? Faster than twemproxy, codis, redis-cerberus.
    +
    +See the wiki:
    +[benchmark](https://github.com/joyieldInc/predixy/wiki/Benchmark)
    +
    @@ -117,3 +131,5 @@ Copyright (C) 2017 Joyield, Inc. <joyield.com#gmail.com>
     
     All rights reserved.
     
     License under BSD 3-clause "New" or "Revised" License
    +
    +WeChat: cppfan
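The latency monitor's summary line can be checked arithmetically: the `T` line reports the total sum in microseconds and the total count, and the reported average is simply their quotient (a quick sanity check on the numbers above, not part of predixy):

```python
# Sanity-check the latency monitor summary line from the README example:
#   T     60  6032643  100001
# where 60 is the average latency (us), 6032643 the total sum (us),
# and 100001 the total request count.
total_sum_us = 6032643
total_count = 100001

avg_us = total_sum_us / total_count
print(round(avg_us))  # integer average latency in microseconds -> 60
```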
README_CN.md (new file, 141 lines; translated from Chinese)

# Predixy

**Predixy** is a high-performance, full-featured redis proxy supporting redis-sentinel and redis-cluster.

## Features

+ High performance and lightweight
+ Multi-threading support
+ Multi-platform: Linux, OSX, BSD, Windows([Cygwin](http://www.cygwin.com/))
+ Supports Redis Sentinel, with one or more redis groups
+ Supports Redis Cluster
+ Supports redis blocking commands, including blpop, brpop, brpoplpush
+ Supports the scan command, for both single and multiple redis instances
+ Multi-key command support: mset/msetnx/mget/del/unlink/touch/exists
+ Supports redis multi-databases, i.e. the select command can be used
+ Supports transactions, currently limited to a single redis group under Redis Sentinel
+ Supports scripts: script load, eval, evalsha
+ Supports publish/subscribe, i.e. the Pub/Sub family of commands
+ Multi-datacenter support, read/write separation support
+ Extended AUTH command: powerful read/write/admin permission control plus keyspace restriction
+ Log output sampled by level; asynchronous logging so threads are never blocked on I/O
+ Log files can be rotated automatically by time or size
+ Rich statistics, including CPU, memory, requests, responses, etc.
+ Latency monitoring: overall latency and per-backend-instance latency

## Build

Predixy can be compiled on all major platforms; Linux is recommended. A C++11-capable compiler is required.

Building is simple. Download or git clone the code, enter the predixy directory, and run:

    $ make

This produces an executable, predixy, in the src directory.

Build a debug version:

    $ make debug

More build options:

+ CXX=c++compiler, specifies the C++ compiler; the default is g++, others may be given, e.g. CXX=clang++
+ EV=epoll|poll|kqueue, specifies the async I/O model; by default it is detected from the platform
+ MT=false, disables multi-threading support
+ TS=true, enables function call timing analysis; for development only

Some examples with build parameters:

    $ make CXX=clang++
    $ make EV=poll
    $ make MT=false
    $ make debug MT=false TS=true

## Install

Simply copy src/predixy to the target path:

    $ cp src/predixy /path/to/bin

## Configuration [detailed documentation](https://github.com/joyieldInc/predixy/blob/master/doc/config_CN.md)

predixy's configuration resembles redis'. Each option is explained in detail in the config files; see:

+ predixy.conf, the main config file, which references the files below
+ cluster.conf, backend redis config for Redis Cluster
+ sentinel.conf, backend redis config for Redis Sentinel
+ auth.conf, access control config; multiple passwords can be defined, each with read/write/admin permissions and an accessible keyspace
+ dc.conf, multi-datacenter support; read/write separation rules and read traffic weights can be defined
+ latency.conf, latency monitor definitions; the commands to monitor and the latency buckets can be specified

The configuration is split into several files by function, but everything can also be written into a single file, or spread over multiple files referenced from the main one.

## Running

    $ src/predixy conf/predixy.conf

With the default predixy.conf, predixy listens at 0.0.0.0:7617 and its backend is Redis Cluster 127.0.0.1:6379. Usually 127.0.0.1:6379 is not running in Redis Cluster mode, so predixy will emit a large amount of error log output. You can still connect with redis-cli to try it:

    $ redis-cli -p 7617 info

This command shows some information about predixy itself. If 127.0.0.1:6379 is running, try other redis commands and see the results.

For more command line arguments, see the help:

    $ src/predixy -h

## Stats

Like redis, predixy reports statistics via the INFO command.

From a redis-cli connection, run:

    redis> INFO

You will see output similar to redis' INFO command, but with predixy's statistics.

Of particular note is predixy's latency monitoring; latency information can be viewed by the latency monitor names defined in the configuration:

    redis> INFO Latency <latency-name>

An example of latency output:

    LatencyMonitorName:all
        latency(us)  sum(us)  counts
    <=   100   3769836   91339  91.34%
    <=   200    777185    5900  97.24%
    <=   300    287565    1181  98.42%
    <=   400    185891     537  98.96%
    <=   500    132773     299  99.26%
    <=   600     85050     156  99.41%
    <=   700     85455     133  99.54%
    <=   800     40088      54  99.60%
    <=  1000     67788      77  99.68%
    >   1000    601012     325 100.00%
    T     60   6032643  100001

The last line is the total summary, 60 is the average latency (us)

Latency of a single redis backend can also be viewed:

    redis> INFO ServerLatency <server-address> [latency-name]

To reset all statistics, the command is the same as redis', but predixy requires admin permission:

    redis> CONFIG ResetStat

## Benchmark

predixy is fast. How fast? Compared with several popular redis proxies (twemproxy, codis, redis-cerberus), predixy is much faster.

For details see the wiki:
[benchmark](https://github.com/joyieldInc/predixy/wiki/Benchmark)

## License

Copyright (C) 2017 Joyield, Inc. <joyield.com#gmail.com>

All rights reserved.

License under BSD 3-clause "New" or "Revised" License

WeChat: cppfan
_config.yml (new file, 1 line)

    theme: jekyll-theme-cayman
    @@ -5,9 +5,12 @@
     ## [MasterReadPriority [0-100]] #default 50
     ## [StaticSlaveReadPriority [0-100]] #default 0
     ## [DynamicSlaveReadPriority [0-100]] #default 0
    -## [RefreshInterval seconds] #default 1
    +## [RefreshInterval number[s|ms|us]] #default 1, means 1 second
    +## [ServerTimeout number[s|ms|us]] #default 0, server connection socket read/write timeout
     ## [ServerFailureLimit number] #default 10
    -## [ServerRetryTimeout seconds] #default 1
    +## [ServerRetryTimeout number[s|ms|us]] #default 1
    +## [KeepAlive seconds] #default 0, server connection tcp keepalive
     
     ## Servers {
     ##     + addr
     ##     ...
    @@ -21,8 +24,10 @@
     #    StaticSlaveReadPriority 50
     #    DynamicSlaveReadPriority 50
     #    RefreshInterval 1
    +#    ServerTimeout 1
     #    ServerFailureLimit 10
     #    ServerRetryTimeout 1
    +#    KeepAlive 120
     #    Servers {
     #        + 192.168.2.107:2211
     #        + 192.168.2.107:2212
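The new `number[s|ms|us]` syntax accepts an optional unit suffix, with a bare number meaning seconds. A small illustrative parser (a hypothetical helper written for this document, not predixy code) shows the intended semantics:

```python
def parse_interval(spec: str) -> float:
    """Parse a predixy-style interval such as '300ms', '1s', '500us',
    or '1' (a bare number means seconds) into seconds as a float."""
    units = {"ms": 1e-3, "us": 1e-6, "s": 1.0}
    # Check the two-character suffixes first so '300ms' is not read as '300m' + 's'.
    for suffix in ("ms", "us", "s"):
        if spec.endswith(suffix):
            return float(spec[: -len(suffix)]) * units[suffix]
    return float(spec)  # no suffix: plain seconds

print(parse_interval("300ms"))  # ~0.3 seconds
print(parse_interval("1"))      # 1.0 second
```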
conf/command.conf (new file, 94 lines)

    ## Custom Command
    ## CustomCommand {
    ##     command { #command string, must be lowercase
    ##         [Mode read|write|admin[|keyAt2|keyAt3]] #default write, default key position is 1
    ##         [MinArgs [2-]] #default 2, including the command itself
    ##         [MaxArgs [2-]] #default 2, must be MaxArgs >= MinArgs
    ##     }...
    ## }
    
    ## Currently supports a maximum of 16 custom commands
    
    ## Example:
    #CustomCommand {
    ##------------------------------------------------------------------------
    #    custom.ttl {
    #        Mode keyAt2
    #        MinArgs 3
    #        MaxArgs 3
    #    }
    #### custom.ttl milliseconds key
    #### Mode = write|keyAt2, MinArgs/MaxArgs = 3 = command + milliseconds + key
    ##------------------------------------------------------------------------
    ## from redis source src/modules/hello.c
    #    hello.push.native {
    #        MinArgs 3
    #        MaxArgs 3
    #    }
    #### hello.push.native key value
    #### Mode = write, MinArgs/MaxArgs = 3 = command + key + value
    ##------------------------------------------------------------------------
    #    hello.repl2 {
    #    }
    #### hello.repl2 <list-key>
    #### Mode = write, MinArgs/MaxArgs = 2 = command + list-key
    ##------------------------------------------------------------------------
    #    hello.toggle.case {
    #    }
    #### hello.toggle.case key
    #### Mode = write, MinArgs/MaxArgs = 2 = command + key
    ##------------------------------------------------------------------------
    #    hello.more.expire {
    #        MinArgs 3
    #        MaxArgs 3
    #    }
    #### hello.more.expire key milliseconds
    #### Mode = write, MinArgs/MaxArgs = 3 = command + key + milliseconds
    ##------------------------------------------------------------------------
    #    hello.zsumrange {
    #        MinArgs 4
    #        MaxArgs 4
    #        Mode read
    #    }
    #### hello.zsumrange key startscore endscore
    #### Mode = read, MinArgs/MaxArgs = 4 = command + key + startscore + endscore
    ##------------------------------------------------------------------------
    #    hello.lexrange {
    #        MinArgs 6
    #        MaxArgs 6
    #        Mode read
    #    }
    #### hello.lexrange key min_lex max_lex min_age max_age
    #### Mode = read, MinArgs/MaxArgs = 6 = command + key + min_lex + max_lex + min_age + max_age
    ##------------------------------------------------------------------------
    #    hello.hcopy {
    #        MinArgs 4
    #        MaxArgs 4
    #    }
    #### hello.hcopy key srcfield dstfield
    #### Mode = write, MinArgs/MaxArgs = 4 = command + key + srcfield + dstfield
    ##------------------------------------------------------------------------
    ## from redis source src/modules/hellotype.c
    #    hellotype.insert {
    #        MinArgs 3
    #        MaxArgs 3
    #    }
    #### hellotype.insert key value
    #### Mode = write, MinArgs/MaxArgs = 3 = command + key + value
    ##------------------------------------------------------------------------
    #    hellotype.range {
    #        MinArgs 4
    #        MaxArgs 4
    #        Mode read
    #    }
    #### hellotype.range key first count
    #### Mode = read, MinArgs/MaxArgs = 4 = command + key + first + count
    ##------------------------------------------------------------------------
    #    hellotype.len {
    #        Mode read
    #    }
    #### hellotype.len key
    #### Mode = read, MinArgs/MaxArgs = 2 = command + key
    ##------------------------------------------------------------------------
    #}
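The MinArgs/MaxArgs bounds count the command name itself. A minimal sketch of how such a definition constrains a request (illustrative only; the names and data structure are assumptions for this example, not predixy internals):

```python
# Illustrative check of custom-command definitions against a request,
# mirroring the MinArgs/MaxArgs semantics described in command.conf
# (the argument count includes the command name itself).
CUSTOM_COMMANDS = {
    "custom.ttl": {"min_args": 3, "max_args": 3},       # command + milliseconds + key
    "hello.zsumrange": {"min_args": 4, "max_args": 4},  # command + key + start + end
}

def accepts(request: list) -> bool:
    """Return True if the request matches a defined custom command's arity."""
    spec = CUSTOM_COMMANDS.get(request[0])
    if spec is None:
        return False
    return spec["min_args"] <= len(request) <= spec["max_args"]

print(accepts(["custom.ttl", "5000", "mykey"]))  # True: 3 args including the command
print(accepts(["custom.ttl", "mykey"]))          # False: only 2 args
```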
    @@ -93,6 +93,10 @@ Include try.conf
     # Include dc.conf
     
    +################################### COMMAND ####################################
    +## Custom command define, see command.conf
    +#Include command.conf
    +
     ################################### LATENCY ####################################
     ## Latency monitor define, see latency.conf
     Include latency.conf
    @@ -9,9 +9,11 @@
     ## [MasterReadPriority [0-100]] #default 50
     ## [StaticSlaveReadPriority [0-100]] #default 0
     ## [DynamicSlaveReadPriority [0-100]] #default 0
    -## [RefreshInterval seconds] #default 1
    +## [RefreshInterval number[s|ms|us]] #default 1, means 1 second
    +## [ServerTimeout number[s|ms|us]] #default 0, server connection socket read/write timeout
     ## [ServerFailureLimit number] #default 10
    -## [ServerRetryTimeout seconds] #default 1
    +## [ServerRetryTimeout number[s|ms|us]] #default 1
    +## [KeepAlive seconds] #default 0, server connection tcp keepalive
     ## Sentinels {
     ##     + addr
     ##     ...
    @@ -33,12 +35,14 @@
     #    StaticSlaveReadPriority 50
     #    DynamicSlaveReadPriority 50
     #    RefreshInterval 1
    +#    ServerTimeout 1
     #    ServerFailureLimit 10
     #    ServerRetryTimeout 1
    +#    KeepAlive 120
     #    Sentinels {
    -#        + 10.2.2.2
    -#        + 10.2.2.3
    -#        + 10.2.2.4
    +#        + 10.2.2.2:7500
    +#        + 10.2.2.3:7500
    +#        + 10.2.2.4:7500
     #    }
     #    Group shard001 {
     #    }
conf/standalone.conf (new file, 71 lines)

    ## redis standalone server pool define
    
    ##StandaloneServerPool {
    ##     [Password xxx] #default no
    ##     [Databases number] #default 1
    ##     Hash atol|crc16
    ##     [HashTag "xx"] #default no
    ##     Distribution modula|random
    ##     [MasterReadPriority [0-100]] #default 50
    ##     [StaticSlaveReadPriority [0-100]] #default 0
    ##     [DynamicSlaveReadPriority [0-100]] #default 0
    ##     RefreshMethod fixed|sentinel
    ##     [RefreshInterval number[s|ms|us]] #default 1, means 1 second
    ##     [ServerTimeout number[s|ms|us]] #default 0, server connection socket read/write timeout
    ##     [ServerFailureLimit number] #default 10
    ##     [ServerRetryTimeout number[s|ms|us]] #default 1
    ##     [KeepAlive seconds] #default 0, server connection tcp keepalive
    ##     Sentinels [sentinel-password] {
    ##         + addr
    ##         ...
    ##     }
    ##     Group xxx {
    ##         [+ addr] #if RefreshMethod==fixed: the first addr is the master of the group, the remaining addrs are its slaves
    ##         ...
    ##     }
    ##}
    
    ## Examples:
    #StandaloneServerPool {
    #    Databases 16
    #    Hash crc16
    #    HashTag "{}"
    #    Distribution modula
    #    MasterReadPriority 60
    #    StaticSlaveReadPriority 50
    #    DynamicSlaveReadPriority 50
    #    RefreshMethod sentinel
    #    RefreshInterval 1
    #    ServerTimeout 1
    #    ServerFailureLimit 10
    #    ServerRetryTimeout 1
    #    KeepAlive 120
    #    Sentinels {
    #        + 10.2.2.2:7500
    #        + 10.2.2.3:7500
    #        + 10.2.2.4:7500
    #    }
    #    Group shard001 {
    #    }
    #    Group shard002 {
    #    }
    #}
    
    #StandaloneServerPool {
    #    Databases 16
    #    Hash crc16
    #    HashTag "{}"
    #    Distribution modula
    #    MasterReadPriority 60
    #    StaticSlaveReadPriority 50
    #    DynamicSlaveReadPriority 50
    #    RefreshMethod fixed
    #    ServerTimeout 1
    #    ServerFailureLimit 10
    #    ServerRetryTimeout 1
    #    KeepAlive 120
    #    Group shard001 {
    #        + 10.2.3.2:6379
    #    }
    #}
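With `Hash crc16`, `HashTag "{}"`, and `Distribution modula`, keys map to servers by hashing the key (or only the part inside the first `{...}` tag, if present and non-empty) and taking the hash modulo the server count. A sketch of that scheme, under the assumption that `crc16` here is the CRC-16/XMODEM variant that Redis Cluster itself uses (predixy's exact implementation is not shown in this page):

```python
def crc16(data: bytes) -> int:
    """CRC-16/XMODEM (polynomial 0x1021, init 0), the variant used by Redis Cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def hash_tag(key: str) -> str:
    """With HashTag "{}", only the non-empty part between the first {...} is hashed."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:
            return key[start + 1:end]
    return key

def pick_server(key: str, servers: list) -> str:
    """Distribution modula: hash modulo the number of servers."""
    return servers[crc16(hash_tag(key).encode()) % len(servers)]

servers = ["10.2.3.2:6379", "10.2.3.3:6379"]
# The shared {user1000} tag forces both keys onto the same server.
print(pick_server("{user1000}.name", servers) == pick_server("{user1000}.age", servers))  # True
```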
doc/Config-description-for-Redis-cluster.md (new file, 52 lines)

## Configuration example

cluster.conf

    ClusterServerPool {
        MasterReadPriority 60
        StaticSlaveReadPriority 50
        DynamicSlaveReadPriority 50
        RefreshInterval 300ms
        ServerTimeout 300ms        #added in commit 86183a4
        ServerFailureLimit 2
        ServerRetryTimeout 500ms
        KeepAlive 120s             #added in commit 86183a4
        Servers {
            + 192.168.2.107:2211
            + 192.168.2.108:2212
            + 192.168.2.109:2213
        }
    }

## Configuration parameters description

**MasterReadPriority, StaticSlaveReadPriority** and **DynamicSlaveReadPriority** - these parameters work together within predixy only and have _nothing_ to do with Redis configuration directives such as slave-priority. Since predixy works with both Redis Sentinel and Redis Cluster deployments, two combinations apply: MasterReadPriority with StaticSlaveReadPriority, or MasterReadPriority with DynamicSlaveReadPriority.

If your Redis deployment relies on node auto-discovery via Sentinel or other cluster nodes, DynamicSlaveReadPriority is used; if you plan to add nodes to **Servers {...}** in the predixy config manually, StaticSlaveReadPriority is used.

In other words, predixy can automatically discover newly added Redis nodes by polling the existing **Servers {...}**, and route queries to them as well, eliminating the need to manually edit cluster.conf and restart predixy.

These three parameters tell predixy in what way and proportion it should route queries to the available nodes.

Some use cases:

| Master/SlaveReadPriority | Master | Slave1 | Slave2 | Fail-over notes |
| ------------- | ------------- | ------------- | ------------- | ------------- |
| 60/50 | all requests | 0 requests | 0 requests | Master dead: read requests go to slaves until a (possibly new) master is alive |
| 60/0 | all requests | 0 requests | 0 requests | Master dead: all requests fail |
| 50/50 | all write requests, 33.33% read requests | 33.33% read requests | 33.33% read requests | - |
| 0/50 | all write requests, 0 read requests | 50% read requests | 50% read requests | All slaves dead: all read requests fail |
| 10/50 | all write requests, 0 read requests | 50% read requests | 50% read requests | All slaves dead: read requests go to the master |

**RefreshInterval** - seconds, milliseconds or microseconds [s|ms|us]; tells predixy how often to poll node info and, when Cluster is used, the allocated hash slots.

**ServerFailureLimit** - the number of failed queries to a node after which predixy stops forwarding queries to that node.

**ServerTimeout** - seconds, milliseconds or microseconds [s|ms|us]; the timeout for nearly all commands except BRPOP, BLPOP, BRPOPLPUSH, transactions and Pub/Sub. It is the read/write timeout on the Redis server connection socket, so predixy does not wait too long for a Redis response. When the timeout is reached, predixy closes the connection between itself and Redis and returns an error.
**Available from commit 86183a4.**

**KeepAlive** - seconds, applied to the redis connection (socket); it covers all commands not covered by **ServerTimeout**, i.e. it also applies to blocking commands such as BRPOP, BLPOP, BRPOPLPUSH. When the timeout is reached, predixy closes the connection between itself and Redis and returns an error.
**Available from commit 86183a4.**

**ServerRetryTimeout** - seconds, milliseconds or microseconds [s|ms|us]; tells predixy how often to health-check failed nodes and decide whether they are still failed or alive again and usable for query processing.

**Servers** - a line-by-line list of static Redis nodes, in the form `+ IP:PORT`.
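The read-priority table above is consistent with a simple model: read requests go to the alive node(s) with the highest read priority, split evenly on ties, and a priority of 0 means the node never serves reads. A sketch of that model (our interpretation of the table, not predixy source code):

```python
def read_shares(priorities: dict) -> dict:
    """priorities: node -> read priority (0 disables reads on that node).
    Returns each node's share of read traffic under the model that the
    highest-priority eligible nodes receive all reads, split evenly on ties."""
    eligible = {n: p for n, p in priorities.items() if p > 0}
    if not eligible:
        return {n: 0.0 for n in priorities}  # nowhere to read: all reads fail
    top = max(eligible.values())
    winners = [n for n, p in eligible.items() if p == top]
    return {n: (1 / len(winners) if n in winners else 0.0) for n in priorities}

# 60/50 row: the master (priority 60) serves all reads while alive.
print(read_shares({"master": 60, "slave1": 50, "slave2": 50}))
# 50/50 row: reads split three ways, 33.33% each.
print(read_shares({"master": 50, "slave1": 50, "slave2": 50}))
```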
New binary files:

+ doc/bench/1.0.1/mt_pipeline_set_get.png (129 KiB)
+ doc/bench/1.0.1/st_pipeline_set_get.png (142 KiB)
+ doc/bench/1.0.1/st_set_get.png (141 KiB)
doc/bench/redis/benchmark_with_redis_CN.md (new file, 101 lines; translated from Chinese)

Intuitively, for any service, adding an extra layer in the middle will surely affect performance. But what exactly does it affect? When evaluating a service's performance, two metrics matter most: throughput and latency. Throughput is the number of requests the server can handle per unit time; latency is the time from a client sending a request to receiving the response. Our first thought about an intermediate layer is that it adds processing time, which increases latency, and so we also assume throughput must drop. From a single user's point of view this is indeed true: if each of my requests takes longer, I complete fewer requests per unit time. But from the server's point of view, even though a single request takes longer, must total throughput necessarily fall?

So we run a series of tests against redis using the bundled redis-benchmark, covering: the set and get commands; single sends and pipelined (batched) sends; connecting to redis directly and through the redis proxy [predixy](https://github.com/joyieldInc/predixy). That gives eight combinations in total. redis-benchmark and redis are single-threaded; predixy supports multiple threads, but here we also run just one. All three programs run on the same machine:

| Item | Detail |
|---|---|
| CPU | AMD Ryzen 7 1700X Eight-Core Processor 3.775GHz |
| Memory | 16GB DDR4 3000 |
| OS | x86_64 GNU/Linux 4.10.0-42-generic #46~16.04.1-Ubuntu |
| redis | version 3.2.9, port 7200 |
| predixy | version 1.0.2, port 7600 |

The eight test commands:

| Test | Command line |
|--|--|
| redis set | redis-benchmark -h xxx -p 7200 -t set -r 3000 -n 40000000 |
| predixy set | redis-benchmark -h xxx -p 7600 -t set -r 3000 -n 40000000 |
| redis get | redis-benchmark -h xxx -p 7200 -t get -r 3000 -n 40000000 |
| predixy get | redis-benchmark -h xxx -p 7600 -t get -r 3000 -n 40000000 |
| redis pipelined set | redis-benchmark -h xxx -p 7200 -t set -r 3000 -n 180000000 -P 20 |
| predixy pipelined set | redis-benchmark -h xxx -p 7600 -t set -r 3000 -n 180000000 -P 20 |
| redis pipelined get | redis-benchmark -h xxx -p 7200 -t get -r 3000 -n 420000000 -P 20 |
| predixy pipelined get | redis-benchmark -h xxx -p 7600 -t get -r 3000 -n 220000000 -P 20 |

All eight commands use redis-benchmark's default 50 concurrent connections, 2-byte values, and 3000 keys; the pipelined tests send 20 requests at a time. The commands were executed in sequence with 2-minute gaps, each test taking about 4 minutes. The overall result (overview chart; image not reproduced here):

Dazzling, isn't it? The left axis shows CPU usage and the right axis throughput. "redis used" is redis' total CPU usage, "redis user" its user-mode usage, "redis sys" its kernel-mode usage, and likewise for the others. Don't worry about telling the curves apart; the values are tabulated below. The chart shows eight humps, corresponding in order to the eight test commands above.

Per-test charts (images not reproduced here):

1. redis set test
2. predixy set test
3. redis get test
4. predixy get test
5. redis pipelined set test
6. predixy pipelined set test
7. redis pipelined get test
8. predixy pipelined get test

The charts are still hard to read, so here is a summary table:

| Test \ Metric | redis used | redis user | redis sys | predixy used | predixy user | predixy sys | redis qps | predixy qps |
|--|--|--|--|--|--|--|--|--|
| redis set | 0.990 | 0.247 | 0.744 | 0 | 0 | 0 | 167000 | 3 |
| predixy set | 0.475 | 0.313 | 0.162 | 0.986 | 0.252 | 0.734 | 174000 | 174000 |
| redis get | 0.922 | 0.180 | 0.742 | 0 | 0 | 0 | 163000 | 3 |
| predixy get | 0.298 | 0.195 | 0.104 | 0.988 | 0.247 | 0.741 | 172000 | 172000 |
| redis pipelined set | 1.006 | 0.796 | 0.21 | 0 | 0 | 0 | 782000 | 3 |
| predixy pipelined set | 0.998 | 0.940 | 0.058 | 0.796 | 0.539 | 0.256 | 724000 | 724000 |
| redis pipelined get | 1 | 0.688 | 0.312 | 0 | 0 | 0 | 1708000 | 3 |
| predixy pipelined get | 0.596 | 0.582 | 0.014 | 0.999 | 0.637 | 0.362 | 935000 | 935000 |

If the first four results surprise you, don't doubt your eyes or the tests: the results are correct. They answer our original question: adding a proxy does not necessarily reduce the service's overall throughput! A benchmark is not a real application, but most redis workloads look just like this: accept many concurrent client requests, process them, and reply.

Why this result? We surely want to know, since it defies intuition. Start with the data. In test 1, redis' CPU usage is nearly 1; for a single-threaded program that means the CPU is saturated, performance is at its limit, and cannot improve - yet redis' throughput is only 167000. In test 2, redis' throughput is higher, while its CPU usage is clearly lower, just 0.475. Why does throughput rise while CPU usage falls? Compare the breakdowns: in test 1, most of redis' CPU goes to kernel mode, as high as 0.744, with only 0.247 in user mode. Kernel-mode CPU isn't exactly wasted work, but it does nothing to improve the program's useful throughput; only user-mode CPU can. In test 2, redis' user-mode usage rises to 0.313 while kernel-mode drops sharply to 0.162, which explains the higher throughput. Digging further: why does adding the predixy layer change redis' CPU profile? Because redis can accept a batch of commands on one connection - the so-called pipeline. predixy exploits exactly this: it keeps (in most cases) a single connection to redis, and forwards the requests it receives from clients to redis in batches over that connection. This greatly reduces redis' network I/O overhead, which is mostly kernel-mode cost.

Comparing tests 1 and 2, introducing predixy not only raises throughput directly, it also leaves redis using less than half its CPU - meaning the remaining CPU could yield even more throughput. Without the predixy layer, test 1 shows redis' CPU is already exhausted and throughput cannot grow.

Tests 3 and 4 tell the same story as tests 1 and 2. With only these four tests, it would seem a proxy always helps throughput. As analyzed above, predixy improves throughput by batching. So what happens if the client's own usage pattern is already batched?

Hence the last four tests. Their immediate impression is stunning: throughput jumps several-fold, even 10x! In fact, it is precisely because redis is so strong in pipelined mode that predixy can improve single-command throughput at all. In pipelined mode, however, the results show that predixy lowers the service's throughput.

For pipelined set specifically, redis' CPU is saturated both when connecting directly and through predixy. Although predixy raises redis' user-mode CPU from 0.796 to 0.940, throughput falls rather than rises, from 782000 to 724000, a drop of about 7.4%. Explaining why throughput falls while user-mode CPU rises would require analyzing redis' own implementation, which we won't do in detail here. A rough explanation: with redis' CPU saturated, different workloads split the CPU differently between user and kernel mode; some particular split maximizes throughput, and pushing the user-mode share above or below that point lowers it.

Now pipelined get: direct redis reaches 1708000, while through predixy it is only 935000 - a 45% drop, like hitting the top income-tax bracket. Is predixy's freshly built good image collapsing? Not so fast: look at the other metrics. Connecting directly, redis' CPU is saturated; through predixy, redis uses only 0.596, so forty percent of redis' CPU is still waiting to be squeezed.

Since pipelined get through predixy did not let redis show its full strength, what if it did? We add two more tests. A single predixy lowers pipelined throughput but leaves redis headroom, so we increase predixy's capacity: run predixy with three threads and repeat tests 6 and 8.
The overall results of the two tests (overview chart; image not reproduced here):

Three-thread predixy pipelined set, three-thread predixy pipelined get (charts not reproduced here):

| Test \ Metric | redis used | redis user | redis sys | predixy used | predixy user | predixy sys | redis qps | predixy qps |
|--|--|--|--|--|--|--|--|--|
| predixy pipeline set | 1.01 | 0.93 | 0.07 | 1.37 | 0.97 | 0.41 | 762000 | 762000 |
| predixy pipeline get | 0.93 | 0.85 | 0.08 | 2.57 | 1.85 | 0.72 | 1718000 | 1718000 |

In the single-thread predixy pipelined set test, both predixy's and redis' CPUs were saturated and we assumed throughput had peaked; yet with three predixy threads, throughput still rose, from 724000 to 762000, only 2.5% short of the direct connection's 782000. The main difference between single- and multi-threaded operation is that single-threaded predixy keeps one connection to redis while three threads keep three.

In the three-thread pipelined get test, throughput rose dramatically as expected, from the earlier 935000 straight to 1718000, even exceeding the direct connection's 1708000.

To conclude: our test scenario is quite simple - plain set/get with the default 2-byte payload - and real redis workloads are far more complex, but the numbers still support some conclusions. Introducing a proxy does not necessarily reduce a service's throughput; depending on the load, it can actually raise overall throughput, and if we don't count the proxy's own resource consumption, adding one is almost always a good choice. Following the analysis above, a simple practical rule: look at your redis CPU usage - if too much time is spent in kernel mode, consider introducing a proxy.
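The percentage drops quoted above follow directly from the qps columns of the measurement tables (simple arithmetic on the measured numbers):

```python
# Throughput deltas computed from the benchmark tables above.
redis_pipeline_set, predixy_pipeline_set = 782000, 724000
redis_pipeline_get, predixy_pipeline_get = 1708000, 935000

set_drop = (redis_pipeline_set - predixy_pipeline_set) / redis_pipeline_set
get_drop = (redis_pipeline_get - predixy_pipeline_get) / redis_pipeline_get
print(f"pipelined set drop: {set_drop:.1%}")  # ~7.4%, as stated in the text
print(f"pipelined get drop: {get_drop:.1%}")  # ~45.3%, the "45%" in the text
```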
New binary files:

+ doc/bench/redis/mt_overview.png (47 KiB)
+ doc/bench/redis/mt_predixy_pipeline_get.png (56 KiB)
+ doc/bench/redis/mt_predixy_pipeline_set.png (58 KiB)
+ doc/bench/redis/overview.png (78 KiB)
+ doc/bench/redis/predixy_get.png (84 KiB)
+ doc/bench/redis/predixy_pipeline_get.png (88 KiB)
+ doc/bench/redis/predixy_pipeline_set.png (85 KiB)
+ doc/bench/redis/predixy_set.png (85 KiB)
+ doc/bench/redis/redis_get.png (83 KiB)
+ doc/bench/redis/redis_pipeline_get.png (82 KiB)
+ doc/bench/redis/redis_pipeline_set.png (82 KiB)
+ doc/bench/redis/redis_set.png (81 KiB)
doc/config_CN.md (new file, 547 lines; translated from Chinese)

# Predixy Configuration Guide

A configuration file is required to run a predixy service; a normal predixy service is started with:

    $ predixy <config-file> [--ArgName=ArgValue]...

predixy first reads the configuration from config-file; values given as command line arguments then override the corresponding values defined in the file.

## Configuration file format

The configuration file is a line-oriented text file; every line is one of the following:

+ blank line or comment
+ Key Value
+ Key Value {
+ }

### Rules

+ Content starting with # is a comment
+ Include is a special Key, meaning the file named by Value is included; a relative path is resolved against the directory of the current file
+ Value may be empty; if Value itself contains # or double quotes, it should be enclosed in double quotes, with inner double quotes backslash-escaped, e.g.: "A \"#special#\" value"
+ If several lines define the same Key, the last occurrence overrides the earlier ones
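A small fragment (with hypothetical values invented for this illustration) showing the line types listed above in one place: a plain Key Value line, an Include, a block opened with `{` and closed with `}`, and a quoted value containing special characters:

```
# global options                       (comment line)
Name PredixyExample                    (Key Value)
Include auth.conf                      (Include references another file)

Authority {
    Auth "a \"#special#\" password" {
        Mode read
    }
}
```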
## Basic configuration

### Name

Defines the name of the predixy service, reported in the output of the INFO command.

Example:

    Name PredixyUserInfo

### Bind

Defines the listen address of the predixy service; ip:port and unix sockets are supported.

Examples:

    Bind 0.0.0.0:7617
    Bind /tmp/predixy

Default: 0.0.0.0:7617

### WorkerThreads

Number of worker threads.

Example:

    WorkerThreads 4

Default: 1

### MaxMemory

Maximum memory predixy may allocate; units (G/M/K) may be used; 0 means unlimited.

Examples:

    MaxMemory 1024000
    MaxMemory 1G

Default: 0

### ClientTimeout

Client timeout in seconds: a client idle longer than this is disconnected proactively; 0 disables the feature, so client connections are never closed proactively.

Example:

    ClientTimeout 300

Default: 0

### BufSize

I/O buffer size. predixy allocates buffers of BufSize internally to receive client commands and server responses, forwarding them to the server or client with zero copying. Too small a value hurts performance; too large a value wastes memory and may not help performance either. The right value depends on the workload; predixy's default is 4096.

Example:

    BufSize 8192

Default: 4096

### Log

Log file name.

Example:

    Log /var/log/predixy.log

If unspecified, predixy writes its log to standard output.

### LogRotate

Automatic log rotation, which may be specified by time, by file size, or both. Time specs support the following formats:

+ 1d - one day
+ nh - n hours, 1 <= n <= 24
+ nm - n minutes, 1 <= n <= 1440

Size specs support the G and M units.

Examples:

    LogRotate 1d      #rotate once a day
    LogRotate 1h      #rotate hourly
    LogRotate 10m     #rotate every 10 minutes
    LogRotate 2G      #rotate whenever the log file reaches 2G
    LogRotate 200M    #rotate whenever the log file reaches 200M
    LogRotate 1d 2G   #rotate daily, and also whenever the log file reaches 2G

If undefined, rotation is disabled.

### LogXXXSample

Log sampling rates: one log line of the given level is written per N occurrences of that level; 0 suppresses the level entirely. The supported levels are:

+ LogVerbSample
+ LogDebugSample
+ LogInfoSample
+ LogNoticeSample
+ LogWarnSample
+ LogErrorSample

Example:

    LogVerbSample 0
    LogDebugSample 0
    LogInfoSample 10000
    LogNoticeSample 1
    LogWarnSample 1
    LogErrorSample 1

These parameters can be changed at runtime, as in redis, via the config set command:

    CONFIG SET LogDebugSample 1

In predixy, executing the config command requires admin permission.
## Authority configuration

predixy extends the redis AUTH command: multiple authentication passwords can be defined, and each password can be given its own permissions. The permissions are read, write, and admin, where write implies read and admin implies write. Each password can also be restricted to a keyspace, where a keyspace is defined as the set of keys with a given prefix.

The authority definition format is:

    Authority {
        Auth [password] {
            Mode read|write|admin
            [KeyPrefix Prefix...]
            [ReadKeyPrefix Prefix...]
            [WriteKeyPrefix Prefix...]
        }...
    }

Authority may contain multiple Auth entries; each Auth specifies one password and may define its own permissions and keyspaces.

Parameters:

+ Mode: required, one of read, write, admin, granting read, write, or admin permission respectively
+ KeyPrefix: optional, defines keyspaces; separate multiple keyspaces with spaces
+ ReadKeyPrefix: optional, defines readable keyspaces; separate multiple keyspaces with spaces
+ WriteKeyPrefix: optional, defines writable keyspaces; separate multiple keyspaces with spaces

The readable keyspace is determined by ReadKeyPrefix if it is defined, otherwise by KeyPrefix; if neither is defined, reads are unrestricted. The writable keyspace is resolved the same way. Note that while write permission implies read permission, the read and write keyspaces are completely independent: WriteKeyPrefix does not implicitly include the contents of ReadKeyPrefix.

Example:

    Authority {
        Auth {
            Mode read
            KeyPrefix Info
        }
        Auth readonly {
            Mode read
        }
        Auth modify {
            Mode write
            ReadKeyPrefix User Stats
            WriteKeyPrefix User
        }
    }

The example above defines three authentication passwords:

+ The empty password: because the password is empty, this is the default auth; it has read permission, and since KeyPrefix is specified, it can ultimately only read keys with the prefix Info
+ The readonly password: this auth has read permission with no keyspace restriction, so it can read all keys
+ The modify password: this auth has write permission; it defines the readable keyspaces User and Stats, so it can read keys with either prefix, while the writable keyspace is User, so it can write keys with the prefix User but not keys with the prefix Stats

predixy's default authority definition is:

    Authority {
        Auth {
            Mode write
        }
        Auth "#a complex password#" {
            Mode admin
        }
    }

It means all keys can be read and written without a password, while admin permission requires the password #a complex password#.

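The keyspace resolution rules can be modeled with a few lines of code (a minimal sketch with hypothetical names, not predixy's actual implementation):

```python
class Auth:
    def __init__(self, mode, key_prefix=None, read_prefix=None, write_prefix=None):
        self.mode = mode                  # "read", "write", or "admin"
        self.key_prefix = key_prefix      # fallback keyspace (KeyPrefix)
        self.read_prefix = read_prefix    # readable keyspace, overrides key_prefix
        self.write_prefix = write_prefix  # writable keyspace, overrides key_prefix

    def _allowed(self, key, specific):
        # ReadKeyPrefix/WriteKeyPrefix win over KeyPrefix; no prefixes = unrestricted
        prefixes = specific if specific is not None else self.key_prefix
        return prefixes is None or any(key.startswith(p) for p in prefixes)

    def can_read(self, key):
        # write and admin imply read permission
        return self.mode in ("read", "write", "admin") and self._allowed(key, self.read_prefix)

    def can_write(self, key):
        # read/write keyspaces are independent: write checks only the write keyspace
        return self.mode in ("write", "admin") and self._allowed(key, self.write_prefix)
```

For the modify auth from the example, `Auth("write", read_prefix=["User", "Stats"], write_prefix=["User"])` can read a `Stats:` key but cannot write it.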
## Redis instance configuration

predixy supports accessing redis via Redis Sentinel or Redis Cluster; only one of the two forms may appear in a given configuration.

### Redis Sentinel form

The definition format is:

    SentinelServerPool {
        [Password xxx]
        [Databases number]
        Hash atol|crc16
        [HashTag "xx"]
        Distribution modula|random
        [MasterReadPriority [0-100]]
        [StaticSlaveReadPriority [0-100]]
        [DynamicSlaveReadPriority [0-100]]
        [RefreshInterval number[s|ms|us]]
        [ServerTimeout number[s|ms|us]]
        [ServerFailureLimit number]
        [ServerRetryTimeout number[s|ms|us]]
        [KeepAlive seconds]
        Sentinels {
            + addr
            ...
        }
        Group xxx {
            [+ addr]
            ...
        }
    }

Parameters:

+ Password: the default password for connecting to redis instances; if unspecified, redis requires no password
+ Databases: the number of redis databases; 1 if unspecified
+ Hash: the method used to hash keys; currently only atol and crc16 are supported
+ HashTag: the hash tag; {} if unspecified
+ Distribution: the method used to distribute keys; currently only modula and random are supported
+ MasterReadPriority: read/write separation; the priority of serving read requests from the redis master node, 0 disables reading from the master; 50 if unspecified
+ StaticSlaveReadPriority: read/write separation; the priority of serving read requests from static redis slave nodes, where a static node is one explicitly listed in this configuration file; 0 if unspecified
+ DynamicSlaveReadPriority: same as above, where a dynamic node is one not listed in this configuration file but discovered through redis sentinel; 0 if unspecified
+ RefreshInterval: predixy periodically queries redis sentinel for the latest cluster information; this parameter sets the refresh period in seconds; 1 second if unspecified
+ ServerTimeout: the maximum time a request may be processed or queued inside predixy; if redis has not responded within this time, predixy closes the connection to redis and returns an error to the client; this option has no effect on blocking commands such as blpop; 0 disables the feature, i.e. predixy waits for redis indefinitely; 0 if unspecified
+ ServerFailureLimit: how many errors a redis instance must accumulate before it is marked as failed; 10 if unspecified
+ ServerRetryTimeout: how long after a redis instance fails before checking whether it has recovered; 1 second if unspecified
+ KeepAlive: the tcp keepalive time for predixy's connections to redis, 0 disables the feature; 0 if unspecified
+ Sentinels: lists the addresses of the redis sentinel instances
+ Group: defines a redis group; the group name should match the name used in redis sentinel; redis addresses may be listed explicitly inside a Group, in which case they are the static nodes mentioned above

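The Hash/HashTag/Distribution interplay can be sketched as follows (an illustration with hypothetical helper names; it assumes the CRC-16/XMODEM variant that Redis Cluster uses, which may differ from predixy's exact crc16). Only the substring between the hash-tag characters, if present, is hashed:

```python
def crc16_xmodem(data: bytes) -> int:
    # CRC-16/XMODEM: polynomial 0x1021, initial value 0
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc

def shard_for(key: str, groups: int, tag=("{", "}")) -> int:
    # Apply the hash tag: hash only the non-empty part between tag chars, if any
    i = key.find(tag[0])
    if i >= 0:
        j = key.find(tag[1], i + 1)
        if j > i + 1:
            key = key[i + 1:j]
    # Distribution modula: group index = hash(key) % number of groups
    return crc16_xmodem(key.encode()) % groups
```

With the `{}` tag, keys such as `{user1000}.following` and `{user1000}.followers` hash to the same group, which is what makes multi-key operations on related keys possible.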
An example:

    SentinelServerPool {
        Databases 16
        Hash crc16
        HashTag "{}"
        Distribution modula
        MasterReadPriority 60
        StaticSlaveReadPriority 50
        DynamicSlaveReadPriority 50
        RefreshInterval 1
        ServerTimeout 1
        ServerFailureLimit 10
        ServerRetryTimeout 1
        KeepAlive 120
        Sentinels {
            + 10.2.2.2:7500
            + 10.2.2.3:7500
            + 10.2.2.4:7500
        }
        Group shard001 {
        }
        Group shard002 {
        }
    }

This Redis Sentinel definition specifies three redis sentinel instances, 10.2.2.2:7500, 10.2.2.3:7500, and 10.2.2.4:7500, and two redis groups, shard001 and shard002. No static redis nodes are listed. None of the redis instances have password authentication enabled, and each has 16 databases. predixy hashes keys with crc16 and distributes them to shard001 or shard002 with modula, i.e. by taking the hash modulo the group count. Because MasterReadPriority is 60, higher than the DynamicSlaveReadPriority of 50, all read requests are dispatched to the redis master nodes. RefreshInterval is 1, so predixy queries redis sentinel every second to refresh cluster information. After a redis instance accumulates 10 failures it is marked as failed, and it is then checked for recovery every 1 second.

### Redis Cluster form

The definition format is:

    ClusterServerPool {
        [Password xxx]
        [MasterReadPriority [0-100]]
        [StaticSlaveReadPriority [0-100]]
        [DynamicSlaveReadPriority [0-100]]
        [RefreshInterval seconds]
        [ServerTimeout number[s|ms|us]]
        [ServerFailureLimit number]
        [ServerRetryTimeout number[s|ms|us]]
        [KeepAlive seconds]
        Servers {
            + addr
            ...
        }
    }

Parameters:

+ The optional parameters have the same meanings as their namesakes in Redis Sentinel mode
+ Servers: lists the redis instances in the redis cluster; listed instances are static nodes, while instances discovered through cluster information instead are dynamic nodes

An example:

    ClusterServerPool {
        MasterReadPriority 0
        StaticSlaveReadPriority 50
        DynamicSlaveReadPriority 50
        RefreshInterval 1
        ServerTimeout 1
        ServerFailureLimit 10
        ServerRetryTimeout 1
        KeepAlive 120
        Servers {
            + 192.168.2.107:2211
            + 192.168.2.107:2212
        }
    }

This definition discovers cluster information through the redis instances 192.168.2.107:2211 and 192.168.2.107:2212, and sets MasterReadPriority to 0, meaning read requests are never dispatched to redis master nodes.

## Multiple data center configuration

predixy supports multiple data centers. When redis is deployed across data centers, read requests can be dispatched to redis instances in the local data center, avoiding cross-datacenter access. The data center concept can also be applied loosely: even without multiple physical data centers, this configuration can control how read requests are distributed when slave nodes are needed to share read load; for example, each rack can be treated as a data center.

The multi-datacenter configuration format is:

    LocalDC name
    DataCenter {
        DC name {
            AddrPrefix {
                + IpPrefix
                ...
            }
            ReadPolicy {
                name priority [weight]
                other priority [weight]
            }
        }
        ...
    }

Parameters:

+ LocalDC: the data center the current predixy is in
+ DC: defines a data center
+ AddrPrefix: defines the ip prefixes belonging to this data center
+ ReadPolicy: defines the priority and weight for reading from each data center (including this one) as seen from this data center

If the data center feature is not needed, simply omit the LocalDC and DataCenter definitions.

An example of a multi-datacenter configuration:

    DataCenter {
        DC bj {
            AddrPrefix {
                + 10.1
            }
            ReadPolicy {
                bj 50
                sh 20
                sz 10
            }
        }
        DC sh {
            AddrPrefix {
                + 10.2
            }
            ReadPolicy {
                sh 50
                bj 20 5
                sz 20 2
            }
        }
        DC sz {
            AddrPrefix {
                + 10.3
            }
            ReadPolicy {
                sz 50
                sh 20
                bj 10
            }
        }
    }

This example defines three data centers, bj, sh, and sz. The bj data center covers redis instances whose ip prefix is 10.1, so when predixy discovers nodes through redis sentinel or redis cluster, an instance whose address starts with 10.1 is considered to be in the bj data center. predixy chooses its read policy according to the data center it is in itself.

Suppose predixy is in the bj data center. bj reads bj with priority 50, higher than the other two, so predixy prefers the bj data center for reads; if bj has no available redis node, it falls back to the sh data center, and only if sh also has no node does it choose sz.

Suppose predixy is in the sh data center. predixy prefers the sh data center; if sh has no available redis instance, then since bj and sz both have priority 20, traffic is split according to the weights: here, 5 parts of the requests go to the bj data center and 2 parts go to sz.

Earlier, when defining a server pool, we saw that master/slave read priorities can be set, and the data center configuration introduces read priorities as well; how do they work together? The rule is: with the data center feature enabled, the data center is chosen first according to the data center read policy, and then within that data center the pool's master/slave read priorities select the final redis instance.

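The first step of that selection, choosing a data center by priority and weight, can be sketched as follows (a minimal model with hypothetical names, not predixy's actual code):

```python
import random

def pick_dc(read_policy, available, rng=random):
    """read_policy: {dc_name: (priority, weight)};
    available: set of DCs that still have a usable redis node."""
    candidates = [(p, dc, w) for dc, (p, w) in read_policy.items() if dc in available]
    if not candidates:
        return None
    top = max(p for p, _, _ in candidates)
    tied = [(dc, w) for p, dc, w in candidates if p == top]
    if len(tied) == 1:
        return tied[0][0]
    # equal priority: split traffic proportionally to the weights
    total = sum(w for _, w in tied)
    r = rng.uniform(0, total)
    for dc, w in tied:
        if r < w:
            return dc
        r -= w
    return tied[-1][0]
```

With sh's policy from the example and sh itself unavailable, roughly 5 out of every 7 reads go to bj and 2 out of 7 to sz.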
## Latency monitoring configuration

predixy provides powerful latency monitoring that records how long predixy takes to handle requests; for predixy this time is essentially the time spent waiting on redis.

The latency monitor definition format is:

    LatencyMonitor name {
        Commands {
            + cmd
            [- cmd]
            ...
        }
        TimeSpan {
            + TimeElapsedUS
            ...
        }
    }

Parameters:

+ LatencyMonitor: defines a latency monitor
+ Commands: specifies which redis commands this monitor records; + cmd includes the command, - cmd excludes it, and cmd may be all to mean all commands
+ TimeSpan: defines the latency buckets in microseconds; the values must form a strictly increasing sequence

Multiple LatencyMonitor definitions may be given to monitor different commands.

Latency monitor configuration example:

    LatencyMonitor all {
        Commands {
            + all
            - blpop
            - brpop
            - brpoplpush
        }
        TimeSpan {
            + 1000
            + 1200
            + 1400
            + 1600
            + 1700
            + 1800
            + 2000
            + 2500
            + 3000
            + 3500
            + 4000
            + 4500
            + 5000
            + 6000
            + 7000
            + 8000
            + 9000
            + 10000
        }
    }

    LatencyMonitor get {
        Commands {
            + get
        }
        TimeSpan {
            + 100
            + 200
            + 300
            + 400
            + 500
            + 600
            + 700
            + 800
            + 900
            + 1000
        }
    }

    LatencyMonitor set {
        Commands {
            + set
            + setnx
            + setex
        }
        TimeSpan {
            + 100
            + 200
            + 300
            + 400
            + 500
            + 600
            + 700
            + 800
            + 900
            + 1000
        }
    }

The example above defines three latency monitors: all monitors every command except blpop/brpop/brpoplpush, get monitors the get command, and set monitors the set/setnx/setex commands. These bucket definitions will not suit every situation; in practice they should be tuned to reflect the real latency distribution more precisely.

Latency statistics are viewed with the INFO command, in three ways:

### View the overall statistics of all latency definitions

Command:

    redis> INFO

The latency information is listed at the end of the result.

### View the statistics of a single latency definition

Command:

    redis> INFO Latency <name>

where <name> is the latency definition name; the result shows the overall latency of that definition as well as the latency broken down by backend redis instance.

### View the latency of a specific backend redis instance

Command:

    redis> INFO ServerLatency <serv> [name]

where <serv> is the redis instance address and [name] is an optional latency definition name; if name is omitted, all latency definitions for requests to that redis instance are shown, otherwise only name's.

Latency information format: below is an example of the overall statistics for the latency definition all:

    LatencyMonitorName:all
    <= 100 3769836 91339 91.34%
    <= 200 777185 5900 97.24%
    <= 300 287565 1181 98.42%
    <= 400 185891 537 98.96%
    <= 500 132773 299 99.26%
    <= 600 85050 156 99.41%
    <= 700 85455 133 99.54%
    <= 800 40088 54 99.60%
    <= 1000 67788 77 99.68%
    > 1000 601012 325 100.00%
    T 60 6032643 100001

+ If the first column is <=, the second column is the upper bound of the bucket in microseconds, the third column is the total time spent by requests in this bucket, the fourth column is the request count, and the fifth column is the cumulative percentage of total requests
+ If the first column is >, the second column means requests took longer than this value; the remaining columns are as above. This line appears at most once; if too many requests fall on this line, the buckets in the latency definition are poorly chosen
+ If the first column is T, this line appears exactly once and always last. The second column is the average time per request, the third column is the total time of all requests, and the fourth column is the total request count

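As a check on the format, the columns can be reproduced from raw per-bucket data (`latency_report` is a hypothetical helper for illustration, not predixy code):

```python
def latency_report(buckets, overflow):
    """buckets: list of (upper_bound_us, time_sum_us, count), increasing bounds;
    overflow: (time_sum_us, count) for requests slower than the last bound."""
    total_cnt = sum(c for _, _, c in buckets) + overflow[1]
    total_time = sum(t for _, t, _ in buckets) + overflow[0]
    lines, cum = [], 0
    for bound, t, c in buckets:
        cum += c  # fifth column is cumulative over all requests
        lines.append("<= %d %d %d %.2f%%" % (bound, t, c, 100.0 * cum / total_cnt))
    lines.append("> %d %d %d %.2f%%" % (buckets[-1][0], overflow[0], overflow[1], 100.0))
    # T line: average time per request, total time, total request count
    lines.append("T %d %d %d" % (total_time // total_cnt, total_time, total_cnt))
    return lines
```

In the sample output above the same arithmetic holds: 91339 of 100001 requests is 91.34%, and 6032643 total microseconds over 100001 requests averages to 60.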
BIN doc/wechat-cppfan.jpeg (new file, 37 KiB)
@@ -29,6 +29,7 @@ public:
    typedef AcceptConnection Value;
    typedef ListNode<AcceptConnection, SharePtr<AcceptConnection>> ListNodeType;
    typedef DequeNode<AcceptConnection, SharePtr<AcceptConnection>> DequeNodeType;
    typedef Alloc<AcceptConnection, Const::AcceptConnectionAllocCacheSize> Allocator;
public:
    AcceptConnection(int fd, sockaddr* addr, socklen_t len);
    ~AcceptConnection();

@@ -97,6 +98,6 @@ private:

typedef List<AcceptConnection> AcceptConnectionList;
typedef Deque<AcceptConnection> AcceptConnectionDeque;
typedef Alloc<AcceptConnection, Const::AcceptConnectionAllocCacheSize> AcceptConnectionAlloc;
typedef AcceptConnection::Allocator AcceptConnectionAlloc;

#endif

@@ -76,7 +76,7 @@ public:
        }
        UsedMemory += allocSize<T>();
        if (MaxMemory == 0 || UsedMemory <= MaxMemory) {
            void* p = ::operator new(allocSize<T>());
            void* p = ::operator new(allocSize<T>(), std::nothrow);
            if (p) {
                try {
                    obj = new (p) T(args...);

@@ -145,7 +145,7 @@ public:
    {
        int n = --mCnt;
        if (n == 0) {
            Alloc<T>::destroy(static_cast<T*>(this));
            T::Allocator::destroy(static_cast<T*>(this));
        } else if (n < 0) {
            logError("unref object %p with cnt %d", this, n);
            abort();

src/Auth.cpp

@@ -10,8 +10,8 @@
#include "Request.h"


Auth::Auth():
    mMode(Command::Read|Command::Write|Command::Admin),
Auth::Auth(int mode):
    mMode(mode),
    mReadKeyPrefix(nullptr),
    mWriteKeyPrefix(nullptr)
{

@@ -75,10 +75,11 @@ bool Auth::permission(Request* req, const String& key) const
    return true;
}

Auth Authority::DefaultAuth;
Auth Authority::AuthAllowAll;
Auth Authority::AuthDenyAll(0);

Authority::Authority():
    mDefault(&DefaultAuth)
    mDefault(&AuthAllowAll)
{
}

@@ -102,5 +103,7 @@ void Authority::add(const AuthConf& ac)
    mAuthMap[a->password()] = a;
    if (a->password().empty()) {
        mDefault = a;
    } else if (mDefault == &AuthAllowAll) {
        mDefault = &AuthDenyAll;
    }
}

@@ -15,7 +15,7 @@
class Auth
{
public:
    Auth();
    Auth(int mode = Command::Read|Command::Write|Command::Admin);
    Auth(const AuthConf& conf);
    ~Auth();
    const String& password() const

@@ -53,7 +53,8 @@ public:
private:
    std::map<String, Auth*> mAuthMap;
    Auth* mDefault;
    static Auth DefaultAuth;
    static Auth AuthAllowAll;
    static Auth AuthDenyAll;
};

#endif

@@ -12,10 +12,10 @@
#if _PREDIXY_BACKTRACE_

#include <execinfo.h>
inline void traceInfo()
inline void traceInfo(int sig)
{
#define Size 128
    logError("predixy backtrace");
    logError("predixy backtrace(%d)", sig);
    void* buf[Size];
    int num = ::backtrace(buf, Size);
    int fd = -1;

@@ -32,9 +32,9 @@ inline void traceInfo()

#else

inline void traceInfo()
inline void traceInfo(int sig)
{
    logError("predixy backtrace, but current system unspport backtrace");
    logError("predixy backtrace(%d), but current system unspport backtrace", sig);
}

#endif

@@ -23,6 +23,7 @@ class Buffer :
    public RefCntObj<Buffer>
{
public:
    typedef Alloc<Buffer, Const::BufferAllocCacheSize> Allocator;
    static const int MaxBufFmtAppendLen = 8192;
public:
    Buffer& operator=(const Buffer&);

@@ -92,12 +93,13 @@ private:
};

typedef List<Buffer> BufferList;
typedef Buffer::Allocator BufferAlloc;

template<>
inline int allocSize<Buffer>()
{
    return Buffer::getSize() + sizeof(Buffer);
}
typedef Alloc<Buffer, Const::BufferAllocCacheSize> BufferAlloc;

struct BufferPos
{

@@ -109,8 +109,18 @@ void ClusterServerPool::handleResponse(Handler* h, ConnectConnection* s, Request
                p.master().data());
            continue;
        }
        auto it = mServs.find(p.addr());
        String addr(p.addr());
        auto it = mServs.find(addr);
        Server* serv = it == mServs.end() ? nullptr : it->second;
        if (!serv) {
            if (strstr(p.flags().data(), "myself")) {
                serv = s->server();
            } else if (const char* t = strchr(p.addr().data(), '@')) {
                addr = String(p.addr().data(), t - p.addr().data());
                it = mServs.find(addr);
                serv = it == mServs.end() ? nullptr : it->second;
            }
        }
        if (!serv) {
            const char* flags = p.flags().data();
            if (p.role() == Server::Unknown ||

@@ -127,7 +137,7 @@ void ClusterServerPool::handleResponse(Handler* h, ConnectConnection* s, Request
                    (int)mServPool.size(), p.addr().data());
                continue;
            }
            serv = new Server(this, p.addr(), false);
            serv = new Server(this, addr, false);
            serv->setPassword(password());
            mServPool.push_back(serv);
            mServs[serv->addr()] = serv;

@@ -138,7 +148,6 @@ void ClusterServerPool::handleResponse(Handler* h, ConnectConnection* s, Request
                p.master().data(),
                serv->dcName().data());
        } else {
            serv->setOnline(true);
            serv->setUpdating(false);
        }
        serv->setRole(p.role());

@@ -182,12 +191,7 @@ void ClusterServerPool::handleResponse(Handler* h, ConnectConnection* s, Request
    }
    for (auto serv : mServPool) {
        if (serv->updating()) {
            serv->setOnline(false);
            serv->setUpdating(false);
            if (ServerGroup* g = serv->group()) {
                g->remove(serv);
                serv->setGroup(nullptr);
            }
            continue;
        }
        if (serv->role() == Server::Master) {

@@ -9,8 +9,9 @@
#include <map>
#include "String.h"
#include "Command.h"
#include "Conf.h"

const Command Command::CmdPool[Sentinel] = {
Command Command::CmdPool[AvailableCommands] = {
    {None, "", 0, MaxArgs, Read},
    {Ping, "ping", 1, 2, Read},
    {PingServ, "ping", 1, 2, Inner},

@@ -19,6 +20,7 @@ const Command Command::CmdPool[Sentinel] = {
    {AuthServ, "auth", 2, 2, Inner},
    {Select, "select", 2, 2, Read},
    {SelectServ, "select", 2, 2, Inner},
    {Quit, "quit", 1, MaxArgs, Read},
    {SentinelSentinels, "sentinel sentinels",3, 3, Inner},
    {SentinelGetMaster, "sentinel get-m-a..",3, 3, Inner},
    {SentinelSlaves, "sentinel slaves", 3, 3, Inner},

@@ -36,8 +38,8 @@ const Command Command::CmdPool[Sentinel] = {
    {Exec, "exec", 1, 1, Write|NoKey},
    {Discard, "discard", 1, 1, Write|NoKey},
    {DiscardServ, "discard", 1, 1, Inner|NoKey},
    {Eval, "eval", 4, MaxArgs, Write|KeyAt3},
    {Evalsha, "evalsha", 4, MaxArgs, Write|KeyAt3},
    {Eval, "eval", 3, MaxArgs, Write|KeyAt3},
    {Evalsha, "evalsha", 3, MaxArgs, Write|KeyAt3},
    {Script, "script", 2, MaxArgs, Write},
    {ScriptLoad, "script", 3, 3, Write|SubCmd},
    {Del, "del", 2, MaxArgs, Write|MultiKey},

@@ -54,7 +56,7 @@ const Command Command::CmdPool[Sentinel] = {
    {Rename, "rename", 3, 3, Write},
    {Renamenx, "renamenx", 3, 3, Write},
    {Restore, "restore", 4, 5, Write},
    {Sort, "sort", 2, MaxArgs, Read},
    {Sort, "sort", 2, MaxArgs, Write},
    {Touch, "touch", 2, MaxArgs, Write|MultiKey},
    {Ttl, "ttl", 2, 2, Read},
    {TypeCmd, "type", 2, 2, Read},

@@ -63,7 +65,7 @@ const Command Command::CmdPool[Sentinel] = {
    {Append, "append", 3, 3, Write},
    {Bitcount, "bitcount", 2, 4, Read},
    {Bitfield, "bitfield", 2, MaxArgs, Write},
    {Bitop, "bitop", 4, MaxArgs, Write},
    {Bitop, "bitop", 4, MaxArgs, Write|KeyAt2},
    {Bitpos, "bitpos", 3, 5, Read},
    {Decr, "decr", 2, 2, Write},
    {Decrby, "decrby", 3, 3, Write},

@@ -85,8 +87,8 @@ const Command Command::CmdPool[Sentinel] = {
    {Setrange, "setrange", 4, 4, Write},
    {Strlen, "strlen", 2, 2, Read},
    {Hdel, "hdel", 3, MaxArgs, Write},
    {Hexists, "hexists", 3, 3, Write},
    {Hget, "hget", 3, 3, Write},
    {Hexists, "hexists", 3, 3, Read},
    {Hget, "hget", 3, 3, Read},
    {Hgetall, "hgetall", 2, 2, Read},
    {Hincrby, "hincrby", 4, 4, Write},
    {Hincrbyfloat, "hincrbyfloat", 4, 4, Write},

@@ -95,7 +97,7 @@ const Command Command::CmdPool[Sentinel] = {
    {Hmget, "hmget", 3, MaxArgs, Read},
    {Hmset, "hmset", 4, MaxArgs, Write},
    {Hscan, "hscan", 3, 7, Read},
    {Hset, "hset", 4, 4, Write},
    {Hset, "hset", 4, MaxArgs, Write},
    {Hsetnx, "hsetnx", 4, 4, Write},
    {Hstrlen, "hstrlen", 3, 3, Read},
    {Hvals, "hvals", 2, 2, Read},

@@ -108,7 +110,7 @@ const Command Command::CmdPool[Sentinel] = {
    {Lpop, "lpop", 2, 2, Write},
    {Lpush, "lpush", 3, MaxArgs, Write},
    {Lpushx, "lpushx", 3, 3, Write},
    {Lrange, "lrange", 4, 4, Write},
    {Lrange, "lrange", 4, 4, Read},
    {Lrem, "lrem", 4, 4, Write},
    {Lset, "lset", 4, 4, Write},
    {Ltrim, "ltrim", 4, 4, Write},

@@ -137,6 +139,8 @@ const Command Command::CmdPool[Sentinel] = {
    {Zincrby, "zincrby", 4, 4, Write},
    {Zinterstore, "zinterstore", 4, MaxArgs, Write},
    {Zlexcount, "zlexcount", 4, 4, Read},
    {Zpopmax, "zpopmax", 2, 3, Write},
    {Zpopmin, "zpopmin", 2, 3, Write},
    {Zrange, "zrange", 4, 5, Read},
    {Zrangebylex, "zrangebylex", 4, 7, Read},
    {Zrangebyscore, "zrangebyscore", 4, 8, Read},

@@ -159,22 +163,25 @@ const Command Command::CmdPool[Sentinel] = {
    {Geodist, "geodist", 4, 5, Read},
    {Geohash, "geohash", 3, MaxArgs, Read},
    {Geopos, "geopos", 3, MaxArgs, Read},
    {Georadius, "georadius", 6, 16, Read},
    {Georadiusbymember, "georadiusbymember",5, 15, Read},
    {Georadius, "georadius", 6, 16, Write},
    {Georadiusbymember, "georadiusbymember",5, 15, Write},
    {Psubscribe, "psubscribe", 2, MaxArgs, Write|SMultiKey|Private},
    {Publish, "publish", 3, 3, Write},
    {Pubsub, "pubsub", 2, MaxArgs, Read},
    {Punsubscribe, "punsubscribe", 1, MaxArgs, Write},
    {Punsubscribe, "punsubscribe", 1, MaxArgs, Write|SMultiKey},
    {Subscribe, "subscribe", 2, MaxArgs, Write|SMultiKey|Private},
    {Unsubscribe, "unsubscribe", 1, MaxArgs, Write},
    {Unsubscribe, "unsubscribe", 1, MaxArgs, Write|SMultiKey},
    {SubMsg, "\000SubMsg", 0, 0, Admin}
};

int Command::Sentinel = Command::MaxCommands;
Command::CommandMap Command::CmdMap;

void Command::init()
{
    int type = 0;
    for (auto& i : CmdPool) {
    for (auto j = 0; j < MaxCommands; j++) {
        const auto& i = CmdPool[j];
        if (i.type != type) {
            Throw(InitFail, "command %s unmatch the index in commands table", i.name);
        }

@@ -186,3 +193,19 @@ void Command::init()
    }
}

void Command::addCustomCommand(const CustomCommandConf& ccc) {
    if (Sentinel >= AvailableCommands) {
        Throw(InitFail, "too many custom commands(>%d)", MaxCustomCommands);
    }
    if (nullptr != find(ccc.name)) {
        Throw(InitFail, "custom command %s is duplicated", ccc.name.c_str());
    }
    auto* p = &CmdPool[Sentinel];
    p->name = ccc.name.c_str();
    p->minArgs = ccc.minArgs;
    p->maxArgs = ccc.maxArgs;
    p->mode = ccc.mode;
    p->type = (Command::Type)Sentinel++;
    CmdMap[ccc.name] = p;
}

@@ -11,6 +11,8 @@
#include "Exception.h"
#include "HashFunc.h"

struct CustomCommandConf;

class Command
{
public:

@@ -25,6 +27,7 @@ public:
        AuthServ,
        Select,
        SelectServ,
        Quit,

        SentinelSentinels,
        SentinelGetMaster,

@@ -153,6 +156,8 @@ public:
        Zincrby,
        Zinterstore,
        Zlexcount,
        Zpopmax,
        Zpopmin,
        Zrange,
        Zrangebylex,
        Zrangebyscore,

@@ -188,7 +193,9 @@ public:
        Unsubscribe,
        SubMsg,

        Sentinel
        MaxCommands,
        MaxCustomCommands = 16,
        AvailableCommands = MaxCommands + MaxCustomCommands,
    };
    enum Mode
    {

@@ -201,12 +208,13 @@ public:
        MultiKey = 1<<5,
        SMultiKey = 1<<6,
        MultiKeyVal = 1<<7,
        KeyAt3 = 1<<8,
        SubCmd = 1<<9,
        Inner = 1<<10 //proxy use only
        KeyAt2 = 1<<8,
        KeyAt3 = 1<<9,
        SubCmd = 1<<10,
        Inner = 1<<11 //proxy use only
    };
    static const int AuthMask = Read|Write|Admin;
    static const int KeyMask = NoKey|MultiKey|SMultiKey|MultiKeyVal|KeyAt3;
    static const int KeyMask = NoKey|MultiKey|SMultiKey|MultiKeyVal|KeyAt2|KeyAt3;
public:
    Type type;
    const char* name;

@@ -247,9 +255,11 @@ public:
        auto it = CmdMap.find(cmd);
        return it == CmdMap.end() ? nullptr : it->second;
    }
    static void addCustomCommand(const CustomCommandConf& pc);
    static int Sentinel;
private:
    static const int MaxArgs = 100000000;
    static const Command CmdPool[Sentinel];
    static Command CmdPool[];
    class H
    {
    public:

@@ -10,7 +10,7 @@
#include <limits.h>

#define _PREDIXY_NAME_ "predixy"
#define _PREDIXY_VERSION_ "1.0.0"
#define _PREDIXY_VERSION_ "1.0.5"

namespace Const
{

@@ -32,8 +32,8 @@ namespace Const
    static const int MaxCmdLen = 32;
    static const int MaxKeyLen = 512;
    static const int BufferAllocCacheSize = 64;
    static const int RequestAllocCacheSize = 32;
    static const int ResponseAllocCacheSize = 32;
    static const int RequestAllocCacheSize = 128;
    static const int ResponseAllocCacheSize = 128;
    static const int AcceptConnectionAllocCacheSize = 32;
    static const int ConnectConnectionAllocCacheSize = 4;
};

src/Conf.cpp

@@ -6,6 +6,7 @@
#include <ctype.h>
#include <iostream>
#include <sstream>
#include <fstream>
#include "LogFileSink.h"
#include "ServerPool.h"

@@ -32,6 +33,13 @@ bool ServerConf::parse(ServerConf& s, const char* str)
    return !s.addr.empty();
}

void CustomCommandConf::init(CustomCommandConf&c, const char* name) {
    c.name = name;
    c.minArgs = 2;
    c.maxArgs = 2;
    c.mode = Command::Write;
}

Conf::Conf():
    mBind("0.0.0.0:7617"),
    mWorkerThreads(1),

@@ -49,8 +57,6 @@ Conf::Conf():
    mLogSample[LogLevel::Notice] = 1;
    mLogSample[LogLevel::Warn] = 1;
    mLogSample[LogLevel::Error] = 1;
    mSentinelServerPool.refreshInterval = 1;
    mClusterServerPool.refreshInterval = 1;
}

Conf::~Conf()

@@ -119,8 +125,9 @@ void Conf::setGlobal(const ConfParser::Node* node)
{
    const ConfParser::Node* authority = nullptr;
    const ConfParser::Node* clusterServerPool = nullptr;
    const ConfParser::Node* sentinelServerPool = nullptr;
    const ConfParser::Node* standaloneServerPool = nullptr;
    const ConfParser::Node* dataCenter = nullptr;
    std::vector<const ConfParser::Node*> latencyMonitors;
    for (auto p = node; p; p = p->next) {
        if (setStr(mName, "Name", p)) {
        } else if (setStr(mBind, "Bind", p)) {

@@ -149,16 +156,19 @@ void Conf::setGlobal(const ConfParser::Node* node)
        } else if (setInt(mLogSample[LogLevel::Warn], "LogWarnSample", p)) {
        } else if (setInt(mLogSample[LogLevel::Error], "LogErrorSample", p)) {
        } else if (strcasecmp(p->key.c_str(), "LatencyMonitor") == 0) {
            mLatencyMonitors.push_back(LatencyMonitorConf{});
            setLatencyMonitor(mLatencyMonitors.back(), p);
            latencyMonitors.push_back(p);
        } else if (strcasecmp(p->key.c_str(), "Authority") == 0) {
            authority = p;
        } else if (strcasecmp(p->key.c_str(), "ClusterServerPool") == 0) {
            clusterServerPool = p;
        } else if (strcasecmp(p->key.c_str(), "SentinelServerPool") == 0) {
            sentinelServerPool = p;
            standaloneServerPool = p;
        } else if (strcasecmp(p->key.c_str(), "StandaloneServerPool") == 0) {
            standaloneServerPool = p;
        } else if (strcasecmp(p->key.c_str(), "DataCenter") == 0) {
            dataCenter = p;
        } else if (strcasecmp(p->key.c_str(), "CustomCommand") == 0) {
            setCustomCommand(p);
        } else {
            Throw(UnknownKey, "%s:%d unknown key %s", p->file, p->line, p->key.c_str());
        }

@@ -166,20 +176,27 @@ void Conf::setGlobal(const ConfParser::Node* node)
    if (authority) {
        setAuthority(authority);
    }
    if (clusterServerPool && sentinelServerPool) {
        Throw(LogicError, "Can't define ClusterServerPool and SentinelServerPool at the same time");
    if (clusterServerPool && standaloneServerPool) {
        Throw(LogicError, "Can't define ClusterServerPool/StandaloneServerPool at the same time");
    } else if (clusterServerPool) {
        setClusterServerPool(clusterServerPool);
        mServerPoolType = ServerPool::Cluster;
    } else if (sentinelServerPool) {
        setSentinelServerPool(sentinelServerPool);
        mServerPoolType = ServerPool::Sentinel;
    } else if (standaloneServerPool) {
        if (strcasecmp(standaloneServerPool->key.c_str(), "SentinelServerPool") == 0) {
            mStandaloneServerPool.refreshMethod = ServerPoolRefreshMethod::Sentinel;
        }
        setStandaloneServerPool(standaloneServerPool);
        mServerPoolType = ServerPool::Standalone;
    } else {
        Throw(LogicError, "Must define a server pool");
    }
    if (dataCenter) {
        setDataCenter(dataCenter);
    }
    for (auto& latencyMonitor : latencyMonitors) {
        mLatencyMonitors.push_back(LatencyMonitorConf{});
        setLatencyMonitor(mLatencyMonitors.back(), latencyMonitor);
    }
}

static void setKeyPrefix(std::vector<std::string>& dat, const std::string& v)

@@ -241,24 +258,21 @@ void Conf::setAuthority(const ConfParser::Node* node)

bool Conf::setServerPool(ServerPoolConf& sp, const ConfParser::Node* p)
{
    bool ret = true;
    if (setStr(sp.password, "Password", p)) {
        return true;
    } else if (setInt(sp.masterReadPriority, "MasterReadPriority", p, 0, 100)) {
        return true;
    } else if (setInt(sp.staticSlaveReadPriority, "StaticSlaveReadPriority", p, 0, 100)) {
        return true;
    } else if (setInt(sp.dynamicSlaveReadPriority, "DynamicSlaveReadPriority", p, 0, 100)) {
        return true;
    } else if (setInt(sp.refreshInterval, "RefreshInterval", p, 1)) {
        return true;
    } else if (setDuration(sp.refreshInterval, "RefreshInterval", p)) {
    } else if (setDuration(sp.serverTimeout, "ServerTimeout", p)) {
    } else if (setInt(sp.serverFailureLimit, "ServerFailureLimit", p, 1)) {
        return true;
    } else if (setInt(sp.serverRetryTimeout, "ServerRetryTimeout", p, 1)) {
        return true;
    } else if (setDuration(sp.serverRetryTimeout, "ServerRetryTimeout", p)) {
    } else if (setInt(sp.keepalive, "KeepAlive", p, 0)) {
    } else if (setInt(sp.databases, "Databases", p, 1, 128)) {
        return true;
    } else {
        ret = false;
    }
    return false;
    return ret;
}

void Conf::setClusterServerPool(const ConfParser::Node* node)

@@ -282,40 +296,47 @@ void Conf::setClusterServerPool(const ConfParser::Node* node)
    }
}

void Conf::setSentinelServerPool(const ConfParser::Node* node)
void Conf::setStandaloneServerPool(const ConfParser::Node* node)
{
    if (!node->sub) {
        Throw(InvalidValue, "%s:%d SentinelServerPool require scope value", node->file, node->line);
        Throw(InvalidValue, "%s:%d StandaloneServerPool require scope value", node->file, node->line);
    }
    mSentinelServerPool.hashTag[0] = '\0';
    mSentinelServerPool.hashTag[1] = '\0';
    mStandaloneServerPool.hashTag[0] = '\0';
    mStandaloneServerPool.hashTag[1] = '\0';
    for (auto p = node->sub; p; p = p->next) {
        if (setServerPool(mSentinelServerPool, p)) {
        if (setServerPool(mStandaloneServerPool, p)) {
        } else if (strcasecmp(p->key.c_str(), "RefreshMethod") == 0) {
            try {
                mStandaloneServerPool.refreshMethod = ServerPoolRefreshMethod::parse(p->val.c_str());
            } catch (ServerPoolRefreshMethod::InvalidEnumValue& excp) {
                Throw(InvalidValue, "%s:%d unknown RefreshMethod:%s", p->file, p->line, p->val.c_str());
            }
        } else if (strcasecmp(p->key.c_str(), "Distribution") == 0) {
            mSentinelServerPool.dist = Distribution::parse(p->val.c_str());
            if (mSentinelServerPool.dist == Distribution::None) {
            mStandaloneServerPool.dist = Distribution::parse(p->val.c_str());
            if (mStandaloneServerPool.dist == Distribution::None) {
                Throw(InvalidValue, "%s:%d unknown Distribution", p->file, p->line);
            }
        } else if (strcasecmp(p->key.c_str(), "Hash") == 0) {
            mSentinelServerPool.hash = Hash::parse(p->val.c_str());
            if (mSentinelServerPool.hash == Hash::None) {
            mStandaloneServerPool.hash = Hash::parse(p->val.c_str());
            if (mStandaloneServerPool.hash == Hash::None) {
                Throw(InvalidValue, "%s:%d unknown Hash", p->file, p->line);
            }
        } else if (strcasecmp(p->key.c_str(), "HashTag") == 0) {
            if (p->val.empty()) {
                mSentinelServerPool.hashTag[0] = '\0';
                mSentinelServerPool.hashTag[1] = '\0';
                mStandaloneServerPool.hashTag[0] = '\0';
                mStandaloneServerPool.hashTag[1] = '\0';
            } else if (p->val.size() == 2) {
                mSentinelServerPool.hashTag[0] = p->val[0];
                mSentinelServerPool.hashTag[1] = p->val[1];
                mStandaloneServerPool.hashTag[0] = p->val[0];
                mStandaloneServerPool.hashTag[1] = p->val[1];
            } else {
                Throw(InvalidValue, "%s:%d HashTag invalid", p->file, p->line);
            }
        } else if (setServers(mSentinelServerPool.sentinels, "Sentinels", p)) {
        } else if (setServers(mStandaloneServerPool.sentinels, "Sentinels", p)) {
            mStandaloneServerPool.sentinelPassword = p->val;
        } else if (strcasecmp(p->key.c_str(), "Group") == 0) {
            mSentinelServerPool.groups.push_back(ServerGroupConf{p->val});
            mStandaloneServerPool.groups.push_back(ServerGroupConf{p->val});
            if (p->sub) {
                auto& g = mSentinelServerPool.groups.back();
                auto& g = mStandaloneServerPool.groups.back();
                setServers(g.servers, "Group", p);
            }
        } else {

@@ -323,18 +344,31 @@ void Conf::setSentinelServerPool(const ConfParser::Node* node)
                p->file, p->line, p->key.c_str());
        }
    }
    if (mSentinelServerPool.sentinels.empty()) {
        Throw(LogicError, "SentinelServerPool no sentinel server");
    if (mStandaloneServerPool.groups.empty()) {
        Throw(LogicError, "StandaloneServerPool no server group");
    }
    if (mSentinelServerPool.groups.empty()) {
        Throw(LogicError, "SentinelServerPool no server group");
    }
    if (mSentinelServerPool.groups.size() > 1) {
        if (mSentinelServerPool.dist == Distribution::None) {
            Throw(LogicError, "SentinelServerPool must define Dsitribution in multi groups");
    if (mStandaloneServerPool.refreshMethod == ServerPoolRefreshMethod::None) {
        Throw(LogicError, "StandaloneServerPool must define RefreshMethod");
    } else if (mStandaloneServerPool.refreshMethod == ServerPoolRefreshMethod::Sentinel) {
        if (mStandaloneServerPool.sentinels.empty()) {
            Throw(LogicError, "StandaloneServerPool with RefreshMethod(sentinel) but no sentinel servers");
        }
    if (mSentinelServerPool.hash == Hash::None) {
        Throw(LogicError, "SentinelServerPool must define Hash in multi groups");
|
||||
} else {
|
||||
if (!mStandaloneServerPool.sentinels.empty()) {
|
||||
Throw(LogicError, "StandaloneServerPool with Sentinels but RefreshMethod is not sentinel");
|
||||
}
|
||||
for (auto& g : mStandaloneServerPool.groups) {
|
||||
if (g.servers.empty()) {
|
||||
Throw(LogicError, "Group(%s) must add servers", g.name.c_str());
|
||||
}
|
||||
}
|
||||
}
|
||||
if (mStandaloneServerPool.groups.size() > 1) {
|
||||
if (mStandaloneServerPool.dist == Distribution::None) {
|
||||
Throw(LogicError, "StandaloneServerPool must define Dsitribution in multi groups");
|
||||
}
|
||||
if (mStandaloneServerPool.hash == Hash::None) {
|
||||
Throw(LogicError, "StandaloneServerPool must define Hash in multi groups");
|
||||
}
|
||||
}
|
||||
}
|
||||
@ -355,6 +389,83 @@ void Conf::setDataCenter(const ConfParser::Node* node)
|
||||
}
|
||||
}
|
||||
|
||||
void Conf::setCustomCommand(const ConfParser::Node* node)
|
||||
{
|
||||
if (!node->sub) {
|
||||
Throw(InvalidValue, "%s:%d CustomCommand require scope value", node->file, node->line);
|
||||
}
|
||||
for (auto p = node->sub; p; p = p->next) {
|
||||
mCustomCommands.push_back(CustomCommandConf{});
|
||||
auto& cc = mCustomCommands.back();
|
||||
CustomCommandConf::init(cc, p->key.c_str());
|
||||
auto s = p->sub;
|
||||
for (;s ; s = s->next) {
|
||||
if (setInt(cc.minArgs, "MinArgs", s, 2)) {
|
||||
} else if (setInt(cc.maxArgs, "MaxArgs", s, 2, 9999)) {
|
||||
} else if (setCommandMode(cc.mode, "Mode", s)) {
|
||||
} else {
|
||||
Throw(UnknownKey, "%s:%d unknown key %s", s->file, s->line, s->key.c_str());
|
||||
}
|
||||
}
|
||||
if (cc.maxArgs < cc.minArgs) {
|
||||
Throw(InvalidValue, "%s:%d must be MaxArgs >= MinArgs", p->file, p->line);
|
||||
}
|
||||
}
|
||||
for (const auto& cc : mCustomCommands) {
|
||||
Command::addCustomCommand(cc);
|
||||
}
|
||||
}
|
||||
|
||||
bool Conf::setCommandMode(int& mode, const char* name, const ConfParser::Node* n, const int defaultMode)
|
||||
{
|
||||
if (strcasecmp(name, n->key.c_str()) != 0) {
|
||||
return false;
|
||||
}
|
||||
|
||||
if (n->val.size() == 0) {
|
||||
mode = defaultMode;
|
||||
} else {
|
||||
mode = 0;
|
||||
std::string mask;
|
||||
std::istringstream is(n->val);
|
||||
while (std::getline(is, mask, '|')) {
|
||||
if ((strcasecmp(mask.c_str(), "write") == 0)) {
|
||||
mode |= Command::Write;
|
||||
} else if ((strcasecmp(mask.c_str(), "read") == 0)) {
|
||||
mode |= Command::Read;
|
||||
} else if ((strcasecmp(mask.c_str(), "admin") == 0)) {
|
||||
mode |= Command::Admin;
|
||||
} else if ((strcasecmp(mask.c_str(), "keyAt2") == 0)) {
|
||||
mode |= Command::KeyAt2;
|
||||
} else if ((strcasecmp(mask.c_str(), "keyAt3") == 0)) {
|
||||
mode |= Command::KeyAt3;
|
||||
} else {
|
||||
Throw(InvalidValue, "%s:%d unknown mode %s", n->file, n->line, mask.c_str());
|
||||
}
|
||||
}
|
||||
switch (mode & Command::KeyMask) {
|
||||
case 0:
|
||||
case Command::KeyAt2:
|
||||
case Command::KeyAt3:
|
||||
break;
|
||||
default:
|
||||
Throw(InvalidValue, "%s:%d %s require exclusive key pos", n->file, n->line, name);
|
||||
}
|
||||
switch (mode & Command::AuthMask) {
|
||||
case 0:
|
||||
mode |= Command::Write;
|
||||
break;
|
||||
case Command::Read:
|
||||
case Command::Write:
|
||||
case Command::Admin:
|
||||
break;
|
||||
default:
|
||||
Throw(InvalidValue, "%s:%d %s require exclusive mode", n->file, n->line, name);
|
||||
}
|
||||
}
|
||||
return true;
|
||||
}

void Conf::setDC(DCConf& dc, const ConfParser::Node* node)
{
    if (!node->sub) {

@@ -567,6 +678,27 @@ bool Conf::parseMemory(long& m, const char* str)
    return m >= 0;
}

bool Conf::parseDuration(long& v, const char* str)
{
    char u[4];
    int c = sscanf(str, "%ld%3s", &v, u);
    if (c == 2 && v > 0) {
        if (strcasecmp(u, "s") == 0) {
            v *= 1000000;
        } else if (strcasecmp(u, "m") == 0 || strcasecmp(u, "ms") == 0) {
            v *= 1000;
        } else if (strcasecmp(u, "u") == 0 || strcasecmp(u, "us") == 0) {
        } else {
            return false;
        }
    } else if (c == 1) {
        v *= 1000000;
    } else {
        return false;
    }
    return v >= 0;
}

bool Conf::setMemory(long& m, const char* name, const ConfParser::Node* n)
{
    if (strcasecmp(name, n->key.c_str()) != 0) {

@@ -579,13 +711,25 @@ bool Conf::setMemory(long& m, const char* name, const ConfParser::Node* n)
    return true;
}

bool Conf::setDuration(long& v, const char* name, const ConfParser::Node* n)
{
    if (strcasecmp(name, n->key.c_str()) != 0) {
        return false;
    }
    if (!parseDuration(v, n->val.c_str())) {
        Throw(InvalidValue, "%s:%d %s invalid duration value \"%s\"",
              n->file, n->line, name, n->val.c_str());
    }
    return true;
}

bool Conf::setServers(std::vector<ServerConf>& servs, const char* name, const ConfParser::Node* p)
{
    if (strcasecmp(p->key.c_str(), name) != 0) {
        return false;
    }
    if (!p->sub) {
        Throw(InvalidValue, "%s:%d %s require scope value", name, p->file, p->line);
        Throw(InvalidValue, "%s:%d %s require scope value", p->file, p->line, name);
    }
    for (auto n = p->sub; n; n = n->next) {
        if (strcasecmp(n->key.c_str(), "+") == 0) {
src/Conf.h (37 changes)

@@ -19,6 +19,8 @@
#include "Distribution.h"
#include "ConfParser.h"
#include "Auth.h"
#include "Command.h"
#include "Enums.h"

struct AuthConf
{

@@ -49,9 +51,11 @@ struct ServerPoolConf
    int masterReadPriority = 50;
    int staticSlaveReadPriority = 0;
    int dynamicSlaveReadPriority = 0;
    int refreshInterval = 1; //seconds
    long refreshInterval = 1000000; //us
    long serverTimeout = 0; //us
    int serverFailureLimit = 10;
    int serverRetryTimeout = 1; //seconds
    long serverRetryTimeout = 1000000; //us
    int keepalive = 0; //seconds
    int databases = 1;
};

@@ -60,11 +64,13 @@ struct ClusterServerPoolConf : public ServerPoolConf
    std::vector<ServerConf> servers;
};

struct SentinelServerPoolConf : public ServerPoolConf
struct StandaloneServerPoolConf : public ServerPoolConf
{
    ServerPoolRefreshMethod refreshMethod = ServerPoolRefreshMethod::None;
    Distribution dist = Distribution::None;
    Hash hash = Hash::None;
    char hashTag[2];
    std::string sentinelPassword;
    std::vector<ServerConf> sentinels;
    std::vector<ServerGroupConf> groups;
};

@@ -86,10 +92,20 @@ struct DCConf
struct LatencyMonitorConf
{
    std::string name;
    std::bitset<Command::Sentinel> cmds;
    std::bitset<Command::AvailableCommands> cmds;
    std::vector<long> timeSpan; //us
};

struct CustomCommandConf
{
    std::string name;
    int minArgs;
    int maxArgs;
    int mode;

    static void init(CustomCommandConf& c, const char* name);
};

class Conf
{
public:

@@ -161,9 +177,9 @@
    {
        return mClusterServerPool;
    }
    const SentinelServerPoolConf& sentinelServerPool() const
    const StandaloneServerPoolConf& standaloneServerPool() const
    {
        return mSentinelServerPool;
        return mStandaloneServerPool;
    }
    const std::string& localDC() const
    {

@@ -179,11 +195,12 @@
    }
public:
    static bool parseMemory(long& m, const char* str);
    static bool parseDuration(long& v, const char* str);
private:
    void setGlobal(const ConfParser::Node* node);
    void setAuthority(const ConfParser::Node* node);
    void setClusterServerPool(const ConfParser::Node* node);
    void setSentinelServerPool(const ConfParser::Node* node);
    void setStandaloneServerPool(const ConfParser::Node* node);
    void setDataCenter(const ConfParser::Node* node);
    void check();
    bool setServerPool(ServerPoolConf& sp, const ConfParser::Node* n);

@@ -192,10 +209,13 @@
    bool setLong(long& attr, const char* name, const ConfParser::Node* n, long lower = LONG_MIN, long upper = LONG_MAX);
    bool setBool(bool& attr, const char* name, const ConfParser::Node* n);
    bool setMemory(long& mem, const char* name, const ConfParser::Node* n);
    bool setDuration(long& v, const char* name, const ConfParser::Node* n);
    bool setServers(std::vector<ServerConf>& servs, const char* name, const ConfParser::Node* n);
    void setDC(DCConf& dc, const ConfParser::Node* n);
    void setReadPolicy(ReadPolicyConf& c, const ConfParser::Node* n);
    void setLatencyMonitor(LatencyMonitorConf& m, const ConfParser::Node* n);
    void setCustomCommand(const ConfParser::Node* n);
    bool setCommandMode(int& mode, const char* name, const ConfParser::Node* n, const int defaultMode = Command::Write);
private:
    std::string mName;
    std::string mBind;

@@ -211,10 +231,11 @@
    std::vector<AuthConf> mAuthConfs;
    int mServerPoolType;
    ClusterServerPoolConf mClusterServerPool;
    SentinelServerPoolConf mSentinelServerPool;
    StandaloneServerPoolConf mStandaloneServerPool;
    std::vector<DCConf> mDCConfs;
    std::string mLocalDC;
    std::vector<LatencyMonitorConf> mLatencyMonitors;
    std::vector<CustomCommandConf> mCustomCommands;
};
@@ -283,20 +283,22 @@ Done:
    case SValBody:
        return KeyVal;
    case VValBody:
    {
        auto ret = KeyVal;
        val.assign(line, pos, line.size() - pos);
        if (val.back() == '{') {
            val.resize(val.size() - 1);
            int vsp = 0;
            for (auto it = val.rbegin(); it != val.rend(); ++it) {
                if (isspace(*it)) {
                    ++vsp;
                }
            }
            val.resize(val.size() - vsp);
            return BeginScope;
        } else {
            return KeyVal;
            ret = BeginScope;
        }
        int vsp = 0;
        for (auto it = val.rbegin(); it != val.rend(); ++it) {
            if (isspace(*it)) {
                ++vsp;
            }
        }
        val.resize(val.size() - vsp);
        return ret;
    }
    case ScopeReady:
        return KeyVal;
    case ScopeBody:
@@ -23,6 +23,7 @@ public:
    typedef ConnectConnection Value;
    typedef ListNode<ConnectConnection> ListNodeType;
    typedef DequeNode<ConnectConnection> DequeNodeType;
    typedef Alloc<ConnectConnection, Const::ConnectConnectionAllocCacheSize> Allocator;
public:
    ConnectConnection(Server* s, bool shared);
    ~ConnectConnection();

@@ -73,6 +74,11 @@ public:
    {
        return mSendRequests.size() + mSentRequests.size();
    }
    Request* frontRequest() const
    {
        return !mSentRequests.empty() ? mSentRequests.front() :
               (!mSendRequests.empty() ? mSendRequests.front() : nullptr);
    }
private:
    void parse(Handler* h, Buffer* buf, int pos);
    void handleResponse(Handler* h);

@@ -92,6 +98,6 @@ private:

typedef List<ConnectConnection> ConnectConnectionList;
typedef Deque<ConnectConnection> ConnectConnectionDeque;
typedef Alloc<ConnectConnection, Const::ConnectConnectionAllocCacheSize> ConnectConnectionAlloc;
typedef ConnectConnection::Allocator ConnectConnectionAlloc;

#endif
@@ -30,12 +30,14 @@ ConnectConnection* ConnectConnectionPool::getShareConnection(int db)
                  mHandler->id(), db, (int)mShareConns.size());
        return nullptr;
    }
    bool needInit = false;
    ConnectConnection* c = mShareConns[db];
    if (!c) {
        c = ConnectConnectionAlloc::create(mServ, true);
        c->setDb(db);
        ++mStats.connections;
        mShareConns[db] = c;
        needInit = true;
        logNotice("h %d create server connection %s %d",
                  mHandler->id(), c->peer(), c->fd());
    } else if (c->fd() < 0) {

@@ -43,15 +45,17 @@ ConnectConnection* ConnectConnectionPool::getShareConnection(int db)
            return nullptr;
        }
        c->reopen();
        needInit = true;
        logNotice("h %d reopen server connection %s %d",
                  mHandler->id(), c->peer(), c->fd());
    } else {
        return c;
    }
    if (!init(c)) {
    if (needInit && !init(c)) {
        c->close(mHandler);
        return nullptr;
    }
    if (mServ->fail()) {
        return nullptr;
    }
    return c;
}

@@ -91,7 +95,7 @@ ConnectConnection* ConnectConnectionPool::getPrivateConnection(int db)
        ccl.push_back(c);
        return nullptr;
    }
    return c;
    return mServ->fail() ? nullptr : c;
}

void ConnectConnectionPool::putPrivateConnection(ConnectConnection* s)

@@ -107,24 +111,6 @@ void ConnectConnectionPool::putPrivateConnection(ConnectConnection* s)
    }
}

void ConnectConnectionPool::putTransactionConnection(ConnectConnection* s, bool inWatch, bool inMulti)
{
    if (s->good()) {
        if (inMulti) {
            RequestPtr req = RequestAlloc::create(Request::DiscardServ);
            mHandler->handleRequest(req, s);
            logDebug("h %d s %s %d discard req %ld",
                     mHandler->id(), s->peer(), s->fd(), req->id());
        } else if (inWatch) {
            RequestPtr req = RequestAlloc::create(Request::UnwatchServ);
            mHandler->handleRequest(req, s);
            logDebug("h %d s %s %d unwatch req %ld",
                     mHandler->id(), s->peer(), s->fd(), req->id());
        }
    }
    putPrivateConnection(s);
}

bool ConnectConnectionPool::init(ConnectConnection* c)
{
    if (!c->setNonBlock()) {

@@ -136,6 +122,11 @@ bool ConnectConnectionPool::init(ConnectConnection* c)
        logWarn("h %d s %s %d settcpnodelay fail %s",
                mHandler->id(), c->peer(), c->fd(), StrError());
    }
    auto sp = mHandler->proxy()->serverPool();
    if (sp->keepalive() > 0 && !c->setTcpKeepAlive(sp->keepalive())) {
        logWarn("h %d s %s %d settcpkeepalive(%d) fail %s",
                mHandler->id(), c->peer(), c->fd(), sp->keepalive(), StrError());
    }
    auto m = mHandler->eventLoop();
    if (!m->addSocket(c, Multiplexor::ReadEvent|Multiplexor::WriteEvent)) {
        logWarn("h %d s %s %d add to eventloop fail",

@@ -159,7 +150,6 @@ bool ConnectConnectionPool::init(ConnectConnection* c)
        logDebug("h %d s %s %d auth req %ld",
                 mHandler->id(), c->peer(), c->fd(), req->id());
    }
    auto sp = mHandler->proxy()->serverPool();
    if (sp->type() == ServerPool::Cluster) {
        RequestPtr req = RequestAlloc::create(Request::Readonly);
        mHandler->handleRequest(req, c);
@@ -24,7 +24,6 @@ public:
    ConnectConnection* getShareConnection(int db=0);
    ConnectConnection* getPrivateConnection(int db=0);
    void putPrivateConnection(ConnectConnection* s);
    void putTransactionConnection(ConnectConnection* s, bool inWatch, bool inMulti);
    void check();
    Server* server() const
    {
@@ -9,7 +9,8 @@
Connection::Connection():
    mPostEvts(0),
    mBufCnt(0),
    mDb(0)
    mDb(0),
    mCloseASAP(false)
{
}

@@ -26,7 +27,7 @@ BufferPtr Connection::getBuffer(Handler* h, bool allowNew)
        }
    }
    if (!mBuf || mBuf->full()) {
        BufferPtr buf = Alloc<Buffer>::create();
        BufferPtr buf = BufferAlloc::create();
        if (mBuf) {
            mBuf->concat(buf);
        }
@@ -25,10 +25,19 @@ public:
    enum StatusEnum
    {
        ParseError = Socket::CustomStatus,
        LogicError
        LogicError,
        TimeoutError
    };
public:
    Connection();
    void closeASAP()
    {
        mCloseASAP = true;
    }
    bool isCloseASAP() const
    {
        return mCloseASAP;
    }
    int getPostEvent() const
    {
        return mPostEvts;

@@ -56,6 +65,7 @@ private:
    BufferPtr mBuf;
    int mBufCnt;
    int mDb;
    bool mCloseASAP;
};

#endif
@@ -86,6 +86,11 @@ public:
    {
        return node(obj)->next(Idx);
    }
    bool exist(T* obj) const
    {
        auto n = node(obj);
        return n->prev(Idx) != nullptr || n->next(Idx) != nullptr || n == mHead;
    }
    void push_back(T* obj)
    {
        N* p = static_cast<N*>(obj);

src/Enums.cpp (new file, 15 lines)

@@ -0,0 +1,15 @@
/*
 * predixy - A high performance and full features proxy for redis.
 * Copyright (C) 2017 Joyield, Inc. <joyield.com@gmail.com>
 * All rights reserved.
 */

#include "Enums.h"

const ServerPoolRefreshMethod::TypeName
ServerPoolRefreshMethod::sPairs[3] = {
    {ServerPoolRefreshMethod::None, "none"},
    {ServerPoolRefreshMethod::Fixed, "fixed"},
    {ServerPoolRefreshMethod::Sentinel, "sentinel"},
};
src/Enums.h (new file, 74 lines)

@@ -0,0 +1,74 @@
/*
 * predixy - A high performance and full features proxy for redis.
 * Copyright (C) 2017 Joyield, Inc. <joyield.com@gmail.com>
 * All rights reserved.
 */

#ifndef _PREDIXY_ENUMS_H_
#define _PREDIXY_ENUMS_H_

#include <string.h>
#include <strings.h>
#include "Exception.h"

template<class T>
class EnumBase
{
public:
    DefException(InvalidEnumValue);
    struct TypeName
    {
        int type;
        const char* name;
    };
public:
    EnumBase(int t):
        mType(t)
    {
    }
    int value() const
    {
        return mType;
    }
    bool operator==(const T& t) const
    {
        return t.value() == mType;
    }
    bool operator!=(const T& t) const
    {
        return t.value() != mType;
    }
    const char* name() const
    {
        return T::sPairs[mType].name;
    }
    static T parse(const char* str)
    {
        for (auto& i : T::sPairs) {
            if (strcasecmp(i.name, str) == 0) {
                return T(typename T::Type(i.type));
            }
        }
        Throw(InvalidEnumValue, "invalid enum value:%s", str);
    }
protected:
    int mType;
};

class ServerPoolRefreshMethod : public EnumBase<ServerPoolRefreshMethod>
{
public:
    enum Type
    {
        None,
        Fixed,
        Sentinel
    };
    static const TypeName sPairs[3];
    ServerPoolRefreshMethod(Type t = None):
        EnumBase<ServerPoolRefreshMethod>(t)
    {
    }
};

#endif

@@ -31,12 +31,9 @@ bool EpollMultiplexor::addSocket(Socket* s, int evts)
    event.events |= (evts & ReadEvent) ? EPOLLIN : 0;
    event.events |= (evts & WriteEvent) ? EPOLLOUT : 0;
    event.events |= EPOLLET;
    //event.events |= EPOLLONESHOT;
    event.data.ptr = s;
    s->setEvent(evts);
    int ret = epoll_ctl(mFd, EPOLL_CTL_ADD, s->fd(), &event);
    if (ret == 0) {
        s->setEvent(evts);
    }
    return ret == 0;
}

@@ -61,7 +58,6 @@ bool EpollMultiplexor::addEvent(Socket* s, int evts)
    }
    if ((s->getEvent() | evts) != s->getEvent()) {
        event.events |= EPOLLET;
        //event.events |= EPOLLONESHOT;
        int ret = epoll_ctl(mFd, EPOLL_CTL_MOD, s->fd(), &event);
        if (ret == 0) {
            s->setEvent(s->getEvent() | evts);
src/Handler.cpp (267 changes)

@@ -62,6 +62,13 @@ void Handler::run()
        }
        refreshServerPool();
        checkConnectionPool();
        timeout = mProxy->serverPool()->serverTimeout();
        if (timeout > 0) {
            int num = checkServerTimeout(timeout);
            if (num > 0) {
                postEvent();
            }
        }
        if (mStatsVer < mProxy->statsVer()) {
            resetStats();
        }

@@ -103,6 +110,29 @@ void Handler::checkConnectionPool()
    }
}

int Handler::checkServerTimeout(long timeout)
{
    int num = 0;
    auto now = Util::elapsedUSec();
    auto n = mWaitConnectConns.front();
    while (n) {
        auto s = n;
        n = mWaitConnectConns.next(n);
        if (auto req = s->frontRequest()) {
            long elapsed = now - req->createTime();
            if (elapsed >= timeout) {
                s->setStatus(Connection::TimeoutError);
                addPostEvent(s, Multiplexor::ErrorEvent);
                mWaitConnectConns.remove(s);
                ++num;
            }
        } else {
            mWaitConnectConns.remove(s);
        }
    }
    return num;
}
void Handler::handleEvent(Socket* s, int evts)
{
    FuncCallTimer();

@@ -166,11 +196,14 @@ void Handler::postAcceptConnectionEvent()
            bool ret;
            if (finished) {
                ret = mEventLoop->delEvent(c, Multiplexor::WriteEvent);
                if (c->isCloseASAP()) {
                    c->setStatus(AcceptConnection::None);
                }
            } else {
                ret = mEventLoop->addEvent(c, Multiplexor::WriteEvent);
            }
            if (!ret) {
                c->setStatus(Multiplexor::ErrorEvent);
                c->setStatus(AcceptConnection::IOError);
            }
        }
    }

@@ -180,15 +213,8 @@ void Handler::postAcceptConnectionEvent()
        mEventLoop->delSocket(c);
        if (auto s = c->connectConnection()) {
            auto cp = mConnPool[s->server()->id()];
            if (c->inTransaction()) {
                cp->putTransactionConnection(s, c->inPendWatch(), c->inPendMulti());
            } else if (c->inSub(true)) {
                cp->putPrivateConnection(s);
                s->setStatus(Connection::LogicError);
                addPostEvent(s, Multiplexor::ErrorEvent);
            } else {
                cp->putPrivateConnection(s);
            }
            s->setStatus(Connection::LogicError);
            addPostEvent(s, Multiplexor::ErrorEvent);
            c->detachConnectConnection();
            s->detachAcceptConnection();
        }

@@ -219,6 +245,10 @@ void Handler::postConnectConnectionEvent()
            }
            if (!ret) {
                s->setStatus(Multiplexor::ErrorEvent);
            } else {
                if (s->isShared() && !mWaitConnectConns.exist(s)) {
                    mWaitConnectConns.push_back(s);
                }
            }
        }
    }

@@ -245,6 +275,9 @@ void Handler::postConnectConnectionEvent()
                s->status(), s->statusStr());
        mEventLoop->delSocket(s);
        s->close(this);
        if (!s->isShared()) {
            mConnPool[s->server()->id()]->putPrivateConnection(s);
        }
        if (c) {
            addPostEvent(c, Multiplexor::ErrorEvent);
            s->detachAcceptConnection();

@@ -284,7 +317,6 @@ void Handler::addAcceptSocket(int fd, sockaddr* addr, socklen_t len)
    AcceptConnection* c = nullptr;
    try {
        c = AcceptConnectionAlloc::create(fd, addr, len);
        logNotice("h %d accept c %s %d", id(), c->peer(), fd);
    } catch (ExceptionBase& e) {
        logWarn("h %d create connection for client %d fail %s",
                id(), fd, e.what());

@@ -337,6 +369,8 @@
        logWarn("h %d destroy c %s %d with add to event loop fail:%s",
                id(), c->peer(), c->fd(), StrError());
        AcceptConnectionAlloc::destroy(c);
    } else {
        logNotice("h %d accept c %s %d assign to h %d", id(), c->peer(), fd, dst->id());
    }
}

@@ -387,6 +421,8 @@ void Handler::handleConnectConnectionEvent(ConnectConnection* s, int evts)
    if (s->good() && (evts & Multiplexor::WriteEvent)) {
        if (s->isConnecting()) {
            s->setConnected();
            logDebug("h %d s %s %d connected",
                     id(), s->peer(), s->fd());
        }
        addPostEvent(s, Multiplexor::WriteEvent);
    }
@@ -466,7 +502,7 @@ void Handler::handleRequest(Request* req)
{
    FuncCallTimer();
    auto c = req->connection();
    if (c && c->isBlockRequest()) {
    if (c && (c->isBlockRequest() || c->isCloseASAP())) {
        return;
    }
    ++mStats.requests;

@@ -533,7 +569,13 @@ bool Handler::preHandleRequest(Request* req, const String& key)
            directResponse(req, Response::Pong);
        } else {
            ResponsePtr res = ResponseAlloc::create();
            res->setStr(key.data(), key.length());
            if (req->isInline()) {
                SString<Const::MaxKeyLen> k;
                RequestParser::decodeInlineArg(k, key);
                res->setStr(k.data(), k.length());
            } else {
                res->setStr(key.data(), key.length());
            }
            handleResponse(nullptr, req, res);
        }
        return true;

@@ -550,6 +592,12 @@ bool Handler::preHandleRequest(Request* req, const String& key)
            directResponse(req, Response::InvalidDb);
        }
        return true;
    case Command::Quit:
        directResponse(req, Response::Ok);
        if (c) {
            c->closeASAP();
        }
        return true;
    case Command::Cmd:
        directResponse(req, Response::Cmd);
        return true;

@@ -644,6 +692,10 @@ void Handler::postHandleRequest(Request* req, ConnectConnection* s)
        case Command::Blpop:
        case Command::Brpop:
        case Command::Brpoplpush:
            c->setBlockRequest(true);
            c->attachConnectConnection(s);
            s->attachAcceptConnection(c);
            break;
        case Command::Unwatch:
        case Command::Exec:
        case Command::Discard:

@@ -710,11 +762,11 @@ void Handler::directResponse(Request* req, Response::GenericCode code, ConnectCo
                    id(), c->peer(), c->fd(), req->id(), code, excp.what());
            }
        } else {
            logInfo("h %d ignore req %ld res code %d c %s %d status %d %s",
            logDebug("h %d ignore req %ld res code %d c %s %d status %d %s",
                    id(), req->id(), code, c->peer(), c->fd(), c->status(), c->statusStr());
        }
    } else {
        logInfo("h %d ignore req %ld res code %d without accept connection",
        logDebug("h %d ignore req %ld res code %d without accept connection",
                id(), req->id(), code);
    }
}

@@ -741,7 +793,7 @@ void Handler::handleResponse(ConnectConnection* s, Request* req, Response* res)
    auto sp = mProxy->serverPool();
    AcceptConnection* c = req->connection();
    if (!c) {
        logInfo("h %d ignore req %ld res %ld", id(), req->id(), res->id());
        logDebug("h %d ignore req %ld res %ld", id(), req->id(), res->id());
        return;
    } else if (!c->good()) {
        logWarn("h %d ignore req %ld res %ld for c %s %d with status %d %s",

@@ -854,11 +906,11 @@
    default:
        break;
    }
    if (auto cs = c->connectConnection()) {
    if (s && !s->isShared()) {
        if (!c->inTransaction() && !c->inSub(true)) {
            mConnPool[cs->server()->id()]->putPrivateConnection(cs);
            mConnPool[s->server()->id()]->putPrivateConnection(s);
            c->detachConnectConnection();
            cs->detachAcceptConnection();
            s->detachAcceptConnection();
        }
    }
}
@@ -876,98 +928,112 @@ void Handler::infoRequest(Request* req, const String& key)
        infoServerLatencyRequest(req);
        return;
    }
    bool all = key.equal("All", true);
    bool empty = key.empty();
    ResponsePtr res = ResponseAlloc::create();
    res->setType(Reply::String);
    Segment& body = res->body();
    BufferPtr buf = body.fset(nullptr, "# Proxy\n");
    buf = buf->fappend("Version:%s\n", _PREDIXY_VERSION_);
    buf = buf->fappend("Name:%s\n", mProxy->conf()->name());
    buf = buf->fappend("Bind:%s\n", mProxy->conf()->bind());
    buf = buf->fappend("RedisMode:proxy\n");
    Buffer* buf = body.fset(nullptr, "");

#define Scope(all, empty, header) ((all || empty || key.equal(header, true)) ? \
    (buf = buf->fappend("# %s\n", header)) : nullptr)

    if (all || empty || key.equal("Proxy", true) || key.equal("Server", true)) {
        buf = buf->fappend("# %s\n", "Proxy");
        buf = buf->fappend("Version:%s\n", _PREDIXY_VERSION_);
        buf = buf->fappend("Name:%s\n", mProxy->conf()->name());
        buf = buf->fappend("Bind:%s\n", mProxy->conf()->bind());
        buf = buf->fappend("RedisMode:proxy\n");
#ifdef _PREDIXY_SINGLE_THREAD_
    buf = buf->fappend("SingleThread:true\n");
        buf = buf->fappend("SingleThread:true\n");
#else
    buf = buf->fappend("SingleThread:false\n");
        buf = buf->fappend("SingleThread:false\n");
#endif
    buf = buf->fappend("WorkerThreads:%d\n", mProxy->conf()->workerThreads());
    SString<32> timeStr;
    timeStr.strftime("%Y-%m-%d %H:%M:%S", mProxy->startTime());
    buf = buf->fappend("UptimeSince:%s\n", timeStr.data());
    buf = buf->fappend("\n");

    buf = buf->fappend("# SystemResource\n");
    buf = buf->fappend("UsedMemory:%ld\n", AllocBase::getUsedMemory());
    buf = buf->fappend("MaxMemory:%ld\n", AllocBase::getMaxMemory());
    struct rusage ru;
    int ret = getrusage(RUSAGE_SELF, &ru);
    if (ret == 0) {
        buf = buf->fappend("MaxRSS:%ld\n", ru.ru_maxrss<<10);
        buf = buf->fappend("UsedCpuSys:%d.%d\n", ru.ru_stime.tv_sec, ru.ru_stime.tv_usec / 1000);
        buf = buf->fappend("UsedCpuUser:%d.%d\n", ru.ru_utime.tv_sec, ru.ru_utime.tv_usec / 1000);
    } else {
        logError("h %d getrusage fail %s", id(), StrError());
    }
    buf = buf->fappend("\n");

    buf = buf->fappend("# Stats\n");
    HandlerStats st(mStats);
    for (auto h : mProxy->handlers()) {
        if (h == this) {
            continue;
        }
        st += h->mStats;
    }
    buf = buf->fappend("Accept:%ld\n", st.accept);
    buf = buf->fappend("ClientConnections:%ld\n", st.clientConnections);
    buf = buf->fappend("TotalRequests:%ld\n", st.requests);
    buf = buf->fappend("TotalResponses:%ld\n", st.responses);
    buf = buf->fappend("TotalRecvClientBytes:%ld\n", st.recvClientBytes);
    buf = buf->fappend("TotalSendServerBytes:%ld\n", st.sendServerBytes);
    buf = buf->fappend("TotalRecvServerBytes:%ld\n", st.recvServerBytes);
    buf = buf->fappend("TotalSendClientBytes:%ld\n", st.sendClientBytes);
    buf = buf->fappend("\n");

    buf = buf->fappend("# Servers\n");
    int servCursor = 0;
    auto sp = mProxy->serverPool();
    while (Server* serv = sp->iter(servCursor)) {
        ServerStats st;
        for (auto h : mProxy->handlers()) {
            if (auto cp = h->getConnectConnectionPool(serv->id())) {
                st += cp->stats();
            }
        }
        buf->fappend("Server:%s\n", serv->addr().data());
        buf->fappend("Role:%s\n", serv->roleStr());
        auto g = serv->group();
        buf->fappend("Group:%s\n", g ? g->name().data() : "");
        buf->fappend("DC:%s\n", serv->dcName().data());
        buf->fappend("CurrentIsFail:%d\n", (int)serv->fail());
        buf->fappend("Connections:%d\n", st.connections);
        buf->fappend("Connect:%ld\n", st.connect);
        buf->fappend("Requests:%ld\n", st.requests);
        buf->fappend("Responses:%ld\n", st.responses);
        buf->fappend("SendBytes:%ld\n", st.sendBytes);
        buf->fappend("RecvBytes:%ld\n", st.recvBytes);
        buf = buf->fappend("WorkerThreads:%d\n", mProxy->conf()->workerThreads());
        buf = buf->fappend("Uptime:%ld\n", (long)mProxy->startTime());
        SString<32> timeStr;
        timeStr.strftime("%Y-%m-%d %H:%M:%S", mProxy->startTime());
        buf = buf->fappend("UptimeSince:%s\n", timeStr.data());
        buf = buf->fappend("\n");
    }
    buf = buf->fappend("\n");

    buf = buf->fappend("# LatencyMonitor\n");
    LatencyMonitor lm;
    for (size_t i = 0; i < mLatencyMonitors.size(); ++i) {
        lm = mLatencyMonitors[i];
    if (Scope(all, empty, "SystemResource")) {
||||
buf = buf->fappend("UsedMemory:%ld\n", AllocBase::getUsedMemory());
|
||||
buf = buf->fappend("MaxMemory:%ld\n", AllocBase::getMaxMemory());
|
||||
struct rusage ru;
|
||||
int ret = getrusage(RUSAGE_SELF, &ru);
|
||||
if (ret == 0) {
|
||||
buf = buf->fappend("MaxRSS:%ld\n", ru.ru_maxrss<<10);
|
||||
buf = buf->fappend("UsedCpuSys:%.3f\n", (double)ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1000000.);
|
||||
buf = buf->fappend("UsedCpuUser:%.3f\n", (double)ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1000000.);
|
||||
} else {
|
||||
logError("h %d getrusage fail %s", id(), StrError());
|
||||
}
|
||||
buf = buf->fappend("\n");
|
||||
}
|
||||
|
||||
if (Scope(all, empty, "Stats")) {
|
||||
HandlerStats st(mStats);
|
||||
for (auto h : mProxy->handlers()) {
|
||||
if (h == this) {
|
||||
continue;
|
||||
}
|
||||
lm += h->mLatencyMonitors[i];
|
||||
st += h->mStats;
|
||||
}
|
||||
buf = buf->fappend("LatencyMonitorName:%s\n", lm.name().data());
|
||||
buf = lm.output(buf);
|
||||
buf = buf->fappend("Accept:%ld\n", st.accept);
|
||||
buf = buf->fappend("ClientConnections:%ld\n", st.clientConnections);
|
||||
buf = buf->fappend("TotalRequests:%ld\n", st.requests);
|
||||
buf = buf->fappend("TotalResponses:%ld\n", st.responses);
|
||||
buf = buf->fappend("TotalRecvClientBytes:%ld\n", st.recvClientBytes);
|
||||
buf = buf->fappend("TotalSendServerBytes:%ld\n", st.sendServerBytes);
|
||||
buf = buf->fappend("TotalRecvServerBytes:%ld\n", st.recvServerBytes);
|
||||
buf = buf->fappend("TotalSendClientBytes:%ld\n", st.sendClientBytes);
|
||||
buf = buf->fappend("\n");
|
||||
}
|
||||
|
||||
if (Scope(all, empty, "Servers")) {
|
||||
int servCursor = 0;
|
||||
auto sp = mProxy->serverPool();
|
||||
while (Server* serv = sp->iter(servCursor)) {
|
||||
ServerStats st;
|
||||
for (auto h : mProxy->handlers()) {
|
||||
if (auto cp = h->getConnectConnectionPool(serv->id())) {
|
||||
st += cp->stats();
|
||||
}
|
||||
}
|
||||
buf = buf->fappend("Server:%s\n", serv->addr().data());
|
||||
buf = buf->fappend("Role:%s\n", serv->roleStr());
|
||||
auto g = serv->group();
|
||||
buf = buf->fappend("Group:%s\n", g ? g->name().data() : "");
|
||||
buf = buf->fappend("DC:%s\n", serv->dcName().data());
|
||||
buf = buf->fappend("CurrentIsFail:%d\n", (int)serv->fail());
|
||||
buf = buf->fappend("Connections:%d\n", st.connections);
|
||||
buf = buf->fappend("Connect:%ld\n", st.connect);
|
||||
buf = buf->fappend("Requests:%ld\n", st.requests);
|
||||
buf = buf->fappend("Responses:%ld\n", st.responses);
|
||||
buf = buf->fappend("SendBytes:%ld\n", st.sendBytes);
|
||||
buf = buf->fappend("RecvBytes:%ld\n", st.recvBytes);
|
||||
buf = buf->fappend("\n");
|
||||
}
|
||||
buf = buf->fappend("\n");
|
||||
}
|
||||
|
||||
if (Scope(all, empty, "LatencyMonitor")) {
|
||||
LatencyMonitor lm;
|
||||
for (size_t i = 0; i < mLatencyMonitors.size(); ++i) {
|
||||
lm = mLatencyMonitors[i];
|
||||
for (auto h : mProxy->handlers()) {
|
||||
if (h == this) {
|
||||
continue;
|
||||
}
|
||||
lm += h->mLatencyMonitors[i];
|
||||
}
|
||||
buf = buf->fappend("LatencyMonitorName:%s\n", lm.name().data());
|
||||
buf = lm.output(buf);
|
||||
buf = buf->fappend("\n");
|
||||
}
|
||||
}
|
||||
|
||||
buf = buf->fappend("\r\n");
|
||||
body.end().buf = buf;
|
||||
body.end().pos = buf->length();
|
||||
@@ -1170,6 +1236,11 @@ void Handler::configGetRequest(Request* req)
}

do {
Append("Name", "%s", conf->name());
Append("Bind", "%s", conf->bind());
Append("WorkerThreads", "%d", conf->workerThreads());
Append("BufSize", "%d", Buffer::getSize() + sizeof(Buffer));
Append("LocalDC", "%s", conf->localDC().c_str());
Append("MaxMemory", "%ld", AllocBase::getMaxMemory());
Append("ClientTimeout", "%d", conf->clientTimeout() / 1000000);
Append("AllowMissLog", "%s", log->allowMissLog() ? "true" : "false");
@@ -1288,7 +1359,7 @@ void Handler::configSetRequest(Request* req)

void Handler::innerResponse(ConnectConnection* s, Request* req, Response* res)
{
logInfo("h %d s %s %d inner req %ld %s res %ld %s",
logDebug("h %d s %s %d inner req %ld %s res %ld %s",
id(), (s ? s->peer() : "None"), (s ? s->fd() : -1),
req->id(), req->cmd(),
res->id(), res->typeStr());
@@ -1402,10 +1473,14 @@ bool Handler::permission(Request* req, const String& key, Response::GenericCode&
return true;
}
if (req->type() == Command::Auth) {
SString<Const::MaxKeyLen> pw;
if (req->isInline()) {
RequestParser::decodeInlineArg(pw, key);
}
auto m = mProxy->authority();
if (!m->hasAuth()) {
code = Response::NoPasswordSet;
} else if (auto auth = m->get(key)) {
} else if (auto auth = m->get(req->isInline() ? pw : key)) {
c->setAuth(auth);
code = Response::Ok;
} else {
@@ -96,6 +96,7 @@ private:
void refreshServerPool();
void checkConnectionPool();
int checkClientTimeout(long timeout);
int checkServerTimeout(long timeout);
void innerResponse(ConnectConnection* c, Request* req, Response* res);
void infoRequest(Request* req, const String& key);
void infoLatencyRequest(Request* req);

@@ -67,6 +67,5 @@ int KqueueMultiplexor::wait(long usec, T* handler)

typedef KqueueMultiplexor Multiplexor;
#define _MULTIPLEXOR_ASYNC_ASSIGN_

#endif

@@ -98,7 +98,7 @@ public:
Buffer* output(Buffer* buf) const;
private:
String mName;
const std::bitset<Command::Sentinel>* mCmds;
const std::bitset<Command::AvailableCommands>* mCmds;
std::vector<TimeSpan> mTimeSpan;
TimeSpan mLast;
};

@@ -118,7 +118,9 @@ bool LogFileSink::reopen(time_t t)
}
if (mFileSuffixFmt) {
unlink(mFileName.c_str());
if (symlink(mFilePath, mFileName.c_str()) == -1) {
const char* name = strrchr(mFilePath, '/');
name = name ? name + 1 : mFilePath;
if (symlink(name, mFileName.c_str()) == -1) {
fprintf(stderr, "create symbol link for %s fail", mFileName.c_str());
}
}
@@ -135,9 +137,9 @@ bool LogFileSink::setFile(const char* path, int rotateSecs, long rotateBytes)
if (len > 4 && strcasecmp(path + len - 4, ".log") == 0) {
len -= 4;
}
if (len + FileSuffixReserveLen >= MaxPathLen) {
return false;
}
}
if (len + FileSuffixReserveLen >= MaxPathLen) {
return false;
}
mFileName = path;
mFilePathLen = len;
@@ -1,5 +1,5 @@
CXX ?= g++
LVL ?= -O3
LVL ?= -g -O3
Opts += $(LVL)

ifeq ($(MT), false)
@@ -72,6 +72,7 @@ objs = \
Buffer.o \
Command.o \
Distribution.o \
Enums.o \
Reply.o \
ConfParser.o \
Conf.o \
@@ -87,7 +88,7 @@ objs = \
ServerPool.o \
ClusterNodesParser.o \
ClusterServerPool.o \
SentinelServerPool.o \
StandaloneServerPool.o \
ConnectConnectionPool.o \
Handler.o \
Proxy.o \

@@ -26,10 +26,10 @@ static bool Stop = false;

static void abortHandler(int sig)
{
Abort = true;
if (sig == SIGABRT) {
traceInfo();
if (!Abort) {
traceInfo(sig);
}
Abort = true;
if (!Running) {
abort();
}
@@ -65,6 +65,7 @@ Proxy::~Proxy()

bool Proxy::init(int argc, char* argv[])
{
signal(SIGHUP, SIG_IGN);
signal(SIGPIPE, SIG_IGN);
signal(SIGFPE, abortHandler);
signal(SIGILL, abortHandler);
@@ -72,7 +73,6 @@ bool Proxy::init(int argc, char* argv[])
signal(SIGABRT, abortHandler);
signal(SIGBUS, abortHandler);
signal(SIGQUIT, abortHandler);
signal(SIGHUP, abortHandler);
signal(SIGINT, stopHandler);
signal(SIGTERM, stopHandler);

@@ -118,10 +118,10 @@ bool Proxy::init(int argc, char* argv[])
mServPool = p;
}
break;
case ServerPool::Sentinel:
case ServerPool::Standalone:
{
SentinelServerPool* p = new SentinelServerPool(this);
p->init(mConf->sentinelServerPool());
StandaloneServerPool* p = new StandaloneServerPool(this);
p->init(mConf->standaloneServerPool());
mServPool = p;
}
break;

@@ -13,7 +13,7 @@
#include "DC.h"
#include "ServerPool.h"
#include "ClusterServerPool.h"
#include "SentinelServerPool.h"
#include "StandaloneServerPool.h"
#include "LatencyMonitor.h"

class Proxy
@@ -51,15 +51,15 @@ public:
}
bool isSplitMultiKey() const
{
return mConf->sentinelServerPool().groups.size() != 1;
return mConf->standaloneServerPool().groups.size() != 1;
}
bool supportTransaction() const
{
return mConf->sentinelServerPool().groups.size() == 1;
return mConf->standaloneServerPool().groups.size() == 1;
}
bool supportSubscribe() const
{
return mConf->sentinelServerPool().groups.size() == 1 ||
return mConf->standaloneServerPool().groups.size() == 1 ||
mConf->clusterServerPool().servers.size() > 0;
}
const std::vector<Handler*>& handlers() const
@@ -31,7 +31,9 @@ static const GenericRequest GenericRequestDefs[] = {
{Request::DelHead, Command::Del, "*2\r\n$3\r\ndel\r\n"},
{Request::UnlinkHead, Command::Unlink, "*2\r\n$6\r\nunlink\r\n"},
{Request::PsubscribeHead,Command::Psubscribe, "*2\r\n$10\r\npsubscribe\r\n"},
{Request::SubscribeHead,Command::Subscribe, "*2\r\n$9\r\nsubscribe\r\n"}
{Request::SubscribeHead,Command::Subscribe, "*2\r\n$9\r\nsubscribe\r\n"},
{Request::PunsubscribeHead,Command::Punsubscribe, "*2\r\n$12\r\npunsubscribe\r\n"},
{Request::UnsubscribeHead,Command::Unsubscribe, "*2\r\n$11\r\nunsubscribe\r\n"}
};

thread_local static Request* GenericRequests[Request::CodeSentinel];
@@ -55,6 +57,7 @@ Request::Request():
mType(Command::None),
mDone(false),
mDelivered(false),
mInline(false),
mFollowers(0),
mFollowersDone(0),
mRedirectCnt(0),
@@ -68,6 +71,7 @@ Request::Request(AcceptConnection* c):
mType(Command::None),
mDone(false),
mDelivered(false),
mInline(false),
mFollowers(0),
mFollowersDone(0),
mRedirectCnt(0),
@@ -80,6 +84,7 @@ Request::Request(GenericCode code):
mConn(nullptr),
mDone(false),
mDelivered(false),
mInline(false),
mFollowers(0),
mFollowersDone(0),
mRedirectCnt(0),
@@ -93,10 +98,12 @@ Request::Request(GenericCode code):

Request::~Request()
{
clear();
}

void Request::clear()
{
mConn = nullptr;
mRes = nullptr;
mHead.clear();
mReq.clear();
@@ -137,24 +144,33 @@ void Request::set(const RequestParser& p, Request* leader)
case Command::Subscribe:
r = GenericRequests[SubscribeHead];
break;
case Command::Punsubscribe:
r = GenericRequests[PunsubscribeHead];
break;
case Command::Unsubscribe:
r = GenericRequests[UnsubscribeHead];
break;
default:
//should never reach
abort();
break;
}
mHead = r->mReq;
mReq = p.request();
mLeader = leader;
if (leader == this) {
if (mType == Command::Mset || mType == Command::Msetnx) {
mFollowers = (p.argNum() - 1) >> 1;
} else {
mFollowers = p.argNum() - 1;
}
} else {
mLeader = leader;
}
} else {
mReq = p.request();
}
mKey = p.key();
mInline = p.isInline();
}

void Request::setAuth(const String& password)
@@ -274,10 +290,11 @@ void Request::adjustScanCursor(long cursor)

void Request::follow(Request* leader)
{
++mFollowers;
leader->mFollowers += 1;
if (leader == this) {
return;
}
mConn = leader->mConn;
mType = leader->mType;
mHead = leader->mHead;
mReq = leader->mReq;
@@ -325,40 +342,49 @@ int Request::fill(IOVec* vecs, int len)
void Request::setResponse(Response* res)
{
mDone = true;
if (mLeader) {
mLeader->mFollowersDone += 1;
if (Request* ld = leader()) {
ld->mFollowersDone += 1;
switch (mType) {
case Command::Mget:
mRes = res;
break;
case Command::Mset:
if (Response* leaderRes = mLeader->getResponse()) {
if (Response* leaderRes = ld->getResponse()) {
if (res->isError() && !leaderRes->isError()) {
mLeader->mRes = res;
ld->mRes = res;
}
} else {
mLeader->mRes = res;
ld->mRes = res;
}
break;
case Command::Msetnx:
if (Response* leaderRes = ld->getResponse()) {
if (!leaderRes->isError() &&
(res->isError() || res->integer() == 0)) {
ld->mRes = res;
}
} else {
ld->mRes = res;
}
break;
case Command::Touch:
case Command::Exists:
case Command::Del:
case Command::Unlink:
if (!mLeader->mRes) {
mLeader->mRes = res;
if (!ld->mRes) {
ld->mRes = res;
}
if (mLeader->isDone()) {
mLeader->mRes->set(mLeader->mRes->integer());
if (ld->isDone()) {
ld->mRes->set(ld->mRes->integer());
}
break;
case Command::ScriptLoad:
if (Response* leaderRes = mLeader->getResponse()) {
if (Response* leaderRes = ld->getResponse()) {
if (leaderRes->isString() && !res->isString()) {
mLeader->mRes = res;
ld->mRes = res;
}
} else {
mLeader->mRes = res;
ld->mRes = res;
}
break;
default:
@@ -373,11 +399,13 @@ void Request::setResponse(Response* res)

bool Request::isDone() const
{
if (mLeader == this) {
if (isLeader()) {
switch (mType) {
case Command::Mget:
case Command::Psubscribe:
case Command::Subscribe:
case Command::Punsubscribe:
case Command::Unsubscribe:
return mDone;
default:
break;
@@ -25,6 +25,7 @@ class Request :
public:
typedef Request Value;
typedef ListNode<Request, SharePtr<Request>, RequestListIndex::Size> ListNodeType;
typedef Alloc<Request, Const::RequestAllocCacheSize> Allocator;
static const int MaxRedirectLimit = 3;
enum GenericCode
{
@@ -44,6 +45,8 @@ public:
UnlinkHead,
PsubscribeHead,
SubscribeHead,
PunsubscribeHead,
UnsubscribeHead,

CodeSentinel
};
@@ -117,16 +120,20 @@ public:
}
Request* leader() const
{
return mLeader;
return isLeader() ? const_cast<Request*>(this) : (Request*)mLeader;
}
bool isLeader() const
{
return mLeader == this;
return mFollowers > 0;
}
bool isDelivered() const
{
return mDelivered;
}
bool isInline() const
{
return mInline;
}
void setDelivered()
{
mDelivered = true;
@@ -161,6 +168,7 @@ private:
ResponsePtr mRes;
bool mDone;
bool mDelivered;
bool mInline;
Segment mHead; //for multi key command mget/mset/del...
Segment mReq;
Segment mKey;
@@ -174,6 +182,6 @@ private:

typedef List<Request, RequestListIndex::Recv> RecvRequestList;
typedef List<Request, RequestListIndex::Send> SendRequestList;
typedef Alloc<Request, Const::RequestAllocCacheSize> RequestAlloc;
typedef Request::Allocator RequestAlloc;

#endif
@@ -25,6 +25,8 @@ void RequestParser::reset()
mStatus = Normal;
mState = Idle;
mInline = false;
mEscape = false;
mQuote = '\0';
mArgNum = 0;
mArgCnt = 0;
mArgLen = 0;
@@ -43,6 +45,8 @@ inline bool RequestParser::isKey(bool split) const
return mArgCnt > 0;
case Command::MultiKeyVal:
return split ? (mArgCnt & 1) : mArgCnt == 1;
case Command::KeyAt2:
return mArgCnt == 2;
case Command::KeyAt3:
return mArgCnt == 3;
default:
@@ -53,10 +57,12 @@ inline bool RequestParser::isKey(bool split) const

inline bool RequestParser::isSplit(bool split) const
{
if (mCommand->mode & (Command::MultiKey|Command::MultiKeyVal)) {
return split && mStatus == Normal && isKey(true);
if (mCommand->mode & Command::MultiKey) {
return split && mStatus == Normal && mArgNum > 2 && isKey(true);
} else if (mCommand->mode & Command::MultiKeyVal) {
return split && mStatus == Normal && mArgNum > 3 && isKey(true);
} else if (mCommand->mode & Command::SMultiKey) {
return mStatus == Normal;
return mStatus == Normal && mArgNum > 2;
}
return false;
}
@@ -88,9 +94,11 @@ RequestParser::Status RequestParser::parse(Buffer* buf, int& pos, bool split)
mState = InlineBegin;
mInline = true;
}
/* NO break */
case InlineBegin:
if (ch == '\r') {
error = __LINE__;
if (ch == '\n') {
mState = Idle;
//error = __LINE__;
} else if (!isspace(ch)) {
mReq.begin().buf = buf;
mReq.begin().pos = pos;
@@ -103,7 +111,13 @@ RequestParser::Status RequestParser::parse(Buffer* buf, int& pos, bool split)
if (isspace(ch)) {
mCmd[mArgLen < Const::MaxCmdLen ? mArgLen : Const::MaxCmdLen - 1] = '\0';
parseCmd();
mState = ch == '\r' ? InlineLF : InlineArg;
mArgCnt = 1;
if (ch == '\n') {
mArgNum = 1;
mState = Finished;
goto Done;
}
mState = InlineArgBegin;
} else {
if (mArgLen < Const::MaxCmdLen) {
mCmd[mArgLen] = tolower(ch);
@@ -111,15 +125,64 @@ RequestParser::Status RequestParser::parse(Buffer* buf, int& pos, bool split)
++mArgLen;
}
break;
case InlineArg:
if (ch == '\r') {
mState = InlineLF;
}
break;
case InlineLF:
case InlineArgBegin:
if (ch == '\n') {
mArgNum = mArgCnt;
mState = Finished;
goto Done;
} else if (isspace(ch)) {
break;
}
if (mArgCnt == 1) {
mKey.begin().buf = buf;
mKey.begin().pos = pos;
}
mState = InlineArg;
/* NO break */
case InlineArg:
if (mEscape) {
mEscape = false;
} else if (mQuote) {
if (ch == mQuote) {
mState = InlineArgEnd;
} else if (ch == '\\') {
mEscape = true;
} else if (ch == '\n') {
error = __LINE__;
}
} else {
if (isspace(ch)) {
if (mArgCnt == 1) {
mKey.end().buf = buf;
mKey.end().pos = pos;
}
++mArgCnt;
if (ch == '\n') {
mArgNum = mArgCnt;
mState = Finished;
goto Done;
} else {
mState = InlineArgBegin;
}
} else if (ch == '\'' || ch == '"') {
mQuote = ch;
}
}
break;
case InlineArgEnd:
if (isspace(ch)) {
if (mArgCnt == 1) {
mKey.end().buf = buf;
mKey.end().pos = pos;
}
++mArgCnt;
if (ch == '\n') {
mArgNum = mArgCnt;
mState = Finished;
goto Done;
} else {
mState = InlineArgBegin;
}
} else {
error = __LINE__;
}
@@ -401,7 +464,14 @@ void RequestParser::parseCmd()
mType = c->type;
mCommand = c;
if (mInline) {
if (mType != Command::Ping) {
switch (mType) {
case Command::Ping:
case Command::Echo:
case Command::Auth:
case Command::Select:
case Command::Quit:
break;
default:
mStatus = CmdError;
logNotice("unsupported command %s in inline command protocol", c->name);
return;
@@ -19,8 +19,9 @@ public:
Idle, // * or inline command
InlineBegin,
InlineCmd,
InlineLF,
InlineArgBegin,
InlineArg,
InlineArgEnd,
ArgNum, // 2
ArgNumLF, // \r\n
CmdTag,
@@ -34,16 +35,16 @@ public:
KeyLenLF,
KeyBody,
KeyBodyLF,
ArgTag, // $ $
ArgLen, // 3 5
ArgLenLF, // \r\n \r\n
ArgBody, // get hello
ArgBodyLF, // \r\n \r\n
SArgTag, // $ $
SArgLen, // 3 5
SArgLenLF, // \r\n \r\n
SArgBody, // get hello
SArgBodyLF, // \r\n \r\n
ArgTag,
ArgLen,
ArgLenLF,
ArgBody,
ArgBodyLF,
SArgTag,
SArgLen,
SArgLenLF,
SArgBody,
SArgBodyLF,
Finished,

Error
@@ -63,6 +64,8 @@ public:
~RequestParser();
Status parse(Buffer* buf, int& pos, bool split);
void reset();
template<int Size>
static bool decodeInlineArg(SString<Size>& dst, const String& src);
bool isIdle() const
{
return mState == Idle;
@@ -116,6 +119,8 @@ private:
Status mStatus;
State mState;
bool mInline;
bool mEscape;
char mQuote;
int mArgNum;
int mArgCnt;
int mArgLen;
@@ -123,4 +128,48 @@ private:
int mByteCnt;
};

template<int Size>
bool RequestParser::decodeInlineArg(SString<Size>& dst, const String& src)
{
bool ret = true;
bool escape = false;
char quote = '\0';
const char* p = src.data();
for (int i = 0; i < src.length(); ++i, ++p) {
char c = *p;
if (escape) {
if (quote == '"') {
switch (c) {
case 'n': c = '\n'; break;
case 'r': c = '\r'; break;
case 't': c = '\t'; break;
case 'b': c = '\b'; break;
case 'a': c = '\a'; break;
default: break;
}
} else {
if (!dst.append('\\')) {
ret = false;
}
}
escape = false;
} else if (quote) {
if (c == '\\') {
escape = true;
continue;
} else if (c == quote) {
quote = '\0';
continue;
}
} else if (c == '"' || c == '\'') {
quote = c;
continue;
}
if (!dst.append(c)) {
ret = false;
}
}
return ret;
}

#endif

@@ -18,6 +18,7 @@ class Response :
public:
typedef Response Value;
typedef ListNode<Response, SharePtr<Response>> ListNodeType;
typedef Alloc<Response, Const::ResponseAllocCacheSize> Allocator;
enum GenericCode
{
Pong,
@@ -137,6 +138,6 @@ private:
};

typedef List<Response> ResponseList;
typedef Alloc<Response, Const::ResponseAllocCacheSize> ResponseAlloc;
typedef Response::Allocator ResponseAlloc;

#endif
@@ -35,7 +35,7 @@ void SentinelServerPool::init(const SentinelServerPoolConf& conf)
for (auto& sc : conf.sentinels) {
Server* s = new Server(this, sc.addr, true);
s->setRole(Server::Sentinel);
s->setPassword(sc.password.empty() ? conf.password : sc.password);
s->setPassword(sc.password.empty() ? conf.sentinelPassword : sc.password);
mSentinels[i++] = s;
mServs[s->addr()] = s;
}
@@ -275,6 +275,18 @@ AddrParser::Status AddrParser::parse(SString<Const::MaxAddrLen>& addr)
return mState != Invalid ? Done : Error;
}

static bool hasValidPort(const String& addr)
{
const char* p = addr.data() + addr.length();
for (int i = 0; i < addr.length(); ++i) {
if (*(--p) == ':') {
int port = atoi(p + 1);
return port > 0 && port < 65536;
}
}
return false;
}

void SentinelServerPool::handleSentinels(Handler* h, ConnectConnection* s, Request* req, Response* res)
{
if (!res || !res->isArray()) {
@@ -286,6 +298,11 @@ void SentinelServerPool::handleSentinels(Handler* h, ConnectConnection* s, Request* req, Response* res)
auto st = parser.parse(addr);
if (st == AddrParser::Ok) {
logDebug("sentinel server pool parse sentinel %s", addr.data());
if (!hasValidPort(addr)) {
logNotice("sentinel server pool parse sentinel %s invalid",
addr.data());
continue;
}
auto it = mServs.find(addr);
Server* serv = it == mServs.end() ? nullptr : it->second;
if (!serv) {
@@ -162,6 +162,9 @@ Server* ServerGroup::getReadServer(Handler* h, DC* localDC) const
continue;
}
DC* dc = s->dc();
if (!dc) {
continue;
}
int dcrp = localDC->getReadPriority(dc);
if (dcrp <= 0) {
continue;
@@ -221,7 +224,7 @@ Server* ServerGroup::getReadServer(Handler* h, DC* localDC) const
dc = sdc[0];
found = true;
}
if (!found) {//dc maybe nullptr even we found
if (!found) {
return nullptr;
}
Server* deadServs[Const::MaxServInGroup];

@@ -18,9 +18,11 @@ void ServerPool::init(const ServerPoolConf& conf)
mMasterReadPriority = conf.masterReadPriority;
mStaticSlaveReadPriority = conf.staticSlaveReadPriority;
mDynamicSlaveReadPriority = conf.dynamicSlaveReadPriority;
mRefreshInterval = conf.refreshInterval * 1000000;
mRefreshInterval = conf.refreshInterval;
mServerTimeout = conf.serverTimeout;
mServerFailureLimit = conf.serverFailureLimit;
mServerRetryTimeout = conf.serverRetryTimeout * 1000000;
mServerRetryTimeout = conf.serverRetryTimeout;
mKeepAlive = conf.keepalive;
mDbNum = conf.databases;
}

@@ -20,7 +20,7 @@ public:
{
Unknown,
Cluster,
Sentinel
Standalone
};
static const int DefaultServerRetryTimeout = 10000000;
static const int DefaultRefreshInterval = 1000000;
@@ -56,6 +56,10 @@ public:
{
return mRefreshInterval;
}
long serverTimeout() const
{
return mServerTimeout;
}
int serverFailureLimit() const
{
return mServerFailureLimit;
@@ -64,6 +68,10 @@ public:
{
return mServerRetryTimeout;
}
int keepalive() const
{
return mKeepAlive;
}
int dbNum() const
{
return mDbNum;
@@ -133,8 +141,10 @@ private:
int mStaticSlaveReadPriority;
int mDynamicSlaveReadPriority;
long mRefreshInterval;
long mServerTimeout;
int mServerFailureLimit;
long mServerRetryTimeout;
int mKeepAlive;
int mDbNum;
};

@@ -102,7 +102,7 @@ void Socket::getFirstAddr(const char* addr, int type, int protocol, sockaddr* re
} else {
std::string tmp;
const char* host = addr;
const char* port = strchr(addr, ':');
const char* port = strrchr(addr, ':');
if (port) {
tmp.append(addr, port - addr);
host = tmp.c_str();
@@ -155,6 +155,35 @@ bool Socket::setTcpNoDelay(bool val)
return ret == 0;
}

bool Socket::setTcpKeepAlive(int interval)
{
int val = 1;
int ret = setsockopt(mFd, SOL_SOCKET, SO_KEEPALIVE, &val, sizeof(val));
if (ret != 0) {
return false;
}
#ifdef __linux__
val = interval;
ret = setsockopt(mFd, IPPROTO_TCP, TCP_KEEPIDLE, &val, sizeof(val));
if (ret != 0) {
return false;
}
val = interval / 3;
ret = setsockopt(mFd, IPPROTO_TCP, TCP_KEEPINTVL, &val, sizeof(val));
if (ret != 0) {
return false;
}
val = 3;
ret = setsockopt(mFd, IPPROTO_TCP, TCP_KEEPCNT, &val, sizeof(val));
if (ret != 0) {
return false;
}
#else
((void)interval); //Avoid unused var warning for non Linux systems
#endif
return true;
}
|
||||
|
||||
int Socket::read(void* buf, int cnt)
|
||||
{
|
||||
FuncCallTimer();
|
||||
|
||||
@ -46,7 +46,7 @@ public:
|
||||
EventError,
|
||||
ExceptError,
|
||||
|
||||
CustomStatus
|
||||
CustomStatus = 100
|
||||
};
|
||||
public:
|
||||
Socket(int fd = -1);
|
||||
@ -59,6 +59,7 @@ public:
|
||||
void close();
|
||||
bool setNonBlock(bool val = true);
|
||||
bool setTcpNoDelay(bool val = true);
|
||||
bool setTcpKeepAlive(int interval);
|
||||
int read(void* buf, int cnt);
|
||||
int write(const void* buf, int cnt);
|
||||
int writev(const struct iovec* vecs, int cnt);
|
||||
|
||||
482
src/StandaloneServerPool.cpp
Normal file
@ -0,0 +1,482 @@
/*
 * predixy - A high performance and full features proxy for redis.
 * Copyright (C) 2017 Joyield, Inc. <joyield.com@gmail.com>
 * All rights reserved.
 */

#include <algorithm>
#include "Logger.h"
#include "ServerGroup.h"
#include "Handler.h"
#include "StandaloneServerPool.h"

StandaloneServerPool::StandaloneServerPool(Proxy* p):
    ServerPoolTmpl(p, Standalone),
    mDist(Distribution::Modula)
{
    mSentinels.reserve(MaxSentinelNum);
    mServPool.reserve(Const::MaxServNum);
    mHashTag[0] = mHashTag[1] = '\0';
}

StandaloneServerPool::~StandaloneServerPool()
{
}

void StandaloneServerPool::init(const StandaloneServerPoolConf& conf)
{
    ServerPool::init(conf);
    mRefreshMethod = conf.refreshMethod;
    mDist = conf.dist;
    mHash = conf.hash;
    mHashTag[0] = conf.hashTag[0];
    mHashTag[1] = conf.hashTag[1];
    int i = 0;
    if (conf.refreshMethod == ServerPoolRefreshMethod::Sentinel) {
        mSentinels.resize(conf.sentinels.size());
        for (auto& sc : conf.sentinels) {
            Server* s = new Server(this, sc.addr, true);
            s->setRole(Server::Sentinel);
            s->setPassword(sc.password.empty() ? conf.sentinelPassword : sc.password);
            mSentinels[i++] = s;
            mServs[s->addr()] = s;
        }
    }
    mGroupPool.resize(conf.groups.size());
    i = 0;
    for (auto& gc : conf.groups) {
        ServerGroup* g = new ServerGroup(this, gc.name);
        mGroupPool[i++] = g;
        auto role = Server::Master;
        for (auto& sc : gc.servers) {
            Server* s = new Server(this, sc.addr, true);
            s->setPassword(sc.password.empty() ? conf.password : sc.password);
            mServPool.push_back(s);
            mServs[s->addr()] = s;
            g->add(s);
            s->setGroup(g);
            switch (mRefreshMethod.value()) {
            case ServerPoolRefreshMethod::Fixed:
                s->setOnline(true);
                s->setRole(role);
                role = Server::Slave;
                break;
            default:
                s->setOnline(false);
                break;
            }
        }
    }
}

Server* StandaloneServerPool::getServer(Handler* h, Request* req, const String& key) const
{
    FuncCallTimer();
    switch (req->type()) {
    case Command::SentinelGetMaster:
    case Command::SentinelSlaves:
    case Command::SentinelSentinels:
        if (mSentinels.empty()) {
            return nullptr;
        } else {
            Server* s = randServer(h, mSentinels);
            logDebug("sentinel server pool get server %s for sentinel command",
                     s->addr().data());
            return s;
        }
        break;
    case Command::Randomkey:
        return randServer(h, mServPool);
    default:
        break;
    }
    if (mGroupPool.size() == 1) {
        return mGroupPool[0]->getServer(h, req);
    } else if (mGroupPool.size() > 1) {
        switch (mDist) {
        case Distribution::Modula:
            {
                long idx = mHash.hash(key.data(), key.length(), mHashTag);
                idx %= mGroupPool.size();
                return mGroupPool[idx]->getServer(h, req);
            }
            break;
        case Distribution::Random:
            {
                int idx = h->rand() % mGroupPool.size();
                return mGroupPool[idx]->getServer(h, req);
            }
            break;
        default:
            break;
        }
    }
    return nullptr;
}

void StandaloneServerPool::refreshRequest(Handler* h)
{
    logDebug("h %d update standalone server pool", h->id());
    switch (mRefreshMethod.value()) {
    case ServerPoolRefreshMethod::Sentinel:
        for (auto g : mGroupPool) {
            RequestPtr req = RequestAlloc::create();
            req->setSentinels(g->name());
            req->setData(g);
            h->handleRequest(req);
            req = RequestAlloc::create();
            req->setSentinelGetMaster(g->name());
            req->setData(g);
            h->handleRequest(req);
            req = RequestAlloc::create();
            req->setSentinelSlaves(g->name());
            req->setData(g);
            h->handleRequest(req);
        }
        break;
    default:
        break;
    }
}

void StandaloneServerPool::handleResponse(Handler* h, ConnectConnection* s, Request* req, Response* res)
{
    switch (req->type()) {
    case Command::SentinelSentinels:
        handleSentinels(h, s, req, res);
        break;
    case Command::SentinelGetMaster:
        handleGetMaster(h, s, req, res);
        break;
    case Command::SentinelSlaves:
        handleSlaves(h, s, req, res);
        break;
    default:
        break;
    }
}

class AddrParser
{
public:
    enum Status {
        Ok,
        Error,
        Done
    };
public:
    AddrParser(const Segment& res):
        mRes(res),
        mState(Idle),
        mCnt(0),
        mArgLen(0),
        mIp(false),
        mPort(false)
    {
        mRes.rewind();
    }
    int count() const {return mCnt;}
    Status parse(SString<Const::MaxAddrLen>& addr);
private:
    enum State {
        Idle,
        Count,
        CountLF,
        Arg,
        ArgLen,
        ArgLenLF,
        SubArrayLen,
        Body,
        BodyLF,
        Invalid,
        Finished
    };
private:
    Segment mRes;
    State mState;
    int mCnt;
    int mArgLen;
    bool mIp;
    bool mPort;
    SString<4> mKey;
};

AddrParser::Status AddrParser::parse(SString<Const::MaxAddrLen>& addr)
{
    const char* dat;
    int len;
    addr.clear();
    while (mRes.get(dat, len) && mState != Invalid) {
        for (int i = 0; i < len && mState != Invalid; ++i) {
            char ch = dat[i];
            switch (mState) {
            case Idle:
                mState = ch == '*' ? Count : Invalid;
                break;
            case Count:
                if (ch >= '0' && ch <= '9') {
                    mCnt = mCnt * 10 + (ch - '0');
                } else if (ch == '\r') {
                    if (mCnt == 0) {
                        mState = Finished;
                        return Done;
                    } else if (mCnt < 0) {
                        mState = Invalid;
                        return Error;
                    }
                    mState = CountLF;
                } else {
                    mState = Invalid;
                }
                break;
            case CountLF:
                mState = ch == '\n' ? Arg : Invalid;
                break;
            case Arg:
                if (ch == '$') {
                    mState = ArgLen;
                    mArgLen = 0;
                } else if (ch == '*') {
                    mState = SubArrayLen;
                } else {
                    mState = Invalid;
                }
                break;
            case ArgLen:
                if (ch >= '0' && ch <= '9') {
                    mArgLen = mArgLen * 10 + (ch - '0');
                } else if (ch == '\r') {
                    mState = ArgLenLF;
                } else {
                    mState = Invalid;
                }
                break;
            case ArgLenLF:
                mState = ch == '\n' ? Body : Invalid;
                break;
            case SubArrayLen:
                if (ch == '\n') {
                    mState = Arg;
                }
                break;
            case Body:
                if (ch == '\r') {
                    mState = BodyLF;
                    if (mPort) {
                        mPort = false;
                        mRes.use(i + 1);
                        return Ok;
                    } else if (mIp) {
                        mIp = false;
                        addr.append(':');
                    } else if (mArgLen == 2 && strcmp(mKey.data(), "ip") == 0) {
                        mIp = true;
                    } else if (mArgLen == 4 && strcmp(mKey.data(), "port") == 0) {
                        mPort = true;
                    }
                    break;
                }
                if (mIp || mPort) {
                    addr.append(ch);
                } else if (mArgLen == 2 || mArgLen == 4) {
                    mKey.append(ch);
                }
                break;
            case BodyLF:
                mKey.clear();
                mState = ch == '\n' ? Arg : Invalid;
                break;
            default:
                break;
            }
        }
        mRes.use(len);
    }
    return mState != Invalid ? Done : Error;
}

static bool hasValidPort(const String& addr)
{
    const char* p = addr.data() + addr.length();
    for (int i = 0; i < addr.length(); ++i) {
        if (*(--p) == ':') {
            int port = atoi(p + 1);
            return port > 0 && port < 65536;
        }
    }
    return false;
}

void StandaloneServerPool::handleSentinels(Handler* h, ConnectConnection* s, Request* req, Response* res)
{
    if (!res || !res->isArray()) {
        return;
    }
    AddrParser parser(res->body());
    SString<Const::MaxAddrLen> addr;
    while (true) {
        auto st = parser.parse(addr);
        if (st == AddrParser::Ok) {
            logDebug("sentinel server pool parse sentinel %s", addr.data());
            if (!hasValidPort(addr)) {
                logNotice("sentinel server pool parse sentinel %s invalid",
                          addr.data());
                continue;
            }
            auto it = mServs.find(addr);
            Server* serv = it == mServs.end() ? nullptr : it->second;
            if (!serv) {
                if (mSentinels.size() == mSentinels.capacity()) {
                    logWarn("too many sentinels %d, will ignore new sentinel %s",
                            (int)mSentinels.size(), addr.data());
                    continue;
                }
                serv = new Server(this, addr, false);
                serv->setRole(Server::Sentinel);
                serv->setPassword(password());
                mSentinels.push_back(serv);
                mServs[serv->addr()] = serv;
                logNotice("h %d create new sentinel %s",
                          h->id(), addr.data());
            }
            serv->setOnline(true);
        } else if (st == AddrParser::Done) {
            break;
        } else {
            logError("sentinel server pool parse sentinel sentinels error");
            break;
        }
    }
}

void StandaloneServerPool::handleGetMaster(Handler* h, ConnectConnection* s, Request* req, Response* res)
{
    if (!res || !res->isArray()) {
        return;
    }
    ServerGroup* g = (ServerGroup*)req->data();
    if (!g) {
        return;
    }
    SegmentStr<Const::MaxAddrLen + 32> str(res->body());
    if (!str.complete()) {
        return;
    }
    if (strncmp(str.data(), "*2\r\n$", 5) != 0) {
        return;
    }
    SString<Const::MaxAddrLen> addr;
    const char* p = str.data() + 5;
    int len = atoi(p);
    if (len <= 0) {
        return;
    }
    p = strchr(p, '\r') + 2;
    if (!addr.append(p, len)) {
        return;
    }
    if (!addr.append(':')) {
        return;
    }
    p += len + 3;
    len = atoi(p);
    if (len <= 0) {
        return;
    }
    p = strchr(p, '\r') + 2;
    if (!addr.append(p, len)) {
        return;
    }
    logDebug("sentinel server pool group %s get master %s",
             g->name().data(), addr.data());
    auto it = mServs.find(addr);
    Server* serv = it == mServs.end() ? nullptr : it->second;
    if (serv) {
        serv->setOnline(true);
        serv->setRole(Server::Master);
        auto old = serv->group();
        if (old) {
            if (old != g) {
                old->remove(serv);
                g->add(serv);
                serv->setGroup(g);
            }
        } else {
            g->add(serv);
            serv->setGroup(g);
        }
    } else {
        if (mServPool.size() == mServPool.capacity()) {
            logWarn("too many servers %d, will ignore new master server %s",
                    (int)mServPool.size(), addr.data());
            return;
        }
        serv = new Server(this, addr, false);
        serv->setRole(Server::Master);
        serv->setPassword(password());
        mServPool.push_back(serv);
        g->add(serv);
        serv->setGroup(g);
        mServs[serv->addr()] = serv;
        logNotice("sentinel server pool group %s create master server %s %s",
                  g->name().data(), addr.data(), serv->dcName().data());
    }
}

void StandaloneServerPool::handleSlaves(Handler* h, ConnectConnection* s, Request* req, Response* res)
{
    if (!res || !res->isArray()) {
        return;
    }
    ServerGroup* g = (ServerGroup*)req->data();
    if (!g) {
        return;
    }
    AddrParser parser(res->body());
    SString<Const::MaxAddrLen> addr;
    while (true) {
        auto st = parser.parse(addr);
        if (st == AddrParser::Ok) {
            logDebug("sentinel server pool group %s parse slave %s",
                     g->name().data(), addr.data());
            auto it = mServs.find(addr);
            Server* serv = it == mServs.end() ? nullptr : it->second;
            if (serv) {
                serv->setOnline(true);
                serv->setRole(Server::Slave);
                auto old = serv->group();
                if (old) {
                    if (old != g) {
                        old->remove(serv);
                        g->add(serv);
                        serv->setGroup(g);
                    }
                } else {
                    g->add(serv);
                    serv->setGroup(g);
                }
            } else {
                if (mServPool.size() == mServPool.capacity()) {
                    logWarn("too many servers %d, will ignore new slave server %s",
                            (int)mServPool.size(), addr.data());
                    return;
                }
                serv = new Server(this, addr, false);
                serv->setRole(Server::Slave);
                serv->setPassword(password());
                mServPool.push_back(serv);
                g->add(serv);
                serv->setGroup(g);
                mServs[serv->addr()] = serv;
                logNotice("sentinel server pool group %s create slave server %s %s",
                          g->name().data(), addr.data(), serv->dcName().data());
            }
        } else if (st == AddrParser::Done) {
            break;
        } else {
            logError("sentinel server pool group %s parse sentinel sentinels error",
                     g->name().data());
            break;
        }
    }
}
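The AddrParser above walks the raw RESP reply of `SENTINEL sentinels`/`SENTINEL slaves` byte by byte, keeping only the `ip` and `port` fields and joining them as `ip:port`, after which `hasValidPort` checks the port range. A minimal Python sketch of the same extraction, assuming the reply has already been decoded into a flat field/value list (as a client library would return it); `extract_addr` is a hypothetical helper, not predixy code:

```python
def extract_addr(fields):
    """Given one decoded 'SENTINEL sentinels' entry as a flat
    [field, value, field, value, ...] list, return 'ip:port' like
    AddrParser does, or None when the fields are missing or the port
    is out of range (the hasValidPort check)."""
    pairs = dict(zip(fields[0::2], fields[1::2]))
    ip, port = pairs.get("ip"), pairs.get("port")
    if ip is None or port is None or not port.isdigit():
        return None
    if not 0 < int(port) < 65536:
        return None
    return "%s:%s" % (ip, port)

print(extract_addr(["name", "s1", "ip", "10.0.0.5", "port", "26379"]))  # 10.0.0.5:26379
```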
src/StandaloneServerPool.h (new file, 43 lines)
@ -0,0 +1,43 @@
/*
 * predixy - A high performance and full features proxy for redis.
 * Copyright (C) 2017 Joyield, Inc. <joyield.com@gmail.com>
 * All rights reserved.
 */

#ifndef _PREDIXY_STANDALONE_SERVER_POOL_H_
#define _PREDIXY_STANDALONE_SERVER_POOL_H_

#include <map>
#include "Predixy.h"
#include "ServerPool.h"

class StandaloneServerPool : public ServerPoolTmpl<StandaloneServerPool>
{
public:
    static const int MaxSentinelNum = 64;
public:
    StandaloneServerPool(Proxy* p);
    ~StandaloneServerPool();
    void init(const StandaloneServerPoolConf& conf);
    Server* getServer(Handler* h, Request* req, const String& key) const;
    Server* iter(int& cursor) const
    {
        return ServerPool::iter(mServPool, cursor);
    }
    void refreshRequest(Handler* h);
    void handleResponse(Handler* h, ConnectConnection* s, Request* req, Response* res);
private:
    void handleSentinels(Handler* h, ConnectConnection* s, Request* req, Response* res);
    void handleGetMaster(Handler* h, ConnectConnection* s, Request* req, Response* res);
    void handleSlaves(Handler* h, ConnectConnection* s, Request* req, Response* res);
    friend class ServerPoolTmpl<StandaloneServerPool>;
private:
    ServerPoolRefreshMethod mRefreshMethod;
    std::vector<Server*> mSentinels;
    std::vector<Server*> mServPool;
    Distribution mDist;
    Hash mHash;
    char mHashTag[2];
};

#endif
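With more than one group and `Distribution::Modula`, `getServer` hashes the key (honoring the configured hash tag, `mHashTag`) and takes the result modulo the group count. A rough Python sketch of that routing, using `zlib.crc32` as a stand-in for the configured hash function (predixy's actual hash comes from `conf.hash`); `group_index` is a hypothetical name:

```python
import zlib

def group_index(key, n_groups, hash_tag="{}"):
    """Modula distribution sketch: if the key carries a hash tag,
    e.g. '{user1000}.following' with tag '{}', hash only the tagged
    part (mirroring mHashTag); otherwise hash the whole key."""
    lo, hi = hash_tag[0], hash_tag[1]
    start = key.find(lo)
    if start != -1:
        end = key.find(hi, start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return zlib.crc32(key.encode()) % n_groups

# Keys sharing a tag are routed to the same group:
print(group_index("{user1000}.following", 4) == group_index("{user1000}.followers", 4))  # True
```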
test/basic.py (new executable file, 580 lines)
@ -0,0 +1,580 @@
|
||||
#!/usr/bin/env python
|
||||
#
|
||||
# predixy - A high performance and full features proxy for redis.
|
||||
# Copyright (C) 2017 Joyield, Inc. <joyield.com@gmail.com>
|
||||
# All rights reserved.
|
||||
#
|
||||
|
||||
import time
|
||||
import redis
|
||||
import sys
|
||||
import argparse
|
||||
|
||||
c = None
|
||||
|
||||
Cases = [
|
||||
('ping', [
|
||||
[('ping',), 'PONG'],
|
||||
]),
|
||||
('echo', [
|
||||
[('echo', 'hello'), 'hello'],
|
||||
]),
|
||||
('del', [
|
||||
[('set', 'key', 'val'), 'OK'],
|
||||
[('del', 'key'), 1],
|
||||
[('del', 'key'), 0],
|
||||
[('mset', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'), 'OK'],
|
||||
[('del', 'a', 'b', 'c'), 2],
|
||||
[('del', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'), 2],
|
||||
]),
|
||||
('dump', [
|
||||
[('set', 'k', 'v'), 'OK'],
|
||||
[('dump', 'k'), lambda x:len(x)>10],
|
||||
[('del', 'k'), 1],
|
||||
]),
|
||||
('exists', [
|
||||
[('set', 'k', 'v'), 'OK'],
|
||||
[('exists', 'k'), 1],
|
||||
[('del', 'k'), 1],
|
||||
[('exists', 'k'), 0],
|
||||
[('mset', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'), 'OK'],
|
||||
[('exists', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'), 4],
|
||||
[('del', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'), 4],
|
||||
]),
|
||||
('rename', [
|
||||
[('del', '{k}1', '{k}2'), ],
|
||||
[('set', '{k}1', 'v'), 'OK'],
|
||||
[('rename', '{k}1', '{k}2'), 'OK'],
|
||||
[('get', '{k}2'), 'v'],
|
||||
]),
|
||||
('renamenx', [
|
||||
[('del', '{k}1', '{k}2'), ],
|
||||
[('set', '{k}1', 'v'), 'OK'],
|
||||
[('renamenx', '{k}1', '{k}2'), 1],
|
||||
[('get', '{k}2'), 'v'],
|
||||
[('set', '{k}1', 'new'), 'OK'],
|
||||
[('renamenx', '{k}1', '{k}2'), 0],
|
||||
]),
|
||||
('expire', [
|
||||
[('set', 'k', 'v'), 'OK'],
|
||||
[('ttl', 'k'), -1],
|
||||
[('expire', 'k', 10), 1],
|
||||
[('ttl', 'k'), lambda x: x>0],
|
||||
[('del', 'k'), 1],
|
||||
]),
|
||||
('pexpire', [
|
||||
[('set', 'k', 'v'), 'OK'],
|
||||
[('ttl', 'k'), -1],
|
||||
[('pexpire', 'k', 10000), 1],
|
||||
[('ttl', 'k'), lambda x: x>0],
|
||||
[('del', 'k'), 1],
|
||||
]),
|
||||
('expireat', [
|
||||
[('set', 'k', 'v'), 'OK'],
|
||||
[('ttl', 'k'), -1],
|
||||
[('expireat', 'k', int(time.time()) + 10), 1],
|
||||
[('ttl', 'k'), lambda x: x>0],
|
||||
[('del', 'k'), 1],
|
||||
]),
|
||||
('pexpireat', [
|
||||
[('set', 'k', 'v'), 'OK'],
|
||||
[('pttl', 'k'), -1],
|
||||
[('pexpireat', 'k', (int(time.time()) + 10) * 1000) , 1],
|
||||
[('pttl', 'k'), lambda x: x>0],
|
||||
[('del', 'k'), 1],
|
||||
]),
|
||||
('persist', [
|
||||
[('setex', 'k', 10, 'v'), 'OK'],
|
||||
[('ttl', 'k'), lambda x: x>0],
|
||||
[('persist', 'k'), 1],
|
||||
[('ttl', 'k'), -1],
|
||||
[('del', 'k'), 1],
|
||||
]),
|
||||
('pttl', [
|
||||
[('set', 'k', 'v'), 'OK'],
|
||||
[('pttl', 'k'), -1],
|
||||
[('setex', 'k', 10, 'v'), 'OK'],
|
||||
[('pttl', 'k'), lambda x: x>0],
|
||||
[('del', 'k'), 1],
|
||||
]),
|
||||
('ttl', [
|
||||
[('set', 'k', 'v'), 'OK'],
|
||||
[('ttl', 'k'), -1],
|
||||
[('setex', 'k', 10, 'v'), 'OK'],
|
||||
[('ttl', 'k'), lambda x: x>0],
|
||||
[('del', 'k'), 1],
|
||||
]),
|
||||
('type', [
|
||||
[('set', 'k', 'v'), 'OK'],
|
||||
[('del', 'h', 'l', 's', 'z'),],
|
||||
[('type', 'k'), 'string'],
|
||||
[('hset', 'h', 'k', 'v'), 1],
|
||||
[('type', 'h'), 'hash'],
|
||||
[('lpush', 'l', 'k'), 1],
|
||||
[('type', 'l'), 'list'],
|
||||
[('sadd', 's', 'k'), 1],
|
||||
[('type', 's'), 'set'],
|
||||
[('zadd', 'z', 10, 'k'), 1],
|
||||
[('type', 'z'), 'zset'],
|
||||
[('del', 'k', 'h', 'l', 's', 'z'), 5],
|
||||
]),
|
||||
('sort', [
|
||||
[('del', 'list'), ],
|
||||
[('lpush', 'list', 6, 3, 1, 2, 5, 4), 6],
|
||||
[('sort', 'list'), ['1', '2', '3', '4', '5', '6']],
|
||||
[('sort', 'list', 'ASC'), ['1', '2', '3', '4', '5', '6']],
|
||||
[('sort', 'list', 'DESC'), ['6', '5', '4', '3', '2', '1']],
|
||||
[('sort', 'list', 'LIMIT', 1, 2), ['2', '3']],
|
||||
[('mset', 'u1', -1, 'u2', -2, 'u3', -3, 'u4', -4, 'u5', -5, 'u6', -6), 'OK'],
|
||||
[('del', 'list'), 1],
|
||||
[('lpush', 'list', 'c++', 'java', 'c', 'javascript', 'python'), 5],
|
||||
[('sort', 'list', 'ALPHA'), ['c', 'c++', 'java', 'javascript', 'python']],
|
||||
[('sort', 'list', 'DESC', 'ALPHA'), ['python', 'javascript', 'java', 'c++', 'c']],
|
||||
[('del', 'list', 'u1', 'u2', 'u3', 'u4', 'u5', 'u6'), 7],
|
||||
]),
|
||||
('touch', [
|
||||
[('del', 'k1', 'k2', 'k3', 'k4'), ],
|
||||
[('mset', 'k1', 'v1', 'k2', 'v2', 'k3', 'v3'), 'OK'],
|
||||
[('touch', 'k1'), 1],
|
||||
[('touch', 'k1', 'k2', 'k3', 'k4'), 3],
|
||||
[('del', 'k1', 'k2', 'k3'), 3],
|
||||
[('touch', 'k1', 'k2', 'k3', 'k4'), 0],
|
||||
]),
|
||||
('scan', [
|
||||
[('mset', 'k1', 'v1', 'k2', 'v2', 'k3', 'v3'), 'OK'],
|
||||
[('scan', '0'), lambda x: x[0] != 0],
|
||||
[('scan', '0', 'count', 1), lambda x: x[0] != 0],
|
||||
[('del', 'k1', 'k2', 'k3'), 3],
|
||||
]),
|
||||
('append', [
|
||||
[('set', 'k', ''), 'OK'],
|
||||
[('strlen', 'k'), 0],
|
||||
[('append', 'k', '1'), 1],
|
||||
[('get', 'k'), '1'],
|
||||
[('append', 'k', '2'), 2],
|
||||
[('get', 'k'), '12'],
|
||||
]),
|
||||
('bitcount', [
|
||||
[('set', 'k', '\x0f\x07\x03\x01'), 'OK'],
|
||||
[('bitcount', 'k'), 10],
|
||||
[('bitcount', 'k', 0, 1), 7],
|
||||
[('bitcount', 'k', -2, -1), 3],
|
||||
]),
|
||||
('bitfield', [
|
||||
[('set', 'k', 123), 'OK'],
|
||||
[('bitfield', 'k', 'INCRBY', 'i5', 2, 3), [-5]],
|
||||
]),
|
||||
('bitop', [
|
||||
[('del', '{k}1', '{k}2', '{k}3'), ],
|
||||
[('set', '{k}1', '\x0f'), 'OK'],
|
||||
[('set', '{k}2', '\xf1'), 'OK'],
|
||||
[('bitop', 'NOT', '{k}3', '{k}1'), 1],
|
||||
[('bitop', 'AND', '{k}3', '{k}1', '{k}2'), 1],
|
||||
]),
|
||||
('bitpos', [
|
||||
[('set', 'k', '\x0f'), 'OK'],
|
||||
[('bitpos', 'k', 0), 0],
|
||||
[('bitpos', 'k', 1, 0, -1), 4],
|
||||
]),
|
||||
('decr', [
|
||||
[('set', 'k', 10), 'OK'],
|
||||
[('decr', 'k'), 9],
|
||||
[('decr', 'k'), 8],
|
||||
]),
|
||||
('decrby', [
|
||||
[('set', 'k', 10), 'OK'],
|
||||
[('decrby', 'k', 2), 8],
|
||||
[('decrby', 'k', 3), 5],
|
||||
]),
|
||||
('getbit', [
|
||||
[('set', 'k', '\x0f'), 'OK'],
|
||||
[('getbit', 'k', 0), 0],
|
||||
[('getbit', 'k', 4), 1],
|
||||
]),
|
||||
('getrange', [
|
||||
[('set', 'k', '0123456'), 'OK'],
|
||||
[('getrange', 'k', 0, 2), '012'],
|
||||
[('getrange', 'k', -2, -1), '56'],
|
||||
]),
|
||||
('getset', [
|
||||
[('set', 'k', 'v'), 'OK'],
|
||||
[('getset', 'k', 'value'), 'v'],
|
||||
[('get', 'k'), 'value'],
|
||||
]),
|
||||
('incr', [
|
||||
[('set', 'k', 10), 'OK'],
|
||||
[('incr', 'k'), 11],
|
||||
[('incr', 'k'), 12],
|
||||
]),
|
||||
('incrby', [
|
||||
[('set', 'k', 10), 'OK'],
|
||||
[('incrby', 'k', 2), 12],
|
||||
[('incrby', 'k', 3), 15],
|
||||
]),
|
||||
('incrbyfloat', [
|
||||
[('set', 'k', 10), 'OK'],
|
||||
[('incrbyfloat', 'k', 2.5), '12.5'],
|
||||
[('incrbyfloat', 'k', 3.5), '16'],
|
||||
]),
|
||||
('mget', [
|
||||
[('mset', 'k', 'v'), 'OK'],
|
||||
[('mget', 'k'), ['v']],
|
||||
[('mget', 'k', 'k'), ['v', 'v']],
|
||||
[('mset', 'k1', 'v1', 'k2', 'v2'), 'OK'],
|
||||
[('mget', 'k1', 'v1', 'k2', 'v2'), ['v1', None, 'v2', None]],
|
||||
[('del', 'k1', 'k2'), 2],
|
||||
]),
|
||||
('msetnx', [
|
||||
[('del', 'k1', 'k2', 'k3'), ],
|
||||
[('msetnx', 'k1', 'v1', 'k2', 'v2', 'k3', 'v3'), 1],
|
||||
[('mget', 'k1', 'k2', 'k3'), ['v1', 'v2', 'v3']],
|
||||
[('msetnx', 'k1', 'v1', 'k2', 'v2', 'k3', 'v3'), 0],
|
||||
[('del', 'k1', 'k2', 'k3'), 3],
|
||||
]),
|
||||
('psetex', [
|
||||
[('del', 'k'), ],
|
||||
[('psetex', 'k', 10000, 'v'), 'OK'],
|
||||
[('get', 'k'), 'v'],
|
||||
[('pttl', 'k'), lambda x: x>0],
|
||||
]),
|
||||
('set', [
|
||||
[('set', 'k', 'v'), 'OK'],
|
||||
[('get', 'k'), 'v'],
|
||||
[('ttl', 'k'), -1],
|
||||
[('set', 'k', 'vex', 'EX', 10), 'OK'],
|
||||
[('get', 'k'), 'vex'],
|
||||
[('ttl', 'k'), lambda x: x>0],
|
||||
[('set', 'k', 'vpx', 'PX', 20000), 'OK'],
|
||||
[('get', 'k'), 'vpx'],
|
||||
[('pttl', 'k'), lambda x: x>10000],
|
||||
[('set', 'k', 'val', 'NX'), None],
|
||||
[('get', 'k'), 'vpx'],
|
||||
[('set', 'k', 'val', 'XX'), 'OK'],
|
||||
[('get', 'k'), 'val'],
|
||||
]),
|
||||
('setbit', [
|
||||
[('set', 'k', '\x00'), 'OK'],
|
||||
[('setbit', 'k', 1, 1), 0],
|
||||
[('setbit', 'k', 1, 0), 1],
|
||||
]),
|
||||
('setex', [
|
||||
[('del', 'k'), ],
|
||||
[('setex', 'k', 10, 'v'), 'OK'],
|
||||
[('get', 'k'), 'v'],
|
||||
[('ttl', 'k'), lambda x: x>0],
|
||||
]),
|
||||
('setnx', [
|
||||
[('del', 'k'), ],
|
||||
[('setnx', 'k', 'v'), 1],
|
||||
[('get', 'k'), 'v'],
|
||||
[('setnx', 'k', 'v'), 0],
|
||||
]),
|
||||
('setrange', [
|
||||
[('set', 'k', 'hello world'), 'OK'],
|
||||
[('setrange', 'k', 6, 'predixy'), 13],
|
||||
]),
|
||||
('strlen', [
|
||||
[('set', 'k', '123456'), 'OK'],
|
||||
[('strlen', 'k'), 6],
|
||||
]),
|
||||
('script', [
|
||||
[('del', 'k'), ],
|
||||
[('eval', 'return "hello"', 0), 'hello'],
|
||||
[('eval', 'return KEYS[1]', 1, 'k'), 'k'],
|
||||
[('eval', 'return KEYS[1]', 3, '{k}1', '{k}2', '{k}3'), '{k}1'],
|
||||
[('eval', 'return redis.call("set", KEYS[1], ARGV[1])', 1, 'k', 'v'), 'OK'],
|
||||
[('eval', 'return redis.call("get", KEYS[1])', 1, 'k'), 'v'],
|
||||
[('script', 'load', 'return redis.call("get", KEYS[1])'), 'a5260dd66ce02462c5b5231c727b3f7772c0bcc5'],
|
||||
[('evalsha', 'a5260dd66ce02462c5b5231c727b3f7772c0bcc5', 1, 'k'), 'v'],
|
||||
]),
|
||||
('hash', [
|
||||
[('del', 'k'), ],
|
||||
[('hset', 'k', 'name', 'hash'), 1],
|
||||
[('hget', 'k', 'name'), 'hash'],
|
||||
[('hexists', 'k', 'name'), 1],
|
||||
[('hlen', 'k'), 1],
|
||||
[('hkeys', 'k'), ['name']],
|
||||
[('hgetall', 'k'), ['name', 'hash']],
|
||||
[('hmget', 'k', 'name'), ['hash']],
|
||||
[('hscan', 'k', 0), ['0', ['name', 'hash']]],
|
||||
[('hstrlen', 'k', 'name'), 4],
|
||||
[('hvals', 'k'), ['hash']],
|
||||
[('hsetnx', 'k', 'name', 'other'), 0],
|
||||
[('hget', 'k', 'name'), 'hash'],
|
||||
[('hsetnx', 'k', 'age', 5), 1],
|
||||
[('hget', 'k', 'age'), '5'],
|
||||
[('hincrby', 'k', 'age', 3), 8],
|
||||
[('hincrbyfloat', 'k', 'age', 1.5), '9.5'],
|
||||
[('hmset', 'k', 'sex', 'F'), 'OK'],
|
||||
[('hget', 'k', 'sex'), 'F'],
|
||||
[('hmset', 'k', 'height', 180, 'weight', 80, 'zone', 'cn'), 'OK'],
|
||||
[('hlen', 'k'), 6],
|
||||
[('hmget', 'k', 'name', 'age', 'sex', 'height', 'weight', 'zone'), ['hash', '9.5', 'F', '180', '80', 'cn']],
|
||||
[('hscan', 'k', 0, 'match', '*eight'), lambda x:False if len(x)!=2 else len(x[1])==4],
|
||||
[('hscan', 'k', 0, 'count', 2), lambda x:len(x)==2],
|
||||
[('hkeys', 'k'), lambda x:len(x)==6],
|
||||
[('hvals', 'k'), lambda x:len(x)==6],
|
||||
[('hgetall', 'k'), lambda x:len(x)==12],
|
||||
]),
|
||||
('list', [
|
||||
[('del', 'k'), ],
|
||||
[('lpush', 'k', 'apple'), 1],
|
||||
[('llen', 'k'), 1],
|
||||
[('lindex', 'k', 0), 'apple'],
|
||||
[('lindex', 'k', -1), 'apple'],
|
||||
[('lindex', 'k', -2), None],
|
||||
[('lpush', 'k', 'pear', 'orange'), 3],
|
||||
[('llen', 'k'), 3],
|
||||
[('lrange', 'k', 0, 3), ['orange', 'pear', 'apple']],
|
||||
[('lrange', 'k', -2, -1), ['pear', 'apple']],
|
||||
[('lset', 'k', 0, 'peach'), 'OK'],
|
||||
[('lindex', 'k', 0), 'peach'],
|
||||
[('rpush', 'k', 'orange'), 4],
|
||||
[('lrange', 'k', 0, 3), ['peach', 'pear', 'apple', 'orange']],
|
||||
[('rpush', 'k', 'grape', 'banana', 'tomato'), 7],
|
||||
[('lrange', 'k', 0, 7), ['peach', 'pear', 'apple', 'orange', 'grape', 'banana', 'tomato']],
|
||||
[('lpop', 'k'), 'peach'],
|
||||
[('rpop', 'k'), 'tomato'],
|
||||
[('rpoplpush', 'k', 'k'), 'banana'],
|
||||
[('lpushx', 'k', 'peach'), 6],
|
||||
[('rpushx', 'k', 'peach'), 7],
|
||||
[('lrem', 'k', 1, 'apple'), 1],
|
||||
[('lrem', 'k', 5, 'peach'), 2],
|
||||
[('lrange', 'k', 0, 7), ['banana', 'pear', 'orange', 'grape']],
|
||||
[('linsert', 'k', 'BEFORE', 'pear', 'peach'), 5],
|
||||
[('linsert', 'k', 'AFTER', 'orange', 'tomato'), 6],
|
||||
[('linsert', 'k', 'AFTER', 'apple', 'tomato'), -1],
|
||||
[('lrange', 'k', 0, 7), ['banana', 'peach', 'pear', 'orange', 'tomato', 'grape']],
|
||||
[('ltrim', 'k', 0, 4), 'OK'],
|
||||
[('ltrim', 'k', 1, -1), 'OK'],
|
||||
[('lrange', 'k', 0, 7), ['peach', 'pear', 'orange', 'tomato']],
|
||||
[('blpop', 'k', 0), ['k', 'peach']],
|
||||
[('brpop', 'k', 0), ['k', 'tomato']],
|
||||
[('brpoplpush', 'k', 'k', 0), 'orange'],
|
||||
[('lrange', 'k', 0, 7), ['orange', 'pear']],
|
||||
[('del', 'k'), 1],
|
||||
[('lpushx', 'k', 'peach'), 0],
|
||||
[('rpushx', 'k', 'peach'), 0],
|
||||
]),
|
||||
('set', [
|
||||
[('del', 'k', '{k}2', '{k}3', '{k}4', '{k}5', '{k}6'), ],
|
||||
[('sadd', 'k', 'apple'), 1],
|
||||
[('scard', 'k'), 1],
|
||||
[('sadd', 'k', 'apple'), 0],
|
||||
[('scard', 'k'), 1],
|
||||
[('sadd', 'k', 'apple', 'pear', 'orange', 'banana'), 3],
|
||||
[('scard', 'k'), 4],
|
||||
[('sismember', 'k', 'apple'), 1],
|
||||
[('sismember', 'k', 'grape'), 0],
|
||||
[('smembers', 'k'), lambda x:len(x)==4],
|
||||
[('srandmember', 'k'), lambda x:x in ['apple', 'pear', 'orange', 'banana']],
|
||||
[('srandmember', 'k', 2), lambda x:len(x)==2],
|
||||
[('sscan', 'k', 0), lambda x:len(x)==2],
|
||||
[('sscan', 'k', 0, 'match', 'a*'), lambda x:len(x)==2 and x[1][0]=='apple'],
|
||||
[('sscan', 'k', 0, 'count', 2), lambda x:len(x)==2 and len(x[1])>=2],
|
||||
[('srem', 'k', 'apple'), 1],
|
||||
[('srem', 'k', 'apple'), 0],
|
||||
[('scard', 'k'), 3],
|
||||
[('srem', 'k', 'pear', 'orange'), 2],
|
||||
[('scard', 'k'), 1],
|
||||
[('sadd', '{k}2', 'apple', 'pear', 'orange', 'banana'), 4],
|
||||
[('sdiff', '{k}2', 'k'), lambda x:len(x)==3],
|
||||
[('sadd', '{k}3', 'apple', 'pear'), 2],
|
||||
[('sdiff', '{k}2', 'k', '{k}3'), ['orange']],
|
||||
[('sdiffstore', '{k}4', '{k}2', 'k', '{k}3'), 1],
|
||||
[('sinter', '{k}2', 'k'), ['banana']],
|
||||
[('sinterstore', '{k}5', '{k}2', 'k'), 1],
|
||||
[('sunion', '{k}3', 'k'), lambda x:len(x)==3],
|
||||
[('sunionstore', '{k}6', '{k}3', 'k'), 3],
|
||||
[('smove', '{k}2', 'k', 'apple'), 1],
|
||||
[('scard', 'k'), 2],
|
||||
[('scard', '{k}2'), 3],
|
||||
]),
|
||||
('zset', [
|
||||
[('del', 'k', '{k}2', '{k}3', '{k}4', '{k}5', '{k}6'), ],
|
||||
[('zadd', 'k', 10, 'apple'), 1],
|
||||
[('zcard', 'k'), 1],
|
||||
[('zincrby', 'k', 2, 'apple'), '12'],
|
||||
[('zincrby', 'k', -2, 'apple'), '10'],
|
||||
[('zadd', 'k', 15, 'pear', 20, 'orange', 30, 'banana'), 3],
|
||||
[('zcard', 'k'), 4],
|
||||
[('zscore', 'k', 'pear'), '15'],
|
||||
[('zrank', 'k', 'apple'), 0],
|
||||
[('zrank', 'k', 'orange'), 2],
|
||||
[('zcount', 'k', '-inf', '+inf'), 4],
|
||||
[('zcount', 'k', 1, 10), 1],
|
||||
[('zcount', 'k', 15, 20), 2],
|
||||
[('zlexcount', 'k', '[a', '[z'), 4],
|
||||
[('zscan', 'k', 0), lambda x:len(x)==2 and len(x[1])==8],
|
||||
[('zscan', 'k', 0, 'MATCH', 'o*'), ['0', ['orange', '20']]],
|
||||
[('zrange', 'k', 0, 2), ['apple', 'pear', 'orange']],
|
||||
[('zrange', 'k', -2, -1), ['orange', 'banana']],
|
||||
[('zrange', 'k', 0, 2, 'WITHSCORES'), ['apple', '10', 'pear', '15', 'orange', '20']],
|
||||
[('zrangebylex', 'k', '-', '+'), lambda x:len(x)==4],
|
||||
[('zrangebylex', 'k', '-', '+', 'LIMIT', 1, 2), lambda x:len(x)==2],
|
||||
[('zrangebyscore', 'k', '10', '(20'), ['apple', 'pear']],
|
||||
[('zrangebyscore', 'k', '-inf', '+inf', 'LIMIT', 1, 2), ['pear', 'orange']],
|
||||
[('zrangebyscore', 'k', '-inf', '+inf', 'WITHSCORES', 'LIMIT', 1, 2), ['pear', '15', 'orange', '20']],
|
||||
[('zrevrange', 'k', 0, 2), ['banana', 'orange', 'pear']],
|
||||
[('zrevrange', 'k', -2, -1), ['pear', 'apple']],
|
||||
[('zrevrange', 'k', 0, 2, 'WITHSCORES'), ['banana', '30', 'orange', '20', 'pear', '15']],
|
||||
[('zrevrangebylex', 'k', '+', '-'), lambda x:len(x)==4],
|
||||
[('zrevrangebylex', 'k', '+', '-', 'LIMIT', 1, 2), lambda x:len(x)==2],
|
||||
        [('zrevrangebyscore', 'k', '(20', '10'), ['pear', 'apple']],
        [('zrevrangebyscore', 'k', '+inf', '-inf', 'LIMIT', 1, 2), ['orange', 'pear']],
        [('zrevrangebyscore', 'k', '+inf', '-inf', 'WITHSCORES', 'LIMIT', 1, 2), ['orange', '20', 'pear', '15']],
        [('zrem', 'k', 'apple'), 1],
        [('zrem', 'k', 'apple'), 0],
        [('zremrangebyrank', 'k', '0', '1'), 2],
        [('zadd', 'k', 15, 'pear', 20, 'orange', 30, 'banana'), 2],
        [('zremrangebyscore', 'k', '20', '30'), 2],
        [('zadd', 'k', 'NX', 0, 'pear', 0, 'orange', 0, 'banana'), 2],
        [('zremrangebylex', 'k', '[banana', '(cat'), 1],
        [('zadd', 'k', 15, 'pear', 20, 'orange', 30, 'banana'), 1],
        [('zadd', '{k}2', 10, 'apple', 15, 'pear'), 2],
        [('zinterstore', '{k}3', 2, 'k', '{k}2'), 1],
        [('zinterstore', '{k}3', 2, 'k', '{k}2', 'AGGREGATE', 'MAX'), 1],
        [('zinterstore', '{k}3', 2, 'k', '{k}2', 'WEIGHTS', 0.5, 1.2, 'AGGREGATE', 'MAX'), 1],
        [('zunionstore', '{k}3', 2, 'k', '{k}2'), 4],
        [('zunionstore', '{k}3', 2, 'k', '{k}2', 'AGGREGATE', 'MAX'), 4],
        [('zunionstore', '{k}3', 2, 'k', '{k}2', 'WEIGHTS', 0.5, 1.2, 'AGGREGATE', 'MAX'), 4],
        [('zadd', '{k}5', 0, 'apple', 9, 'banana', 1, 'pear', 3, 'orange', 4, 'cat'), 5],
        [('zpopmax', '{k}5'), ['banana', '9']],
        [('zpopmax', '{k}5', 3), ['cat', '4', 'orange', '3', 'pear', '1']],
        [('zadd', '{k}6', 0, 'apple', 9, 'banana', 1, 'pear', 3, 'orange', 4, 'cat'), 5],
        [('zpopmin', '{k}6'), ['apple', '0']],
        [('zpopmin', '{k}6', 3), ['pear', '1', 'orange', '3', 'cat', '4']],
    ]),
    ('hyperloglog', [
        [('del', 'k', '{k}2', '{k}3'), ],
        [('pfadd', 'k', 'a', 'b', 'c', 'd'), 1],
        [('pfcount', 'k'), 4],
        [('pfadd', '{k}2', 'c', 'd', 'e', 'f'), 1],
        [('pfcount', '{k}2'), 4],
        [('pfmerge', '{k}3', 'k', '{k}2'), 'OK'],
        [('pfcount', '{k}3'), 6],
    ]),
    ('geo', [
        [('del', 'k'), ],
        [('geoadd', 'k', 116, 40, 'beijing'), 1],
        [('geoadd', 'k', 121.5, 30.8, 'shanghai', 114, 22.3, 'shenzhen'), 2],
        [('geoadd', 'k', -74, 40.3, 'new york', 151.2, -33.9, 'sydney'), 2],
        [('geodist', 'k', 'beijing', 'shanghai'), lambda x:x>1000000],
        [('geodist', 'k', 'beijing', 'shanghai', 'km'), lambda x:x>1000],
        [('geohash', 'k', 'beijing', 'shanghai'), lambda x:len(x)==2],
        [('geopos', 'k', 'beijing'), lambda x:len(x)==1 and len(x[0])==2],
        [('geopos', 'k', 'beijing', 'shanghai'), lambda x:len(x)==2 and len(x[1])==2],
        [('georadius', 'k', 140, 35, 3000, 'km'), lambda x:len(x)==3],
        [('georadius', 'k', 140, 35, 3000, 'km', 'WITHDIST', 'ASC'), lambda x:len(x)==3 and x[0][0]=='shanghai' and x[1][0]=='beijing' and x[2][0]=='shenzhen'],
        [('georadiusbymember', 'k', 'shanghai', 2000, 'km'), lambda x:len(x)==3],
        [('georadiusbymember', 'k', 'shanghai', 3000, 'km', 'WITHDIST', 'ASC'), lambda x:len(x)==3 and x[0][0]=='shanghai' and x[1][0]=='beijing' and x[2][0]=='shenzhen'],
    ]),
    ('clean', [
        [('del', 'k'), ],
    ]),
]

TransactionCases = [
    ('multi-exec', [
        [('multi',), 'OK'],
        [('set', 'k', 'v'), 'QUEUED'],
        [('get', 'k'), 'QUEUED'],
        [('exec',), ['OK', 'v']],
    ]),
    ('multi-discard', [
        [('multi',), 'OK'],
        [('set', 'k', 'v'), 'QUEUED'],
        [('get', 'k'), 'QUEUED'],
        [('discard',), 'OK'],
    ]),
    ('watch-multi-exec', [
        [('watch', 'k'), 'OK'],
        [('watch', '{k}2', '{k}3'), 'OK'],
        [('multi',), 'OK'],
        [('set', 'k', 'v'), 'QUEUED'],
        [('get', 'k'), 'QUEUED'],
        [('exec',), ['OK', 'v']],
    ]),
]

def check(cmd, r):
    # A case with no expected value is execute-only
    if len(cmd) == 1:
        print('EXEC %s' % (str(cmd[0]),))
        return True
    # The expected value may be a callable predicate or a literal
    if hasattr(cmd[1], '__call__'):
        isPass = cmd[1](r)
    else:
        isPass = r == cmd[1]
    if isPass:
        print('PASS %s:%s' % (str(cmd[0]), repr(r)))
    else:
        print('FAIL %s:%s != %s' % (str(cmd[0]), repr(r), repr(cmd[1])))
        return False
    return True


def testCase(name, cmds):
    print('---------- %s --------' % name)
    succ = True
    for cmd in cmds:
        try:
            r = c.execute_command(*cmd[0])
            if not check(cmd, r):
                succ = False
        except Exception as excp:
            succ = False
            if len(cmd) > 1:
                print('EXCP %s:%s %s' % (str(cmd[0]), str(cmd[1]), str(excp)))
            else:
                print('EXCP %s %s' % (str(cmd[0]), str(excp)))
    return succ

def pipelineTestCase(name, cmds):
    print('---------- %s pipeline --------' % name)
    succ = True
    # transaction=False: a plain pipeline, no MULTI/EXEC wrapping
    p = c.pipeline(transaction=False)
    try:
        for cmd in cmds:
            p.execute_command(*cmd[0])
        res = p.execute()
        for i in range(len(cmds)):
            if not check(cmds[i], res[i]):
                succ = False
    except Exception as excp:
        succ = False
        print('EXCP %s' % str(excp))
    return succ

if __name__ == '__main__':
    # conflict_handler='resolve' lets -h be reused for host instead of help
    parser = argparse.ArgumentParser(conflict_handler='resolve')
    parser.add_argument('-t', default=False, action='store_true', help='enable transaction test')
    parser.add_argument('-h', nargs='?', default='127.0.0.1', help='host')
    parser.add_argument('-p', nargs='?', default=7617, type=int, help='port')
    parser.add_argument('case', nargs='*', default=None, help='specify test case')
    args = parser.parse_args()
    a = set()
    host = '127.0.0.1' if not args.h else args.h
    port = 7617 if not args.p else args.p
    c = redis.StrictRedis(host=host, port=port)
    if args.case:
        a = set(args.case)
    fails = []
    for case in Cases:
        if len(a) == 0 or case[0] in a:
            if not testCase(case[0], case[1]) or not pipelineTestCase(case[0], case[1]):
                fails.append(case[0])
    if args.t or 'transaction' in a:
        succ = True
        for case in TransactionCases:
            if not pipelineTestCase(case[0], case[1]):
                succ = False
        if not succ:
            fails.append('transaction')
    print('--------------------------------------------')
    if len(fails) > 0:
        print('******* Some cases failed *****')
        for cmd in fails:
            print(cmd)
    else:
        print('Good! All cases pass.')
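The `check()` helper above accepts either a literal expected value (compared with `==`) or a callable predicate applied to the reply, which is how the geo cases assert on approximate results. A minimal standalone sketch of that dispatch, with no redis connection required (the `matches` name is illustrative, not from the source):

```python
def matches(reply, expected):
    """Return True if reply satisfies expected: a callable predicate
    is applied to the reply, anything else is compared with ==."""
    if callable(expected):
        return bool(expected(reply))
    return reply == expected

if __name__ == '__main__':
    # Literal comparison, as in [('zrem', 'k', 'apple'), 1]
    assert matches(1, 1)
    assert not matches(0, 1)
    # Predicate, as in [('geodist', ...), lambda x: x > 1000000]
    assert matches(2000000, lambda x: x > 1000000)
    assert not matches(500, lambda x: x > 1000000)
    print('ok')
```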
test/pubsub.py (new executable file, 91 lines)
@@ -0,0 +1,91 @@
|
||||
#!/usr/bin/env python
|
||||
#
|
||||
# predixy - A high performance and full features proxy for redis.
|
||||
# Copyright (C) 2017 Joyield, Inc. <joyield.com@gmail.com>
|
||||
# All rights reserved.
|
||||
|
||||
import time
|
||||
import redis
|
||||
import sys
|
||||
import argparse
|
||||
|
||||
c1 = None
|
||||
c2 = None
|
||||
|
||||
def test():
|
||||
ps = c1.pubsub()
|
||||
stats = [
|
||||
[ps, 'subscribe', ['ch']],
|
||||
[ps, 'get_message', [], {'pattern': None, 'type': 'subscribe', 'channel': 'ch', 'data': 1L}],
|
||||
[c2, 'publish', ['ch', 'hello'], 1],
|
||||
[ps, 'get_message', [], {'pattern': None, 'type': 'message', 'channel': 'ch', 'data': 'hello'}],
|
||||
[ps, 'subscribe', ['ch1', 'ch2']],
|
||||
[ps, 'get_message', [], {'pattern': None, 'type': 'subscribe', 'channel': 'ch1', 'data': 2L}],
|
||||
[ps, 'get_message', [], {'pattern': None, 'type': 'subscribe', 'channel': 'ch2', 'data': 3L}],
|
||||
[c2, 'publish', ['ch1', 'channel1'], lambda x:True],
|
||||
[c2, 'publish', ['ch2', 'channel2'], lambda x:True],
|
||||
[ps, 'get_message', [], {'pattern': None, 'type': 'message', 'channel': 'ch1', 'data': 'channel1'}],
|
||||
[ps, 'get_message', [], {'pattern': None, 'type': 'message', 'channel': 'ch2', 'data': 'channel2'}],
|
||||
[ps, 'psubscribe', ['ch*']],
|
||||
[ps, 'get_message', [], {'pattern': None, 'type': 'psubscribe', 'channel': 'ch*', 'data': 4L}],
|
||||
[c2, 'publish', ['ch', 'hello'], 2],
|
||||
[ps, 'get_message', [], lambda x:type(x)==type({}) and x['data']=='hello'],
|
||||
[ps, 'get_message', [], lambda x:type(x)==type({}) and x['data']=='hello'],
|
||||
[ps, 'psubscribe', ['ch1*', 'ch2*']],
|
||||
[ps, 'get_message', [], lambda x:type(x)==type({}) and x['type']=='psubscribe'],
|
||||
[ps, 'get_message', [], lambda x:type(x)==type({}) and x['type']=='psubscribe'],
|
||||
[ps, 'unsubscribe', ['ch']],
|
||||
[ps, 'get_message', [], lambda x:type(x)==type({}) and x['type']=='unsubscribe'],
|
||||
[c2, 'publish', ['ch', 'hello'], 1],
|
||||
[ps, 'get_message', [], lambda x:type(x)==type({}) and x['data']=='hello'],
|
||||
[ps, 'punsubscribe', ['ch*']],
|
||||
[ps, 'get_message', [], lambda x:type(x)==type({}) and x['type']=='punsubscribe'],
|
||||
[ps, 'unsubscribe', ['ch1', 'ch2']],
|
||||
[ps, 'get_message', [], lambda x:type(x)==type({}) and x['type']=='unsubscribe'],
|
||||
[ps, 'get_message', [], lambda x:type(x)==type({}) and x['type']=='unsubscribe'],
|
||||
[ps, 'punsubscribe', ['ch1*', 'ch2*']],
|
||||
[ps, 'get_message', [], lambda x:type(x)==type({}) and x['type']=='punsubscribe'],
|
||||
[ps, 'get_message', [], lambda x:type(x)==type({}) and x['type']=='punsubscribe'],
|
||||
]
|
||||
def run(stat):
|
||||
func = getattr(stat[0], stat[1])
|
||||
r = func(*stat[2])
|
||||
if len(stat) == 3:
|
||||
print('EXEC %s(*%s)' % (stat[1], repr(stat[2])))
|
||||
return True
|
||||
if hasattr(stat[3], '__call__'):
|
||||
isPass = stat[3](r)
|
||||
else:
|
||||
isPass = r == stat[3]
|
||||
if isPass:
|
||||
print('PASS %s(*%s):%s' % (stat[1], repr(stat[2]), repr(r)))
|
||||
return True
|
||||
else:
|
||||
print('FAIL %s(*%s):%s != %s' % (stat[1], repr(stat[2]), repr(r), repr(stat[3])))
|
||||
return False
|
||||
|
||||
succ = True
|
||||
for stat in stats:
|
||||
if not run(stat):
|
||||
succ = False
|
||||
time.sleep(0.2)
|
||||
print '---------------------------------'
|
||||
if succ:
|
||||
print 'Good! PubSub test pass'
|
||||
else:
|
||||
print 'Oh! PubSub some case fail'
|
||||
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
parser = argparse.ArgumentParser(conflict_handler='resolve')
|
||||
parser.add_argument('-h', nargs='?', default='127.0.0.1', help='host')
|
||||
parser.add_argument('-p', nargs='?', default=7617, type=int, help='port')
|
||||
args = parser.parse_args()
|
||||
host = '127.0.0.1' if not args.h else args.h
|
||||
port = 7617 if not args.p else args.p
|
||||
c1 = redis.StrictRedis(host=host, port=port)
|
||||
c2 = redis.StrictRedis(host=host, port=port)
|
||||
test()
|
||||
|
||||
|
||||
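Both test scripts repurpose `-h` as the host flag by constructing the parser with `conflict_handler='resolve'`, which lets a later `add_argument('-h', ...)` override argparse's built-in help option instead of raising a conflict error (`--help` still works). A minimal sketch of that pattern:

```python
import argparse

# 'resolve' allows -h to be redefined; the default handler would raise
parser = argparse.ArgumentParser(conflict_handler='resolve')
parser.add_argument('-h', nargs='?', default='127.0.0.1', help='host')
parser.add_argument('-p', nargs='?', default=7617, type=int, help='port')

# Explicit values override the defaults
args = parser.parse_args(['-h', '10.0.0.1', '-p', '6379'])
assert args.h == '10.0.0.1' and args.p == 6379

# With no arguments the defaults apply, matching the scripts' fallbacks
args = parser.parse_args([])
assert args.h == '127.0.0.1' and args.p == 7617
print('ok')
```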