For caching, memcached is far better than redis!
A more sophisticated way to update the cache is CAS (compare-and-swap):
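This likely refers to memcached's gets/cas command pair; redis's closest equivalent is optimistic locking with WATCH/MULTI/EXEC. A minimal redis-cli sketch of that check-and-set pattern (the key name count is made up for illustration):

A > SET count 10
OK
A > WATCH count
OK
A > MULTI
OK
A > INCR count
QUEUED
# B writes the key before A runs EXEC
B > SET count 42
OK
# the watched key changed, so A's transaction aborts and the INCR is not applied
A > EXEC
(nil)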
Backup:
SET, GET
HSET, HGET
LPUSH, LPOP
SADD
ZADD
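A minimal redis-cli sketch of the basic string commands (the key foo and value bar are made up):

> SET foo bar
OK
> GET foo
"bar"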
Listen for new elements with a blocking read:
A > BRPOP queue 0
# blocks
B > LPUSH queue task
(integer) 1
# A immediately receives the result
1) "queue"
2) "task"
B > LLEN queue
(integer) 0
Use BRPOP to listen on several queues at once, in order: only when the first queue has no tasks does it take from the second.
# listen on queues with priority 1, 2 and 3 at the same time
A > BRPOP queue:1 queue:2 queue:3 0
# blocks
B > LPUSH queue:2 task
(integer) 1
# A immediately receives the result
1) "queue:2"
2) "task"
# if A is not listening yet and tasks are pushed to several queues first,
# even if queue:3 is pushed before queue:2
B > LPUSH queue:3 task
(integer) 1
B > LPUSH queue:2 task
(integer) 1
# once A starts listening, it gets the task from the higher-priority queue
A > BRPOP queue:1 queue:2 queue:3 0
1) "queue:2"
2) "task"
Check response latency:
$ redis-cli --latency -h 127.0.0.1 -p 6379
min: 0, max: 15, avg: 0.12 (2839 samples)
maxmemory 2mb
maxmemory-policy allkeys-lru
# keep redis from being filled up by old data
Starting with redis 2.6.0, you can run Lua scripts, which execute atomically. I have never written one, but I think it would look something like this:
> EVAL "return redis.call('del', unpack(redis.call('keys', ARGV[1])))" 0 *foo*
Besides key-value storage, redis also provides pub/sub.
keys * does not show pub/sub channels. You have to subscribe with SUBSCRIBE first second, or with PSUBSCRIBE news.* (p for pattern), to receive published messages.
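A minimal redis-cli sketch with two clients (the channel name news.tech is made up for illustration):

# client A subscribes and blocks waiting for messages
A > SUBSCRIBE news.tech
1) "subscribe"
2) "news.tech"
3) (integer) 1
# client B publishes; the reply is the number of subscribers that received it
B > PUBLISH news.tech "hello"
(integer) 1
# A immediately receives
1) "message"
2) "news.tech"
3) "hello"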
1. Client A reads count as 10.
2. Client B reads count as 10.
3. Client A increments 10 and sets count to 11.
4. Client B increments 10 and sets count to 11.

We wanted the value to be 12, but instead it is 11!
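redis avoids this race for counters because INCR does the read-modify-write atomically on the server; a minimal sketch (the key name count is made up):

A > SET count 10
OK
A > INCR count
(integer) 11
B > INCR count
(integer) 12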
list: a list of strings, where L (left) is the front and R (right) is the back
set: unordered, with unique elements
sorted set: each value has an associated score. This score is used to sort the elements in the set.
hash: maps between string fields and string values, so they are the perfect data type to represent objects
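A quick redis-cli sketch of these four types (key names are made up for illustration):

# list: LPUSH adds at the left/head
> LPUSH mylist a b
(integer) 2
> LRANGE mylist 0 -1
1) "b"
2) "a"
# set: the duplicate x is stored only once
> SADD myset x x y
(integer) 2
# sorted set: members come back ordered by score
> ZADD board 10 alice 5 bob
(integer) 2
> ZRANGE board 0 -1
1) "bob"
2) "alice"
# hash: field/value pairs under one key
> HSET user:1 name alice
(integer) 1
> HGET user:1 name
"alice"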
MAXCONN: sets the maximum number of connections allowed to access memcached. It seems that each request ends up using one connection, and if they are all in use the rest have to wait their turn.
CACHESIZE: This is what makes all the difference. In most cases the servers have huge chunks of RAM; mine had 15 GB (an Amazon xlarge instance). As we know, memcached is all about RAM, and CACHESIZE defines how much of it to use. To my surprise it is in MB, which means the default config allowed just 64 MB. That can be frustrating even for memcached when you are working on a site with a thousand nodes.

A typical lookup without memcached goes straight to the database:

function get_foo(int userid) {
    data = db_select("SELECT * FROM users WHERE userid = ?", userid);
    return data;
}
After adding memcached, the lookup tries the cache first and only falls back to the database on a miss:

function get_foo(int userid) {
    /* first try the cache */
    data = memcached_fetch("userrow:" + userid);
    if (!data) {
        /* not found : request database */
        data = db_select("SELECT * FROM users WHERE userid = ?", userid);
        /* then store in cache until next get */
        memcached_add("userrow:" + userid, data);
    }
    return data;
}
And the update path keeps the cache in sync:

function update_foo(int userid, string dbUpdateString) {
    /* first update database */
    result = db_execute(dbUpdateString);
    if (result) {
        /* database update successful : fetch data to be stored in cache */
        data = db_select("SELECT * FROM users WHERE userid = ?", userid);
        /* the previous line could also look like
           data = createDataFromDBString(dbUpdateString); */
        /* then store in cache until next get */
        memcached_set("userrow:" + userid, data);
    }
}
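For reference, the MAXCONN and CACHESIZE settings above correspond to memcached's -c and -m startup options; a minimal sketch of a Debian-style /etc/memcached.conf (the values are made up):

# cache size in MB (the CACHESIZE above); the default is only 64
-m 1024
# maximum simultaneous connections (the MAXCONN above)
-c 2048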