Redis Mass Data Test (Part 2)
Author: 曲文庆  Date: 2011-10-09 16:02
Outline:
- Redis Mass Data Test (Part 2)
- Environment
- Run script
- Redis configuration
- Run time
- Results
- AOF export
- Restarting Redis
100,000,000 keys in Redis 2.2.12
The keys are the integers 1 through 100000000; each value is a random UUID, written into Redis.
key:value format:
100000000:a47d8af2-09d3-4195-afd3-c2d8a094a614
Environment
CPU: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz x 2
MEM: 32G
DISK: 300G SAS
Run script
#!/bin/sh
log=~/redis_run.log
slog=~/redis_status.log
redis=/usr/local/redis/bin/redis-cli

cat /dev/null > $log
cat /dev/null > $slog

echo `date` >> $log
k=1
while [ $k -le 100000000 ]
do
    v=`cat /proc/sys/kernel/random/uuid`
    $redis set $k $v &
    #echo "$k:$v" >> $log
    y=$(( k % 10000 ))        # dump INFO to the status log every 10000 keys
    if [ $y -eq 0 ]; then
        echo -e "\n\n\n`date`" >> $slog
        $redis info >> $slog
    fi
    k=$(( k + 1 ))            # portable under /bin/sh (original used bash-only ((k++)))
done
echo `date` >> $log
Compared with the original "Redis Mass Data Test" post, per-key logging to file is disabled this time.
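Note that the loop above forks one `redis-cli` process per key, which dominates the runtime. A common alternative is to emit the raw RESP protocol and bulk-load it in one connection. This is only a sketch: `gen_set` is a hypothetical helper, and `redis-cli --pipe` ships with Redis 2.6+, not the 2.2.12 tested here.

```shell
# Hypothetical helper: emit the RESP protocol bytes for one SET command.
# RESP encodes a command as *<argc>, then $<len>/<arg> for each argument,
# all CRLF-terminated.
gen_set() {
  key=$1
  val=$2
  printf '*3\r\n$3\r\nSET\r\n$%d\r\n%s\r\n$%d\r\n%s\r\n' \
    "${#key}" "$key" "${#val}" "$val"
}

# Bulk-load sketch (needs a running server and Redis >= 2.6):
# for k in $(seq 1 100000000); do
#   gen_set "$k" "$(cat /proc/sys/kernel/random/uuid)"
# done | redis-cli --pipe

# Print one encoded command to inspect the wire format:
gen_set 1 a47d8af2-09d3-4195-afd3-c2d8a094a614
```

The key point is that all commands travel over a single connection with no per-key process fork, which is why mass insertion via the protocol is orders of magnitude faster than the loop above.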
Redis configuration
daemonize yes
pidfile /var/run/redis.pid
port 6379
timeout 30
loglevel verbose
logfile /home/redis/logs/redis.log
databases 16
rdbcompression yes
dbfilename dump.rdb
dir /home/redis/rdbs
slave-serve-stale-data yes
maxmemory 30G
maxmemory-policy volatile-lru
appendonly yes
appendfsync everysec
no-appendfsync-on-rewrite no
slowlog-log-slower-than 10000
slowlog-max-len 1024
vm-enabled no
vm-swap-file /home/redis/redis.swap
vm-max-memory 30G
vm-page-size 32
vm-pages 134217728
vm-max-threads 16
hash-max-zipmap-entries 512
hash-max-zipmap-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
activerehashing yes
Compared with the original "Redis Mass Data Test" post, RDB persistence is disabled (only AOF is kept) and the slave is turned off.
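The two persistence-related facts above (AOF on, no RDB autosave rules) can be sanity-checked with grep. A sketch against a scratch copy of the relevant directives; the path `/tmp/redis_persist_check.conf` is an assumption for illustration, and in practice one would grep the live redis.conf instead:

```shell
# Write the persistence directives used in this test to a scratch file,
# then check them the way one would against the real config.
conf=/tmp/redis_persist_check.conf
cat > "$conf" <<'EOF'
appendonly yes
appendfsync everysec
no-appendfsync-on-rewrite no
maxmemory 30G
maxmemory-policy volatile-lru
EOF

grep -q '^appendonly yes' "$conf" && echo "AOF: enabled"
grep -q '^save ' "$conf" || echo "RDB autosave: disabled (no save rules)"
```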
Run time
From Fri Sep 16 15:52:30 CST 2011 to Sun Sep 18 19:09:40 CST 2011: about 51 hours.
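About 51 hours for 10^8 writes implies a sustained rate of only a few hundred SETs per second, which likely reflects the per-key `redis-cli` fork in the script rather than any server-side limit. A back-of-envelope check:

```shell
# Sustained write rate over the whole run (figures from this test).
keys=100000000
seconds=$(( 51 * 3600 ))     # ~51 hours
rate=$(( keys / seconds ))
echo "$rate SETs/sec"        # ≈ 544 SETs per second
```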
Results
redis_version:2.2.12
redis_git_sha1:00000000
redis_git_dirty:0
arch_bits:64
multiplexing_api:epoll
process_id:9103
uptime_in_seconds:184906
uptime_in_days:2
lru_clock:1610994
used_cpu_sys:4193.95
used_cpu_user:6680.75
used_cpu_sys_children:0.00
used_cpu_user_children:0.00
connected_clients:1
connected_slaves:0
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
used_memory:15394540608
used_memory_human:14.34G
used_memory_rss:20111601664
mem_fragmentation_ratio:1.31
use_tcmalloc:0
loading:0
aof_enabled:1
changes_since_last_save:100000000
bgsave_in_progress:0
last_save_time:1316159275
bgrewriteaof_in_progress:0
total_connections_received:100010001
total_commands_processed:100010000
expired_keys:0
evicted_keys:0
keyspace_hits:0
keyspace_misses:100000000
hash_max_zipmap_entries:512
hash_max_zipmap_value:64
pubsub_channels:0
pubsub_patterns:0
vm_enabled:0
role:master
db0:keys=100000000,expires=0
appendonly.aof size: 6.6G
Memory usage: 14.34G
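From these figures one can estimate the per-key cost: used_memory of 15,394,540,608 bytes over 10^8 keys is roughly 154 bytes of RAM per key, while the 6.6G AOF works out to about 70 bytes per key on disk. The arithmetic:

```shell
# Per-key memory and AOF cost (all inputs taken from the results above).
used_memory=15394540608                        # bytes, from INFO
aof_bytes=$(( 66 * 1024 * 1024 * 1024 / 10 ))  # ~6.6G as bytes
keys=100000000
echo "RAM/key: $(( used_memory / keys )) bytes"  # 153 (integer division)
echo "AOF/key: $(( aof_bytes / keys )) bytes"    # 70
```

The gap between the two is expected: the AOF stores only the SET commands, while used_memory also counts the hash table, key/value object headers, and allocator overhead.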
AOF export
An AOF export was run against the result set above.
From Mon Sep 19 12:13:18 CST 2011 to Mon Sep 19 12:14:10 CST 2011: just under one minute (52 seconds).
Restarting Redis
[13185] 19 Sep 12:03:16 # User requested shutdown...
[13185] 19 Sep 12:03:16 * Calling fsync() on the AOF file.
[13185] 19 Sep 12:03:16 * Removing the pid file.
[13185] 19 Sep 12:03:16 # Redis is now ready to exit, bye bye...
[13222] 19 Sep 12:03:17 * Server started, Redis version 2.2.12
[13222] 19 Sep 12:06:19 - Accepted 127.0.0.1:3418
[13222] 19 Sep 12:06:19 - Client closed connection
[13222] 19 Sep 12:06:23 * DB loaded from append only file: 186 seconds
[13222] 19 Sep 12:06:23 * The server is now ready to accept connections on port 6379
[13222] 19 Sep 12:06:24 - DB 0: 100000001 keys (0 volatile) in 134217728 slots HT.
The restart (replaying the AOF) took about 186 seconds.
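Replaying 10^8 keys from the AOF in 186 seconds is a load rate of over half a million keys per second, far faster than the original client-driven insert:

```shell
# AOF replay rate on restart (figures from the startup log above).
keys=100000000
load_seconds=186
echo "$(( keys / load_seconds )) keys/sec"   # ≈ 537634
```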