ARTS Check-in, Week 206 [206/521]

Algorithm

LC1019: Next Greater Node in Linked List

  • The linked list has n nodes
  • 1 <= n <= 1e4
  • 1 <= Node.val <= 1e9

Approach:

  • Given the range of n, O(n^2) fits within the limits, so brute force should pass.
  • An optimized approach is two pointers: once the right pointer finds a greater value, every element between the two pointers must be smaller than both the left element and the right one. Keep those middle values in an ordered container such as multiset (when removing, erase via an iterator rather than by value, otherwise all duplicates of that value would be deleted). Trading space for time.
  • One step further: insert all the values up front, then compare and erase one by one. Time complexity O(n log n), space complexity O(n).

Then I looked at the editorial: it is, of course, the textbook monotonic-stack problem. Embarrassing; I forgot it again...

class Solution {
public:
    vector<int> nextLargerNodes(ListNode* head) {
        vector<int> ans;
        // Monotonic stack of (value, index) pairs still waiting for a greater value.
        stack<pair<int, int>> s;

        ListNode* cur = head;
        int idx = -1;
        while (cur) {
            ++idx;
            ans.push_back(0);  // default: no greater node to the right
            // The current value answers every smaller value on the stack.
            while (!s.empty() && s.top().first < cur->val) {
                ans[s.top().second] = cur->val;
                s.pop();
            }
            s.emplace(cur->val, idx);
            cur = cur->next;
        }

        return ans;
    }
};

Review

Thoughts on "30 Days of Monk-Like Strict Self-Discipline"

What the author wants to convey goes beyond self-discipline. It is about sensing your own inner needs, accepting both the good and the bad sides of yourself, and listening to your inner voice. Rather than endlessly avoiding the parts of yourself you dislike, learn to live with them and understand what they are really asking for. After all, these parts together make up who we truly are; each of these personas exists to protect us or to push us forward. Learn to get along with them instead of treating them as enemies. Too much suppression invites a backlash, while living with them in harmony brings better results, in work and in life alike.

Tips

A veteran's notes on his 10th work anniversary

The longer you stay in this industry, the more you come to respect Oracle. In the single-node storage-engine space, Oracle's work is polished to an extreme. PolarDB can go head-to-head with Oracle today only because the race has moved to a different track; on the single-node storage engine itself, I still think Oracle has plenty to teach us. So no matter what others say, I still believe those grumpy old men on the Oracle and AWS Aurora teams are stronger than we are and have a great deal worth learning from. Database work remains slow craftsmanship: there is no flash of inspiration, only day-after-day accumulation.

Rethinking Raft plus RocksDB

I understood some of it (for example: WAL writes proceed on a quorum basis, which effectively sidesteps slow nodes), but other parts are still hazy. In storage there is always more to learn; keep going.

Share: Building, Installing, and Testing RocksDB 6.29

Environment

  • Ubuntu 18.04 (Windows Subsystem for Linux, WSL2)
  • GCC 4.8, C++11
  • RocksDB 6.29 (the 6.29.fb branch; RocksDB 7.0+ requires GCC 7 and C++17, so 6.29 is used here since it only needs C++11 and GCC 4.8)

Installing dependencies

Install all the dependencies following the official instructions for 6.29.

Some common ones can also be installed in one go:

# Optionally refresh the package index first
# sudo apt-get update
sudo apt-get install libsnappy-dev zlib1g-dev libbz2-dev liblz4-dev libzstd-dev libgflags-dev

Static build and install

make static_lib -j$(nproc)
make install-static -j$(nproc)

-j$(nproc) uses every CPU core on the machine; set it sensibly for your own box, e.g. -j$(($(nproc) / 2)) to use half the cores.

Compiling and running a minimal example

  • Create a test directory for the database and write the code
cd ~/code/rocksdb
mkdir /tmp/rocksdb_tmp

⚡ 04/13|10:57:46  rocksdb   6.29.fb  vim test.cpp
⚡ 04/13|10:57:58  rocksdb   6.29.fb  cat test.cpp
#include <cassert>  // for assert
#include <cstdio>
#include <string>
#include "rocksdb/db.h"
#include "rocksdb/slice.h"
#include "rocksdb/options.h"

using namespace std;
using namespace rocksdb;

const std::string PATH = "/tmp/rocksdb_tmp";

int main() {
    DB* db;
    Options options;
    options.create_if_missing = true;
    Status status = DB::Open(options, PATH, &db);
    assert(status.ok());

    Slice key("foo");
    Slice value("bar");
    std::string get_value;
    status = db->Put(WriteOptions(), key, value);
    if (status.ok()) {
        status = db->Get(ReadOptions(), key, &get_value);
        if (status.ok()) {
            printf("get %s success!!\n", get_value.c_str());
        } else {
            printf("get failed\n");
        }
    } else {
        printf("put failed\n");
    }
    delete db;
}

  • Compile and run
 ⚡ 04/13|10:31:32  rocksdb   6.29.fb  g++ test.cpp -o test.out -std=c++11  -lrocksdb -lz -lbz2 -lsnappy -pthread -lzstd -llz4 -ldl
⚡ 04/13|10:33:35  rocksdb   6.29.fb  ./test.out
get bar success!!

Flags:
-L: directory to search for libraries (not needed here).
-l: names a static or dynamic library to link against.
Note: the linker resolves library names by an implicit rule: it prepends lib and appends .a or .so to the given name to form the library filename.

So the build produces librocksdb.a, which is linked as -lrocksdb.

Problems you may hit along the way

Q: rocksdb/./util/compression.h:116: undefined reference to `ZSTD_freeDCtx'
A: add -lzstd, i.e. `g++ test.cpp -o test.out -std=c++11 -lrocksdb -lz -lbz2 -lsnappy -pthread -lzstd`

Q: rocksdb/./util/compression.h:1074: undefined reference to `LZ4_compressBound'
A: likewise add -llz4, i.e. `g++ test.cpp -o test.out -std=c++11 -lrocksdb -lz -lbz2 -lsnappy -pthread -lzstd -llz4`

Q: rocksdb/env/env_posix.cc:108: undefined reference to `dlclose'
A: add the dynamic-loading library with -ldl, i.e. `g++ test.cpp -o test.out -std=c++11 -lrocksdb -lz -lbz2 -lsnappy -pthread -lzstd -llz4 -ldl`
(Small blunder here: I typed ld instead of dl for a while.)

Running benchmarks

Build db_bench and run a quick test

# Compilation takes a while; use as many CPUs as you can
⚡ 04/13|11:28:26  rocksdb   6.29.fb  make db_bench -j$(nproc)
$DEBUG_LEVEL is 1
Makefile:170: Warning: Compiling in debug mode. Don't use the resulting binary in production
# ... many CC compile lines
AR librocksdb_debug.a
/usr/bin/ar: creating librocksdb_debug.a
CCLD db_bench
⚡ 04/13|11:35:16  rocksdb   6.29.fb 
⚡ 04/13|11:35:16  rocksdb   6.29.fb  ./db_bench
Initializing RocksDB Options from the specified file
Initializing RocksDB Options from command-line flags
RocksDB: version 6.29
Date: Thu Apr 13 11:37:14 2023
CPU: 6 * Intel(R) Core(TM) i5-8500 CPU @ 3.00GHz
CPUCache: 9216 KB
Keys: 16 bytes each (+ 0 bytes user-defined timestamp)
Values: 100 bytes each (50 bytes after compression)
Entries: 1000000
Prefix: 0 bytes
Keys per prefix: 0
RawSize: 110.6 MB (estimated)
FileSize: 62.9 MB (estimated)
Write rate: 0 bytes/second
Read rate: 0 ops/second
Compression: Snappy
Compression sampling rate: 0
Memtablerep: SkipListFactory
Perf Level: 1
WARNING: Assertions are enabled; benchmarks unnecessarily slow
------------------------------------------------
Initializing RocksDB Options from the specified file
Initializing RocksDB Options from command-line flags
DB path: [/tmp/rocksdbtest-0/dbbench]
fillseq : 3.444 micros/op 290364 ops/sec; 32.1 MB/s
Please disable_auto_compactions in FillDeterministic benchmark
✘ ⚡ 04/13|11:39:44  rocksdb   6.29.fb  ./db_bench --disable_auto_compactions=true
Initializing RocksDB Options from the specified file
Initializing RocksDB Options from command-line flags
RocksDB: version 6.29
Date: Thu Apr 13 11:40:04 2023
CPU: 6 * Intel(R) Core(TM) i5-8500 CPU @ 3.00GHz
CPUCache: 9216 KB
Keys: 16 bytes each (+ 0 bytes user-defined timestamp)
Values: 100 bytes each (50 bytes after compression)
Entries: 1000000
Prefix: 0 bytes
Keys per prefix: 0
RawSize: 110.6 MB (estimated)
FileSize: 62.9 MB (estimated)
Write rate: 0 bytes/second
Read rate: 0 ops/second
Compression: Snappy
Compression sampling rate: 0
Memtablerep: SkipListFactory
Perf Level: 1
WARNING: Assertions are enabled; benchmarks unnecessarily slow
------------------------------------------------
Initializing RocksDB Options from the specified file
Initializing RocksDB Options from command-line flags
DB path: [/tmp/rocksdbtest-0/dbbench]
fillseq : 3.601 micros/op 277713 ops/sec; 30.7 MB/s
Initializing RocksDB Options from the specified file
Initializing RocksDB Options from command-line flags
DB path: [/tmp/rocksdbtest-0/dbbench]
n is too small to fill 7 levels
✘ ⚡ 04/13|11:40:15  rocksdb   6.29.fb 

Running benchmark.sh bulkload

In the benchmark.sh script, if neither --duration nor --num is specified, the bulkload benchmark keeps running until you stop it manually or it hits an error. How long it runs then depends on the dataset size and RocksDB's write throughput; with a small dataset or slow writes, it can take a very long time to finish.

So when running benchmark.sh, it is best to bound the run with --duration or --num. That avoids overly long runs or oversized datasets; tune the parameters to your actual setup for more meaningful results.

For example:

./tools/benchmark.sh bulkload --num=10000000 --value_size=100 --duration=180

In this command, --duration=180 caps the run at 180 seconds, --num=10000000 sets the dataset to 10 million key-value pairs, and --value_size=100 makes each value 100 bytes. Note that longer runs mean more load and resource usage, so adjust these to your situation.

Below is my own run. The first time I set neither a time limit nor a dataset size, and it ran for a very long time before I killed it with Ctrl+C; only then did it occur to me to bound the time, which is how I found the duration flag.

mkdir /tmp/rocksdb
export DB_DIR=/tmp/rocksdb/db
export WAL_DIR=/tmp/rocksdb/wal
export TEMP=/tmp/rocksdb/tmp
export OUTPUT_DIR=/tmp/rocksdb/output

⚡ 04/13|11:40:15  rocksdb   6.29.fb  ./tools/benchmark.sh bulkload
===== Benchmark =====
Starting bulkload (ID: ) at Thu Apr 13 11:41:23 CST 2023
Bulk loading 8000000000 random keys
./db_bench --benchmarks=fillrandom --use_existing_db=0 --disable_auto_compactions=1 --sync=0 --max_background_compactions=16 --max_write_buffer_number=8 --allow_concurrent_memtable_write=false --max_background_flushes=7 --level0_file_num_compaction_trigger=10485760 --level0_slowdown_writes_trigger=10485760 --level0_stop_writes_trigger=10485760 --db=/tmp/rocksdb/db --wal_dir=/tmp/rocksdb/wal --num=8000000000 --num_levels=6 --key_size=20 --value_size=400 --block_size=8192 --cache_size=17179869184 --cache_numshardbits=6 --compression_max_dict_bytes=0 --compression_ratio=0.5 --compression_type=zstd --level_compaction_dynamic_level_bytes=true --bytes_per_sync=8388608 --cache_index_and_filter_blocks=0 --pin_l0_filter_and_index_blocks_in_cache=1 --benchmark_write_rate_limit=0 --hard_rate_limit=3 --rate_limit_delay_max_milliseconds=1000000 --write_buffer_size=134217728 --target_file_size_base=134217728 --max_bytes_for_level_base=1073741824 --verify_checksum=1 --delete_obsolete_files_period_micros=62914560 --max_bytes_for_level_multiplier=8 --statistics=0 --stats_per_interval=1 --stats_interval_seconds=60 --histogram=1 --memtablerep=skip_list --bloom_bits=10 --open_files=-1 --threads=1 --memtablerep=vector --allow_concurrent_memtable_write=false --disable_wal=1 --seed=1681357283 2>&1 | tee -a /tmp/rocksdb/output/benchmark_bulkload_fillrandom.log
RocksDB: version 6.29
Date: Thu Apr 13 11:41:23 2023
CPU: 6 * Intel(R) Core(TM) i5-8500 CPU @ 3.00GHz
CPUCache: 9216 KB
2023/04/13-11:42:24 ... thread 0: (25745000,25745000) ops and (428780.8,428780.8) ops/second in (60.042332,60.042332) seconds

** Compaction Stats [default] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
L0 77/0 4.20 GB 4.2 0.0 0.0 0.0 4.2 4.2 0.0 1.0 0.0 13.4 320.08 109.05 77 4.157 0 0 0.0 0.0
Sum 77/0 4.20 GB 0.0 0.0 0.0 0.0 4.2 4.2 0.0 1.0 0.0 13.4 320.08 109.05 77 4.157 0 0 0.0 0.0
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 4.2 4.2 0.0 1.0 0.0 13.4 320.08 109.05 77 4.157 0 0 0.0 0.0

** Compaction Stats [default] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
High 0/0 0.00 KB 0.0 0.0 0.0 0.0 4.2 4.2 0.0 0.0 0.0 13.4 320.08 109.05 77 4.157 0 0 0.0 0.0

# lots of log output

^C

⚡ 04/13|15:01:29  rocksdb   6.29.fb  ./tools/benchmark.sh bulkload --num=10000000 --value_size=100 --duration=180
===== Benchmark =====
Starting bulkload (ID: ) at Thu Apr 13 15:01:40 CST 2023
Bulk loading 8000000000 random keys
./db_bench --benchmarks=fillrandom --use_existing_db=0 --disable_auto_compactions=1 --sync=0 --max_background_compactions=16 --max_write_buffer_number=8 --allow_concurrent_memtable_write=false --max_background_flushes=7 --level0_file_num_compaction_trigger=10485760 --level0_slowdown_writes_trigger=10485760 --level0_stop_writes_trigger=10485760 --db=/tmp/rocksdb/db --wal_dir=/tmp/rocksdb/wal --num=8000000000 --num_levels=6 --key_size=20 --value_size=400 --block_size=8192 --cache_size=17179869184 --cache_numshardbits=6 --compression_max_dict_bytes=0 --compression_ratio=0.5 --compression_type=zstd --level_compaction_dynamic_level_bytes=true --bytes_per_sync=8388608 --cache_index_and_filter_blocks=0 --pin_l0_filter_and_index_blocks_in_cache=1 --benchmark_write_rate_limit=0 --hard_rate_limit=3 --rate_limit_delay_max_milliseconds=1000000 --write_buffer_size=134217728 --target_file_size_base=134217728 --max_bytes_for_level_base=1073741824 --verify_checksum=1 --delete_obsolete_files_period_micros=62914560 --max_bytes_for_level_multiplier=8 --statistics=0 --stats_per_interval=1 --stats_interval_seconds=60 --histogram=1 --memtablerep=skip_list --bloom_bits=10 --open_files=-1 --num=10000000 --value_size=100 --duration=180 --threads=1 --memtablerep=vector --allow_concurrent_memtable_write=false --disable_wal=1 --seed=1681369300 2>&1 | tee -a /tmp/rocksdb/output/benchmark_bulkload_fillrandom.log
RocksDB: version 6.29
Date: Thu Apr 13 15:01:47 2023
CPU: 6 * Intel(R) Core(TM) i5-8500 CPU @ 3.00GHz
CPUCache: 9216 KB
2023/04/13-15:02:48 ... thread 0: (31184000,31184000) ops and (519717.6,519717.6) ops/second in (60.001818,60.001818) seconds

** Compaction Stats [default] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
L0 25/0 1.29 GB 1.3 0.0 0.0 0.0 1.3 1.3 0.0 1.0 0.0 4.5 294.22 54.30 25 11.769 0 0 0.0 0.0
Sum 25/0 1.29 GB 0.0 0.0 0.0 0.0 1.3 1.3 0.0 1.0 0.0 4.5 294.22 54.30 25 11.769 0 0 0.0 0.0
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 1.3 1.3 0.0 1.0 0.0 4.5 294.22 54.30 25 11.769 0 0 0.0 0.0

** Compaction Stats [default] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
High 0/0 0.00 KB 0.0 0.0 0.0 0.0 1.3 1.3 0.0 0.0 0.0 4.5 294.22 54.30 25 11.769 0 0 0.0 0.0

Blob file count: 0, total size: 0.0 GB

Uptime(secs): 60.5 total, 57.0 interval
Flush(GB): cumulative 1.288, interval 1.288
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 1.29 GB write, 21.80 MB/s write, 0.00 GB read, 0.00 MB/s read, 294.2 seconds
Interval compaction: 1.29 GB write, 23.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 294.2 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 23 memtable_slowdown, interval 23 total count
Block cache LRUCache@0x55f687960330#2714 capacity: 16.00 GB collections: 1 last_copies: 1 last_secs: 0.00024 secs_since: 61
Block cache entry stats(count,size,portion): Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** DB Stats **
Uptime(secs): 60.5 total, 57.0 interval
Cumulative writes: 0 writes, 31M keys, 0 commit groups, 0.0 writes per commit group, ingest: 3.92 GB, 66.35 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:35.883 H:M:S, 59.3 percent
Interval writes: 0 writes, 29M keys, 0 commit groups, 0.0 writes per commit group, ingest: 3810.79 MB, 66.85 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:35.883 H:M:S, 63.0 percent

2023/04/13-15:03:48 ... thread 0: (27315000,58499000) ops and (455241.6,487479.8) ops/second in (60.001110,120.002928) seconds

** Compaction Stats [default] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
L0 53/0 2.73 GB 2.7 0.0 0.0 0.0 2.8 2.8 0.0 1.0 0.0 4.1 698.07 115.65 54 12.927 0 0 0.0 0.0
Sum 53/0 2.73 GB 0.0 0.0 0.0 0.0 2.8 2.8 0.0 1.0 0.0 4.1 698.07 115.65 54 12.927 0 0 0.0 0.0
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 1.5 1.5 0.0 1.0 0.0 3.8 403.86 61.34 29 13.926 0 0 0.0 0.0

** Compaction Stats [default] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
High 0/0 0.00 KB 0.0 0.0 0.0 0.0 2.8 2.8 0.0 0.0 0.0 4.1 698.07 115.65 54 12.927 0 0 0.0 0.0

Blob file count: 0, total size: 0.0 GB

Uptime(secs): 120.5 total, 60.0 interval
Flush(GB): cumulative 2.785, interval 1.497
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 2.79 GB write, 23.67 MB/s write, 0.00 GB read, 0.00 MB/s read, 698.1 seconds
Interval compaction: 1.50 GB write, 25.54 MB/s write, 0.00 GB read, 0.00 MB/s read, 403.9 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 51 memtable_slowdown, interval 28 total count
Block cache LRUCache@0x55f687960330#2714 capacity: 16.00 GB collections: 1 last_copies: 1 last_secs: 0.00024 secs_since: 121
Block cache entry stats(count,size,portion): Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** DB Stats **
Uptime(secs): 120.5 total, 60.0 interval
Cumulative writes: 0 writes, 58M keys, 0 commit groups, 0.0 writes per commit group, ingest: 7.35 GB, 62.50 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:01:17.887 H:M:S, 64.6 percent
Interval writes: 0 writes, 27M keys, 0 commit groups, 0.0 writes per commit group, ingest: 3516.70 MB, 58.61 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:42.004 H:M:S, 70.0 percent

2023/04/13-15:04:48 ... thread 0: (27837000,86336000) ops and (463946.1,479635.3) ops/second in (60.000503,180.003431) seconds

** Compaction Stats [default] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
L0 82/0 4.23 GB 4.2 0.0 0.0 0.0 4.2 4.2 0.0 1.0 0.0 4.0 1076.38 170.62 82 13.127 0 0 0.0 0.0
Sum 82/0 4.23 GB 0.0 0.0 0.0 0.0 4.2 4.2 0.0 1.0 0.0 4.0 1076.38 170.62 82 13.127 0 0 0.0 0.0
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 1.4 1.4 0.0 1.0 0.0 3.9 378.31 54.97 28 13.511 0 0 0.0 0.0

** Compaction Stats [default] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
High 0/0 0.00 KB 0.0 0.0 0.0 0.0 4.2 4.2 0.0 0.0 0.0 4.0 1076.38 170.62 82 13.127 0 0 0.0 0.0

Blob file count: 0, total size: 0.0 GB

Uptime(secs): 180.5 total, 60.0 interval
Flush(GB): cumulative 4.229, interval 1.444
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 4.23 GB write, 23.99 MB/s write, 0.00 GB read, 0.00 MB/s read, 1076.4 seconds
Interval compaction: 1.44 GB write, 24.65 MB/s write, 0.00 GB read, 0.00 MB/s read, 378.3 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 79 memtable_slowdown, interval 28 total count
Block cache LRUCache@0x55f687960330#2714 capacity: 16.00 GB collections: 1 last_copies: 1 last_secs: 0.00024 secs_since: 181
Block cache entry stats(count,size,portion): Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** DB Stats **
Uptime(secs): 180.5 total, 60.0 interval
Cumulative writes: 0 writes, 86M keys, 0 commit groups, 0.0 writes per commit group, ingest: 10.85 GB, 61.58 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:02:1.031 H:M:S, 67.0 percent
Interval writes: 0 writes, 27M keys, 0 commit groups, 0.0 writes per commit group, ingest: 3583.90 MB, 59.73 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:43.144 H:M:S, 71.9 percent

Initializing RocksDB Options from the specified file
Initializing RocksDB Options from command-line flags
Keys: 20 bytes each (+ 0 bytes user-defined timestamp)
Values: 100 bytes each (50 bytes after compression)
Entries: 10000000
Prefix: 0 bytes
Keys per prefix: 0
RawSize: 1144.4 MB (estimated)
FileSize: 667.6 MB (estimated)
Write rate: 0 bytes/second
Read rate: 0 ops/second
Compression: ZSTD
Compression sampling rate: 0
Memtablerep: VectorRepFactory
Perf Level: 1
WARNING: Assertions are enabled; benchmarks unnecessarily slow
------------------------------------------------
Initializing RocksDB Options from the specified file
Initializing RocksDB Options from command-line flags
DB path: [/tmp/rocksdb/db]
fillrandom : 2.085 micros/op 479677 ops/sec; 54.9 MB/s
Microseconds per write:
Count: 86346999 Average: 2.0847 StdDev: 40.44
Min: 0 Median: 0.5086 Max: 30633
Percentiles: P50: 0.51 P75: 0.76 P99: 1.81 P99.9: 961.81 P99.99: 1266.70
------------------------------------------------------
[ 0, 1 ] 84890082 98.313% 98.313% ####################
( 1, 2 ] 735804 0.852% 99.165%
( 2, 3 ] 298782 0.346% 99.511%
( 3, 4 ] 52267 0.061% 99.571%
( 4, 6 ] 11956 0.014% 99.585%
( 6, 10 ] 4381 0.005% 99.590%
( 10, 15 ] 13884 0.016% 99.606%
( 15, 22 ] 141096 0.163% 99.770%
( 22, 34 ] 51510 0.060% 99.829%
( 34, 51 ] 23390 0.027% 99.857%
( 51, 76 ] 9056 0.010% 99.867%
( 76, 110 ] 3030 0.004% 99.871%
( 110, 170 ] 1135 0.001% 99.872%
( 170, 250 ] 403 0.000% 99.872%
( 250, 380 ] 217 0.000% 99.873%
( 380, 580 ] 114 0.000% 99.873%
( 580, 870 ] 144 0.000% 99.873%
( 870, 1300 ] 109602 0.127% 100.000%
( 1300, 1900 ] 58 0.000% 100.000%
( 1900, 2900 ] 18 0.000% 100.000%
( 2900, 4400 ] 22 0.000% 100.000%
( 4400, 6600 ] 16 0.000% 100.000%
( 6600, 9900 ] 16 0.000% 100.000%
( 9900, 14000 ] 9 0.000% 100.000%
( 14000, 22000 ] 5 0.000% 100.000%
( 22000, 33000 ] 2 0.000% 100.000%

Compacting...
./db_bench --benchmarks=compact --use_existing_db=1 --disable_auto_compactions=1 --sync=0 --level0_file_num_compaction_trigger=4 --level0_stop_writes_trigger=20 --max_background_compactions=16 --max_write_buffer_number=8 --max_background_flushes=7 --db=/tmp/rocksdb/db --wal_dir=/tmp/rocksdb/wal --num=8000000000 --num_levels=6 --key_size=20 --value_size=400 --block_size=8192 --cache_size=17179869184 --cache_numshardbits=6 --compression_max_dict_bytes=0 --compression_ratio=0.5 --compression_type=zstd --level_compaction_dynamic_level_bytes=true --bytes_per_sync=8388608 --cache_index_and_filter_blocks=0 --pin_l0_filter_and_index_blocks_in_cache=1 --benchmark_write_rate_limit=0 --hard_rate_limit=3 --rate_limit_delay_max_milliseconds=1000000 --write_buffer_size=134217728 --target_file_size_base=134217728 --max_bytes_for_level_base=1073741824 --verify_checksum=1 --delete_obsolete_files_period_micros=62914560 --max_bytes_for_level_multiplier=8 --statistics=0 --stats_per_interval=1 --stats_interval_seconds=60 --histogram=1 --memtablerep=skip_list --bloom_bits=10 --open_files=-1 --num=10000000 --value_size=100 --duration=180 --threads=1 2>&1 | tee -a /tmp/rocksdb/output/benchmark_bulkload_compact.log
RocksDB: version 6.29
Date: Thu Apr 13 15:05:02 2023
CPU: 6 * Intel(R) Core(TM) i5-8500 CPU @ 3.00GHz
CPUCache: 9216 KB
Initializing RocksDB Options from the specified file
Initializing RocksDB Options from command-line flags
Keys: 20 bytes each (+ 0 bytes user-defined timestamp)
Values: 100 bytes each (50 bytes after compression)
Entries: 10000000
Prefix: 0 bytes
Keys per prefix: 0
RawSize: 1144.4 MB (estimated)
FileSize: 667.6 MB (estimated)
Write rate: 0 bytes/second
Read rate: 0 ops/second
Compression: ZSTD
Compression sampling rate: 0
Memtablerep: SkipListFactory
Perf Level: 1
WARNING: Assertions are enabled; benchmarks unnecessarily slow
------------------------------------------------
DB path: [/tmp/rocksdb/db]
compact : 60822353.000 micros/op 0 ops/sec;
Completed bulkload (ID: ) in 263 seconds
ops/sec mb/sec Size-GB L0_GB Sum_GB W-Amp W-MB/s usec/op p50 p75 p99 p99.9 p99.99 Uptime Stall-time Stall% Test Date Version Job-ID
479677 54.9 0.0 4.2 4.2 1.0 23.8 2.1 0.5 0.8 2 962 1267 180 00:02:1.031 67.0 bulkload 2023-04-13T15:01:47.000+08:00 6.29

Making sense of the test report

In RocksDB's db_bench tool, a report is generated automatically when a run finishes. It contains a number of metrics for judging RocksDB's read/write performance. Some common ones:

  • ops/sec: operations per second; how many read/write operations RocksDB can execute each second.
  • MB/s: data throughput; how much data RocksDB can read or write per second.
  • Latency (at the 50th/90th/99th percentile): response time; the pN latency is the time within which N% of operations complete.
  • Stalls (count): how many times RocksDB had to pause for some reason.

Each row of the report corresponds to one benchmark run, listing RocksDB's metrics during that run; the last few lines summarize the results with statistics such as averages and standard deviations.

When reading the report, keep in mind that different benchmark modes may report different metrics, so judge them in context. Results also depend on many factors (dataset size, operation mix, hardware), so run the benchmark several times and weigh the numbers together.

Trying out some advanced features

Something to dig into later.

Other (cleanup)

Note: this also removes the db_bench binary, so use it with care.

# Cleans up the many *.d files produced by the build; does not affect using the librocksdb.a library
make clean

Reference links

Installing and Debugging RocksDB on Ubuntu