The test program is the same one used before:
http://blog.itpub.net/29254281/viewspace-1841299/
This round was run on my home machine, which has somewhat better specs than the one at work.
MySQL configuration:
innodb_buffer_pool_size=512m
innodb_flush_log_at_trx_commit=0
sync_binlog=0
innodb_support_xa=0
log_bin=master
Version: 5.6.14
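These settings trade durability for load speed. As a sanity check before each run (my addition, not part of the original post), the effective values can be confirmed with:

SHOW VARIABLES
WHERE Variable_name IN ('innodb_buffer_pool_size',
                        'innodb_flush_log_at_trx_commit',
                        'sync_binlog',
                        'innodb_support_xa',
                        'log_bin');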
Each test inserts 2,000,000 rows.
1. Using the Load File interface (LOAD DATA INFILE), a regular table with 4 indexes, committing every 100 records (a sketch of one load batch follows the DDL below)
create table chat_message(
    id bigint primary key auto_increment,
    src_userid bigint not null,
    target_userid bigint not null,
    message varchar(200),
    ts timestamp not null default current_timestamp,
    s1 int,
    s2 int,
    s3 int,
    s4 int
);

create index inx_1 on chat_message(src_userid,target_userid,s1,ts);
create index inx_2 on chat_message(src_userid,target_userid,s2,ts);
create index inx_3 on chat_message(src_userid,target_userid,s3,ts);
create index inx_4 on chat_message(src_userid,target_userid,s4,ts);
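The actual loader is the program linked above; as a rough sketch only (the file name and batch driving logic are my assumptions, not the author's code), each batch of 100 rows could be loaded and committed like this:

SET autocommit = 0;

-- load one batch of 100 rows from a CSV file (path is hypothetical);
-- id and ts are omitted so auto_increment / default current_timestamp apply
LOAD DATA LOCAL INFILE '/tmp/chat_message_batch.csv'
INTO TABLE chat_message
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(src_userid, target_userid, message, s1, s2, s3, s4);

COMMIT;  -- commit after every 100 records, as in the test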
Test result: 10,638 inserts per second
2. Using the Load File interface, with range partitioning by date
create table chat_message(
    id bigint auto_increment,
    src_userid bigint not null,
    target_userid bigint not null,
    message varchar(200),
    ts timestamp not null default current_timestamp,
    s1 int,
    s2 int,
    s3 int,
    s4 int,
    primary key (id,ts)
)
partition by range(UNIX_TIMESTAMP(ts))
(
    partition p1 VALUES LESS THAN(UNIX_TIMESTAMP('2015-10-01 00:00:00')),
    partition p2 VALUES LESS THAN(UNIX_TIMESTAMP('2015-11-01 00:00:00')),
    partition p3 VALUES LESS THAN(UNIX_TIMESTAMP('2015-12-01 00:00:00')),
    partition p4 VALUES LESS THAN(UNIX_TIMESTAMP('2016-01-01 00:00:00')),
    partition p5 VALUES LESS THAN(UNIX_TIMESTAMP('2016-02-01 00:00:00')),
    partition p6 VALUES LESS THAN(UNIX_TIMESTAMP('2016-03-01 00:00:00')),
    partition p7 VALUES LESS THAN(UNIX_TIMESTAMP('2016-04-01 00:00:00')),
    partition p8 VALUES LESS THAN(UNIX_TIMESTAMP('2016-05-01 00:00:00')),
    partition p9 VALUES LESS THAN(UNIX_TIMESTAMP('2016-06-01 00:00:00')),
    partition p10 VALUES LESS THAN(UNIX_TIMESTAMP('2016-07-01 00:00:00')),
    partition p11 VALUES LESS THAN(UNIX_TIMESTAMP('2016-08-01 00:00:00')),
    partition p12 VALUES LESS THAN(UNIX_TIMESTAMP('2016-09-01 00:00:00'))
);

create index inx_1 on chat_message(src_userid,target_userid,s1,ts);
create index inx_2 on chat_message(src_userid,target_userid,s2,ts);
create index inx_3 on chat_message(src_userid,target_userid,s3,ts);
create index inx_4 on chat_message(src_userid,target_userid,s4,ts);
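Not part of the original benchmark, but a quick way to confirm the range partitions behave as expected in 5.6 is EXPLAIN PARTITIONS (the date range below is just an example value):

EXPLAIN PARTITIONS
SELECT *
FROM chat_message
WHERE ts >= '2015-11-15 00:00:00'
  AND ts <  '2015-11-16 00:00:00';
-- the "partitions" column should list only p3 (the November partition),
-- showing that filtering on ts prunes to a single range partition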
Test result: 10,989 inserts per second
3. On top of the date range partitioning, add 4 subpartitions
create table chat_message(
    id bigint auto_increment,
    src_userid bigint not null,
    target_userid bigint not null,
    message varchar(200),
    ts timestamp not null default current_timestamp,
    s1 int,
    s2 int,
    s3 int,
    s4 int,
    primary key (id,ts,src_userid)
)
partition by range(UNIX_TIMESTAMP(ts))
subpartition by key(src_userid)
subpartitions 4(
    partition p201506 VALUES LESS THAN(UNIX_TIMESTAMP('2015-10-01 00:00:00')),
    partition p201507 VALUES LESS THAN(UNIX_TIMESTAMP('2015-11-01 00:00:00')),
    partition p201508 VALUES LESS THAN(UNIX_TIMESTAMP('2015-12-01 00:00:00'))
);

create index inx_1 on chat_message(src_userid,target_userid,s1,ts);
create index inx_2 on chat_message(src_userid,target_userid,s2,ts);
create index inx_3 on chat_message(src_userid,target_userid,s3,ts);
create index inx_4 on chat_message(src_userid,target_userid,s4,ts);
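To see how the loaded rows spread across the KEY subpartitions (my check, not part of the original test), information_schema can be queried after a load:

SELECT partition_name,
       subpartition_name,
       table_rows                     -- approximate row count per subpartition
FROM   information_schema.partitions
WHERE  table_schema = DATABASE()
  AND  table_name   = 'chat_message'
ORDER  BY partition_name, subpartition_ordinal_position;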
Test result: 8,810 inserts per second
4. On top of the date range partitioning, with 16 subpartitions
Test result: 6,688 inserts per second
5. On top of the date range partitioning, with 64 subpartitions
Test result: 8,368 inserts per second
Conclusions:
1. Adding hash subpartitions on top of the date range partitioning noticeably reduces the insert rate.
2. As the amount of data already in the table grows, the insert rate drops sharply. For example, with 2,000,000 rows already loaded, inserting another 2,000,000 rows drops the rate from roughly 10,000 inserts per second to roughly 1,000.
MySQL bulk data loading performance comparison (partitioning)
Original post: http://blog.itpub.net/29254281/viewspace-1843669/