Flask-SQLAlchemy Bulk Insert Performance Test

Performance test: a 4-core MySQL database server, bulk-writing 10,000 rows with 4 fields per row:
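
The snippets below assume a Flask-SQLAlchemy app with a Test model. A minimal sketch of that setup (only the four field names come from the snippets; the column types, connection string, and driver are assumptions) might look like this, with all timing code assumed to run inside the application context:

import datetime
import time

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# Hypothetical MySQL connection string; adjust user/password/host/database.
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+pymysql://user:password@localhost/testdb'
db = SQLAlchemy(app)

class Test(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    blog_name = db.Column(db.String(64))
    blog_url = db.Column(db.String(128))
    blog_desc = db.Column(db.String(256))
    create_time = db.Column(db.DateTime)

with app.app_context():
    db.create_all()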

1. Row-by-row inserts with a plain for loop (this approach fluctuates quite a bit, generally between 60 and 120s)

startTime = time.time()
for i in range(10000):
    db.session.add(
        Test(
            blog_name='阿汤博客{}'.format(i),
            blog_url='https://www.amd5.cn/',
            blog_desc='我是阿汤博客,地址是https://www.amd5.cn/',
            create_time=datetime.datetime.now()
        )
    )
db.session.commit()
endTime = time.time()
diff = round(endTime - startTime, 3)
print("耗时:{}s").format(diff)

Elapsed: 95.468s
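
For reference (not one of the measured variants), collecting the objects first and registering them with db.session.add_all() reads more cleanly but performs roughly the same, because the time goes into flushing 10,000 individually tracked ORM objects at commit, not into the add() calls themselves:

objects = [
    Test(
        blog_name='阿汤博客{}'.format(i),
        blog_url='https://www.amd5.cn/',
        blog_desc='我是阿汤博客,地址是https://www.amd5.cn/',
        create_time=datetime.datetime.now()
    )
    for i in range(10000)
]
db.session.add_all(objects)  # still one INSERT per object at flush time on MySQL
db.session.commit()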

2. Bulk write with bulk_save_objects

startTime = time.time()
db.session.bulk_save_objects(
    [
        Test(
            blog_name='阿汤博客{}'.format(i),
            blog_url='https://www.amd5.cn/',
            blog_desc='我是阿汤博客,地址是https://www.amd5.cn/',
            create_time=datetime.datetime.now()
        )
        for i in range(10000)
    ]
)
db.session.commit()
endTime = time.time()
diff = round(endTime - startTime, 3)
print("耗时:{}s").format(diff)
耗时:0.695s
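
One caveat worth knowing (not measured above): bulk_save_objects takes a return_defaults flag, and enabling it largely cancels the speed-up, because on MySQL the rows then have to be inserted one at a time so the generated primary keys can be fetched back. A sketch, where objects stands for the list of Test instances built above:

# Only enable this if you actually need the new primary keys afterwards.
db.session.bulk_save_objects(objects, return_defaults=True)
db.session.commit()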

3. Bulk write with bulk_insert_mappings

startTime = time.time()
db.session.bulk_insert_mappings(
    Test,
    [
        dict(
            blog_name='阿汤博客{}'.format(i),
            blog_url='https://www.amd5.cn/',
            blog_desc='我是阿汤博客,地址是https://www.amd5.cn/',
            create_time=datetime.datetime.now()
        )
        for i in range(10000)
    ]
)
db.session.commit()
endTime = time.time()
diff = round(endTime - startTime, 3)
print("耗时:{}s").format(diff)

Elapsed: 0.658s

4. Bulk write with a raw Core insert

startTime = time.time()
db.session.execute(
    Test.__table__.insert(),
    [
        {
            "blog_name": '阿汤博客{}'.format(i),
            "blog_url": 'https://www.amd5.cn/'.format(i),
            "blog_desc": '我是阿汤博客,地址是https://www.amd5.cn/'.format(i),
            "create_time": datetime.datetime.now()
         }
        for i in range(10000)
    ]
)
db.session.commit()
endTime = time.time()
diff = round(endTime - startTime, 3)
print("耗时:{}s").format(diff)

Elapsed: 0.434s
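
For completeness, the same Core executemany can also be issued directly against the engine rather than through the session. This is a sketch in SQLAlchemy 1.4+ style, not part of the measurements above; engine.begin() opens a connection and commits automatically when the block exits:

with db.engine.begin() as conn:
    conn.execute(
        Test.__table__.insert(),
        [
            {
                "blog_name": '阿汤博客{}'.format(i),
                "blog_url": 'https://www.amd5.cn/',
                "blog_desc": '我是阿汤博客,地址是https://www.amd5.cn/',
                "create_time": datetime.datetime.now()
            }
            for i in range(10000)
        ]
    )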

Summary: overall, as long as you avoid the row-by-row approach from method 1, bulk writes don't run into any serious performance problems.
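
If you want to rerun the comparison yourself, the repeated start/end timing boilerplate can be folded into a small helper (a sketch, assuming the setup above; rows stands for the list of dicts from method 3):

import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    # Print how long the wrapped block took, in the same format as above.
    start = time.time()
    try:
        yield
    finally:
        print("{}: {}s".format(label, round(time.time() - start, 3)))

# Example:
# with timed("bulk_insert_mappings"):
#     db.session.bulk_insert_mappings(Test, rows)
#     db.session.commit()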
