Why is extracting values from PostgreSQL JSON so slow?


1. Background

While analyzing a slow SQL query, against a table of 800k+ rows.

What stood out: one of the selected columns, info, is of type jsonb, and it was read as info::json->'length' as length.

With the same filter conditions, the query was 3.3x slower when this column was selected than when it was not.

So the JSON value extraction appeared to be what was dragging down query performance.

There are several ways to extract a field from a jsonb column (listed below). What is the difference between them, and how do they affect performance?

  • info::json->'length'
  • info::jsonb->'length'
  • info::json->>'length'
  • info::jsonb->>'length'
  • info->'length'
  • info->'length'
  • info->>'length'
  • info->>'length'

2. Comparison

2.1 Output type comparison

Query the result type of each variant:

select
info::json->'length' AS "info::json->", pg_typeof(info::json->'length' ) ,
info::jsonb->'length' AS "info::jsonb->" , pg_typeof(info::jsonb->'length' ),
info::json->>'length' AS "info::json->>" , pg_typeof(info::json->>'length' ),
info::jsonb->>'length' AS "info::jsonb->>" , pg_typeof(info::jsonb->>'length'),
info->'length' AS "info->" , pg_typeof(info->'length' ),
info->'length' AS "info->" , pg_typeof(info->'length' ),
info->>'length' AS "info->>" , pg_typeof(info->>'length' ),
info->>'length' AS "info->>" , pg_typeof(info->>'length' )
from t_test_json limit 1;

Result

 info::json-> | pg_typeof | info::jsonb-> | pg_typeof | info::json->> | pg_typeof | info::jsonb->> | pg_typeof | info-> | pg_typeof | info-> | pg_typeof | info->> | pg_typeof | info->> | pg_typeof
--------------+-----------+---------------+-----------+---------------+-----------+----------------+-----------+--------+-----------+--------+-----------+---------+-----------+---------+-----------
 123.9        | json      | 123.9         | jsonb     | 123.9         | text      | 123.9          | text      | 123.9  | jsonb     | 123.9  | jsonb     | 123.9   | text      | 123.9   | text

Summary

  • ->> returns text (see the example after this list)
  • The type returned by -> depends on the type of its left operand; for example, since info is jsonb, info->'length' is jsonb
  • ::json and ::jsonb perform a type cast.
  • info is already jsonb, so info::jsonb is a redundant cast; whether it still affects performance is verified below
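
One practical consequence of ->> returning text: comparing or sorting the extracted value numerically requires an explicit cast. A minimal sketch against the t_test_json table from section 5.4 (column and key names follow that example):

-- ->> yields text, so cast before a numeric comparison
select id, (info->>'length')::numeric as length
from t_test_json
where (info->>'length')::numeric > 100;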

2.2 Performance comparison

jihite=> EXPLAIN ANALYSE
jihite-> select
jihite-> info::json->'length' AS "info::json->", pg_typeof(info::json->'length' )
jihite-> from t_test_json limit 1;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------
Limit (cost=0.00..0.04 rows=1 width=36) (actual time=0.028..0.028 rows=1 loops=1)
-> Seq Scan on t_test_json (cost=0.00..30.62 rows=750 width=36) (actual time=0.027..0.027 rows=1 loops=1)
Planning time: 0.056 ms
Execution time: 0.047 ms
(4 rows)

jihite=> EXPLAIN ANALYSE
jihite-> select
jihite-> info::jsonb->'length' AS "info::jsonb->" , pg_typeof(info::jsonb->'length' )
jihite-> from t_test_json limit 1
jihite-> ;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------
Limit (cost=0.00..0.03 rows=1 width=36) (actual time=0.017..0.017 rows=1 loops=1)
-> Seq Scan on t_test_json (cost=0.00..23.12 rows=750 width=36) (actual time=0.015..0.015 rows=1 loops=1)
Planning time: 0.053 ms
Execution time: 0.031 ms
(4 rows)

jihite=> EXPLAIN ANALYSE
jihite-> select
jihite-> info::jsonb->'length' AS "info::jsonb->" , pg_typeof(info::jsonb->'length' )
jihite-> from t_test_json limit 1;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------
Limit (cost=0.00..0.03 rows=1 width=36) (actual time=0.010..0.010 rows=1 loops=1)
-> Seq Scan on t_test_json (cost=0.00..23.12 rows=750 width=36) (actual time=0.009..0.009 rows=1 loops=1)
Planning time: 0.037 ms
Execution time: 0.022 ms
(4 rows)

jihite=> EXPLAIN ANALYSE
jihite-> select
jihite-> info::json->>'length' AS "info::json->>" , pg_typeof(info::json->>'length' )
jihite-> from t_test_json limit 1;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------
Limit (cost=0.00..0.04 rows=1 width=36) (actual time=0.026..0.027 rows=1 loops=1)
-> Seq Scan on t_test_json (cost=0.00..30.62 rows=750 width=36) (actual time=0.025..0.025 rows=1 loops=1)
Planning time: 0.056 ms
Execution time: 0.046 ms
(4 rows)

jihite=> EXPLAIN ANALYSE
jihite-> select
jihite-> info::jsonb->>'length' AS "info::jsonb->>" , pg_typeof(info::jsonb->>'length')
jihite-> from t_test_json limit 1;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------
Limit (cost=0.00..0.03 rows=1 width=36) (actual time=0.012..0.012 rows=1 loops=1)
-> Seq Scan on t_test_json (cost=0.00..23.12 rows=750 width=36) (actual time=0.011..0.011 rows=1 loops=1)
Planning time: 0.053 ms
Execution time: 0.029 ms
(4 rows)

jihite=> EXPLAIN ANALYSE
jihite-> select
jihite-> info->'length' AS "info->" , pg_typeof(info->'length' )
jihite-> from t_test_json limit 1;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------
Limit (cost=0.00..0.03 rows=1 width=36) (actual time=0.014..0.014 rows=1 loops=1)
-> Seq Scan on t_test_json (cost=0.00..23.12 rows=750 width=36) (actual time=0.013..0.013 rows=1 loops=1)
Planning time: 0.052 ms
Execution time: 0.030 ms
(4 rows)

jihite=> EXPLAIN ANALYSE
jihite-> select
jihite-> info->'length' AS "info->" , pg_typeof(info->'length' )
jihite-> from t_test_json limit 1;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------
Limit (cost=0.00..0.03 rows=1 width=36) (actual time=0.013..0.013 rows=1 loops=1)
-> Seq Scan on t_test_json (cost=0.00..23.12 rows=750 width=36) (actual time=0.012..0.012 rows=1 loops=1)
Planning time: 0.051 ms
Execution time: 0.029 ms
(4 rows)

jihite=> EXPLAIN ANALYSE
jihite-> select
jihite-> info->>'length' AS "info->>" , pg_typeof(info->>'length' )
jihite-> from t_test_json limit 1;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------
Limit (cost=0.00..0.03 rows=1 width=36) (actual time=0.012..0.013 rows=1 loops=1)
-> Seq Scan on t_test_json (cost=0.00..23.12 rows=750 width=36) (actual time=0.011..0.011 rows=1 loops=1)
Planning time: 0.053 ms
Execution time: 0.030 ms
(4 rows)

jihite=> EXPLAIN ANALYSE
jihite-> select
jihite-> info->>'length' AS "info->>" , pg_typeof(info->>'length' )
jihite-> from t_test_json limit 1;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------
Limit (cost=0.00..0.03 rows=1 width=36) (actual time=0.012..0.013 rows=1 loops=1)
-> Seq Scan on t_test_json (cost=0.00..23.12 rows=750 width=36) (actual time=0.011..0.011 rows=1 loops=1)
Planning time: 0.053 ms
Execution time: 0.029 ms
(4 rows)

Summary based on execution time (Execution time)

The variants that cast the jsonb column to json (::json) took roughly 0.046 ms, clearly slower than the variants without that cast, which took roughly 0.03 ms or less.

3. Optimization

Change the selected expression from info::json->'length' to info->>'length' to avoid the performance cost of the type cast.
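
A minimal before/after sketch of the rewrite (against the t_test_json table from section 5.4):

-- before: casts the jsonb value to json on every row
select info::json->'length' as length from t_test_json;

-- after: reads the jsonb value directly and returns text
select info->>'length' as length from t_test_json;

Note that the output type changes from json to text; cast the result back (e.g. ::numeric) if a numeric value is needed.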

4. Open questions

4.1 Does a same-type cast affect performance?

The column is already jsonb; does an explicit ::jsonb cast still cost anything, or is it optimized away at planning time?

Benchmarks against a large data set suggest the cast does have an impact, but a small one.
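
A simple way to check this on your own data (a sketch; absolute timings will vary):

explain analyse select info->>'length' from t_test_json;
explain analyse select info::jsonb->>'length' from t_test_json;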

4.2 How to analyze the cost of a function

EXPLAIN ANALYZE mainly shows how indexes and plan nodes affect performance; how can the cost of a specific function call be examined?

5. Appendix

5.1 Differences between json and jsonb

  • jsonb generally outperforms json for reads
  • jsonb supports indexing
  • [Biggest difference: efficiency] jsonb parses and normalizes the data on write, so writes are relatively slower; json stores the original text verbatim (including insignificant whitespace)

Storing JSON data as jsonb is recommended.
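
A small sketch of the behavioral difference (json preserves the input text verbatim, jsonb normalizes whitespace and duplicate keys), plus a GIN index on the jsonb column from section 5.4 (the index name is arbitrary):

select '{"a": 1,   "a": 2}'::json  as as_json,
       '{"a": 1,   "a": 2}'::jsonb as as_jsonb;
-- as_json keeps the original text; as_jsonb keeps only the last value of "a"

create index idx_t_test_json_info on t_test_json using gin (info);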

5.2 PostgreSQL function for checking a value's type

pg_typeof()
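
For example:

select pg_typeof(info), pg_typeof(info->'length'), pg_typeof(info->>'length')
from t_test_json limit 1;
-- returns: jsonb | jsonb | text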

5.3 Performance analysis command

If a SQL statement runs slowly and you want to know what is happening and how to optimize it, EXPLAIN ANALYSE shows the steps the database goes through to execute the statement and how long each takes, which helps with performance tuning.

Key figure:

Execution time: *** ms is the actual execution time of the SQL; it does not include the time spent generating the query plan (Planning time).
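
Basic usage (a sketch; the BUFFERS option is an optional extra not used in this article that also reports buffer access). Note that EXPLAIN ANALYSE actually executes the statement:

explain analyse select info->>'length' from t_test_json limit 1;
explain (analyze, buffers) select info->>'length' from t_test_json limit 1;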

5.4 Table definition used in the examples

-- table definition

create table t_test_json
(
id bigserial not null PRIMARY KEY,
task character varying not null,
info jsonb not null,
create_time timestamp not null default current_timestamp
);

-- test data

insert into t_test_json(task, info) values('1', '{"length": 123.9, "avatar": "avatar_url", "tags": ["python", "golang", "db"]}');
insert into t_test_json(task, info) values('2', '{"length": 123.9, "avatar": "avatar_url", "tags": ["python", "golang", "db"]}');
insert into t_test_json(task, info) values('3', '{"length": 123.9, "avatar": "avatar_url", "tags": ["python", "golang", "db"]}');
insert into t_test_json(task, info) values('4', '{"length": 123.9, "avatar": "avatar_url", "tags": ["python", "golang", "db"]}');
insert into t_test_json(task, info) values('5', '{"length": 123.9, "avatar": "avatar_url", "tags": ["python", "golang", "db"]}');
insert into t_test_json(task, info) values('6', '{"length": 123.9, "avatar": "avatar_url", "tags": ["python", "golang", "db"]}');
insert into t_test_json(task, info) values('7', '{"length": 123.9, "avatar": "avatar_url", "tags": ["python", "golang", "db"]}');
insert into t_test_json(task, info) values('8', '{"length": 123.9, "avatar": "avatar_url", "tags": ["python", "golang", "db"]}');
insert into t_test_json(task, info) values('9', '{"length": 123.9, "avatar": "avatar_url", "tags": ["python", "golang", "db"]}');
insert into t_test_json(task, info) values('10', '{"length": 123.9, "avatar": "avatar_url", "tags": ["python", "golang", "db"]}');
insert into t_test_json(task, info) values('11', '{"length": 123.9, "avatar": "avatar_url", "tags": ["python", "golang", "db"]}');
insert into t_test_json(task, info) values('12', '{"length": 123.9, "avatar": "avatar_url", "tags": ["python", "golang", "db"]}');
insert into t_test_json(task, info) values('13', '{"length": 123.9, "avatar": "avatar_url", "tags": ["python", "golang", "db"]}');
insert into t_test_json(task, info) values('14', '{"length": 123.9, "avatar": "avatar_url", "tags": ["python", "golang", "db"]}');
insert into t_test_json(task, info) values('15', '{"length": 123.9, "avatar": "avatar_url", "tags": ["python", "golang", "db"]}');
insert into t_test_json(task, info) values('16', '{"length": 123.9, "avatar": "avatar_url", "tags": ["python", "golang", "db"]}');
insert into t_test_json(task, info) values('17', '{"length": 123.9, "avatar": "avatar_url", "tags": ["python", "golang", "db"]}');
insert into t_test_json(task, info) values('18', '{"length": 123.9, "avatar": "avatar_url", "tags": ["python", "golang", "db"]}');
insert into t_test_json(task, info) values('19', '{"length": 123.9, "avatar": "avatar_url", "tags": ["python", "golang", "db"]}');
insert into t_test_json(task, info) values('20', '{"length": 123.9, "avatar": "avatar_url", "tags": ["python", "golang", "db"]}');
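
The same rows can be generated more compactly with generate_series (an equivalent sketch, not what was actually run):

insert into t_test_json(task, info)
select i::text,
       '{"length": 123.9, "avatar": "avatar_url", "tags": ["python", "golang", "db"]}'::jsonb
from generate_series(1, 20) as i;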

5.5 Benchmark script used in the examples

import time

import psycopg

# connection parameters (fill in before running)
dbname, user, pwd, ip, port = '', '', '', '', '5432'
connection = "dbname=%s user=%s password=%s host=%s port=%s" % (dbname, user, pwd, ip, port)
db = psycopg.connect(connection)
cur = db.cursor()

ss = 0
lens = 20
for i in range(lens):
    s = time.time()
    # scan the table 1000 rows at a time, extracting the json field
    sql = '''select
        id,
        info::json->'length' as length
    from
        t_test_json
    order by id
    offset %s limit 1000''' % (i * 1000)
    # print("sql:", sql)
    cur.execute(sql)
    rev = cur.fetchall()
    e = time.time()
    print("scan:", i, e - s)
    ss += (e - s)

print('avg', ss / lens)