[#12293] YSQL: Use better defaults for copy command
Summary:
With AsyncFlush landed in master (D16005), we adjust the default values to achieve better COPY performance. Note that we cannot set ROWS_PER_TRANSACTION too high, as doing so will OOM the tserver; once packed columns are enabled, we will raise the default further (see D16717).

If ROWS_PER_TRANSACTION is set too high, COPY fails with an error such as:

```
ysqlsh:/home/centos/gen_table/insert_script.sql:6: ERROR:  Remote error: Service unavailable (yb/rpc/yb_rpc.cc:165): Call rejected due to memory pressure: Call yb.tserver.PgClientService.Perform 172.151.28.7:54138 => 172.151.28.7:9100 (request call id 2301)
```
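
For illustration, a minimal sketch of a bulk load that sets ROWS_PER_TRANSACTION explicitly, assuming the standard YSQL `COPY ... WITH (...)` option syntax; the table and file names below are placeholders:

```sql
-- Hypothetical bulk load; table and file names are placeholders.
-- Commit every 20000 rows so no single transaction grows large enough
-- to put the tserver under memory pressure.
COPY my_table FROM '/home/centos/gen_table/data.csv'
    WITH (FORMAT csv, ROWS_PER_TRANSACTION 20000);
```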

Test Plan:
jenkins

Run and test on the portal (with packed columns disabled).
Copying 1M rows with 2 CPUs:
```
512 ysql_session_max_batch_size, 20K rows per transaction:  01:50.504
3072 ysql_session_max_batch_size, 20K rows per transaction: 01:45.250
```

Reviewers: smishra, rthallam

Reviewed By: rthallam

Subscribers: yql

Differential Revision: https://phabricator.dev.yugabyte.com/D16752
lnguyen-yugabyte committed Apr 29, 2022
1 parent 5dc5d72 commit d95a159
Showing 2 changed files with 2 additions and 2 deletions.
2 changes: 1 addition & 1 deletion src/postgres/src/include/commands/copy.h
```
@@ -19,7 +19,7 @@
 #include "parser/parse_node.h"
 #include "tcop/dest.h"
 
-#define DEFAULT_BATCH_ROWS_PER_TRANSACTION 1000
+#define DEFAULT_BATCH_ROWS_PER_TRANSACTION 20000
 
 /* CopyStateData is private in commands/copy.c */
 typedef struct CopyStateData *CopyState;
```
2 changes: 1 addition & 1 deletion src/yb/yql/pggate/pggate_flags.cc
```
@@ -53,7 +53,7 @@ DEFINE_uint64(ysql_prefetch_limit, 1024,
 DEFINE_double(ysql_backward_prefetch_scale_factor, 0.0625 /* 1/16th */,
     "Scale factor to reduce ysql_prefetch_limit for backward scan");
 
-DEFINE_uint64(ysql_session_max_batch_size, 512,
+DEFINE_uint64(ysql_session_max_batch_size, 3072,
     "Use session variable ysql_session_max_batch_size instead. "
     "Maximum batch size for buffered writes between PostgreSQL server and YugaByte DocDB "
    "services");
```
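
As the flag description above notes, the batch size can also be adjusted per session; a hedged sketch, assuming `ysql_session_max_batch_size` is exposed as a YSQL configuration parameter:

```sql
-- Assumes ysql_session_max_batch_size is available as a session variable,
-- per the flag description above.
SHOW ysql_session_max_batch_size;        -- 3072 with the new default
SET ysql_session_max_batch_size = 512;   -- revert to the old value for this session
```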
