pgloader is a data loading tool for PostgreSQL, using the COPY command.
Its main advantage over just using COPY or \copy, and over using a
Foreign Data Wrapper, is its transaction behaviour: pgloader keeps a
separate file of rejected data and continues trying to copy the good
data into your database.
The default PostgreSQL behaviour is transactional, which means that any erroneous line in the input data (file or remote database) will stop the bulk load for the whole table.
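The reject-file idea can be sketched in plain shell, outside of pgloader entirely; the validity rule here ("second field must be numeric") and the file name /tmp/rejected.dat are invented for the example:

```shell
# Lines failing a trivial validity check go to a reject file,
# the rest continue toward COPY on stdout.
printf 'a\t1\nb\tX\nc\t3\n' \
  | awk 'BEGIN { FS="\t" }
         $2 ~  /^[0-9]+$/ { print }
         $2 !~ /^[0-9]+$/ { print > "/tmp/rejected.dat" }'
```

pgloader does this batching and retrying in Common Lisp, of course; the point is only that bad rows are set aside rather than aborting the whole load.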
pgloader also implements data reformatting; the main example of that is
the transformation of the MySQL dates 0000-00-00 and 0000-00-00 00:00:00
into the PostgreSQL NULL value (because our calendar never had a year
zero).
pgloader is now a Common Lisp program, tested with the SBCL and CCL implementations and using Quicklisp.
apt-get install sbcl
apt-get install libmysqlclient-dev
wget http://beta.quicklisp.org/quicklisp.lisp
sbcl --load quicklisp.lisp
* (quicklisp-quickstart:install)
* (ql:add-to-init-file)
The current version of the code depends on a recent version of Postmodern not yet found in Quicklisp at the time of this writing:
cd ~/quicklisp/local-projects/
git clone https://github.com/marijnh/Postmodern.git
git clone -b empty-strings-and-nil https://github.com/dimitri/cl-csv.git
git clone http://git.tapoueh.org/git/pgloader.git
Now you can use the #! script or build a self-contained binary
executable file, as shown below. You might have to modify the
pgloader.lisp script, because it's currently hard coded to use
/usr/local/bin/sbcl, and you probably want to change that part:
./pgloader.lisp --help
Each time you run the pgloader command line, it will check that all its
dependencies are installed and compiled, and if that's not the case it
will fetch them from the internet and prepare them (thanks to
Quicklisp). So please be patient while that happens, and make sure it
can actually connect and download the dependencies.
First, make sure you have downloaded all the required Common Lisp dependencies that pgloader uses, and install the buildapp application:
$ sbcl
* (ql:quickload "pgloader")
* (ql:quickload "buildapp")
* (buildapp:build-buildapp "./buildapp")
If you just installed SBCL and Quicklisp to use pgloader, the following command should do it:
./buildapp --logfile /tmp/build.log \
--asdf-tree ~/quicklisp/dists \
--load-system pgloader \
--entry pgloader:main \
--dynamic-space-size 4096 \
--output pgloader.exe
You can also use the --compress-core option if your platform supports
it, so as to reduce the size of the generated binary.
When you're a Common Lisp developer, or are otherwise already using Quicklisp with some local-projects and a local source registry setup for ASDF, use a command line like this:
./buildapp --logfile /tmp/build.log \
--asdf-tree ~/quicklisp/local-projects \
--manifest-file ./manifest.ql \
--asdf-tree ~/quicklisp/dists \
--load-system pgloader \
--entry pgloader:main \
--dynamic-space-size 4096 \
--output pgloader.exe
That command requires a manifest.ql file, which you can obtain with the
following lisp command:
(ql:write-asdf-manifest-file "path/to/manifest.ql")
Give as many command files as you need to pgloader:
./pgloader.lisp <file.load>
See the documentation file pgloader.1.md for details. You can compile
that file into a manual page or an HTML page thanks to the pandoc
application:
$ apt-get install pandoc
$ pandoc pgloader.1.md -o pgloader.1
$ pandoc pgloader.1.md -o pgloader.html
Some notes about what I intend to be working on next.
- prepare an all-included binary for several platforms
- review pgloader.pgsql:reformat-row date-columns arguments
- review connection string handling for both PostgreSQL and MySQL
- provide a better toplevel API
- implement tests
- commands: LOAD and INI formats, compatibility with the SQL*Loader format
Here's a quick spec of the LOAD grammar:
LOAD FROM '/path/to/filename.txt'
stdin
http://url.to/some/file.txt
mysql://[user[:pass]@][host[:port]]/dbname
[ COMPRESSED WITH zip | bzip2 | gzip ]
WITH workers = 2,
batch size = 25000,
batch split = 5,
reject file = '/tmp/pgloader/<table-name>.dat',
log file = '/tmp/pgloader/pgloader.log',
log level = debug | info | notice | warning | error | critical,
truncate,
fields [ optionally ] enclosed by '"',
fields escaped by '"',
fields terminated by '\t',
lines terminated by '\r\n',
encoding = 'latin9',
drop table,
create table,
create indexes,
reset sequences
SET guc-1 = 'value', guc-2 = 'value'
PREPARE CLIENT WITH ( <lisp> )
PREPARE SERVER WITH ( <sql> )
INTO postgresql://[user[:pass]@][host[:port]]/dbname?table-name
[ WITH <options> SET <gucs> ]
(
field-name data-type field-desc [ with column options ],
...
)
USING (expression field-name other-field-name) as column-name,
...
INTO table-name [ WITH <options> SET <gucs> ]
(
*
)
WHEN
FINALLY ON CLIENT DO ( <lisp> )
ON SERVER DO ( <lisp> )
< data here if loading from stdin >
The accepted column options are:
terminated by ':'
nullif { blank | zero date }
date format "DD-Month-YYYY"
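To make that grammar concrete, here's a hypothetical command file built from the pieces above; the file path, connection string, table name, and field list are all invented for the example, and since the grammar is still a spec, none of this is guaranteed to parse as-is:

```
LOAD FROM '/path/to/filename.txt'
     WITH workers = 2,
          batch size = 25000,
          fields terminated by '\t',
          lines terminated by '\r\n',
          encoding = 'latin9'
     INTO postgresql://user@localhost/dbname?tablename
          (
            id    integer,
            seen  date nullif zero date date format "YYYY-MM-DD",
            name  text
          )
```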
And we need a database migration command syntax too:
LOAD DATABASE FROM mysql://localhost:3306/dbname
INTO postgresql://localhost/db
WITH drop tables,
create tables,
create indexes,
reset sequences,
<options>
SET guc = 'value', ...
CAST tablename.column to timestamptz drop default,
varchar to text,
int with extra auto_increment to bigserial,
datetime to timestamptz drop default,
date to date drop default;
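A minimal concrete instance of that migration syntax might look like the following; the database names, the GUC setting, and the single cast rule are illustrative only:

```
LOAD DATABASE FROM mysql://root@localhost:3306/sakila
     INTO postgresql://localhost/sakila
     WITH drop tables, create tables, create indexes, reset sequences
      SET maintenance_work_mem = '512MB'
     CAST datetime to timestamptz drop default;
```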
- write proper documentation
- host a proper website for the tool, with use cases and a tutorial
- error management with a local buffer (done)
- error reporting (done)
- add input line number to log file?
- import directly from MySQL, file based export/import (done)
- import directly from MySQL streaming (done)
- general CSV and Flexible Text source formats
- fixed cols input data format
- compressed input (gzip, other algos)
- fetch data from S3
- experiment with perfs and inlining the transformation functions
- add typemod expression to cast rules in the command language
- add per-column support for cast rules in the system
- PostgreSQL COPY Text format output for any supported input
- automatic creation of schema (from MySQL schema, or from CSV header)
- pre-fetch some rows to guesstimate data types?
- some more parallelizing options
- support for partitioning in pgloader itself
Data reformatting is now mostly going to have to happen in Common Lisp, though we may offer some other languages too (cl-awk etc).
- raw reformating, before rows are split
- per column reformating
- date (zero dates)
- integer and "" that should be NULL
- user-defined columns (constants, functions of other rows)
- column re-ordering
Have a try at something approaching:
WITH data AS (
COPY FROM ...
RETURNING x, y
)
SELECT foo(x), bar(y)
FROM data
WHERE ...
A part of that needs to happen client-side, another part server-side,
and the grammar has to make it clear what happens where. Maybe add a
WHERE clause to the COPY or LOAD grammar for the client.
- add a web controller with pretty monitoring
- launch new jobs from the web controller
- MySQL replication, reading from the binlog directly
- plproxy (re-)sharding support
- partitioning support
- remote archiving support (with (delete returning *) insert into)