csvsql Killed #633
Can you run the "top" command while this is going on? You might be running out of some system resource along the way.
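A minimal sketch of that check, reusing the reporter's own command (the watch/top invocations assume a typical Linux install):
# terminal 1: the command that eventually gets killed
csvsql train.csv
# terminal 2: refresh memory and swap usage every second
watch -n 1 free -h
# or sort running processes by memory use
top -o %MEM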
If you run
Will do! In the middle of summer finals, but I'm picking my project up right after.
I'm having a similar problem, but with a file that's about 40 MB. There is no additional output when I use -v, just "Killed". The table is created but is empty, so running the same command a second time does yield verbose output indicating that the table already exists. Small files work well, though.
I suspect your RAM was full! Search for 'OOM killer'.
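One way to confirm the OOM killer was responsible (a sketch; log locations vary by distribution) is to search the kernel log right after the process dies:
# kernel ring buffer with human-readable timestamps
dmesg -T | grep -i -E 'killed process|out of memory'
# or, on systemd-based systems
journalctl -k | grep -i 'out of memory'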
Thanks. I monitored memory usage as I ran the command. Yes, memory usage grows well beyond what the device uses 'at rest', so that's likely. Are there any settings I can use to expand available memory using storage? I've already started splitting the file; that may be the easiest workaround for now.
If you are on Linux, you can increase the size of your swap partition. That will save your program from being killed, but it will probably slow everything down, as the OS will use the HDD as a substitute for RAM, and disk is MUCH slower.
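A minimal sketch of adding a 4 GB swap file on Linux (size and path are illustrative; run as root):
# create, protect, format, and enable the swap file
fallocate -l 4G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# verify it is active
swapon --show
free -h
As noted above, this only keeps the process alive; if the working set is far larger than RAM, everything will still crawl.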
It's running on a cloud server, so there's no browser, but I'll get more memory if I need to process more files. So far, it's just the one. Thanks for your help!
csvsql is not performant on huge data, last I tried (and as I remember from the docs). I recommend using csvsql just for creating schemas or for smaller jobs, and piping your CSV data into a database with another tool. csvkit is great for getting the data ready for a database; I don't think it was meant for big-data tasks. As an example of another tool, datanews/tables worked quite beautifully when I used it a long while ago, while it was still in development. It's a CLI tool too, but built in Node.
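One way to follow that advice with csvkit plus the database's own bulk loader (a sketch assuming PostgreSQL and a database named mydb; the table name 'train' is what csvsql derives from train.csv):
# generate the CREATE TABLE statement only; no rows are inserted
csvsql --dialect postgresql train.csv > train_schema.sql
psql mydb -f train_schema.sql
# stream the rows in with the database's bulk loader
psql mydb -c "\copy train FROM 'train.csv' WITH (FORMAT csv, HEADER true)"
The bulk loader streams the file instead of holding it all in memory, which is the part that exhausts RAM here.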
It worked when I split the file into a max of 20k records. Wrote a script to load each of the split files and it worked well.
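The commenter's actual script isn't shown; a minimal sketch of that approach with csvsql itself (file, table, and database names are assumptions):
# keep the header, then split the remaining rows into 20k-line chunks
head -n 1 train.csv > header.csv
tail -n +2 train.csv | split -l 20000 - chunk_
# create the table from the first chunk, append the rest without re-creating it
first=1
for f in chunk_*; do
    cat header.csv "$f" > part.csv
    if [ "$first" = 1 ]; then
        csvsql --db sqlite:///train.db --insert --tables train part.csv
        first=0
    else
        csvsql --db sqlite:///train.db --insert --no-create --tables train part.csv
    fi
done
rm -f header.csv part.csv chunk_*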
Closing this since there was a resolution. Opened a documentation ticket #735.
I've been attempting to use csvsql to create a table from an ~4 GB .csv file, and after around 10-15 minutes the Ubuntu terminal returns 'Killed'. I'm fairly new to using this tool, so if you need more information let me know.
Input:
csvsql train.csv
csvsql --no-constraints train.csv
Output:
Killed
Edit: if anyone could let me know what's happening or suggest another approach, I'd appreciate it.