How do I split a large MySql backup file into multiple files?

14

7

I have a 250 MB backup SQL file but the limit on the new hosting is only 100 MB ...

Is there a program that lets you split an SQL file into multiple SQL files?

It seems like people are answering the wrong question ... so I will clarify more:

I ONLY have the 250 MB file, and the new host only offers phpMyAdmin; the database there currently has no data in it. I need to upload the 250 MB file to the new host, but there is a 100 MB upload size limit for SQL backup files. I simply need to take one file that is too large and split it into multiple files, each containing only complete, valid SQL statements (no statement can be split between two files).

Brian T Hannan

Posted 2010-10-01T15:50:02.867

Reputation: 891

Does it have to be multiple valid SQL files? I'd zip the file into several archives in 100MB chunks and upload that. – Nifle – 2010-10-01T16:37:42.883

Yes, it has to be multiple valid SQL files ... as far as not having half of one statement in one file and the other half in another file. – Brian T Hannan – 2010-10-01T17:11:36.330

Answers

5

From How do I split the output from mysqldump into smaller files?

First dump the schema only (it will surely fit under the upload limit, no?)

mysqldump -d --all-databases

and restore it.

Afterwards, dump only the data as separate INSERT statements, so that you can split the resulting file and restore the pieces without having to concatenate them on the remote server.

mysqldump --all-databases --extended-insert=FALSE --no-create-info=TRUE
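A minimal sketch of the whole workflow (the file names and the 90 MB piece size are placeholders; it assumes GNU split, whose -C option breaks only at line boundaries):

# dump the schema only and upload/restore that file on the new host first
mysqldump -d --all-databases > schema.sql

# dump only the data, one INSERT per row
mysqldump --all-databases --extended-insert=FALSE --no-create-info=TRUE > data.sql

# cut the data dump into pieces of at most ~90 MB without splitting any line;
# since every INSERT sits on its own line, each piece holds only complete statements
split -C 90M data.sql data-part-

Each data-part-* file can then be uploaded and imported separately.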

harrymc

Posted 2010-10-01T15:50:02.867

Reputation: 306 093

Good advice! Makes perfect sense. Really good when there are complex dependencies between tables, so that splitting a combined schema-and-data export doesn't work. – Reed Richards – 2011-11-24T10:57:42.380

7

The simplest way to split the backup file is to use the software SQLDumpSplitter, which allows you to split the db file into multiple db files. Download it here.

Or else use this terminal command.

split -l 600 ./path/to/source/file.sql ./path/to/dest/file-

Here, 600 is the number of lines you wish to have in each split file, and the two arguments are the source and the destination of the files, respectively.

NOTE: you must check the split files to make sure no statement has been split across two of them.
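A rough way to check this, assuming GNU coreutils and the file- prefix used above (a heuristic sketch, not a full SQL parser):

for f in ./path/to/dest/file-*; do
    # the last line of a clean piece should be a complete statement ending in ";"
    tail -n 1 "$f" | grep -q ';$' || echo "check $f: last line may be mid-statement"
done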

GIRI

Posted 2010-10-01T15:50:02.867

Reputation: 214

Thanks for hinting in the correct direction! But instead of using split I recommend using csplit. It won't split right after a certain line number, which might be in the middle of a list of INSERT values, for example. You can pass a regex to identify the lines to split at. Use it in the following way, for example: csplit -k dump.sql '/^CREATE TABLE .*/' '{900}' This will split your dump file before each CREATE TABLE … statement. Change this to your needs. (By the way, I pass {900} instead of {*} because I get the error "csplit: *}: bad repetition count", maybe just a problem on macOS?) – Arvid – 2017-01-23T12:36:18.163

3

I wrote mysqldumpsplitter (a shell script), which splits databases/tables out of a dump as instructed, quickly and easily. See all the possible use cases of how to extract from a mysqldump.

sh mysqldumpsplitter.sh --source mysqldump-file.sql --extract DB --match_str database-name
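If the script's TABLE extraction mode follows the same pattern (an assumption; check the script's usage text for the exact option names), pulling a single table would look like:

sh mysqldumpsplitter.sh --source mysqldump-file.sql --extract TABLE --match_str table-name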

mysql_user

Posted 2010-10-01T15:50:02.867

Reputation: 141

2

This code will do exactly what you want (and it's open source):

https://web.archive.org/web/20160313044916/http://rodo.nl/index.php?page=mysql-splitter

It allows you to split any SQL file into several smaller files (you can define the maximum size). SQL syntax is kept valid, and it works with 'multiple insert' query syntax.

Hope this helps!

Will0

Posted 2010-10-01T15:50:02.867

Reputation: 121

0

I had this problem too and decided to write an extremely memory- and CPU-efficient piece of code that splits a single .sql file into several (one per table).

I had to write it since every other solution I found did not perform well enough. On a real 16 GB dump I managed to get it split in less than 2 minutes.

The code and instructions are available on the project page on GitHub.

Andrea Baccega

Posted 2010-10-01T15:50:02.867

Reputation: 111

0

1) Do you have the option to upload the file by another method, e.g. scp or FTP, and then restore it from the local file?

2) Will your ISP take the file on CD and load it for you?

3) Can you restore the file to a local server and then make a series of backup files from it using specific criteria to keep the individual sizes down? (A sketch of this follows below.)

4) You could split the file manually and then tidy up the SQL statements at the ends of the files.
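For option 3, a minimal sketch, assuming you can restore the dump to a local MySQL server and that per-table dumps stay under the 100 MB limit (database and table names are placeholders):

# restore the full backup locally (create the database first if the dump doesn't)
mysql -u root -p < full-backup.sql

# re-dump it table by table, producing several smaller files
mysqldump -u root -p mydatabase table1 > table1.sql
mysqldump -u root -p mydatabase table2 > table2.sql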

Linker3000

Posted 2010-10-01T15:50:02.867

Reputation: 25 670

I was thinking about option 2, 3, and 4. 2 would be last resort. 3 is actually probably the next thing I'm going to try to do. 4 was my first thought, but I can't even open the large file in notepad or any editor b/c it takes hours to open a 250 MB file of all text ... all notepad-like apps freeze and never return. – Brian T Hannan – 2010-10-01T18:55:13.563

@Brian: "4 was my first thought, but I can't even open the large file in notepad or any editor" – Notepad++ can handle them fine. Otherwise, look at http://superuser.com/questions/34749/text-editor-for-very-big-file-windows – Sathyajith Bhat – 2010-10-01T20:27:03.507

If you want to split the file AUTOMATICALLY and then tidy up the smaller pieces, have a look at the split option of the Swiss File Knife utility (http://stahlworks.com/dev/swiss-file-knife.html). – Linker3000 – 2010-10-05T08:20:23.540

0

There are a couple of options if you can run a bash or Perl script. Try this one from yoodey.com:

#!/usr/bin/perl -w
#
# splitmysqldump - split a mysqldump file into per-database dump files.
use strict;
use warnings;

my $dbfile;
my $dbname = q{};
my $header = q{};

while (<>) {
    # Beginning of a new database section:
    # close the currently open file and start a new one.
    if (m/-- Current Database\: \`([-\w]+)\`/) {
        if (defined $dbfile && tell $dbfile != -1) {
            close $dbfile or die "Could not close file!";
        }
        $dbname = $1;
        open $dbfile, ">>", "$1_dump.sql" or die "Could not create file!";
        print $dbfile $header;
        print "Writing file $1_dump.sql ...\n";
    }
    if (defined $dbfile && tell $dbfile != -1) {
        print $dbfile $_;
    }
    # Catch the dump file header at the beginning,
    # to be printed to each separate dump file.
    if (!$dbname) { $header .= $_; }
}
close $dbfile or die "Could not close file!" if defined $dbfile;
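Save it as, say, splitmysqldump.pl (the file name is a placeholder) and feed it the big dump; it reads the file named on the command line (or standard input) and writes one <database>_dump.sql file per database:

perl splitmysqldump.pl backup.sql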

Jrgns

Posted 2010-10-01T15:50:02.867

Reputation: 101

0

You can split a large file in Eclipse. I have successfully tried a 105 GB file on Windows:

Just add the MySQLDumpSplitter library to your project: http://dl.bintray.com/verace/MySQLDumpSplitter/jar/

Quick note on how to import:

- In Eclipse, right-click on your project --> Import
- Select "File System" and then "Next"
- Browse to the path of the jar file and press "Ok"
- Select (tick) the "MySQLDumpSplitter.jar" file and then "Finish"
- It will be added to your project and shown in the project folder in the Package Explorer in Eclipse
- Double-click on the jar file in Eclipse (in the Package Explorer)
- The "MySQL Dump file splitter" window opens, in which you can specify the path of your dump file and proceed with the split.

Alisa

Posted 2010-10-01T15:50:02.867

Reputation: 229

-1

Instead of splitting the file, you could use a MySQL client on your local machine and connect it to the remote MySQL DB. I use HeidiSQL and have found it very good.
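The same idea works from the command line with the stock mysql client, assuming the new host allows remote MySQL connections (the host, user, and database names below are placeholders):

mysql -h remote-host.example.com -u dbuser -p dbname < backup.sql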

Of course, it may take a while to send the 250 MB of SQL statements across the Internet.

You could also try BigDump.

Hydaral

Posted 2010-10-01T15:50:02.867

Reputation: 1 674