Has anyone ever attempted to upgrade an old Berkeley DB database that must be dumped via db_dump185?
When I try to dump a database containing comments from a website, as follows:
bash-3.2$ db_dump185 -f comment.dump comment.db
I get this error:
File size limit exceeded (core dumped)
Is there a way to avoid this?
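From what I've read, this message usually means the process was killed by SIGXFSZ, i.e., it tried to write a file bigger than the per-process file size limit (RLIMIT_FSIZE) in effect when it ran. On the assumption that a stray limit was the culprit, the first thing I tried was raising it in the same shell session before dumping, although as the ulimit output below shows, -f already reports unlimited here:

$ ulimit -f unlimited   # raise the per-process file size limit (this shell only)
$ ulimit -f             # confirm it now reports "unlimited"
$ db_dump185 -f comment.dump comment.db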
Here is the output of ulimit -a:
$ ulimit -a
core file size (blocks, -c) 200000
data seg size (kbytes, -d) 200000
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 32743
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) 200000
open files (-n) 100
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 20
virtual memory (kbytes, -v) 200000
file locks (-x) unlimited
And this is the database:
$ ls -l comment.db
-rwxr-xr-x 1 daiello staff 184393728 Jan 12 2012 comment.db
I want to make sure that this question gets an answer. What @Alan suggested, db_dump185 comment.db | cat > comment.dump, really helped. Continuing with the dump eventually consumed all available real memory and most of the swap.
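My understanding, and this is an assumption on my part rather than something I've verified, is that the pipe helps because the dump file is written by cat rather than by db_dump185 itself, so db_dump185 never trips RLIMIT_FSIZE on its own writes. For anyone repeating this, the full workaround looked like:

$ ulimit -f unlimited                          # belt and braces: lift the limit as well
$ db_dump185 comment.db | cat > comment.dump   # cat, not db_dump185, writes the file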
So we moved the database files to a bigger server, and there we ran into the dreaded db_dump185: seq: invalid argument error. I don't believe db_dump185 has a repair function, but I haven't yet done all the research I want to do.
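For anyone following along, these are the next steps I intend to try. Note that the salvage flags below belong to the modern db_dump utility, not db_dump185, and I have not yet confirmed that they understand the 1.85 on-disk format at all:

$ tail -5 comment.dump                         # see how far the partial dump got
$ db_dump -r -f comment.salvage comment.db     # salvage mode: skip damaged pages
$ db_dump -R -f comment.salvage comment.db     # aggressive salvage: fewer sanity checks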