
Has anyone ever attempted to upgrade an old Berkeley database that must be dumped via db_dump185?

When I try to dump a database containing comments from a website, as follows:

bash-3.2$ db_dump185 -f comment.dump comment.db

I get this error:

File size limit exceeded (core dumped)

Is there a way to avoid this?


Here is the output of `ulimit -a`:

$ ulimit -a

core file size          (blocks, -c) 200000
data seg size           (kbytes, -d) 200000
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 32743
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) 200000
open files                      (-n) 100
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 20
virtual memory          (kbytes, -v) 200000
file locks                      (-x) unlimited

And this is the database:

$ ls -l comment.db 
-rwxr-xr-x 1 daiello staff 184393728 Jan 12 2012 comment.db

I want to make sure this question eventually gets an answer. The workaround @Alan suggested, `db_dump185 comment.db | cat > comment.dump`, really helped. However, continuing the dump eventually consumed all available physical memory and most of the swap.

So we moved the database files to a bigger server and then ran into the dreaded `db_dump185: seq: invalid argument` error. I don't believe db_dump185 has a repair function, but I haven't finished the research I want to do yet.
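For anyone attempting the same upgrade, the general dump-and-reload path looks roughly like this (a sketch only, assuming db_load from the newer Berkeley DB release is installed; comment-new.db is just an illustrative name for the rebuilt database):

$ db_dump185 comment.db | cat > comment.dump
$ db_load -f comment.dump comment-new.db

Writing through the pipe means db_dump185 itself never writes past the 2 GB offset that trips up a binary built without large-file support, and db_load then recreates the database in the current library's on-disk format.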

  • What does your `ulimit -a` return? – Janne Pikkarainen Aug 15 '12 at 21:10
  • core file size (blocks, -c) 200000 data seg size (kbytes, -d) 200000 scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 32743 max locked memory (kbytes, -l) 32 max memory size (kbytes, -m) 200000 open files (-n) 100 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 20 – Dave Aiello Aug 15 '12 at 21:13
  • virtual memory (kbytes, -v) 200000 file locks (-x) unlimited – Dave Aiello Aug 15 '12 at 21:14
  • Hmm, I should have applied some markup to the comments, I guess? Sorry. – Dave Aiello Aug 15 '12 at 21:14
  • OK, file size there unlimited. How big is your database? – Janne Pikkarainen Aug 15 '12 at 21:16
  • $ ls -l comment.db -rwxr-xr-x 1 daiello staff 184393728 Jan 12 2012 comment.db – Dave Aiello Aug 15 '12 at 21:46
  • What's the secret to putting line breaks into comments using mini-Markdown? – Dave Aiello Aug 15 '12 at 21:48
  • You don't put line breaks in comments. You update your question with the additional information people are looking for. – larsks Aug 16 '12 at 00:22
  • It could be a Large File Support thing. Is the dump file about 2 gigs when it dies? If that's the problem, you should be able to work around it by outputting to a pipe. `db_dump185 comment.db | cat > comment.dump`. – Alan Curry Aug 16 '12 at 01:26

0 Answers