15

Let's say a (very) large process is crashing and dumping core, and we know the cause from other information (possibly an assert message, maybe something else).

Is there a way to stop the core dump from being completely generated, since it's a waste in this case?

For instance, would kill -9 of a core dumping process interrupt the corefile generation?

Obviously, if we knew ahead of time that we don't want core dumps, we could set the ulimit appropriately or use the OS's various core file control utilities.

But this question is about the "core dump already in progress" stage...

(For instance, imagine I'm the requestor in https://stackoverflow.com/questions/18368242/how-to-bypass-a-2tb-core-dump-file-system-limit and don't want to waste 5-6 TB of disk space :) )

Mike G.
  • On Linux you can disable core dump from being generated...could it be an option ? – krisFR Jan 31 '14 at 20:52
  • No - the core dumps are necessary in general, but we're just looking for a way to stop them in those cases where we know what the problem is without needing the core, to save time/disk space/etc.... Of course we can just delete the core once it's done dumping (or even unlink it before that), but there's no reason to tie up a few gigs of space on disk if we could just kill the core dump earlier. – Mike G. Feb 01 '14 at 02:11
  • You could possibly run "cat /dev/null > " from the script calling the program when a certain condition is met: as long as the /proc/ entry exists, sleep every few seconds and rerun the /dev/null copy onto the core file. This will zero it out. I'm not sure of the full context behind the question, but this can work. – Schrute Feb 08 '14 at 04:18
  • Schrute, that would have the same effect as unlinking the core, wouldn't it? The disk space and system resources would still be consumed until the core finishes writing - the file size just wouldn't be visible in du or ls. – Mike G. Feb 08 '14 at 14:43
  • Resources yes, however this is a common method to deal with a large file such as core/log file and not stop the PID. It just depends on what the goal is. – Schrute Feb 08 '14 at 21:33
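The periodic-truncation idea from the comments above could be sketched like this (the PID and core file path are placeholders, and note Mike G.'s caveat: the kernel keeps writing, so this reclaims blocks but does not stop the dump):

```shell
# Sketch: while the dumping process's /proc entry exists, repeatedly
# truncate the growing core file so its disk blocks are freed.
pid=999999999            # placeholder: PID of the dumping process
corefile=/path/to/core   # placeholder: path of the core being written
while [ -d "/proc/$pid" ]; do
    : 2>/dev/null > "$corefile"   # truncate the file to zero length
    sleep 5
done
```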

3 Answers

8

Generally: no, there is no way to reliably kill a coredump.

That being said, there is a possibility (at least on Linux); for commercial *NIX there is probably no way.

The possibility lies in the fact that the 3.x series of the kernel can interrupt file writing. One approach is to find the thread that is doing the dumping and repeatedly send SIGKILL to it until it succeeds.
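As a rough sketch of that approach (the PID is a placeholder; on kernels where the dump write is not interruptible, the signals simply won't land until the dump finishes):

```shell
# Hypothetical: repeatedly SIGKILL a dumping process until it disappears.
# On 3.x kernels the core-file write can be interrupted, so one of these
# signals may be acted on mid-dump.
pid=999999999   # placeholder: PID of the dumping process
while kill -0 "$pid" 2>/dev/null; do   # loop while the process exists
    kill -9 "$pid" 2>/dev/null
    sleep 0.1
done
```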

This patch series addresses the issue to some extent.

Another possibility is to use the alternate syntax for core_pattern. The manual says that since 2.6.19, instead of a pattern you can use a pipe and a program (with parameters) that will handle the dump. Ergo you have control over which dump gets written where (/dev/null being the obvious candidate for your useless cores).
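A minimal sketch of such a handler, written as a shell function for clarity (the script name, install path, filtered binary name, and output directory are all assumptions; registering it requires root):

```shell
# Hypothetical handler for /proc/sys/kernel/core_pattern. It would be
# registered (as root) with something like:
#   echo '|/usr/local/bin/core-filter %e %p' > /proc/sys/kernel/core_pattern
# The kernel then runs the program with the core image on its stdin;
# %e expands to the executable name, %p to the crashing PID.
core_filter() {
    exe="$1" pid="$2"
    case "$exe" in
      bigproc)            # assumed name of the binary whose cores we discard
        cat > /dev/null ;;                # drain the dump, keep nothing
      *)                                  # keep everything else
        cat > "${CRASH_DIR:-/var/crash}/core.$exe.$pid" ;;
    esac
}
```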

This patch also deserves a bit of attention: http://linux.derkeiler.com/Mailing-Lists/Kernel/2010-06/msg00918.html

zeridon
  • Thanks zeridon - that's definitely the best answer so far. – Mike G. Feb 06 '14 at 18:21
  • I would think that if you use the piping mechanism you can just quit without reading the pipe data if you know it's not something you want to keep (unless a broken pipe creates another problem... would need to test that). – Alexis Wilke Nov 22 '15 at 07:38
0

Check this link out; it may be helpful:

https://publib.boulder.ibm.com/httpserv/ihsdiag/coredumps.html

  • Whilst this may theoretically answer the question, [it would be preferable](http://meta.stackexchange.com/q/8259) to include the essential parts of the answer here, and provide the link for reference. – Tom O'Connor Feb 05 '14 at 07:46
  • Thanks Hamed, but while that link does have a lot of interesting info, I don't think it answers my question (unless I missed it - I'm not all that awake yet :) ) – Mike G. Feb 05 '14 at 11:39
-1

It looks like you could run ulimit -c (assuming you're using bash) to limit the core dump size.
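For example (the binary name is a placeholder):

```shell
# In the bash shell that will launch the program, before it crashes:
ulimit -c 0    # limit core files to 0 blocks: no core file is written
ulimit -c      # verify the limit; prints "0"
# ./bigproc    # hypothetical binary: a crash now produces no core
```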

See: https://askubuntu.com/questions/220905/how-to-remove-limit-on-core-dump-file-size

and

http://ss64.com/bash/ulimit.html

Kerry
  • Kerry, as I said in the OP, "Obviously, if we knew ahead of time that we don't want core dumps, we could set the ulimit appropriately or use the OS's various core file control utilities." I'm trying to see if there's a way to stop a core dump once it has already begun for a very large task (to save time/disk space/resources). – Mike G. Feb 04 '14 at 23:32