What about monitoring the system calls on the kernel side, using an external gdb instance?
This could be done by setting up a virtual machine that is configured to run the code of interest. Then QEMU/KVM (to my knowledge) have to be configured to open a port for gdb debugging of the kernel. (See the guides below.)
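For QEMU this amounts to two extra flags; a minimal sketch with made-up paths (`-s` opens a gdbstub on TCP port 1234, `-S` halts the CPUs until a debugger connects, and `nokaslr` keeps the kernel symbols at their link-time addresses):

```
$ qemu-system-x86_64 -enable-kvm -s -S \
      -kernel /path/to/bzImage \
      -append "console=ttyS0 nokaslr" \
      -hda /path/to/disk.img
```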
Once this VM is started, gdb can be attached to its kernel during boot.
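Attaching looks roughly like this (the `vmlinux` with debug symbols must match the guest kernel; port 1234 is QEMU's default):

```
$ gdb /path/to/vmlinux
(gdb) target remote :1234    # connect to QEMU's gdbstub
(gdb) continue               # let the guest boot
```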
The next step is to set gdb properties and breakpoints so that they fire on any execve (and relatives) that installs the code of interest as the new program, and then let the guest run until one of them hits. At this point during the execution of the program, the pid of the process running the code of interest can be extracted, and further breakpoints can be set in gdb that are hit (in the kernel code) on any system call of this process (including fork and execve calls that might lead to additional processes to observe).
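As a rough sketch of such a session, assuming a recent kernel built with `CONFIG_GDB_SCRIPTS` (which provides the `$lx_current()` helper; the function names `do_execveat_common` and `do_syscall_64` depend on kernel version and architecture, and the path and pid here are made up):

```
(gdb) source /path/to/kernel-build/vmlinux-gdb.py   # load the kernel's gdb helpers
(gdb) # fire only when our program is exec'd (the function is do_execve on older kernels)
(gdb) break do_execveat_common if $_streq(filename->name, "/path/to/code-of-interest")
(gdb) continue
(gdb) # ...breakpoint hits; exec does not change the pid, so this is our process:
(gdb) print $lx_current().pid
$1 = 1234
(gdb) # from now on, stop on every system call of that process (x86_64 entry point)
(gdb) break do_syscall_64 if $lx_current().pid == 1234
(gdb) continue
```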
In theory this should be a good solution that is hard to dodge.
One problem is that everything in the guest system becomes horribly slow, and you might get a huge number of unwanted calls as bycatch (which you have to filter in gdb...). Additionally, gdb might have to be extended using Python to get the conditional breakpoints working with the required conditions (especially for automatic child-process detection).
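A rough sketch of what such a Python extension could look like, with several assumptions: `kernel_clone` is called `_do_fork` on older kernels, `$lx_current()` again comes from the kernel's own gdb scripts, the starting pid is the made-up one from above, and gdb officially discourages creating breakpoints inside `stop()`, so a robust version would probably have to use gdb's stop events instead:

```python
import gdb

traced_pids = {1234}  # the pid extracted at the execve breakpoint (made up)

class CloneReturn(gdb.FinishBreakpoint):
    """Catches the return of kernel_clone; its return value is the child pid."""
    def stop(self):
        if self.return_value is not None and int(self.return_value) > 0:
            traced_pids.add(int(self.return_value))
            print("now also tracing child pid %d" % int(self.return_value))
        return False  # never stop the guest here, just record the child

class CloneEntry(gdb.Breakpoint):
    """Fires on every fork/clone in the kernel."""
    def stop(self):
        pid = int(gdb.parse_and_eval("$lx_current().pid"))
        if pid in traced_pids:
            # caveat: gdb discourages creating breakpoints inside stop()
            CloneReturn(internal=True)
        return False

class SyscallEntry(gdb.Breakpoint):
    """Stops the guest only for system calls of a traced process."""
    def stop(self):
        return int(gdb.parse_and_eval("$lx_current().pid")) in traced_pids

CloneEntry("kernel_clone")     # _do_fork on older kernels
SyscallEntry("do_syscall_64")  # x86_64 syscall entry point
```

Loaded with `source` inside gdb, something like this would replace the manual per-pid breakpoints above.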
Guides on how to connect gdb to the guest:
Whamcloud Wiki, Red Hat Helpdesk, Stack Overflow
(I did not try these guides. I used gdb some years ago to debug some details of the kernel for a student project; there I used a simple condition on a breakpoint to detect fork calls of a specific process.)
On top of these, there are some other techniques for debugging a kernel.
PS: Be aware that there are ways to escape a virtual machine (an old example).