Rather than give you a direct answer, because it's mostly going to be "it depends", I thought it might be more interesting to take you through a recent discovery of mine that describes the behaviour of memory allocation on Windows, so that you've got some context behind all of it. I don't know enough about the Linux memory management model so I won't comment on that side of things, although I am told that it is quite similar.
Recently I was thinking about the W^X memory protection features offered by the Windows kernel, known as "dynamic code policy". The idea is that once a page mapped into an application has been marked writable, it can never be marked executable. This is a powerful exploit mitigation because it prevents almost every class of exploit involving delivery of shellcode. You can't just do a buffer overflow, gain control of the instruction pointer, ROP to VirtualProtect, mark your payload as RWX, and execute it - the VirtualProtect call fails because memory cannot be both writable and executable.
My thought for a generic bypass here was that you could VirtualAlloc some read+write memory, VirtualFree it, then VirtualAlloc again as read+execute, hoping to get re-allocated the same pages. As it turns out, this doesn't work. The reason why is relevant to your question.
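The attempted bypass can be sketched in a few lines of Win32 code. This is a minimal illustration, not a working exploit; it assumes a 4 KiB page size for brevity (real code should query `GetSystemInfo`):

```c
#include <windows.h>
#include <stdio.h>

int main(void) {
    SIZE_T size = 4096;

    /* Step 1: commit a read+write page and scribble a "payload" into it. */
    unsigned char *p = VirtualAlloc(NULL, size,
                                    MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (!p) return 1;
    p[0] = 0xC3; /* x86 'ret' */

    /* Step 2: release the allocation entirely. */
    VirtualFree(p, 0, MEM_RELEASE);

    /* Step 3: ask for the same base address back, read+execute this time. */
    unsigned char *q = VirtualAlloc(p, size,
                                    MEM_RESERVE | MEM_COMMIT, PAGE_EXECUTE_READ);
    if (!q) return 1;

    /* Even if we got the same virtual address back, the commit zeroed
       the page, so the 0xC3 byte is gone. */
    printf("byte 0 after recommit: 0x%02X\n", q[0]);
    VirtualFree(q, 0, MEM_RELEASE);
    return 0;
}
```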
There are three cases where Windows memory management will scrub pages to zero:
- When a page is committed to a process
- When a page has been marked as `MEM_RESET`
- When a page has been freed and is marked as dirty
The first one thwarts my plan. Processes can reserve virtual address space without committing it, e.g. by calling VirtualAlloc with `MEM_RESERVE`. Reserved address space isn't mapped to any physical memory or swap, but it is set aside for the process in case it needs it. To actually use the memory, the pages must be committed, e.g. by calling VirtualAlloc with `MEM_RESERVE | MEM_COMMIT`. Upon being committed, a page is zeroed if it has not already been zeroed by some other means. As such, my trick of allocating writable data, freeing it, then allocating it again as executable is broken, because the memory manager zeroes the page.
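The reserve/commit split looks like this in practice. A minimal sketch, again assuming a 4 KiB page size:

```c
#include <windows.h>

int main(void) {
    /* Reserve 1 MiB of address space: no physical memory or pagefile
       backing yet, and touching it would access-violate. */
    SIZE_T size = 1 << 20;
    char *base = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);
    if (!base) return 1;

    /* Commit just the first page of the reservation. The memory
       manager guarantees it reads as zeroes. */
    char *page = VirtualAlloc(base, 4096, MEM_COMMIT, PAGE_READWRITE);
    if (!page) return 1;
    page[0] = 'x'; /* now backed and writable */

    /* Releasing the base frees the whole reservation. */
    VirtualFree(base, 0, MEM_RELEASE);
    return 0;
}
```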
The second one is also interesting. If you're done using a block of memory for now, but will use it again later, you can reset the pages using `MEM_RESET`. This tells the memory manager not to decommit the pages from the process, but also not to bother swapping them out to disk, as the data they contain is no longer interesting. The system will zero these pages in the background when it has free time. However, the application can request to undo the reset, using the `MEM_RESET_UNDO` flag; if the system has not yet zeroed the pages, the old data is recovered intact.
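The reset-then-undo dance can be sketched as follows. Note that VirtualAlloc ignores the protection argument when `MEM_RESET` is used, but it must still be a valid value:

```c
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    SIZE_T size = 4096;
    char *p = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT,
                           PAGE_READWRITE);
    if (!p) return 1;
    memset(p, 'A', size);

    /* Declare the contents disposable: keep the pages committed, but
       don't page them out; the system may zero them at its leisure. */
    VirtualAlloc(p, size, MEM_RESET, PAGE_NOACCESS);

    /* Later: try to reclaim the data. If MEM_RESET_UNDO succeeds, the
       pages were never zeroed and the 'A's are still there; if it
       fails, the data must be treated as lost and regenerated. */
    if (VirtualAlloc(p, size, MEM_RESET_UNDO, PAGE_READWRITE)) {
        printf("data survived: %c\n", p[0]);
    } else {
        printf("data lost, must regenerate\n");
    }
    VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}
```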
The third option is that you decommit and release a page, making it completely free for the memory manager to reuse for any other process. If the page contains data, it is marked as dirty, and the system either scrubs it in the background or actively zeroes it when it is next committed.
These are the cases for system-level memory management. Process-level management is quite different, e.g. HeapAlloc, library allocators (malloc), or a completely custom allocator. These allocators typically commit a block of pages and then manage sub-allocations within that block themselves. As such, a libc free() call may not actually decommit the underlying memory pages. It is also entirely possible for a malloc() call to return memory that has not been zeroed - this is why calloc() exists.
The next part of the question is whether browsers have any active defences against this. I don't know whether they do. It's certainly possible that their memory allocators explicitly zero all newly allocated blocks (or fill them with a magic value, e.g. 0xDEADBEEF) to limit the potential for uninitialised memory reads and to make such bugs easier to identify. Microsoft Edge's memory allocator, for example, is specially designed to offer increased security against memory corruption, and scrubbing allocations is part of that.
In summary, it depends!