
The other day I was thinking about Spectre and Meltdown and the ability of one process to access the memory of another.

On my Linux system I currently have all JavaScript disabled, to eliminate the possibility of some JS program accessing memory in a bad way. If some website requires JS to work, I first close all other applications and leave only the browser window with that single site open, then I enable JS only for that site for a limited time, then revert the settings. The idea is to reduce the chance of having any sensitive data in memory (pseudo single-tasking).

But that got me thinking:

  1. Does memory deallocation, as performed by a program or the kernel, erase the actual RAM contents?

  2. Or does it simply make that memory available for another program to allocate, while still keeping the previous content in RAM until another process overwrites it? (thinking about caches, etc.)

If 1 is the case, then what I am doing may be meaningful (to an extent). But if 2 is the actual behaviour, it may be completely futile.

So how does this work? Please share actual facts about it.

george
    See also [Is free() zeroing out memory?](https://stackoverflow.com/questions/30683519/is-free-zeroing-out-memory), [Does Windows clear memory pages?](https://stackoverflow.com/questions/18385556/does-windows-clear-memory-pages) – Sjoerd Jan 24 '18 at 11:32
  • @Sjoerd is it the same for Linux? – george Jan 24 '18 at 13:03
  • It is generally the same. The gist is that the memory is not actually wiped until a page fault actually pulls the pages into use. Before that, the pages are simply marked as unused, and any new memory a process requests has every page simply pointing to a single "zero page". Writes to any of these pages trigger copy-on-write, and an unused page is (truly) zeroed and handed to the process. Linux handling of the zero page is a bit different from Windows, but the core idea is the same. I'll write a detailed answer on this a bit later. – forest Jan 24 '18 at 14:18
  • Actually, I'm wondering whether it doesn't make more sense to NOT close applications. When you close applications, there is a chance that the OS maps the freed physical pages, still holding interesting information, into the address space of a malicious process (marked as non-readable for that process, of course), which can then be accessed via Meltdown. – Andy Jan 24 '18 at 14:45

2 Answers


Rather than give you a direct answer, because it's mostly going to be "it depends", I thought it might be more interesting to take you through a recent discovery of mine that describes the behaviour of memory allocation on Windows, so that you've got some context behind all of it. I don't know enough about the Linux memory management model, so I won't comment on that side of things, although I am told that it is quite similar.

Recently I was thinking about the W^X memory protection feature offered by the Windows kernel, known as the "dynamic code policy". The idea is that once a page mapped into an application has been marked writable, it can never be marked executable. This is a powerful exploit mitigation because it prevents almost every class of exploit involving delivery of shellcode. You can't just do a buffer overflow, gain control of the instruction pointer, ROP to VirtualProtect, mark your payload as RWX, and execute it - the VirtualProtect call fails because memory cannot be both writable and executable.
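For concreteness, here's a minimal sketch of opting a process into that policy via SetProcessMitigationPolicy (Windows 8.1 and later). Treat it as an illustration of the mechanism rather than hardened code:

```c
/* Requires Windows 8.1+; compile with _WIN32_WINNT >= 0x0603. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    PROCESS_MITIGATION_DYNAMIC_CODE_POLICY policy = {0};
    policy.ProhibitDynamicCode = 1;

    /* Opt this process into the dynamic code policy. */
    if (!SetProcessMitigationPolicy(ProcessDynamicCodePolicy,
                                    &policy, sizeof(policy))) {
        printf("failed to enable policy: %lu\n", GetLastError());
        return 1;
    }

    /* This allocation now fails: the process may no longer create
     * writable+executable (or any new executable) memory. */
    void *p = VirtualAlloc(NULL, 4096, MEM_RESERVE | MEM_COMMIT,
                           PAGE_EXECUTE_READWRITE);
    printf("RWX VirtualAlloc returned %p (expected NULL)\n", p);
    return 0;
}
```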

My thought for a generic bypass here was that you could VirtualAlloc some read+write memory, VirtualFree it, then VirtualAlloc again as read+execute, hoping to be handed back the same pages. As it turns out, this doesn't work. The reason why is relevant to your question.
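Sketched out in C, the attempt looks something like this (run without the dynamic code policy enabled, so that the read+execute allocation itself is permitted; the sizes and marker byte are arbitrary):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Allocate a read+write page and plant a marker byte in it. */
    unsigned char *rw = VirtualAlloc(NULL, 4096,
                                     MEM_RESERVE | MEM_COMMIT,
                                     PAGE_READWRITE);
    if (!rw) return 1;
    rw[0] = 0xC3;  /* x86 "ret", standing in for shellcode */

    /* Hand the page back to the memory manager... */
    VirtualFree(rw, 0, MEM_RELEASE);

    /* ...then allocate read+execute memory, hoping to get the old
     * contents back. Whatever physical pages end up backing this
     * allocation, the commit path zeroes them first. */
    unsigned char *rx = VirtualAlloc(NULL, 4096,
                                     MEM_RESERVE | MEM_COMMIT,
                                     PAGE_EXECUTE_READ);
    if (!rx) return 1;

    printf("first byte after re-allocation: 0x%02X\n", rx[0]);  /* 0x00 */

    VirtualFree(rx, 0, MEM_RELEASE);
    return 0;
}
```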

There are three cases where Windows memory management will scrub pages to zero:

  • When a page is committed to a process
  • When a page has been marked as MEM_RESET
  • When a page has been freed and is marked as dirty

The first one thwarts my plan. Processes can be allocated virtual address space without committing it, e.g. by calling VirtualAlloc with MEM_RESERVE. The reserved address space isn't directly mapped to any physical memory or swap, but it is kept for use by the process in case it needs it. In order to actually use the memory, the pages must be committed, e.g. by calling VirtualAlloc with MEM_RESERVE | MEM_COMMIT. Upon being committed, a page will be zeroed if it has not already been by some other means. As such, my trick of allocating writable data, freeing it, then allocating it as executable is broken, because the memory manager zeroes the pages.
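As a small illustration of the reserve/commit distinction (the sizes here are arbitrary):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Reserve 1 MiB of address space. Nothing physical backs it yet;
     * touching it at this point would be an access violation. */
    unsigned char *reserved = VirtualAlloc(NULL, 1 << 20,
                                           MEM_RESERVE, PAGE_NOACCESS);
    if (!reserved) return 1;

    /* Commit the first page of the reservation. Only now does the page
     * get backing, and the memory manager hands it over zeroed. */
    unsigned char *page = VirtualAlloc(reserved, 4096,
                                       MEM_COMMIT, PAGE_READWRITE);
    if (!page) return 1;

    printf("freshly committed page reads: 0x%02X\n", page[0]);  /* 0x00 */

    VirtualFree(reserved, 0, MEM_RELEASE);
    return 0;
}
```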

The second one is also interesting. If you're done using a block of memory for now, but will use it again later, you can reset the pages using MEM_RESET. This indicates to the memory manager that it should not decommit the pages from the process, but that it need not bother swapping them out to disk, as the data contained within is no longer interesting. The system will zero these pages in the background when there is free time. However, the application can request to undo the reset, using the MEM_RESET_UNDO flag, as long as the system has not yet zeroed out the pages.
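In code, the reset/undo dance looks roughly like this; whether the undo wins the race against the background zeroing is timing-dependent, so the outcome can legitimately vary (MEM_RESET_UNDO needs Windows 8 or later):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    unsigned char *buf = VirtualAlloc(NULL, 4096,
                                      MEM_RESERVE | MEM_COMMIT,
                                      PAGE_READWRITE);
    if (!buf) return 1;
    buf[0] = 0x42;

    /* "I'm done with this data for now": keep the pages committed, but
     * don't bother paging their contents out; zero them when convenient. */
    VirtualAlloc(buf, 4096, MEM_RESET, PAGE_READWRITE);

    /* Later: try to reclaim the contents before the system discards them.
     * Success means the data is intact; failure means at least some of
     * it has already been replaced with zeroes. */
    if (VirtualAlloc(buf, 4096, MEM_RESET_UNDO, PAGE_READWRITE))
        printf("undo succeeded, first byte: 0x%02X\n", buf[0]);
    else
        printf("contents already discarded\n");

    VirtualFree(buf, 0, MEM_RELEASE);
    return 0;
}
```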

The third option is that you decommit and release a page, leaving it completely free to be reused by the memory manager for any other process. If the page contains data, it is marked as dirty. The system will either scrub it in the background or actively zero it when it is committed again.

Those are the cases for system-level memory management. Process-level management is quite different, e.g. using HeapAlloc, library allocators (malloc), or a completely custom allocator. These designs may allocate and commit a memory page and then perform their own management of memory use on top of that committed block. As such, a libc free() call may not actually decommit the underlying memory pages. It is also entirely possible for a malloc() call to return memory that has not been zeroed - this is why calloc() exists.
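A small sketch of that allocator-recycling behaviour; whether stale data actually survives depends entirely on the allocator in use (some poison or zero freed blocks), so treat the "old contents" outcome as possible rather than guaranteed:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *a = malloc(64);
    if (!a) return 1;
    strcpy(a, "sensitive");
    free(a);                 /* back to the heap, usually not to the OS */

    char *b = malloc(64);    /* may well be the very same block */
    if (!b) return 1;
    printf("malloc: first byte 0x%02x (may be stale)\n",
           (unsigned char)b[0]);

    char *c = calloc(1, 64); /* contents guaranteed to be zero */
    if (!c) return 1;
    printf("calloc: first byte 0x%02x (always 0)\n",
           (unsigned char)c[0]);

    free(b);
    free(c);
    return 0;
}
```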

The next part of the question is whether browsers have any active defenses against this. I don't know whether they do. It's certainly possible that their memory allocators explicitly zero all newly allocated pages (or set them to some magic value, e.g. 0xDEADBEEF) in order to limit the potential for uninitialised memory reads and to better identify them when they occur. Microsoft Edge's memory allocator, for example, is specially designed to offer increased security against memory corruption, so resetting allocations is part of that.

In summary, it depends!

Polynomial
  • Firefox at least does not, since it just uses jemalloc. – forest Jan 25 '18 at 04:46
  • Actually I take that back. I just remembered that Firefox only uses jemalloc on Linux, but uses the native system malloc on Windows. – forest Mar 01 '18 at 06:08

From the OS's perspective, erasing memory after it is freed is not required, and in all likelihood it is not done, since it is a costly operation. So unless you have a very special kernel, there is no guarantee that your physical memory contains no sensitive information after closing applications.

Andy
  • This is correct but should have more detail. For example, freeing memory with `free()` and deallocating it with `munmap()` are completely different operations. The former simply returns the data to the pool managed by glibc, whereas the latter actually calls into the kernel and rips the address space away from the process. – forest Jan 24 '18 at 14:16
  • This can be verified by printing the elements of a simple C-style array of ints that has been declared but neither initialized nor assigned (on the stack). This may take a large array and many iterations in some circumstances, because all memory is wiped on reboot. So there is some probability of being allocated a previously unused page of memory, which depends on the total amount of spare memory and on the number and size of the processes that have been executed since the last reboot. – Max Power Feb 02 '22 at 17:39