
Sometimes, when I develop an exploit that works perfectly on a given machine, it fails on a different one, despite both having the same OS/architecture and configuration (including language, which in my experience can affect process memory, at least on Windows).

I am aware that techniques exist to achieve more reliability in some cases. Still, I wonder whether an exploit developed and tested on a given device (say, a Google Pixel), with or without these techniques, is supposed to work on another one (like a Samsung Galaxy S8).

If not, why? Shouldn't the same binary have the same process memory layout? I can think of cases in which this shouldn't hold, for example if the program loads device-specific data. But when that isn't the case, will the same binary, fed with the same input, behave differently when run on different devices?

This question assumes that ASLR is disabled.
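
To make the question concrete, here is the kind of minimal test I have in mind (plain C, hypothetical, not taken from any real target). It just prints one stack, one heap, and one data address; my assumption was that, with ASLR off, all three values would be identical on two devices with the same OS and architecture.

    #include <stdio.h>
    #include <stdlib.h>

    static int global;                 /* data segment of the binary */

    int main(void) {
        int local = 0;                 /* stack */
        void *chunk = malloc(64);      /* heap */

        printf("stack: %p\n", (void *)&local);
        printf("heap:  %p\n", chunk);
        printf("data:  %p\n", (void *)&global);

        free(chunk);
        return 0;
    }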

EDIT: I am asking about exploiting memory corruption vulnerabilities - like use-after-frees and OOB reads.

Not Now
  • Depends on the exploit. E.g., SQL injection will work perfectly across devices. Please be more specific: what are you trying to exploit? How does it fail? Et cetera. – ximaera Mar 30 '18 at 23:23
  • I have added an edit. I am talking specifically about memory corruption vulnerabilities. Also, this is not specific to a single bug/exploit, it's a broad question about process memory. – Not Now Mar 30 '18 at 23:28
  • 2
    Having the same overall architecture does not mean the software for loading libraries and memory handling will be the same. The Pixel and Galaxy S8 probably do not have the same CPU model, even if they both use the ARM ISA. – forest Mar 31 '18 at 04:13
  • @forest How is that even possible? Both devices are RISC-based. – muneeb_ahmed Mar 31 '18 at 13:09
  • @muneeb_ahmed I'm not sure what you're asking. What does it matter if both are RISC? – forest Apr 01 '18 at 01:05
  • I am saying that both have a Harvard-based architecture. – muneeb_ahmed Apr 01 '18 at 04:15

1 Answer


Heap layout is a fickle thing. Some real-life examples from the Windows world:

  • Non-English versions of a program load an additional internationalization DLL at a fixed address in the middle of the heap.

  • Different allocation patterns between starting the program from the command line (as your fuzzer/debugger does) and the user opening a document via "File" → "Open".

  • A security product loads a %$§#-ton of DLLs into all processes, each of which starts threads and allocates on the heap. I guess they prevent exploits by not leaving any free memory for them.

  • Windows XP introduced an all-new heap allocation strategy with a service pack. Was it SP2?

  • Browsers have complicated heuristics for when they precompile and JIT, how much they JIT, when they free JITted code, etc. Those heuristics also look at total available system memory. The same goes for garbage collection.

Never assume too much about your heap layout.
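
As a small illustration (a sketch assuming a glibc-style malloc on Linux; jemalloc, scudo and the Windows heap all behave differently, which is rather the point), whether two "identical" allocations end up adjacent, and at which addresses, depends entirely on what touched the heap before them:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(int argc, char **argv) {
        /* Run as "./layout noisy" to simulate some other component (an i18n
         * library, a security product, ...) touching the heap at startup. */
        if (argc > 1 && strcmp(argv[1], "noisy") == 0) {
            void *a = malloc(128);
            void *b = malloc(512);
            free(a);            /* leaves a 128-byte hole that may get reused */
            (void)b;            /* intentionally kept alive */
        }

        /* The two allocations an exploit might want to be adjacent. */
        char *first  = malloc(128);
        char *second = malloc(128);

        long long delta = (long long)(uintptr_t)second
                        - (long long)(uintptr_t)first;
        printf("first  = %p\nsecond = %p\ndelta  = %lld bytes\n",
               (void *)first, (void *)second, delta);

        free(first);
        free(second);
        return 0;
    }

On one allocator the "noisy" run reuses the freed hole, so the two chunks are no longer neighbours; another allocator may keep them together, or separate them even in the "clean" run. An exploit that depends on two objects being adjacent inherits all of that variance.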

manduca