Today I was tasked with analyzing a .vmem file from a Windows RDS server belonging to one of our customers, because of some "strange" connections coming from native Windows processes.
The extracted .vmem file is 20GB in size.
Running imageinfo with C:\Python27_64\python.exe vol.py -f XXXXXXX-Snapshot184.vmem imageinfo
has so far been running for about 60 minutes with no further output after:
Volatility Foundation Volatility Framework 2.6
INFO : volatility.debug : Determining profile based on KDBG search...
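One thing I have considered but not yet tried: the Windows version of the RDS host should be available from the customer, and my understanding is that passing the matching profile directly to the plugin I actually need at least restricts the KDBG search to that single profile instead of trying every profile the way imageinfo does. Roughly like this (the profile name here is only an assumption for illustration):

C:\Python27_64\python.exe vol.py -f XXXXXXX-Snapshot184.vmem --profile=Win2012R2x64 pslist

I would still like to know whether that is the recommended approach for images this size.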
According to the Volatility FAQ, there have even been reports of memory dumps of over 200GB being analyzed with Volatility.
What are the best practices to analyze large memdumps?
Granted, patience is a virtue and loading 20GB into memory can take some time, especially if the bytes have to be scanned for signatures, but I am looking for solutions or tips along the lines of:
- File format XXX will give better performance than format YYY
- Running plugin XXX first will speed up subsequent plugins (see the sketch after this list)
- Tweaks (unofficial, unsupported, undocumented)
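As an example of the second bullet, the workflow I have in mind looks roughly like this (a sketch only; the profile name and the KDBG address below are made-up placeholders): run kdbgscan once, note the KDBG virtual address it reports, and then pin both the profile and that address on every later plugin so the search is never repeated:

C:\Python27_64\python.exe vol.py -f XXXXXXX-Snapshot184.vmem --profile=Win2012R2x64 kdbgscan
C:\Python27_64\python.exe vol.py -f XXXXXXX-Snapshot184.vmem --profile=Win2012R2x64 --kdbg=0xf801c5d0a820 pslist

Is that the intended way to use Volatility on large images, or are there better options?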
I just can't imagine anyone waiting X days for imageinfo to return the correct profile for a large dump.