
The Buffer "Ghosting" Phenomenon

Under certain circumstances it is possible to observe the contents of old and currently used graphics buffers on a monitor. Combined with shoulder surfing, this poses an information disclosure risk:

  1. Growing a window rapidly in some stacking and tiling window managers (always)
  2. Shutting down the X11 server (sometimes)

In case (1) part of the screen, and in case (2) the whole screen, shows portions of currently active windows (possibly from different workspaces/desktops) or of long-since closed windows.

The remnants sometimes look lightly or heavily "corrupted", depending on their age and on buffer activity since the window was destroyed; remnant buffers sometimes resemble "sprite sheets".

This phenomenon occurs for a fraction of a second on resize, or ~1 second on X11 shutdown, making it impractical to provide screenshots. Sorry.

The Core Question

How do I feasibly mitigate or eliminate the risk of disclosing the information contained in these remnant buffers?

This assumes that it is impractical to always hide my monitor, which would be the classic anti-shoulder-surfing method.

Additional information

Attempt at explaining the causes

This only happens on local X servers, not over ssh, so I assume the underlying buffers are in the graphics card memory.

Hypotheses based on consulting xlib documentation:

  1. The X11 server provides an enlarged buffer in which to paint the (now larger) window, but the application owning the window does not fully clear/paint this buffer before the X11 server begins to display it. This leads to garbage data from the buffer being shown, which sometimes happens to form coherent images if the buffer memory was previously used for another window.

  2. After X11 server shutdown, garbage data from old windows remains in the underlying buffers. The graphics card is still active, but kernel mode setting has not taken over yet, so this garbage data is output for some time.

Reproduced on these setups

On Window resize & X11 Server shutdown:

  • Arch Linux (latest), i3, xorg-server 1.18.3-1, nvidia-340xx, Nvidia GT218
  • OpenSuSE Linux 13.2|42.1, i3, x11-video-nvidiaG02|G03|G04, Nvidia G98 Quadro
  • Debian Jessie Linux, i3, xorg-server, nouveau|mesagl, intel integrated graphics

On X11 server shutdown:

  • Opensuse 13.2|42.1, LXDE|GNOME|KDE, x11-video-nvidiaG02|G03|G04, Nvidia G98 Quadro

1 Answer


A partial answer:

What you call "buffer ghosting" is also known as the palinopsia bug. The link provides a proof of concept: a short script that shows your video RAM with the content of already closed applications. Unlike regular RAM, GPU video RAM is not zeroed by default when it is allocated. Some drivers do it, some don't; zeroing the RAM costs a little performance.

You can forbid an application access to the GPU by using untrusted cookies. Example:

    xauth -f $HOME/mycookie generate $DISPLAY . untrusted
    XAUTHORITY=$HOME/mycookie glxgears

glxgears will fail to start because it has no GPU access. Using trusted instead of untrusted allows GPU access, and glxgears works.
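If you want to apply this routinely to selected programs (for example a terminal emulator that handles secrets, as mentioned in the comments), you can wrap the two steps in a small launcher. This is only a sketch; the script name, the temporary cookie file and the wrapped command are illustrative, not part of the original answer:

    #!/bin/sh
    # run-untrusted: launch a program with a freshly generated untrusted
    # X11 cookie, so it cannot use GLX / the GPU.
    COOKIE="$(mktemp)"
    trap 'rm -f "$COOKIE"' EXIT
    xauth -f "$COOKIE" generate "$DISPLAY" . untrusted
    XAUTHORITY="$COOKIE" "$@"

Usage would then be, for example, run-untrusted xterm.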

Using untrusted cookies on a non-compositing window manager like i3 or openbox may be safe. Compositing window managers and desktops with 3D effects like GNOME or KDE may store window content in video memory even if the application itself does not. I'm not sure whether X itself may use video RAM in cases not considered here.

This only happens on local X servers, not over ssh

AFAIK, ssh always uses untrusted cookies.
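With OpenSSH you can choose this explicitly: -X requests untrusted X11 forwarding and -Y requests trusted forwarding (some distributions set ForwardX11Trusted yes in ssh_config, which makes -X behave like -Y). A quick comparison, assuming glxgears exists on the remote host (the host name is a placeholder):

    ssh -X remotehost glxgears   # untrusted cookie: GLX blocked
    ssh -Y remotehost glxgears   # trusted cookie: behaves like a local client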

How do I feasibly mitigate or eliminate the risk of disclosing the information contained in these remnant buffers?

  • Never use your GPU (sigh).
  • Yell at the developers: write a bug report.
  • Don't view porn in Chrome incognito mode.

The video RAM even survives a reboot from one system to another.

The only really secure way may be to disable the GPU at kernel level.
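A hedged sketch of what that could mean in practice: prevent the kernel from loading the GPU's native driver at all, so nothing can allocate or read video RAM through it. The module names below are only examples and depend on your hardware; X then has to fall back to an unaccelerated generic driver:

    # /etc/modprobe.d/disable-gpu.conf  (example module names, adjust to your card)
    blacklist nouveau
    blacklist nvidia

The same can be done for a single boot via the kernel command line, e.g. modprobe.blacklist=nouveau,nvidia.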

From a security point of view, the driver should zero the video memory immediately after deallocating it, because the video memory can be accessed without using the driver and also persists across reboots. At the very least it should zero the video memory when it is allocated. Neither is done.


Some applications can have trouble with untrusted cookies. Alternatively, you can disable the OpenGL / GLX extension in the X server. (Note that this has no effect for Wayland compositors like the GNOME 3 Wayland session.)

Create a file:

/etc/X11/xorg.conf.d/disable-gpu.conf

with the content:

Section "Extensions"
    Option "GLX" "Disable"
EndSection

Instead of making this permanent in xorg.conf, you can start X a single time with GLX disabled. Start X from tty2 with startx -- -extension GLX vt2 (set the N in vtN to the number of the tty you start from).
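To verify that the extension is really gone in the running server (xdpyinfo and glxinfo are standard X utilities):

    xdpyinfo | grep -i glx   # prints nothing when GLX is disabled
    glxinfo                  # should now fail instead of listing GL renderers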

mviereck
  • The proof of concept you linked produces the exact type of output I described; I could reproduce the reboot-survival behaviour, too. For now I have explicitly restricted applications that handle secrets (including my terminal emulator) to software rendering via the xauth method you described. This seems an effective workaround, as long as hardware acceleration is not required by the application. –  Jun 11 '17 at 14:11
  • Glad that using untrusted cookies works for you! Thinking about this, a compositing window manager like Mutter (GNOME) or KWin (KDE) may store window content in video memory even if the application itself does not. Using a window manager like i3 or openbox along with untrusted cookies should be safe. I'll add this to the answer. – mviereck Jun 11 '17 at 17:01