AMDGPU
AMDGPU is the open source graphics driver for AMD Radeon graphics cards from the Graphics Core Next family.
Selecting the right driver
Depending on the card you have, find the right driver in Xorg#AMD. At the moment there is Xorg amdgpu driver support for Southern Islands (SI) through Arctic Islands (AI) cards. AMD has no plans to support pre-GCN GPUs. Owners of unsupported GPUs may use the open source radeon driver.
Installation
Install the mesa package, which provides the DRI driver for 3D acceleration.
- For 32-bit application support, also install the lib32-mesa package from the multilib repository.
- For the DDX driver (which provides 2D acceleration in Xorg), install the xf86-video-amdgpu package.
- For Vulkan support, install the vulkan-radeon or amdvlk package. Optionally install the lib32-vulkan-radeon or lib32-amdvlk package for 32-bit application support. Prefer vulkan-radeon for running DirectX 12 games through Wine/Proton, as amdvlk is broken for this purpose.
Support for accelerated video decoding is provided by the libva-mesa-driver and lib32-libva-mesa-driver packages for VA-API, and by the mesa-vdpau and lib32-mesa-vdpau packages for VDPAU.
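For example, a typical installation on a 64-bit system with the multilib repository enabled might look like the following (a sketch; pick only the packages you actually need):
# pacman -S mesa lib32-mesa xf86-video-amdgpu vulkan-radeon lib32-vulkan-radeon libva-mesa-driver lib32-libva-mesa-driver mesa-vdpau lib32-mesa-vdpau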
Experimental
It may be worthwhile for some users to use the upstream experimental build of mesa, to enable features such as AMD Navi improvements that have not landed in the standard mesa packages.
Install the mesa-git package, which provides the DRI driver for 3D acceleration.
- For 32-bit application support, also install the lib32-mesa-git package from the mesa-git repository or the AUR.
- For the DDX driver (which provides 2D acceleration in Xorg), install the xf86-video-amdgpu-git package.
- For Vulkan support using the mesa-git repository below, install the vulkan-radeon-git package. Optionally install the lib32-vulkan-radeon-git package for 32-bit application support. This should not be required if building from the AUR.
Enable Southern Islands (SI) and Sea Islands (CIK) support
The kernel enables AMDGPU support for cards of the Southern Islands (HD 7000 Series, SI, i.e. GCN 1) and Sea Islands (HD 8000 Series, CIK, i.e. GCN 2). The amdgpu kernel driver needs to be loaded before the radeon one. You can check which kernel driver is loaded by running the following command; the output should look like this:
$ lspci -k | grep -A 3 -E "(VGA|3D)"
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Curacao PRO [Radeon R7 370 / R9 270/370 OEM]
        Subsystem: Gigabyte Technology Co., Ltd Device 226c
        Kernel driver in use: amdgpu
        Kernel modules: radeon, amdgpu
If the driver is not in use, follow instructions in the next section.
Load amdgpu driver
The module parameters of both the radeon and amdgpu modules are si_support= and cik_support=.
They need to be set as kernel parameters or in a modprobe configuration file, and depend on the card's GCN version.
You can set both parameters if you are unsure which family your card belongs to.
Set module parameters in kernel command line
Set one of the following kernel parameters:
- Southern Islands (SI): radeon.si_support=0 amdgpu.si_support=1
- Sea Islands (CIK): radeon.cik_support=0 amdgpu.cik_support=1
Specify the correct module order
Make sure amdgpu has been set as the first module in the Mkinitcpio#MODULES array, e.g. MODULES=(amdgpu radeon).
Set module parameters in modprobe.d
Create a modprobe configuration file in /etc/modprobe.d/, see modprobe.d(5) for syntax details.
For Southern Islands (SI) use the option si_support=1, for Sea Islands (CIK) use the option cik_support=1, e.g.:
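A minimal sketch of such a file, assuming you want amdgpu to handle both SI and CIK cards and radeon to ignore them (the file name is arbitrary):
/etc/modprobe.d/amdgpu.conf
options amdgpu si_support=1 cik_support=1
options radeon si_support=0 cik_support=0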
Make sure amdgpu is in the MODULES array in /etc/mkinitcpio.conf
and regenerate the initramfs.
Compile kernel which supports amdgpu driver
When building or compiling a kernel, CONFIG_DRM_AMDGPU_SI=Y and/or CONFIG_DRM_AMDGPU_CIK=Y should be set in the config.
Disable loading radeon completely at boot
The kernel may still probe and load the radeon module depending on the specific graphics chip involved, but it is unnecessary to keep radeon loaded once amdgpu is confirmed to work as expected. Reboot between each step to confirm it works before moving to the next step:
- Use the module parameters on the kernel command line method to ensure amdgpu works as expected
- Use the mkinitcpio method but do not add radeon to the configuration
- Test that modprobe -r radeon will remove the kernel module cleanly after logging into the desktop
- Blacklist the radeon module from being probed by the kernel during second stage boot:
/etc/modprobe.d/radeon.conf
blacklist radeon
The output of lspci -k and lsmod should now only show the amdgpu driver loading; radeon should not be present. The /sys/module/radeon directory should not exist.
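For example, a quick check (a sketch) should list amdgpu but not radeon:
$ lsmod | grep -E 'amdgpu|radeon'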
ACO compiler
The ACO compiler is an open source shader compiler created and developed by Valve Corporation to directly compete with the LLVM compiler, the AMDVLK drivers, and the Windows 10 driver. It offers shorter compilation times and better in-game performance than LLVM and AMDVLK.
Some benchmarks can be seen in It's FOSS and Phoronix (1) (2) (3).
Since mesa version 20.2, the ACO compiler is enabled by default.
Loading
The amdgpu kernel module is supposed to load automatically on system boot.
If it does not:
- Make sure to #Enable Southern Islands (SI) and Sea Islands (CIK) support when needed.
- Make sure you have the latest linux-firmware package installed. This driver requires the latest firmware for each model to boot successfully.
- Make sure you do not have nomodeset or vga= as a kernel parameter, since amdgpu requires KMS.
- Check that you have not disabled amdgpu by using any kernel module blacklisting.
It is possible the module loads, but late, after the X server requires it. In this case, early load it by adding amdgpu to the Mkinitcpio#MODULES array and regenerating the initramfs.
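A sketch of the relevant line in /etc/mkinitcpio.conf (keep any modules you already list), followed by regenerating the initramfs with mkinitcpio -P:
MODULES=(amdgpu)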
Xorg configuration
Xorg will automatically load the driver and it will use your monitor's EDID to set the native resolution. Configuration is only required for tuning the driver.
If you want manual configuration, create /etc/X11/xorg.conf.d/20-amdgpu.conf, and add the following:
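A minimal sketch of that file; driver options such as those discussed below go inside this Device section:
Section "Device"
     Identifier "AMD"
     Driver "amdgpu"
EndSection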
Using this section, you can enable features and tweak the driver settings, see amdgpu(4) first before setting driver options.
Tear free rendering
TearFree controls tearing prevention using the hardware page flipping mechanism. If this option is set, the default value of the property is 'on' or 'off' accordingly. If this option is not set, the default value of the property is auto, which means that TearFree is on for rotated outputs, outputs with RandR transforms applied and for RandR 1.4 slave outputs, otherwise off:
Option "TearFree" "true"
You can also enable TearFree temporarily with xrandr:
$ xrandr --output output --set TearFree on
Where output should look like DisplayPort-0 or HDMI-A-0 and can be acquired by running xrandr -q.
DRI level
DRI sets the maximum level of DRI to enable. Valid values are 2 for DRI2 or 3 for DRI3. The default is 3 for DRI3 if the Xorg version is >= 1.18.3, otherwise DRI2 is used:
Option "DRI" "3"
Variable refresh rate
10-bit color
Newer AMD cards support 10 bpc color, but the default is 24-bit color, and 30-bit color must be explicitly enabled. Enabling it can reduce visible banding/artifacts in gradients, provided the applications support it too. To check if your monitor supports it, search for "EDID" in your Xorg log file (e.g. /var/log/Xorg.0.log or ~/.local/share/xorg/Xorg.0.log):
[   336.695] (II) AMDGPU(0): EDID for output DisplayPort-0
[   336.695] (II) AMDGPU(0): EDID for output DisplayPort-1
[   336.695] (II) AMDGPU(0): Manufacturer: DEL  Model: a0ec  Serial#: 123456789
[   336.695] (II) AMDGPU(0): Year: 2018  Week: 23
[   336.695] (II) AMDGPU(0): EDID Version: 1.4
[   336.695] (II) AMDGPU(0): Digital Display Input
[   336.695] (II) AMDGPU(0): 10 bits per channel
To check whether it is currently enabled, search for "Depth":
[   336.618] (**) AMDGPU(0): Depth 30, (--) framebuffer bpp 32
[   336.618] (II) AMDGPU(0): Pixel depth = 30 bits stored in 4 bytes (32 bpp pixmaps)
With the default configuration it will instead say the depth is 24, with 24 bits stored in 4 bytes.
To check whether 10-bit works, exit Xorg if you have it running and run Xorg -retro, which will display a black and white grid; then exit X and run Xorg -depth 30 -retro. If this works fine, then 10-bit is working.
To launch in 10-bit via startx, use startx -- -depth 30. To permanently enable it, create or add the following to an Xorg configuration file:
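A sketch, assuming a snippet such as /etc/X11/xorg.conf.d/30-screen.conf (the file name and identifier are arbitrary):
Section "Screen"
     Identifier "Screen"
     DefaultDepth 30
EndSection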
Reduce output latency
If you want to minimize latency you can disable page flipping and tear free:
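A sketch using the options documented in amdgpu(4), placed in the Device section of your Xorg configuration:
Section "Device"
     Identifier "AMD"
     Driver "amdgpu"
     Option "EnablePageFlip" "off"
     Option "TearFree" "false"
EndSection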
See Gaming#Reducing DRI latency to further reduce latency.
Features
Video acceleration
Monitoring
Monitoring your GPU is commonly used to check its temperature and P-states.
CLI
GUI
- AmdGuid — A basic fan control GUI fully written in Rust.
Manually
To check your GPU's P-states, execute:
$ cat /sys/class/drm/card0/device/pp_od_clk_voltage
To monitor your GPU, execute:
$ watch -n 0.5 cat /sys/kernel/debug/dri/0/amdgpu_pm_info
To check your GPU utilization, execute:
$ cat /sys/class/drm/card0/device/gpu_busy_percent
To check your GPU frequency, execute:
$ cat /sys/class/drm/card0/device/pp_dpm_sclk
To check your GPU temperature, execute:
$ cat /sys/class/drm/card0/device/hwmon/hwmon*/temp1_input
To check your VRAM frequency, execute:
$ cat /sys/class/drm/card0/device/pp_dpm_mclk
To check your VRAM usage, execute:
$ cat /sys/class/drm/card0/device/mem_info_vram_used
To check your VRAM size, execute:
$ cat /sys/class/drm/card0/device/mem_info_vram_total
Overclocking
Since Linux 4.17, it is possible to adjust clocks and voltages of the graphics card via /sys/class/drm/card0/device/pp_od_clk_voltage.
Boot parameter
It is required to unlock access to adjust clocks and voltages in sysfs by appending the kernel parameter amdgpu.ppfeaturemask=0xffffffff.
Not all bits are defined, and new features may be added over time. Setting all 32 bits may enable unstable features that cause problems such as screen flicker or broken resume from suspend. It should be sufficient to set the PP_OVERDRIVE_MASK bit, 0x4000, in combination with the default ppfeaturemask. To compute a reasonable parameter for your system, execute:
$ printf 'amdgpu.ppfeaturemask=0x%x\n' "$(($(cat /sys/module/amdgpu/parameters/ppfeaturemask) | 0x4000))"
Manual (default)
To set the GPU clock for the maximum P-state 7 on e.g. a Polaris GPU to 1209MHz and 900mV voltage, run:
# echo "s 7 1209 900" > /sys/class/drm/card0/device/pp_od_clk_voltage
The same procedure can be applied to the VRAM, e.g. maximum P-state 2 on Polaris 5xx series cards:
# echo "m 2 1850 850" > /sys/class/drm/card0/device/pp_od_clk_voltage
To apply, run
# echo "c" > /sys/class/drm/card0/device/pp_od_clk_voltage
To check if it worked out, read out clocks and voltage under 3D load:
# watch -n 0.5 cat /sys/kernel/debug/dri/0/amdgpu_pm_info
You can reset to the default values using this:
# echo "r" > /sys/class/drm/card0/device/pp_od_clk_voltage
It is also possible to forbid the driver to switch to certain P-states, e.g. to work around problems with deep power-saving P-states such as flickering artifacts or stutter. To force the highest VRAM P-state on a Polaris RX 5xx card, while still allowing the GPU itself to run with lower clocks, run:
# echo "manual" > /sys/class/drm/card0/device/power_dpm_force_performance_level # echo "2" > /sys/class/drm/card0/device/pp_dpm_mclk
Allow only the three highest GPU P-states:
# echo "5 6 7" > /sys/class/drm/card0/device/pp_dpm_sclk
To set the allowed maximum power consumption of the GPU to e.g. 50 Watts, run
# echo 50000000 > /sys/class/drm/card0/device/hwmon/hwmon0/power1_cap
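To read back the current limit and the hardware maximum (a sketch; the hwmon number may differ on your system):
$ cat /sys/class/drm/card0/device/hwmon/hwmon0/power1_cap
$ cat /sys/class/drm/card0/device/hwmon/hwmon0/power1_cap_max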
Prior to Linux kernel 4.20, it is only possible to decrease the value, not to increase it.
Assisted
If you prefer not to overclock your GPU entirely by hand, the community offers several tools to assist with overclocking and monitoring your AMD GPU.
CLI tools
GUI tools
Startup on boot
If you want your settings to apply automatically upon boot, consider looking at this Reddit thread to configure and apply your settings on boot.
Power profiles
AMDGPU offers several optimizations via power profiles; one of the most commonly used is the compute mode for OpenCL-intensive applications. Available power profiles can be listed with:
$ cat /sys/class/drm/card0/device/pp_power_profile_mode
To use a specific power profile you should first enable manual control over them with:
# echo "manual" > /sys/class/drm/card0/device/power_dpm_force_performance_level
Then select a power profile by writing the NUM field associated with it, e.g. to enable COMPUTE run:
# echo "5" > /sys/class/drm/card0/device/pp_power_profile_mode
Enable GPU display scaling
To avoid using the scaler built into the display and use the GPU's own scaler instead when not running the monitor's native resolution, execute:
$ xrandr --output output --set "scaling mode" scaling_mode
Possible values for scaling_mode are: None, Full, Center, Full aspect.
- To show the available outputs and settings, execute:
$ xrandr --prop
- To set scaling mode = Full aspect for every available output, execute:
$ for output in $(xrandr --prop | grep -E -o -i "^[A-Z\-]+-[0-9]+"); do xrandr --output "$output" --set "scaling mode" "Full aspect"; done
Troubleshooting
Xorg or applications will not start
- "(EE) AMDGPU(0): [DRI2] DRI2SwapBuffers: drawable has no back or front?" error after opening glxgears, can open Xorg server but OpenGL applications crash.
- "(EE) AMDGPU(0): Given depth (32) is not supported by amdgpu driver" error, Xorg will not start.
Setting the screen's depth under Xorg to 16 or 32 will cause problems or crashes. To avoid that, use the standard screen depth of 24 by adding this to your "Screen" section:
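A sketch of the relevant snippet (merge it with any existing Screen section; the identifier is arbitrary):
Section "Screen"
     Identifier "Screen"
     DefaultDepth 24
EndSection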
Screen artifacts and frequency problem
Dynamic power management may cause screen artifacts to appear when displaying to monitors at higher frequencies (anything above 60Hz) due to issues in the way GPU clock speeds are managed.
A workaround is saving high or low to /sys/class/drm/card0/device/power_dpm_force_performance_level.
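For example (high forces the highest clocks, low the lowest):
# echo "high" > /sys/class/drm/card0/device/power_dpm_force_performance_level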
To make it persistent, you may create a udev rule:
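A sketch of such a rule, assuming card0 and the high performance level (adjust the card name using the command below):
/etc/udev/rules.d/30-amdgpu-pm.rules
KERNEL=="card0", SUBSYSTEM=="drm", DRIVERS=="amdgpu", ATTR{device/power_dpm_force_performance_level}="high"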
To determine the card name (e.g. card0), execute:
$ udevadm info --attribute-walk /sys/class/drm/card0 | grep "KERNEL="
There is also a GUI solution where you can manage the power_dpm settings with radeon-profile-git and radeon-profile-daemon-git from the AUR.
Artifacts in Chromium
If you see artifacts in Chromium, try to force the Vulkan-based backend: go to chrome://flags and enable the relevant Vulkan flags.
R9 390 series poor performance and/or instability
If you experience issues with an AMD R9 390 series graphics card, set radeon.cik_support=0 amdgpu.cik_support=1 as kernel parameters to force the use of the amdgpu driver instead of radeon.
If it still does not work, try disabling DPM by adding amdgpu.dpm=0 to the kernel parameters.
Freezes with "[drm] IP block:gmc_v8_0 is hung!" kernel error
If you experience freezes and kernel crashes during a GPU-intensive task with the kernel error "[drm] IP block:gmc_v8_0 is hung!", a workaround is to set amdgpu.vm_update_mode=3 as a kernel parameter to force GPUVM page table updates to be done using the CPU. Downsides are listed here.
Cursor corruption
If you experience issues with the mouse cursor sometimes not rendering properly, set Option "SWCursor" "True" in the Device section of the configuration file.
If you are using xrandr for scaling and the cursor is flickering or disappearing, you may be able to fix it by setting the TearFree property (see #Tear free rendering).
System freeze or crash when gaming on Vega cards
Dynamic power management may cause a complete system freeze whilst gaming due to issues in the way GPU clock speeds are managed. A workaround is to disable dynamic power management, see ATI#Dynamic power management for details.
WebRender (Firefox) corruption
Artifacts and other anomalies (e.g. inability to select extension options) may present themselves when WebRender is force-enabled by the user. A workaround is to fall back to OpenGL compositing.
Double-speed or "chipmunk" audio, or no audio when a 4K@60Hz device is connected
This is sometimes caused by a communication issue between an AMDGPU device and a 4K display connected over HDMI. A possible workaround is to enable HDR or "Ultra HD Deep Color" via the display's built-in settings. On many Android based TVs, this means setting this to "Standard" instead of "Optimal".
Issues with power management / dynamic re-activation of a discrete amdgpu graphics card
If you encounter issues with dynamic power management of a discrete amdgpu card, you can work around them by setting the kernel parameter amdgpu.runpm=0, which prevents the dGPU from being powered down dynamically at runtime.
kfd: amdgpu: TOPAZ not supported in kfd
In the system journal or the kernel message buffer, a critical-level error message
kfd: amdgpu: TOPAZ not supported in kfd
may appear. If you are not planning to use Radeon Open Compute, this can be safely ignored; kfd simply does not support Topaz GPUs, as they are old.