
NVIDIA/Tips and tricks

Fixing terminal resolution

Transitioning from nouveau may cause your startup terminal to display at a lower resolution.

For GRUB, see GRUB/Tips and tricks#Setting the framebuffer resolution for details.

For systemd-boot, set the console-mode option in esp/loader/loader.conf. See systemd-boot#Loader configuration for details.
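For example, to use the highest resolution the firmware offers, an illustrative loader.conf entry would be (console-mode also accepts auto, keep, or a numeric mode index):

console-mode max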

For rEFInd, add to esp/EFI/refind/refind.conf and /etc/refind.d/refind.conf (the latter file is optional but recommended):

use_graphics_for linux

A small caveat is that this will hide the kernel parameters from being shown during boot.

Using TV-out

See Wikibooks:NVIDIA/TV-OUT.

X with a TV (DFP) as the only display

The X server falls back to CRT-0 if no monitor is automatically detected. This can be a problem when using a DVI connected TV as the main display, and X is started while the TV is turned off or otherwise disconnected.

To force NVIDIA to use DFP, store a copy of the EDID somewhere in the filesystem so that X can parse the file instead of reading EDID from the TV/DFP.

To acquire the EDID, start nvidia-settings. It will show some information in tree format; ignore the rest of the settings for now and select the GPU (the corresponding entry should be titled "GPU-0" or similar), click the DFP section (again, "DFP-0" or similar), click the Acquire EDID button and store the file somewhere, for example /etc/X11/dfp0.edid.

If no mouse and keyboard are attached to the machine, the EDID can be acquired using only the command line. Run an X server with enough verbosity to print out the EDID block:

$ startx -- -logverbose 6

After the X Server has finished initializing, close it and your log file will probably be in /var/log/Xorg.0.log. Extract the EDID block using nvidia-xconfig:

$ nvidia-xconfig --extract-edids-from-file=/var/log/Xorg.0.log --extract-edids-output-file=/etc/X11/dfp0.bin

Edit xorg.conf by adding to the Device section:

Option "ConnectedMonitor" "DFP"
Option "CustomEDID" "DFP-0:/etc/X11/dfp0.bin"

The ConnectedMonitor option forces the driver to recognize the DFP as if it were connected. The CustomEDID option provides EDID data for the device, meaning that X will start up just as if the TV/DFP were connected.

This way, one can automatically start a display manager at boot time and still have a working and properly configured X screen by the time the TV gets powered on.

If the above changes did not work, in the Device section of xorg.conf you can try to remove the Option "ConnectedMonitor" "DFP" line and add the following lines instead:

Option "ModeValidation" "NoDFPNativeResolutionCheck"
Option "ConnectedMonitor" "DFP-0"

The NoDFPNativeResolutionCheck option prevents the NVIDIA driver from disabling all the modes that do not fit in the native resolution.

Headless (no monitor) resolution

In headless mode, the resolution falls back to 640x480, which is then used by VNC or Steam Link. To start in a higher resolution, e.g. 1920x1080, specify a Virtual entry under the Display subsection of the Screen section in xorg.conf:

Section "Screen"
   [...]
   SubSection     "Display"
       Depth       24
       Virtual     1920 1080
   EndSubSection
EndSection

Check the power source

The NVIDIA X.org driver can also be used to detect the GPU's current source of power. To see the current power source, check the read-only GPUPowerSource parameter (0 - AC, 1 - battery).
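A sketch of such a query, assuming the -t (terse) flag of nvidia-settings so that only the value is printed (a running X session is required, since nvidia-settings talks to the X driver):

$ nvidia-settings -q GPUPowerSource -t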

Listening to ACPI events

NVIDIA drivers automatically try to connect to the acpid daemon and listen to ACPI events such as battery power, docking, some hotkeys, etc. If the connection fails, X.org will log a warning about being unable to connect to the ACPI event daemon.

While completely harmless, you may get rid of this message by disabling the ConnectToAcpid option in your Xorg configuration:

Section "Device"
  ...
  Driver "nvidia"
  Option "ConnectToAcpid" "0"
  ...
EndSection

If you are on a laptop, it might be a good idea to install and enable the acpid daemon instead.
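For example, with systemd (assuming the unit shipped by the acpid package is named acpid.service):

# systemctl enable --now acpid.service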

Displaying GPU temperature in the shell

There are three methods to query the GPU temperature. nvidia-settings requires that you are using X; nvidia-smi and nvclock do not. Also note that nvclock currently does not work with newer NVIDIA cards such as GeForce 200 series cards, nor with embedded GPUs such as the Zotac IONITX's 8800GS.

nvidia-settings

To display the GPU temp in the shell, use nvidia-settings as follows:

$ nvidia-settings -q gpucoretemp

This will output something similar to the following:

Attribute 'GPUCoreTemp' (hostname:0.0): 41.
'GPUCoreTemp' is an integer attribute.
'GPUCoreTemp' is a read-only attribute.
'GPUCoreTemp' can use the following target types: X Screen, GPU.

The GPU temperature of this board is 41 C.

In order to get just the temperature for use in utilities such as rrdtool or conky:
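The -t (terse) flag of nvidia-settings prints only the attribute value, for example:

$ nvidia-settings -q gpucoretemp -t
41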

nvidia-smi

Use nvidia-smi, which can read temperatures directly from the GPU without the need to use X at all, e.g. when running Wayland or on a headless server. To display the GPU temperature in the shell, use nvidia-smi as follows:

$ nvidia-smi

This prints a status summary table, including the current GPU temperature.

Only for temperature:
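For instance, the following restricts the full query output to the temperature section (the exact layout depends on the driver version):

$ nvidia-smi -q -d TEMPERATURE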

In order to get just the temperature for use in utilities such as rrdtool or conky:

$ nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader,nounits
52

Reference: https://www.question-defense.com/2010/03/22/gpu-linux-shell-temp-get-nvidia-gpu-temperatures-via-linux-cli.

nvclock

Use nvclock, which is available from the AUR.

There can be significant differences between the temperatures reported by nvclock and nvidia-settings/nv-control. According to this post by the author (thunderbird) of nvclock, the nvclock values should be more accurate.

Overclocking and cooling

Enabling overclocking

Warning: Overclocking might permanently damage your hardware. You have been warned.

Overclocking is controlled via the Coolbits option in the Device section, which enables various unsupported features:

Option "Coolbits" "value"

The Coolbits value is the sum of its component bits in the binary numeral system. The component bits are:

  • 1 (bit 0) - Enables overclocking of older (pre-Fermi) cores on the Clock Frequencies page in nvidia-settings.
  • 2 (bit 1) - When this bit is set, the driver will "attempt to initialize SLI when using GPUs with different amounts of video memory".
  • 4 (bit 2) - Enables manual configuration of GPU fan speed on the Thermal Monitor page in nvidia-settings.
  • 8 (bit 3) - Enables overclocking on the PowerMizer page in nvidia-settings. Available since version 337.12 for the Fermi architecture and newer.
  • 16 (bit 4) - Enables overvoltage using nvidia-settings CLI options. Available since version 346.16 for the Fermi architecture and newer.

To enable multiple features, add the Coolbits values together. For example, to enable overclocking and overvoltage of Fermi cores, set Option "Coolbits" "24" (8 + 16).
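For instance, a minimal Device section with these two bits enabled might look like the following sketch (the identifier is arbitrary):

Section "Device"
    Identifier "NVIDIA Card"
    Driver     "nvidia"
    Option     "Coolbits" "24"
EndSection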

The documentation of Coolbits can be found in /usr/share/doc/nvidia/html/xconfigoptions.html and here.

Setting static 2D/3D clocks

Set the following string in the Device section to enable PowerMizer at its maximum performance level (VSync will not work without this line):

Option "RegistryDwords" "PerfLevelSrc=0x2222"

Allow change to highest performance mode

Since changing the performance mode and overclocking the memory rate may have little to no effect in nvidia-settings, try the following:

  • Set Coolbits to 24 or 28 and remove the PowerMizer RegistryDwords option, then restart X.
  • Find out the maximum supported clock and memory rates (these can be lower than what your graphics card reports after booting):
    $ nvidia-smi -q -d SUPPORTED_CLOCKS
  • Set the rates for GPU 0 (see the sketch below):
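Assuming the highest performance level is 3, placeholder offsets could be applied like this (the values are illustrative, not recommendations):

$ nvidia-settings -a "[gpu:0]/GPUGraphicsClockOffset[3]=100" \
                  -a "[gpu:0]/GPUMemoryTransferRateOffset[3]=800"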

After setting the rates, the maximum performance mode works in nvidia-settings and you can overclock the graphics clock and memory transfer rate.

Saving overclocking settings

Typically, clock and voltage offsets inserted in the nvidia-settings interface are not saved, being lost after a reboot. Fortunately, there are tools that offer an interface for overclocking under the proprietary driver, able to save the user's overclocking preferences and automatically applying them on boot. Some of them are:

  • - graphical, applies settings on desktop session start
  • and - graphical, applies settings on system boot
  • - text based, profiles are configuration files in /etc/nvoc.d/, applies settings on desktop session start

Otherwise, the GPUGraphicsClockOffset and GPUMemoryTransferRateOffset attributes can be set in the command-line interface of nvidia-settings on startup. For example:

$ nvidia-settings -a "GPUGraphicsClockOffset[performance_level]=offset"
$ nvidia-settings -a "GPUMemoryTransferRateOffset[performance_level]=offset"

Where performance_level is the number of the highest performance level. If there are multiple GPUs on the machine, the GPU ID should be specified as well, e.g. "[gpu:0]/GPUGraphicsClockOffset[performance_level]=offset".

Custom TDP Limit

Modern NVIDIA graphics cards throttle frequency to stay in their TDP and temperature limits. To increase performance it is possible to change the TDP limit, which will result in higher temperatures and higher power consumption.

For example, to set the power limit to 160.30W:

# nvidia-smi -pl 160.30

To set the power limit on boot (without driver persistence), apply the nvidia-smi command automatically at startup.
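One way is a small one-shot systemd service; the following is a sketch with a hypothetical unit name, /etc/systemd/system/nvidia-tdp.service:

[Unit]
Description=Set NVIDIA power limit

[Service]
Type=oneshot
ExecStart=/usr/bin/nvidia-smi -pl 160.30

[Install]
WantedBy=multi-user.target

Enable the unit so that the limit is applied on every boot.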

Set fan speed at login

You can adjust the fan speed on your graphics card with nvidia-settings' console interface. First ensure that your Xorg configuration has bit 2 (value 4) set in the Coolbits option.

Note: GeForce 400/500 series cards cannot currently set fan speeds at login using this method. This method only allows for the setting of fan speeds within the current X session by way of nvidia-settings.

Place the following line in your xinitrc file to adjust the fan when you launch Xorg. Replace n with the fan speed percentage you want to set.

nvidia-settings -a "[gpu:0]/GPUFanControlState=1" -a "[fan:0]/GPUTargetFanSpeed=n"

You can also configure a second GPU by incrementing the GPU and fan number.

nvidia-settings -a "[gpu:0]/GPUFanControlState=1" -a "[fan:0]/GPUTargetFanSpeed=n" \
                -a "[gpu:1]/GPUFanControlState=1" -a "[fan:1]/GPUTargetFanSpeed=n" &

If you use a login manager such as GDM or SDDM, you can create a desktop entry file to process this setting. Create ~/.config/autostart/nvidia-fan-speed.desktop and place this text inside it. Again, change n to the speed percentage you want.

[Desktop Entry]
Type=Application
Exec=nvidia-settings -a "[gpu:0]/GPUFanControlState=1" -a "[fan:0]/GPUTargetFanSpeed=n"
X-GNOME-Autostart-enabled=true
Name=nvidia-fan-speed

To make it possible to adjust the fan speed of more than one graphics card, run:

$ nvidia-xconfig --enable-all-gpus
$ nvidia-xconfig --cool-bits=4

Kernel module parameters

Some options can be set as kernel module parameters; a full list can be obtained by running modinfo nvidia. See Gentoo:NVidia/nvidia-drivers#Kernel module parameters as well.

For example, enabling the following will turn on kernel mode setting (see above) and enable the PAT feature, which affects how memory is allocated. PAT was first introduced in Pentium III and is supported by most newer CPUs (see wikipedia:Page attribute table#Processors). If your system can support this feature, it should improve performance.
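A sketch of such a configuration, for example in /etc/modprobe.d/nvidia.conf (the path is illustrative; the parameters are the nvidia-drm and nvidia module options):

options nvidia_drm modeset=1
options nvidia NVreg_UsePageAttributeTable=1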

On some notebooks, to enable any NVIDIA settings tweaking you must include this option, otherwise it responds with "Setting applications clocks is not supported" etc.

Preserve video memory after suspend

By default the NVIDIA Linux drivers save and restore only essential video memory allocations on system suspend and resume. Quoting the NVIDIA documentation (also shipped with the nvidia-utils package): The resulting loss of video memory contents is partially compensated for by the user-space NVIDIA drivers, and by some applications, but can lead to failures such as rendering corruption and application crashes upon exit from power management cycles.

The still experimental system described below enables saving all video memory (given enough space on disk or in main RAM). The interface is the /proc/driver/nvidia/suspend file, used as follows:

  • write "suspend" (or "hibernate") to immediately before writing to the usual Linux /sys/power/state file
  • write "resume" to immediately after waking up, or after an unsuccessful attempt to suspend or hibernate.

To save and restore all video memory contents, load the nvidia kernel module with the NVreg_PreserveVideoMemoryAllocations=1 option and enable nvidia-suspend.service and nvidia-hibernate.service.

To make the module option permanent, set it in a modprobe configuration file.
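A minimal sketch, e.g. in /etc/modprobe.d/nvidia-power-management.conf (the file name is illustrative):

options nvidia NVreg_PreserveVideoMemoryAllocations=1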

The interaction with /proc/driver/nvidia/suspend is handled by the simple Unix shell script nvidia-sleep.sh, which will itself be called by systemd or other tools. The nvidia-utils package ships with the following services (which essentially just call nvidia-sleep.sh): nvidia-suspend.service, nvidia-hibernate.service and nvidia-resume.service.

Driver persistence

NVIDIA has a persistence daemon that can optionally be run at boot. In a standard single-GPU X desktop environment the persistence daemon is not needed and can actually create issues. See the Driver Persistence section of the NVIDIA documentation for more details.

To start the persistence daemon at boot, enable nvidia-persistenced.service. For manual usage see the upstream documentation.
