Recently, I have been doing some deep learning with TensorFlow and PyTorch by dual-booting my PC into Ubuntu 18.04 and training with the CUDA drivers.
However, I use the same graphics card to display my desktop, and the desktop becomes annoyingly laggy while training is running, especially in GPU-accelerated applications like Firefox or POLAR. Switching the desktop to the internal graphics fixes this.
In addition, it would be nice to reclaim the graphics RAM used by X.Org by dedicating the GPU to CUDA use only.[^1]
Credit to Stas Berkman for these instructions on AskUbuntu.
Note: Ubuntu 18.04 uses X.Org as the display server. Other versions of Ubuntu may use Wayland, which requires different instructions. If you are not using an Intel processor, you will also need different instructions!
Steps
- Ensure the internal Intel graphics are enabled in your BIOS. I had to turn mine on.
- (Optional) Set the default display output to “internal” or similar in your BIOS so the boot screen appears on the right output.
- Plug your monitor into the output of the internal graphics. It is fine to have it plugged into both this and the graphics card at once.[^2]
- Check the PCI address of your internal Intel card using:

```
lspci
```
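If the full `lspci` listing is noisy, a common way to narrow it down is to filter for display adapters (this assumes `grep` is available; the exact controller names will differ per machine):

```shell
# List only the display adapters; the Intel iGPU typically appears as
# "VGA compatible controller: Intel Corporation ..." and the NVIDIA card
# as a "VGA compatible" or "3D controller" entry.
lspci | grep -iE 'vga|3d|display'
```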
- Edit/create `/etc/X11/xorg.conf`, ensuring the PCI number matches (see the AskUbuntu post for details):

```
Section "Device"
    Identifier "intel"
    Driver "intel"
    BusID "PCI:0:2:0"
EndSection

Section "Screen"
    Identifier "intel"
    Device "intel"
EndSection
```
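One gotcha worth noting: `lspci` prints the address in hexadecimal as `bus:device.function` (e.g. `00:02.0`), while the `BusID` in `xorg.conf` is written in decimal as `PCI:bus:device:function`. For `00:02.0` the two happen to look identical, but on machines where the bus number is above 9 they differ. A minimal sketch of the conversion (the helper name is my own, not a standard tool):

```python
def lspci_to_busid(addr: str) -> str:
    """Convert an lspci hex address like '82:00.0' to an xorg.conf BusID."""
    bus, rest = addr.split(":")
    device, function = rest.split(".")
    # lspci fields are hex; xorg.conf's BusID expects decimal numbers.
    return f"PCI:{int(bus, 16)}:{int(device, 16)}:{int(function, 16)}"

print(lspci_to_busid("00:02.0"))  # -> PCI:0:2:0
print(lspci_to_busid("82:00.0"))  # -> PCI:130:0:0
```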
- Check that your existing code works.
In my case, the code had set the CUDA_VISIBLE_DEVICES environment variable, which hid my GPU from TensorFlow! Removing this got things working again.
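For reference, an *empty* `CUDA_VISIBLE_DEVICES` hides every GPU, while an *unset* variable leaves them all visible — which is why a stale `CUDA_VISIBLE_DEVICES=""` makes TensorFlow report no GPUs. A small sketch of how the variable is interpreted, assuming plain integer indices (the helper function is my own, not a TensorFlow API):

```python
import os

def visible_gpu_ids():
    """Return the GPU indices CUDA will expose, or None if unrestricted."""
    value = os.environ.get("CUDA_VISIBLE_DEVICES")
    if value is None:
        return None  # variable unset: all GPUs visible
    # An empty string yields an empty list, i.e. every GPU is hidden.
    return [int(v) for v in value.split(",") if v.strip()]

os.environ["CUDA_VISIBLE_DEVICES"] = ""
print(visible_gpu_ids())  # -> []

del os.environ["CUDA_VISIBLE_DEVICES"]
print(visible_gpu_ids())  # -> None
```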
Now when I run `nvidia-smi`, its output shows that the GPU is completely free of X.Org or any other process 🙂.
[^1]: I have a known issue with CUDA no longer working after suspend/resume, which this might fix, or at least make reloading the kernel module easier.
[^2]: I use both cables so I can still play games on my Windows partition.