Expect no witty sayings or clever analyses here – I just think GPUs are cool. And here are a few reasons why:
Exhibit A: Machine Learning

Training a standard feed-forward neural net on CIFAR-10 progresses at 50 usec/sample on the GPU; my 2.4 GHz i7 takes almost 500 usec/sample. The full dataset takes around 5 min to train on the GPU vs over 35 min on my CPU. On longer jobs that gap is the difference between days and weeks.
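If you want to reproduce the comparison yourself, here's a minimal PyTorch sketch of the CPU-vs-GPU timing run. The layer sizes, batch size, optimizer, and step count are my illustrative guesses, not the exact configuration behind the numbers above.

```python
# A minimal sketch of the Exhibit A comparison, assuming PyTorch/torchvision.
# Layer sizes, batch size, learning rate, and step count are illustrative
# guesses, not the exact configuration behind the numbers above.
import time

import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

def usec_per_sample(device: torch.device, steps: int = 50, batch_size: int = 128) -> float:
    data = torchvision.datasets.CIFAR10(
        root="./data", train=True, download=True, transform=T.ToTensor())
    loader = torch.utils.data.DataLoader(data, batch_size=batch_size, shuffle=True)

    # Plain feed-forward net: flatten the 3x32x32 images through two hidden layers.
    model = nn.Sequential(
        nn.Flatten(),
        nn.Linear(3 * 32 * 32, 1024), nn.ReLU(),
        nn.Linear(1024, 256), nn.ReLU(),
        nn.Linear(256, 10),
    ).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    start = time.perf_counter()
    seen = 0
    for i, (x, y) in enumerate(loader):
        if i >= steps:  # a short run is enough for a rough per-sample figure
            break
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
        seen += x.size(0)
    if device.type == "cuda":
        torch.cuda.synchronize()  # wait for queued GPU work before stopping the clock
    return (time.perf_counter() - start) / seen * 1e6

print(f"cpu: {usec_per_sample(torch.device('cpu')):.1f} usec/sample")
if torch.cuda.is_available():
    print(f"gpu: {usec_per_sample(torch.device('cuda')):.1f} usec/sample")
```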
Exhibit B: Video Transcoding

To make backups of all my Blu-ray discs, I rip and transcode them using ffmpeg or HandBrake. Normally I'm lucky to get a few dozen frames per second, completely maxing out my CPU in the process. By compiling ffmpeg to include nvenc/CUDA support I get 456 fps (19x faster). As the screenshots show, my average CPU usage stayed below 20%, and even GPU usage stayed under 10%. Video quality was superb (I couldn't tell the difference).
ffmpeg -vsync 0 -hwaccel cuvid -i 00800.m2ts -c:a copy -c:v h264_nvenc -b:v 5M prince_egypt.mp4
[Screenshots: CPU load during NVENC encoding · GPU load during NVENC encoding]
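With a whole shelf of rips to churn through, a thin wrapper makes that invocation easy to reuse. Here's a small Python sketch that batches the same command over a folder; the rips/ and encoded/ directory names are made up, and it assumes an nvenc-enabled ffmpeg build is on the PATH.

```python
# Hypothetical batch wrapper around the ffmpeg invocation above. The rips/ and
# encoded/ directory names are made up; assumes an ffmpeg build with nvenc
# support is on the PATH.
import subprocess
from pathlib import Path

def transcode(src: Path, dst: Path, bitrate: str = "5M") -> None:
    """Decode via cuvid, copy the audio track, encode with h264_nvenc."""
    subprocess.run([
        "ffmpeg", "-vsync", "0",
        "-hwaccel", "cuvid",
        "-i", str(src),
        "-c:a", "copy",
        "-c:v", "h264_nvenc",
        "-b:v", bitrate,
        str(dst),
    ], check=True)  # raise if ffmpeg exits non-zero

out_dir = Path("encoded")
out_dir.mkdir(exist_ok=True)
for rip in sorted(Path("rips").glob("*.m2ts")):
    transcode(rip, out_dir / (rip.stem + ".mp4"))
```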


My setup:
- GPU: RTX 2070 Super (8 GB VRAM)
- CPU: i7-8700K (6 cores / 12 threads @ 3.7 GHz)
- RAM: 32 GB
- Disk: 1 TB Samsung PM981 (NVMe)