Do you need a whole PC to run a GPU?

Shop Micro Center Holiday Deals: https://micro.center/e787eb
Check out Micro Center's Other Top Deals: https://micro.center/315b05
Shop Micro Center Bundles: https://micro.center/f7b037
Visit Micro Center News: https://micro.center/9b77f2

Can a Raspberry Pi match a modern desktop PC when it comes to GPU performance? In general, of course not. But there are specific use cases where you'd be surprised how close the Pi comes. In some cases, the Pi is actually faster!

Resources I mentioned in this video:
AI Benchmark results and methodology: https://github.com/geerlingguy/ai-ben...
Multi-GPU benchmark results: https://github.com/geerlingguy/ai-ben...
Use AMD GPUs on Pi: https://www.jeffgeerling.com/blog/202...
Use Nvidia GPUs on Pi: https://www.jeffgeerling.com/blog/202...
Use Intel GPUs on Pi: https://www.jeffgeerling.com/blog/202...

eGPU Setup (some links are affiliate links):
Minisforum DEG1 eGPU Dock: https://amzn.to/4s3gUz6
Micro SATA Cables Oculink to M.2 adapter: https://amzn.to/49dKKcE
Super Flower 850W PSU: https://www.microcenter.com/product/6...
AMD Radeon AI Pro R9700: https://www.microcenter.com/product/7...

Dual GPU Setup (some links are affiliate links):
chenyang PCIe 4.0 M.2 NGFF to SFF-8643: https://amzn.to/4pyqSqt
10Gtek SFF-8644 to SFF-8643 Cable: https://amzn.to/3MKNbup
Dolphin PCIe HBA MXH932: https://dolphinics.com/products/MXH93...
Dolphin 3 slot PCIe Backplane: https://dolphinics.com/products/IBP-G...

HUGE thanks to Patrick from @ServeTheHomeVideo for helping me record at Micro Center in Phoenix :)

Support me on Patreon: / geerlingguy
Sponsor me on GitHub: https://github.com/sponsors/geerlingguy
Merch: https://www.redshirtjeff.com
2nd Channel: / @geerlingengineering
3rd Channel: / @level2jeff

Contents:

00:00 - Pi vs PC
00:53 - Enough for Jellyfin? Local LLMs?
01:50 - 4 GPUs 1 Pi
02:16 - Comparing costs and energy use
03:07 - Gaming on hiatus (for now)
03:39 - The setups
04:56 - ffmpeg and Jellyfin media transcoding
07:31 - 3D rendering with GravityMark
08:39 - LLMs on AMD
09:36 - LLMs on Nvidia
12:27 - Drivers, Vulkan, and CUDA
13:06 - Dual GPU - Setup
15:24 - Sharing memory and PCIe ACS
16:28 - llama.cpp performance - mixed Nvidia GPUs
17:08 - 52GB of VRAM on AMD
17:28 - Intel PC dual Nvidia GPU comparison
17:56 - Who wins?
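
If you want to try similar tests yourself, here are rough, generic sketches (not the exact commands used in the video; device paths, codecs, bitrates, and model names are placeholders you'll need to adjust for your own hardware):

Hardware-accelerated transcode via VAAPI (AMD/Intel GPUs), similar to what Jellyfin does under the hood:

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i input.mkv -c:v hevc_vaapi -b:v 8M -c:a copy output.mkv

llama.cpp benchmark with all model layers offloaded to the GPU (model file is a placeholder):

./llama-bench -m your-model.gguf -ngl 99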