For me a bramble of Raspberry Pi computers is more specific than a cluster in that the focus is on running web-facing microservices. I'm not sure how CUDA fits in that usage model, but there are other considerations as well.

I'm building a small bramble (currently planning on 4 Pi 4 nodes), and I have an extra older GPU (1070). Would it be at all reasonable or useful to upgrade the head node to a Pi 5 to try to incorporate it into the bramble, or are the other limitations on problem size/complexity going to prevent it from being meaningfully used? Also, would using different hardware for the head and compute nodes interfere with overall cluster performance?
Have you read anything suggesting Nvidia CUDA might work with a 1070 connected to a Pi 5?
Jeff Geerling has a GPU compatibility matrix at
https://pipci.jeffgeerling.com/#gpus-graphics-cards
but the Pi 5 column still seems to be mostly in testing. I think the limited progress so far has been with AMD graphics cards and open-source drivers.
For CUDA one needs a proprietary Nvidia driver that works on the Pi hardware. There are definitely ARM-compatible drivers, which date back to at least the Tegra K1 about ten years ago. Today the most visible ARM implementation of CUDA is for the Nvidia Grace Hopper supercomputers.
In my opinion, it is unlikely any of those drivers will work on a Raspberry Pi. Highly skilled hacking of system software to get CUDA working on a Pi 5 would make someone famous for at least a week on Hackaday
https://hackaday.com/
Since nobody has done so already, the details must be very difficult. On the other hand, it might be possible to write an engaging blog post about the failure, provided enough jokes were included.
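If someone did manage to install an aarch64 Nvidia driver and CUDA toolkit on a Pi 5 (a big if), the obvious first sanity check would be a device query along the lines of the sketch below. It only exercises the standard CUDA runtime API and contains nothing Pi specific; whether nvcc and the runtime would even be available on that platform is the assumption being tested.

Code:

// Minimal CUDA device query: the kind of sanity check one would run
// if an aarch64 Nvidia driver and CUDA toolkit ever installed cleanly
// on a Pi 5. Nothing here is Pi specific.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        // A missing or incompatible proprietary driver shows up here.
        std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("Device %d: %s, %zu MiB global memory, compute %d.%d\n",
                    i, prop.name, prop.totalGlobalMem >> 20,
                    prop.major, prop.minor);
    }
    return 0;
}

If cudaGetDeviceCount returns an error rather than listing the 1070, that would be the end of the experiment and the start of the blog post.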
The Pi 5 only exposes a single lane through its PCIe connector. At best, transfers between device and host memory will be slow compared to, for example, a 10-year-old Xeon server. Even so, a Pi host could be useful for CUDA calculations that take place almost entirely within device memory.
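To illustrate the kind of workload I mean, the sketch below copies the data across the slow link once, runs a long sequence of kernel launches that touch only device memory, and copies the result back once. The kernel, problem size and iteration count are made up for illustration and nothing in it is specific to a 1070 or a Pi.

Code:

// Sketch of a transfer-light CUDA workload: data crosses the single
// PCIe lane once in each direction, while the iterations run entirely
// out of device memory. Sizes and iteration count are illustrative.
#include <cstdio>
#include <utility>
#include <vector>
#include <cuda_runtime.h>

// Simple Jacobi-style smoothing step, applied repeatedly with double
// buffering; the point is that the loop never touches host memory.
__global__ void smooth(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i > 0 && i < n - 1)
        out[i] = 0.25f * in[i - 1] + 0.5f * in[i] + 0.25f * in[i + 1];
    else if (i < n)
        out[i] = in[i];
}

int main() {
    const int n = 1 << 20;        // about 4 MiB of floats each way over PCIe
    const int iterations = 1000;  // all of this work avoids the bus

    std::vector<float> host(n, 1.0f);
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));

    // One slow x1 transfer in...
    cudaMemcpy(a, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // ...many kernel launches touching only device memory...
    const int block = 256, grid = (n + block - 1) / block;
    for (int k = 0; k < iterations; ++k) {
        smooth<<<grid, block>>>(a, b, n);
        std::swap(a, b);
    }

    // ...and one slow x1 transfer back out.
    cudaMemcpy(host.data(), a, n * sizeof(float), cudaMemcpyDeviceToHost);

    std::printf("host[42] = %f after %d iterations\n", host[42], iterations);

    cudaFree(a);
    cudaFree(b);
    return 0;
}

With these numbers only about 8 MiB crosses the bus in total, so the kernel time rather than the single lane dominates. Workloads that need to stream data through the GPU constantly would be a different story.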
Statistics: Posted by ejolson — Sat May 25, 2024 5:10 am