Very few, and quite pricey. I will consider it for sure, but I lean towards an APU.
It should be trivial to make a container with this as the base plus Whisper and push it to Docker Hub. What iGPUs do those containers support? Hmm, on looking further I'm not sure they can use any of it, since whisper fast is designed to use CUDA. It looks like PyTorch is used in the official OpenAI Whisper, though… I'm just not sure whether accelerating that helps or not…
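For what it's worth, a quick way to see whether the official openai-whisper would pick up any acceleration inside such a container is to check what backend the installed PyTorch build exposes and load the model on it. A minimal sketch, assuming `pip install openai-whisper` and a PyTorch build with the relevant backend (CUDA, or a ROCm build for AMD iGPUs/APUs); `audio.wav` is just a placeholder file name:

```python
# Sketch: check which accelerator the installed PyTorch build exposes,
# then run the official openai-whisper model on it (CPU fallback otherwise).
import torch
import whisper

if torch.cuda.is_available():
    # Note: ROCm builds of PyTorch also report True here;
    # torch.version.hip distinguishes them from CUDA builds.
    device = "cuda"
else:
    device = "cpu"

print("CUDA:", torch.version.cuda, "HIP:", getattr(torch.version, "hip", None))

model = whisper.load_model("base", device=device)  # official openai-whisper API
result = model.transcribe("audio.wav")             # placeholder input path
print(result["text"])
```

If that reports only CPU inside the container, then accelerating PyTorch for the iGPU would need a matching base image (e.g. a ROCm-enabled PyTorch build) rather than the CUDA-oriented one.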