Using GPU for both encoding and decoding

For developers who use FFmpeg in their software.
jerq
Posts: 1
Joined: Thu Jan 30, 2020 3:19 pm

Post by jerq »

I am currently trying to decode an RTSP stream to the RGB24 pixel format and write it to a pipe. My setup uses Python 3.6 with FFmpeg 4.2 and CUDA 10.1, and I am successfully using the GPU for decoding and writing to the pipe for manipulation. My current command is:

ffmpeg -y -hwaccel cuvid -vcodec h264_cuvid -i "rtsp address" -vsync 0 -f rawvideo -pix_fmt rgb24 pipe:1
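For context, the Python side consumes the pipe roughly like this (a simplified sketch; the URL and the 1920x1080 frame size are placeholders for my real values):

import subprocess

import numpy as np

RTSP_URL = "rtsp://example/stream"  # placeholder for the real stream address
WIDTH, HEIGHT = 1920, 1080          # placeholder dimensions
FRAME_BYTES = WIDTH * HEIGHT * 3    # rgb24 = 3 bytes per pixel

cmd = [
    "ffmpeg", "-y",
    "-hwaccel", "cuvid", "-vcodec", "h264_cuvid",  # GPU decode (input options)
    "-i", RTSP_URL,
    "-vsync", "0",
    "-f", "rawvideo", "-pix_fmt", "rgb24",
    "pipe:1",
]

proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, bufsize=FRAME_BYTES)

while True:
    raw = proc.stdout.read(FRAME_BYTES)
    if len(raw) < FRAME_BYTES:  # stream ended or ffmpeg exited
        break
    # frombuffer gives a read-only view; .copy() allows in-place edits
    frame = np.frombuffer(raw, dtype=np.uint8).reshape(HEIGHT, WIDTH, 3).copy()
    # ... manipulate frame here ...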

I notice the CPU usage still hovers around 20-40%, which is pretty high, so I suspect the encoding side also needs to run on the GPU. If I add -vcodec h264_nvenc, the error returned is that RGB24 is not supported. I don't mind working with NV12, which is the default if I leave -pix_fmt empty, as I can manually convert it to RGB24 myself. Does anybody know whether I can use -vcodec h264_nvenc directly to keep CPU usage as low as possible? I will need to scale this up, and 40% CPU per stream is not viable.
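My fallback idea for the manual NV12-to-RGB24 conversion is something like the following (a rough sketch using OpenCV; the dimensions are again placeholders and must be even for NV12):

import cv2
import numpy as np

WIDTH, HEIGHT = 1920, 1080                # placeholder dimensions
NV12_BYTES = WIDTH * HEIGHT * 3 // 2      # Y plane + interleaved half-size UV

def nv12_to_rgb24(raw: bytes) -> np.ndarray:
    # NV12 is laid out as (height * 3/2) rows of `width` bytes:
    # a full-resolution Y plane followed by an interleaved, subsampled UV plane.
    yuv = np.frombuffer(raw, dtype=np.uint8).reshape(HEIGHT * 3 // 2, WIDTH)
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2RGB_NV12)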

The other option I am looking at is whether it is possible to output images (JPEG or PNG) to the pipe for manipulation. I think writing each frame to a file, then reading it back in and processing it, may be too slow for my operation.
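If that route works, I imagine reading the images off the pipe would look roughly like this (a sketch using MJPEG; the marker-based splitting assumes baseline JPEGs without embedded thumbnails):

import subprocess

RTSP_URL = "rtsp://example/stream"  # placeholder

cmd = [
    "ffmpeg", "-y",
    "-i", RTSP_URL,
    "-f", "image2pipe", "-vcodec", "mjpeg",
    "pipe:1",
]

proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
buf = b""

while True:
    chunk = proc.stdout.read(65536)
    if not chunk:
        break
    buf += chunk
    # JPEG images start with FFD8 and end with FFD9.
    start = buf.find(b"\xff\xd8")
    end = buf.find(b"\xff\xd9", start + 2)
    while start != -1 and end != -1:
        jpg = buf[start:end + 2]
        buf = buf[end + 2:]
        # ... decode/process jpg here, e.g. with cv2.imdecode ...
        start = buf.find(b"\xff\xd8")
        end = buf.find(b"\xff\xd9", start + 2)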

Thank you.
