I'm currently using libavcodec to encode video frames with H.264. Because I'm streaming these frames as soon as they're encoded, minimizing latency is crucial to me. My current settings are:

```c
avcodec_register_all();
codec = avcodec_find_encoder(AV_CODEC_ID_H264);
context = avcodec_alloc_context3(encoder->codec);
context->gop_size = 1;  // send SPS/PPS headers with every packet

// set encoder parameters to max performance
av_opt_set(context->priv_data, "preset", "ultrafast", 0);
av_opt_set(context->priv_data, "tune", "zerolatency", 0);
```

These settings work well, but I'm trying to switch to hardware-based encoding to take the workload off my CPU. I have an NVIDIA GPU, so I tried the following settings with h264_nvenc:

```c
codec = avcodec_find_encoder_by_name("h264_nvenc");

// set encoder parameters to max performance
av_opt_set(context->priv_data, "preset", "llhq", 0);
```

The issue is that the latency with h264_nvenc is significantly higher than with `AV_CODEC_ID_H264` (the software-based encoder). I think my settings or setup of h264_nvenc must be wrong, since GPU-based encoding should be faster than software-based encoding. (For what it's worth, the TorchAudio tutorial on NVIDIA's hardware video decoder (NVDEC) and encoder (NVENC), which requires FFmpeg built with NVENC/NVDEC support, notes that using the hardware encoder/decoder improves the speed of loading and saving certain types of videos.)
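For reference, here is a minimal sketch (not the exact code from the question) of how an h264_nvenc context along these lines might be configured with latency in mind. The `open_nvenc_encoder` helper name, the example resolution/frame-rate values, and the use of the `zerolatency` and `delay` private options are my own assumptions rather than anything from the original setup; whether those options are available depends on the FFmpeg version and how it was built.

```c
// Hedged sketch: open an h264_nvenc encoder with low-latency oriented settings.
// Assumes an FFmpeg build with NVENC support; width/height/fps are example values.
#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>
#include <stdio.h>

static AVCodecContext *open_nvenc_encoder(int width, int height, int fps)
{
    const AVCodec *codec = avcodec_find_encoder_by_name("h264_nvenc");
    if (!codec) {
        fprintf(stderr, "h264_nvenc not available (FFmpeg built without NVENC?)\n");
        return NULL;
    }

    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    if (!ctx)
        return NULL;

    ctx->width        = width;
    ctx->height       = height;
    ctx->time_base    = (AVRational){1, fps};
    ctx->framerate    = (AVRational){fps, 1};
    ctx->pix_fmt      = AV_PIX_FMT_YUV420P;  // NV12 is also commonly used with NVENC
    ctx->gop_size     = 1;                   // mirrors the software setup in the question
    ctx->max_b_frames = 0;                   // B-frames add reordering delay

    // Encoder-private options; names/values are assumptions based on what
    // current FFmpeg builds expose for h264_nvenc.
    av_opt_set(ctx->priv_data, "preset", "llhq", 0);    // low-latency HQ preset
    av_opt_set(ctx->priv_data, "zerolatency", "1", 0);  // no output reordering
    av_opt_set(ctx->priv_data, "delay", "0", 0);        // don't buffer frames before output

    if (avcodec_open2(ctx, codec, NULL) < 0) {
        avcodec_free_context(&ctx);
        return NULL;
    }
    return ctx;
}
```

One commonly cited source of extra NVENC latency is its internal output buffering, which the `delay` option is meant to control; combining it with `zerolatency` is often suggested when comparing against libx264's `zerolatency` tune, though I have not verified that this closes the gap in this particular setup.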