UDP streaming and sws_scale problem

For the developers that use FFmpeg in their software.
taansari
Posts: 81
Joined: Fri Sep 28, 2012 6:18 am

Re: UDP streaming and sws_scale problem

Post by taansari » Mon Dec 17, 2012 5:58 pm

rogerdpack wrote:for enumerating sizes I usually call out to ffmpeg.exe and "parse the stdout"
http://ffmpeg.zeranoe.com/forum/viewtop ... rate#p2963 may also be interesting to you.
There are rumors in the ffmpeg-devel list that they'll be coming out with an enumeration API (like a real API) sometime, too...
-r
Thanks for the tip and the link - the rumors look promising; hope we get more improvements soon!

And an update: even after setting the dictionary correctly, when I streamed over the network everything apparently looked fine; but when I monitored the network traffic my conventional way, I got the same bandwidth consumption (about 3 MB per minute). This is really strange, since it seems to work fine with FFmpeg.exe (with a custom resolution as low as 160x120, IIRC, and bandwidth consumption of 0.4 MB), and I reckon it does something similar to what I am doing right now.

Just a thought: I am using the h264 baseline profile; could this be limiting the capabilities somehow?

p.s. I remember you previously also mentioned parsing the stdout output... how does that work exactly? Can you kindly give some hints?

rogerdpack
Posts: 1878
Joined: Fri Aug 05, 2011 9:56 pm

Re: UDP streaming and sws_scale problem

Post by rogerdpack » Mon Dec 17, 2012 8:33 pm

My guess is somehow you're not monitoring it accurately, but I don't know for sure.

https://github.com/rdp/ruby_simple_gui_ ... helpers.rb

get_options_video_device
is how I parse them in ruby.
GL!
-roger-

taansari
Posts: 81
Joined: Fri Sep 28, 2012 6:18 am

Re: UDP streaming and sws_scale problem

Post by taansari » Tue Dec 18, 2012 4:39 am

rogerdpack wrote:My guess is somehow you're not monitoring it accurately, but I don't know for sure.

https://github.com/rdp/ruby_simple_gui_ ... helpers.rb

get_options_video_device
is how I parse them in ruby.
GL!
-roger-
Thanks for the tip! I hope there is something similar available for C++ as well.
rogerdpack wrote:My guess is somehow you're not monitoring it accurately, but I don't know for sure.
I know my measuring method is not ideal, but the same process gives a lower transfer rate for ffmpeg.exe (0.4 MB per minute) and about 2.5 MB per minute for my code.

Maybe I should ask ffmpeg-devel?

taansari
Posts: 81
Joined: Fri Sep 28, 2012 6:18 am

Re: UDP streaming and sws_scale problem

Post by taansari » Tue Dec 18, 2012 6:28 am

Another update:

I tried streaming MPG video instead of the h264 I have been streaming so far. The approximate results:

File format: MPG
Decoder: 160x120
Duration: 1 minute
Approximate data per minute: 2.67 MB

File format: MPG
Decoder: 320x240
Duration: 1 minute
Approximate data per minute: 9.19 MB

File format: MPG
Decoder: 640x480
Duration: 1 minute
Approximate data per minute: 32.57 MB

Bottom line is: changing the file/container format changes the amount of data transmitted at different resolutions.

What am I missing for h264? It is really hard to figure out.

Any guidance, anyone?

p.s. I have floated this question on ffmpeg-devel; let's see what the replies are...

taansari
Posts: 81
Joined: Fri Sep 28, 2012 6:18 am

Re: UDP streaming and sws_scale problem

Post by taansari » Tue Dec 18, 2012 10:20 am

Ok, I have come up with a test application that writes to disk and can be changed to transmit over UDP, and the disk dump size is almost the same either way!

This program fills a 640x480 picture, scales it to the required destination size, then saves/transmits it; the whole test runs for 1 minute:

Code:

/*
 * Copyright (c) 2001 Fabrice Bellard
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 * THE SOFTWARE.
 */

/**
 * @file
 * libavcodec API use example.
 *
 * Note that libavcodec only handles codecs (mpeg, mpeg4, etc...),
 * not file formats (avi, vob, mp4, mov, mkv, mxf, flv, mpegts, mpegps, etc...). See library 'libavformat' for the
 * format handling
 */

#include <Windows.h>
#include <math.h>
extern "C"
{
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libavdevice/avdevice.h"
#include "libswscale/swscale.h"
#include "libavutil/dict.h"
#include "libavutil/error.h"
#include "libavutil/opt.h"
}

#define DECODER_WIDTH	640
#define DECODER_HEIGHT	480

#define ENCODER_WIDTH	640
#define ENCODER_HEIGHT	480

/* 60 seconds stream duration */
#define STREAM_DURATION   60.0
#define STREAM_FRAME_RATE 25 /* 25 images/s */
#define STREAM_NB_FRAMES  ((int)(STREAM_DURATION * STREAM_FRAME_RATE))
#define STREAM_PIX_FMT    AV_PIX_FMT_YUV420P /* default pix_fmt */

static int sws_flags = SWS_BICUBIC;

/**************************************************************/
/* audio output */

static float t, tincr, tincr2;
static int16_t *samples;
static int audio_input_frame_size;

/* Add an output stream. */
static AVStream *add_stream(AVFormatContext *oc, AVCodec **codec,
                            enum AVCodecID codec_id)
{
    AVCodecContext *c;
    AVStream *st;

    /* find the encoder */
    *codec = avcodec_find_encoder(codec_id);
    if (!(*codec)) {
        fprintf(stderr, "Could not find codec\n");
        exit(1);
    }

    st = avformat_new_stream(oc, *codec);
    if (!st) {
        fprintf(stderr, "Could not allocate stream\n");
        exit(1);
    }
    st->id = oc->nb_streams-1;
    c = st->codec;

    switch ((*codec)->type) {
    case AVMEDIA_TYPE_AUDIO:
        st->id = 1;
        c->sample_fmt  = AV_SAMPLE_FMT_S16;
        c->bit_rate    = 64000;
        c->sample_rate = 44100;
        c->channels    = 2;
        break;

    case AVMEDIA_TYPE_VIDEO:
        avcodec_get_context_defaults3(c, *codec);
        c->codec_id = codec_id;

        c->bit_rate = 400000;
        /* Resolution must be a multiple of two. */
        c->width    = ENCODER_WIDTH;
        c->height   = ENCODER_HEIGHT;
        /* timebase: This is the fundamental unit of time (in seconds) in terms
         * of which frame timestamps are represented. For fixed-fps content,
         * timebase should be 1/framerate and timestamp increments should be
         * identical to 1. */
        c->time_base.den = STREAM_FRAME_RATE;
        c->time_base.num = 1;
        c->gop_size      = 12; /* emit one intra frame every twelve frames at most */
        c->pix_fmt       = STREAM_PIX_FMT;
        if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
            /* just for testing, we also add B frames */
            c->max_b_frames = 2;
        }
        if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
            /* Needed to avoid using macroblocks in which some coeffs overflow.
             * This does not happen with normal video, it just happens here as
             * the motion of the chroma plane does not match the luma plane. */
            c->mb_decision = 2;
        }
    break;

    default:
        break;
    }

    /* Some formats want stream headers to be separate. */
    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
        c->flags |= CODEC_FLAG_GLOBAL_HEADER;

    return st;
}

/**************************************************************/
/* audio output */


static void open_audio(AVFormatContext *oc, AVCodec *codec, AVStream *st)
{
    AVCodecContext *c;

    c = st->codec;

    /* open it */
    if (avcodec_open2(c, codec, NULL) < 0) {
        fprintf(stderr, "Could not open audio codec\n");
        exit(1);
    }

    /* init signal generator */
    t     = 0;
    tincr = 2 * M_PI * 110.0 / c->sample_rate;
    /* increment frequency by 110 Hz per second */
    tincr2 = 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate;

    if (c->codec->capabilities & CODEC_CAP_VARIABLE_FRAME_SIZE)
        audio_input_frame_size = 10000;
    else
        audio_input_frame_size = c->frame_size;
    samples = (int16_t *) av_malloc(audio_input_frame_size *
                        av_get_bytes_per_sample(c->sample_fmt) *
                        c->channels);
    if (!samples) {
        fprintf(stderr, "Could not allocate audio samples buffer\n");
        exit(1);
    }
}

/* Prepare a 16 bit dummy audio frame of 'frame_size' samples and
 * 'nb_channels' channels. */
static void get_audio_frame(int16_t *samples, int frame_size, int nb_channels)
{
    int j, i, v;
    int16_t *q;

    q = samples;
    for (j = 0; j < frame_size; j++) {
        v = (int)(sin(t) * 10000);
        for (i = 0; i < nb_channels; i++)
            *q++ = v;
        t     += tincr;
        tincr += tincr2;
    }
}

static void write_audio_frame(AVFormatContext *oc, AVStream *st)
{
    AVCodecContext *c;
    AVPacket pkt = { 0 }; // data and size must be 0;
    AVFrame *frame = avcodec_alloc_frame();
    int got_packet, ret;

    av_init_packet(&pkt);
    c = st->codec;

    get_audio_frame(samples, audio_input_frame_size, c->channels);
    frame->nb_samples = audio_input_frame_size;
    avcodec_fill_audio_frame(frame, c->channels, c->sample_fmt,
                             (uint8_t *)samples,
                             audio_input_frame_size *
                             av_get_bytes_per_sample(c->sample_fmt) *
                             c->channels, 1);

    ret = avcodec_encode_audio2(c, &pkt, frame, &got_packet);
    if (ret < 0) {
        fprintf(stderr, "Error encoding audio frame\n");
        exit(1);
    }

    if (!got_packet)
        return;

    pkt.stream_index = st->index;

    /* Write the compressed frame to the media file. */
    if (av_interleaved_write_frame(oc, &pkt) != 0) {
        fprintf(stderr, "Error while writing audio frame\n");
        exit(1);
    }
    avcodec_free_frame(&frame);
}

static void close_audio(AVFormatContext *oc, AVStream *st)
{
    avcodec_close(st->codec);

    av_free(samples);
}

/**************************************************************/
/* video output */

static AVFrame *frame;
static AVPicture src_picture, dst_picture;
static int frame_count;

static void open_video(AVFormatContext *oc, AVCodec *codec, AVStream *st)
{
    int ret;
    AVCodecContext *c = st->codec;

    /* open the codec */
    if (avcodec_open2(c, codec, NULL) < 0) {
        fprintf(stderr, "Could not open video codec\n");
        exit(1);
    }

    /* allocate and init a re-usable frame */
    frame = avcodec_alloc_frame();
    if (!frame) {
        fprintf(stderr, "Could not allocate video frame\n");
        exit(1);
    }

    /* Allocate the encoded raw picture. */
    ret = avpicture_alloc(&dst_picture, c->pix_fmt, c->width, c->height);
    if (ret < 0) {
        fprintf(stderr, "Could not allocate picture\n");
        exit(1);
    }

    /* The source picture is always generated in YUV420P at the decoder
     * resolution; it is then scaled and converted to the encoder's
     * resolution and pixel format before encoding. */
    ret = avpicture_alloc(&src_picture, PIX_FMT_YUV420P, DECODER_WIDTH, DECODER_HEIGHT);
    if (ret < 0) {
        fprintf(stderr, "Could not allocate temporary picture\n");
        exit(1);
    }

    /* copy data and linesize picture pointers to frame */
    *((AVPicture *)frame) = dst_picture;
}

/* Prepare a dummy image. */
static void fill_yuv_image(AVPicture *pict, int frame_index,
                           int width, int height)
{
    int x, y, i;

    i = frame_index;

    /* Y */
    for (y = 0; y < height; y++)
        for (x = 0; x < width; x++)
            pict->data[0][y * pict->linesize[0] + x] = x + y + i * 3;

    /* Cb and Cr */
    for (y = 0; y < height / 2; y++) {
        for (x = 0; x < width / 2; x++) {
            pict->data[1][y * pict->linesize[1] + x] = 128 + y + i * 2;
            pict->data[2][y * pict->linesize[2] + x] = 64 + x + i * 5;
        }
    }
}

static void write_video_frame(AVFormatContext *oc, AVStream *st)
{
    int ret;
    static struct SwsContext *sws_ctx;
    AVCodecContext *c = st->codec;

    if (frame_count >= STREAM_NB_FRAMES) {
        /* No more frames to compress. The codec has a latency of a few
         * frames if using B-frames, so we get the last frames by
         * passing the same picture again. */
    } else {
        if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
            /* as we only generate a YUV420P picture, we must convert it
             * to the codec pixel format if needed */
            if (!sws_ctx) {
                sws_ctx = sws_getContext(c->width, c->height, AV_PIX_FMT_YUV420P,
                                         c->width, c->height, c->pix_fmt,
                                         sws_flags, NULL, NULL, NULL);
                if (!sws_ctx) {
                    fprintf(stderr,
                            "Could not initialize the conversion context\n");
                    exit(1);
                }
            }
            fill_yuv_image(&src_picture, frame_count, c->width, c->height);
            sws_scale(sws_ctx,
                      (const uint8_t * const *)src_picture.data, src_picture.linesize,
                      0, c->height, dst_picture.data, dst_picture.linesize);
        } else {
            /* Scale the YUV420P source picture from the decoder
             * resolution to the encoder resolution and pixel format. */
            if (!sws_ctx) {
                sws_ctx = sws_getContext(DECODER_WIDTH, DECODER_HEIGHT, PIX_FMT_YUV420P,
                                         c->width, c->height, c->pix_fmt,
                                         sws_flags, NULL, NULL, NULL);
                if (!sws_ctx) {
                    fprintf(stderr,
                            "Could not initialize the conversion context\n");
                    exit(1);
                }
            }
            fill_yuv_image(&src_picture, frame_count, DECODER_WIDTH, DECODER_HEIGHT);
            /* the slice height argument must be the source height,
             * not the source width */
            sws_scale(sws_ctx,
                      (const uint8_t * const *)src_picture.data, src_picture.linesize,
                      0, DECODER_HEIGHT, dst_picture.data, dst_picture.linesize);
        }
    }

    if (oc->oformat->flags & AVFMT_RAWPICTURE) {
        /* Raw video case - directly store the picture in the packet */
        AVPacket pkt;
        av_init_packet(&pkt);

        pkt.flags        |= AV_PKT_FLAG_KEY;
        pkt.stream_index  = st->index;
        pkt.data          = dst_picture.data[0];
        pkt.size          = sizeof(AVPicture);

        ret = av_interleaved_write_frame(oc, &pkt);
    } else {
        /* encode the image */
        AVPacket pkt;
        int got_output;

        av_init_packet(&pkt);
        pkt.data = NULL;    // packet data will be allocated by the encoder
        pkt.size = 0;

        ret = avcodec_encode_video2(c, &pkt, frame, &got_output);
        if (ret < 0) {
            fprintf(stderr, "Error encoding video frame\n");
            exit(1);
        }

        /* If size is zero, it means the image was buffered. */
        if (got_output) {
            if (c->coded_frame->key_frame)
                pkt.flags |= AV_PKT_FLAG_KEY;

            pkt.stream_index = st->index;

            /* Write the compressed frame to the media file. */
            ret = av_interleaved_write_frame(oc, &pkt);
        } else {
            ret = 0;
        }
    }
    if (ret != 0) {
        fprintf(stderr, "Error while writing video frame\n");
        exit(1);
    }
    frame_count++;
}

static void close_video(AVFormatContext *oc, AVStream *st)
{
    avcodec_close(st->codec);
    av_free(src_picture.data[0]);
    av_free(dst_picture.data[0]);
    av_free(frame);
}

/**************************************************************/
/* media file output */

int main(int argc, char **argv)
{
    char filename[99]="";
    AVOutputFormat *fmt;
    AVFormatContext *oc;
    AVStream *audio_st, *video_st;
    AVCodec *audio_codec, *video_codec;
    double audio_pts, video_pts;
    int i;

    /* Initialize libavcodec, and register all codecs and formats. */
    av_register_all();

    if (argc != 2) {
        printf("usage: %s output_file\n"
               "API example program to output a media file with libavformat.\n"
               "This program generates a synthetic audio and video stream, encodes and\n"
               "muxes them into a file named output_file.\n"
               "The output format is automatically guessed according to the file extension.\n"
               "Raw images can also be output by using '%%d' in the filename.\n"
               "\n", argv[0]);
        return 1;
    }

    strcpy(filename, "a.h264"); /* note: overrides the command-line output filename for this test */
    /* allocate the output media context */
    avformat_alloc_output_context2(&oc, NULL, NULL, filename);
    if (!oc) {
        printf("Could not deduce output format from file extension: using MPEG.\n");
        avformat_alloc_output_context2(&oc, NULL, "mpeg", filename);
    }
    if (!oc) {
        return 1;
    }
    fmt = oc->oformat;

    /* Add the audio and video streams using the default format codecs
     * and initialize the codecs. */
    video_st = NULL;
    audio_st = NULL;

    if (fmt->video_codec != AV_CODEC_ID_NONE) {
        video_st = add_stream(oc, &video_codec, fmt->video_codec);
    }
    if (fmt->audio_codec != AV_CODEC_ID_NONE) {
        audio_st = add_stream(oc, &audio_codec, fmt->audio_codec);
    }

    /* Now that all the parameters are set, we can open the audio and
     * video codecs and allocate the necessary encode buffers. */
    if (video_st)
        open_video(oc, video_codec, video_st);
    if (audio_st)
        open_audio(oc, audio_codec, audio_st);

    av_dump_format(oc, 0, filename, 1);

    /* open the output file, if needed */
    if (!(fmt->flags & AVFMT_NOFILE)) {
        if (avio_open(&oc->pb, filename, AVIO_FLAG_WRITE) < 0) {
            fprintf(stderr, "Could not open '%s'\n", filename);
            return 1;
        }
    }

    /* Write the stream header, if any. */
    if (avformat_write_header(oc, NULL) < 0) {
        fprintf(stderr, "Error occurred when opening output file\n");
        return 1;
    }

    if (frame)
        frame->pts = 0;
    for (;;) {
        /* Compute current audio and video time. */
        if (audio_st)
            audio_pts = (double)audio_st->pts.val * audio_st->time_base.num / audio_st->time_base.den;
        else
            audio_pts = 0.0;

        if (video_st)
            video_pts = (double)video_st->pts.val * video_st->time_base.num /
                        video_st->time_base.den;
        else
            video_pts = 0.0;

        if ((!audio_st || audio_pts >= STREAM_DURATION) &&
            (!video_st || video_pts >= STREAM_DURATION))
            break;

        /* write interleaved audio and video frames */
        if (!video_st || (video_st && audio_st && audio_pts < video_pts)) {
            write_audio_frame(oc, audio_st);
        } else {
            write_video_frame(oc, video_st);
            frame->pts += av_rescale_q(1, video_st->codec->time_base, video_st->time_base);
        }
        Sleep(40); /* pace the loop to roughly the 25 fps frame rate */
    }

    /* Write the trailer, if any. The trailer must be written before you
     * close the CodecContexts open when you wrote the header; otherwise
     * av_write_trailer() may try to use memory that was freed on
     * av_codec_close(). */
    av_write_trailer(oc);

    /* Close each codec. */
    if (video_st)
        close_video(oc, video_st);
    if (audio_st)
        close_audio(oc, audio_st);

    /* Free the streams. */
    for (i = 0; i < oc->nb_streams; i++) {
        av_freep(&oc->streams[i]->codec);
        av_freep(&oc->streams[i]);
    }

    if (!(fmt->flags & AVFMT_NOFILE))
        /* Close the output file. */
        avio_close(oc->pb);

    /* free the stream */
    av_free(oc);

    return 0;
}
You can change this line:
strcpy(filename,"a.h264");
to
strcpy(filename,"udp://<destination_ip>:8888/a.h264");
and monitor traffic.

Also, after the first run, please change these lines:
#define ENCODER_WIDTH 640
#define ENCODER_HEIGHT 480

to a very small size, like:

#define ENCODER_WIDTH 160
#define ENCODER_HEIGHT 120

then monitor the sizes again: almost the same! __Why__???

Note: the above just outputs dummy data, but it reproduces the question I had at the start.

taansari
Posts: 81
Joined: Fri Sep 28, 2012 6:18 am

Re: UDP streaming and sws_scale problem

Post by taansari » Tue Dec 18, 2012 10:47 am

I would like to share my webcam results for both 640x480 and 160x120 video, each of approximately 13 seconds' duration (resolution set at the time of opening the decoder). Is it possible to upload them to this site? Both have the .h264 extension, and the forum is not allowing me to post h264 files; the combined file size is approximately 1.2 MB.

rogerdpack
Posts: 1878
Joined: Fri Aug 05, 2011 9:56 pm

Re: UDP streaming and sws_scale problem

Post by rogerdpack » Tue Dec 18, 2012 7:09 pm

sounds like a question for the libav-user mailing list, maybe they know more...

I would suggest saving files to a dropbox or google drive that can then make it public...maybe? (I've never really done it :)

taansari
Posts: 81
Joined: Fri Sep 28, 2012 6:18 am

Re: UDP streaming and sws_scale problem

Post by taansari » Wed Dec 19, 2012 4:35 am

rogerdpack wrote:sounds like a question for the libav-user mailing list, maybe they know more...

I would suggest saving files to a dropbox or google drive that can then make it public...maybe? (I've never really done it :)
I just noticed it is possible to attach rar/zip files, so here goes:

p.s. I have asked the same questions on the libav-user list; it says my posts are awaiting moderation, and so far there are no replies. This whole h264 thing is a real puzzle to me.
Attachments
640x480.zip
640x480 decoder resolution approx duration 3.7 seconds
(170.74 KiB) Downloaded 144 times
160x120.zip
160x120 decoder resolution approx duration 3.7 seconds
(187.89 KiB) Downloaded 121 times

taansari
Posts: 81
Joined: Fri Sep 28, 2012 6:18 am

Re: UDP streaming and sws_scale problem

Post by taansari » Wed Dec 19, 2012 6:36 am

Following is a partial dump of FFmpeg.exe's console output:

Code:

[dshow @ 007df1c0] Estimating duration from bitrate, this may be inaccurate
Input #0, dshow, from 'video=A4 tech USB2.0 Camera':
  Duration: N/A, start: 3746.601000, bitrate: N/A
    Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 160x120, 25 tbr,
10000k tbn, 25 tbc
[libx264 @ 02675900] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE
4.2
[libx264 @ 02675900] profile High 4:2:2, level 1.1, 4:2:2 8-bit
Output #0, h264, to 'a1.h264':
  Metadata:
    encoder         : Lavf54.37.100
    Stream #0:0: Video: h264, yuv422p, 160x120, q=-1--1, 90k tbn, 25 tbc
Stream mapping:
  Stream #0:0 -> #0:0 (rawvideo -> libx264)
Press [q] to stop, [?] for help
Please notice where it says:

[libx264 @ 02675900] profile High 4:2:2, level 1.1, 4:2:2 8-bit

So far I have never been able to achieve the High 4:2:2 profile. With the ultrafast preset I only get the Constrained Baseline profile, and with the veryfast and other presets, the maximum I get is:

[libx264 @ 05071a80] profile High, level 3.0

That could be a reason. Now the question is how to set this profile; I've tried it through code, but it does not seem to work (from what I have read, libx264 only selects the 4:2:2 profiles when the input pixel format is itself 4:2:2, e.g. yuv422p, which would explain why FFmpeg.exe gets it from the camera's yuyv422 input):

Code:

if (codec_id == AV_CODEC_ID_H264) {
    if (av_opt_set(c->priv_data, "preset", "veryfast", 0) == 0)
        fprintf(stderr, "\nS: Preset set\n");
    else
        fprintf(stderr, "\nE: Unable to set preset\n");

    if (av_opt_set(c->priv_data, "profile", "high422", AV_OPT_SEARCH_CHILDREN) == 0)
        fprintf(stderr, "\nS: Profile for h264 set\n");
    else
        fprintf(stderr, "\nE: Unable to set profile for h264\n");
}
The code does not complain about anything, but the console output above shows it is not working.

taansari
Posts: 81
Joined: Fri Sep 28, 2012 6:18 am

Re: UDP streaming and sws_scale problem

Post by taansari » Wed Dec 19, 2012 7:51 am

Ok, so I am able to set the High 4:2:2 profile with the pixel format set to yuv422p; I still get similar compression rates at all resolutions, so that's not it.

Post Reply