Welcome to the official BlackBerry Support Community Forums.
Native Development

Developer
danielleCoder
Posts: 178
Registered: ‎04-16-2011
My Device: torch 9800
My Carrier: verizon

Some ffmpeg advice needed

Hi all,

 

I need some advice from somebody who has used ffmpeg with Blackberry devices.

 

My problem: I have a loop that correctly processes video and audio AVFrames from an HTTP stream (MP4 container, YUV frames), but when I try to make each video frame fit the screen (either upscaling or downscaling) using sws_scale, the video plays in slow motion.

 

I have tried various resolutions for the streamed video, but sws_scale is the problem. If I keep sws_scale in the loop but don't actually change the destination width or height, the video plays fine (both sound and picture). I have to use sws_scale, though, otherwise the video is too big or too small.

 

I am only creating the SwsContext once, before entering the loop; from searching for answers, recreating it on every frame seems to be a common mistake for others, so I have ruled that out.

 

Is there a way to make sws_scale perform better? (Otherwise, why is it included in ffmpeg?)

 

I would really appreciate any info, as searching for sws_scale turns up very limited information (maybe because it is new?).

 

Thank you.

Developer
mreed
Posts: 1,041
Registered: ‎07-16-2008
My Device: ಠ_ಠ

Re: Some ffmpeg advice needed

Also realise this is a mobile device: sws_scale being good enough on a desktop doesn't necessarily translate to good enough on a device. Perhaps you might want to scale the video server-side before sending it down... if you don't, you'll be transmitting more data than you need anyway.

Trusted Contributor
cjonesy
Posts: 160
Registered: ‎09-13-2012
My Device: 9900
My Carrier: vodafone

Re: Some ffmpeg advice needed

[ Edited ]

Thank you for replying, Martin.

 

I'm almost giving up on sws_scale. I have been experimenting with it for a couple of days (going from one thread to three), and there are still performance issues even with everything (audio/video reading/decoding) in separate threads.

 

I then see that tuber (which is similar to what I'm doing) doesn't display in fullscreen, so maybe I'll have to stick with displaying video at half the screen size rather than fullscreen, as server-side scaling isn't an option for me.

 

I am now at the stage where I can stream/play video and audio, and I am trying to learn how to sync A/V from the dranger tutorial, which is designed for SDL, while I'm using OpenAL and the native screen API to play videos with sound. It's not going well: the video is playing too fast, and I know I need to somehow sync audio with video (or vice versa), but I can't find anything similar to what I'm doing, i.e. parsing video/audio frames, adding them to PacketQueue structs, and having two consumer threads to output to the speaker and the screen (I have never used SDL).

 

If anyone knows of a tutorial closer to what I'm attempting, or can offer advice on the best way to sync the packets I have decoded, I would be very grateful.

 

Thank you.

Developer
indy2718
Posts: 36
Registered: ‎01-16-2013
My Device: Z10
My Carrier: Telus

Re: Some ffmpeg advice needed

[ Edited ]

Maybe you can try to offload as much processing as possible to the GPU. Make a simple OpenGL ES context/surface and pass each frame as a texture. Then use OpenGL to draw a rectangle (a triangle strip), passing texture coordinates (1,1) etc. The graphics hardware will display the frame and scale it for you, and even give linear smoothing between pixels. I'm not sure if it will be fast enough for you... just trying to give you options. I do it for my app, and it works fine. I have a shader to convert NV12 to RGB as well.
Maybe there is hardware scaling in the BPS window library or some other bundled graphics library... I haven't looked into that, since I do all my work with OpenGL ES 2.

The last time I used ffmpeg was on Linux, and the sws scaling was fast. But remember, that build may have used Intel's vector optimization instructions on the CPU. The ARM compilation for BlackBerry may not be optimized at all!

edit:

To answer the other question... I have not done this, but I would guess you would gradually adjust the frame delay up or down to stay in sync with the audio.

BlackBerry Development Advisor
smcveigh
Posts: 665
Registered: ‎11-29-2011
My Device: developer
My Carrier: other

Re: Some ffmpeg advice needed

Anything you are rendering to a screen window can be resized by the GPU. Likewise, anything you are rendering to a screen pixmap can be rescaled during your blit. Using a software scaler doesn't make much sense, since you will be passing through screen eventually (I presume).

 

Developer
danielleCoder
Posts: 178
Registered: ‎04-16-2011
My Device: torch 9800
My Carrier: verizon

Re: Some ffmpeg advice needed

Thanks guys for your advice.

 

The last couple of days have been spent trying to understand and copy the only tutorial (dranger) I can find, to make sure the video isn't playing too fast and stays in sync with the audio. It's not going well, as the tutorial uses SDL to process video/audio while my code is already set up to use OpenAL and just blit to the screen. I'll keep trying to sync today, as I'm sure my code already does 90% of what dranger does with pts, delay, etc.

 

Then I'm going to come back to filling the screen with the video, but from what I'm hearing from Sean, that part is not going to be too difficult since I am blitting the YUV values to the screen. I did wonder if I was making this too difficult.

 

Would I be able to find a sample easily enough, Sean, on how to scale as I'm blitting, or would it require a little support for a novice to native coding like myself?

BlackBerry Development Advisor
smcveigh
Posts: 665
Registered: ‎11-29-2011
My Device: developer
My Carrier: other

Re: Some ffmpeg advice needed

Blits are easy.

 

Here's a snippet from one of my unpublished samples which was blitting a face from a pixmap onto a window buffer.

I had a thread which was detecting faces and updating my gBlit structure with the location of the face in the current pixmap (gPixBuf[]).  And I had another thread which was blitting from gPixBuf[i] to my screen buffer (screen_buf).

 

            int blits[] = { SCREEN_BLIT_SOURCE_X, gBlit.src_x,
                            SCREEN_BLIT_SOURCE_Y, gBlit.src_y,
                            SCREEN_BLIT_SOURCE_WIDTH, gBlit.src_w,
                            SCREEN_BLIT_SOURCE_HEIGHT, gBlit.src_h,
                            SCREEN_BLIT_DESTINATION_X, gBlit.dest_x,
                            SCREEN_BLIT_DESTINATION_Y, gBlit.dest_y,
                            SCREEN_BLIT_DESTINATION_WIDTH, gBlit.dest_w,
                            SCREEN_BLIT_DESTINATION_HEIGHT, gBlit.dest_h,
                            SCREEN_BLIT_END };
            int rect[] = { gBlit.dest_x, gBlit.dest_y, gBlit.dest_w, gBlit.dest_h};
            if (screen_blit(screen_ctx, screen_buf, gPixBuf[i], blits) == -1) perror("blit");
            if (screen_post_window(screen_win, screen_buf, 1, rect, SCREEN_WAIT_IDLE) == -1) perror("post");

 

The source face rectangle to extract from the pixmap is defined by the SCREEN_BLIT_SOURCE_* parameters.

The destination rectangle to blit it to in the screen buffer is defined by the SCREEN_BLIT_DESTINATION_* parameters.

 

You can see all you need to do is set up an array of blit attributes, call screen_blit(), and then post your window buffer (I was only updating the part of the screen that changed -- the destination x,y,w,h used by rect[]).

 

The only nuisance with blits is that since they are driven by the GPU, you will need to have the source buffer stored in a GPU-friendly location (e.g. a screen pixmap buffer). I am still unclear as to whether we support planar YUV yet (e.g. YUV420P), so you may have to sort that out. If the frames are being allocated by ffmpeg, you may need to provide an allocator function for ffmpeg to use which allocates the buffers from screen instead of just malloc().

 

Cheers,

Sean

BlackBerry Development Advisor
smcveigh
Posts: 665
Registered: ‎11-29-2011
My Device: developer
My Carrier: other

Re: Some ffmpeg advice needed

Oh, alternatively: instead of blitting from a pixmap into a window buffer, if there is nothing else in your window, you can simply adjust SCREEN_PROPERTY_SOURCE_SIZE. The source size (buffer size) of your window does not need to match its on-screen size (SCREEN_PROPERTY_SIZE).

 

With a SCREEN_PROPERTY_SOURCE_SIZE of 128x72 and a SCREEN_PROPERTY_SIZE of 1280x720, you will be scaling your 128x72 buffer 10x to fill the screen.

 

I'm guessing you are presently memcpy()ing from ffmpeg into a window buffer.  If that's the case, then you should just be able to do the above.  If you need to further optimize, then you need to look at how you can provide a buffer allocator to ffmpeg which will allocate pixmaps or window buffers for ffmpeg to directly fill.  (again, I don't know that ffmpeg supports any of the native accelerated formats that we use though).

 

Cheers,

Sean

Developer
danielleCoder
Posts: 178
Registered: ‎04-16-2011
My Device: torch 9800
My Carrier: verizon

Re: Some ffmpeg advice needed

Excellent suggestion Sean (source size), I now have fullscreen for videos without affecting performance.

 

Thank you.

BlackBerry Development Advisor
smcveigh
Posts: 665
Registered: ‎11-29-2011
My Device: developer
My Carrier: other

Re: Some ffmpeg advice needed

cool, glad that's working for you.

Here's another idea for you: if you want to give the user control over something like a 4:3 zoom-to-fill on a 16:9 screen (instead of sticking them with pillar bars), you can decrease SCREEN_PROPERTY_SOURCE_SIZE to even smaller than the buffer.

You can implement pinch-to-zoom using this technique too. :)
