Developer
Posts: 1,041
Registered: 07-16-2008
My Device: ಠ_ಠ

Re: Camera API NV12 frame to AVFrame (FFmpeg)


Currently I'm doing one memcpy for the Y, then a memcpy for each row of UV. It'll probably change, though. I'm only doing it that way because I was crashing when I tried to copy the whole buffer at once, including the padding between the UV rows.

 

// shallow-copy the buffer struct, then deep-copy the frame data
camera_buffer_t *_buf = (camera_buffer_t *) malloc(sizeof(camera_buffer_t));
memcpy(_buf, buf, sizeof(camera_buffer_t));
// destination holds the Y region up to uv_offset, plus packed UV rows
_buf->framebuf = (uint8_t *) malloc(uv_offset + ((height / 2) * width));
// copy the Y region up to the UV plane in one shot
memcpy(_buf->framebuf, buf->framebuf, uv_offset);
// copy the UV plane row by row, dropping the stride padding
for (uint32_t i = 0; i < height / 2; i++)
{
    int64_t doff = uv_offset + (i * width);
    int64_t soff = uv_offset + (i * stride);
    memcpy(_buf->framebuf + doff, buf->framebuf + soff, width);
}

 

BlackBerry Development Advisor
Posts: 668
Registered: 11-29-2011
My Device: developer
My Carrier: other

Re: Camera API NV12 frame to AVFrame (FFmpeg)

The Y memcpy in your post will not do the right thing: the source Y extents are height*stride, not uv_offset.  Also, uv_offset is not guaranteed to always be height*stride.

 

If you're going to do a straight copy of a camera_buffer_t, I suggest you do something like this:

camera_buffer_t* _buf = (camera_buffer_t*)malloc(buf->framesize);
memcpy(_buf, buf, buf->framesize);
_buf->stride = _buf->width;                    // remove the stride padding
_buf->uv_offset = _buf->stride * _buf->height; // recompute so the planes are packed
_buf->framebuf = (uint8_t*)malloc(width * height * 3 / 2);
// copy Y plane, row by row
for (int i = 0; i < height; i++) {
    memcpy(&_buf->framebuf[i * _buf->stride], &buf->framebuf[i * buf->stride], width);
}
// copy UV plane, row by row
for (int i = 0; i < height / 2; i++) {
    memcpy(&_buf->framebuf[_buf->uv_offset + i * _buf->stride],
           &buf->framebuf[buf->uv_offset + i * buf->stride], width);
}

Note that uv_offset and stride will be different between buf and _buf.

 

The thing that sucks is that if you now pass this into your previous code which constructs an AVFrame, you've effectively copied the UV data twice. This is okay for an intermediate proof-of-concept implementation, but not for final performance.

 

I would go right back to constructing a 3-plane AVFrame in your callback function.  Your most recent implementation was allocating data[1] and data[2].  Just add an alloc for data[0] of size width*height and set linesize[0] = width.  Do your Y copy as outlined above and do your U/V de-interleave copy as described previously.
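
To put all of that in one place, here's a rough sketch of the 3-plane version (just a sketch: it assumes the avcodec_alloc_frame()/av_malloc() API from the FFmpeg version of that era, with width, height, stride, and uv_offset taken from the buffer's NV12 frame descriptor as in your code):

AVFrame *frame = avcodec_alloc_frame();
frame->data[0] = (uint8_t*)av_malloc(width * height);     /* Y plane */
frame->data[1] = (uint8_t*)av_malloc(width * height / 4); /* U plane */
frame->data[2] = (uint8_t*)av_malloc(width * height / 4); /* V plane */
frame->linesize[0] = width;
frame->linesize[1] = width / 2;
frame->linesize[2] = width / 2;

/* Y copy: row by row, since source rows are stride bytes apart */
for (uint32_t i = 0; i < height; i++) {
    memcpy(frame->data[0] + i * width, buf->framebuf + i * stride, width);
}

/* U/V de-interleave: NV12 stores U and V interleaved (U first) after the Y plane */
for (uint32_t i = 0; i < height / 2; i++) {
    const uint8_t *uv = buf->framebuf + uv_offset + i * stride;
    for (uint32_t j = 0; j < width / 2; j++) {
        frame->data[1][i * (width / 2) + j] = uv[2 * j];
        frame->data[2][i * (width / 2) + j] = uv[2 * j + 1];
    }
}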

 

Cheers,

Sean

 

Developer
Posts: 1,041
Registered: 07-16-2008
My Device: ಠ_ಠ

Re: Camera API NV12 frame to AVFrame (FFmpeg)

I moved the encoding back to the callback, since setting the thread count on the encoder makes it faster than copying the buffers anyway. Even when dropping the size down to 288x512 the FPS only went up from ~4 to ~11, so that's still nowhere near 30. I can't seem to get FFmpeg to fill in the missing frames unless I keep passing each frame multiple times to make up for the dropped ones, but then that slows it down and causes more dropped frames. The fact that the camera drops the frames, rather than leaving that decision to the user, makes this difficult and kinda useless for recording anything.

BlackBerry Development Advisor
Posts: 668
Registered: 11-29-2011
My Device: developer
My Carrier: other

Re: Camera API NV12 frame to AVFrame (FFmpeg)

I wonder if you are approaching this with unreasonable expectations.  I'm going to comment on a few of your gripes and ask some questions which may help us understand what is possible.

 

  • There is zero chance you will be able to encode a 1080p-sized video in software with no hardware acceleration at 30fps on a 1GHz processor, even using both cores.  You are talking about compressing over 62 megapixels per second at that resolution.
  • What is your compression codec?  MPEG1?  MPEG2?  And what is your target output bitrate?  For comparison, our highest-bandwidth h/w encoder bitrate is set to 16Mbps (2MB/sec).  If you are over-driving the flash disk, that will be a bottleneck.  MPEG2 is more algorithmically intensive than MPEG1, so expect lower performance.  Are there additional codec parameters you can tune?  e.g. disable various filters, reduce quality, 1-pass vs 2-pass, etc.?
  • Have you tried profiling your callback performance using some start-of-function and end-of-function timestamps?  Regardless of how many threads you may have configured ffmpeg to use, are you certain that the input stage to the encoder is fully nonblocking?  I would be interested to know how much of the time allotted to your callback is spent generating an AVFrame and how much additional time you have to do the encoding.  Can you dump timestamps after your malloc()s and after you do the Y-copy and U/V-deinterleave?  Don't even bother passing the frames into the encoder; just benchmark that step (see the sketch after this list).
  • Regarding dropped frames -- there is a reason the OS is dropping them, and that is because we only have a limited amount of dedicated memory which can be shared between the camera hardware and the A9 CPU cores.  It doesn't matter how many buffers we have if you cannot consume buffers as fast as they are produced.  Even with an additional 10 h/w buffers, you would still hit the pipeline wall after a second or two.
  • Regarding passing in missing frames: isn't there a way to just tell the encoder "repeat last frame"?  It seems odd that an encoder wouldn't support this.
  • Have you tried an offline encoding?  e.g. run ffmpeg without the camera attached and measure the highest throughput you can get.
  • If you want to post your callback code, I could offer some more suggestions with respect to minimizing copies, etc.
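
Here's a minimal sketch of the timestamping idea from the profiling bullet above (assuming the POSIX clock_gettime() that's available on the platform; the commented-out steps stand in for your callback body):

#include <stdint.h>
#include <stdio.h>
#include <time.h>

static inline uint64_t now_us(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000 + (uint64_t)ts.tv_nsec / 1000;
}

/* inside the viewfinder callback: */
uint64_t t0 = now_us();
/* ... malloc()s ... */
uint64_t t1 = now_us();
/* ... Y copy ... */
uint64_t t2 = now_us();
/* ... U/V de-interleave ... */
uint64_t t3 = now_us();
fprintf(stderr, "alloc=%lluus ycopy=%lluus uvcopy=%lluus\n",
        (unsigned long long)(t1 - t0),
        (unsigned long long)(t2 - t1),
        (unsigned long long)(t3 - t2));

At 30fps the whole callback has roughly 33ms of budget, so those three numbers tell you how much is left over for the encode itself.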

 

What was your end use case again?  Just curious.

 

Cheers,

Sean

 

Developer
Posts: 1,041
Registered: 07-16-2008
My Device: ಠ_ಠ

Re: Camera API NV12 frame to AVFrame (FFmpeg)


* 1080p wasn't expected; that's just what it defaulted to, because I didn't know the camera had to have the dimension properties set twice.

* MPEG2 because MPEG1 was having issues.

* I haven't timed it.  But I either have to copy the buffer and pass it into a background thread, or encode in the callback.  I can't just make the AVFrame and pass that to the background thread, because the frame just points at the original buffer for the Y plane, which means the Y data could get freed by the camera/OS if I don't use it right away.  Regardless, both approaches take longer to do anything with the callback data than the callback has time for.

* Not sure if there is a way to tell the encoder to repeat the last frame. I've been searching, but haven't found anything.

 

My intention was to stream video up to a server, and I wanted to go directly from the camera to a socket without having to read from the camera's output file as it records and saves. I had time to spare so I gave it a shot... but I think I'm done with this now.

 

Thanks for all the replies!

BlackBerry Development Advisor
Posts: 668
Registered: 11-29-2011
My Device: developer
My Carrier: other

Re: Camera API NV12 frame to AVFrame (FFmpeg)

Ok.  Since this is a streaming thing, you might consider trying to work at a reduced framerate as well, e.g. 5-10fps.
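
In FFmpeg terms, the target rate goes through the codec context's time_base; a minimal sketch, assuming the AVCodecContext is already set up as in your sample (the gop_size choice here is just illustrative):

codec_ctx->time_base = (AVRational){1, 10}; /* 10 fps: one time_base tick per frame */
codec_ctx->gop_size = 10;                   /* illustrative: one intra frame per second */

The camera would still deliver frames at its own rate, so the callback would simply skip frames to hit the target.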

I'll have h.264 callbacks implemented some time in the next couple of weeks so you can take advantage of the hardware encoder on the platform, but I can't speculate as to when they will be available to you.

 

Cheers,

Sean

Developer
Posts: 1,041
Registered: 07-16-2008
My Device: ಠ_ಠ

Re: Camera API NV12 frame to AVFrame (FFmpeg)


For the h.264 callbacks you mention, would you be able to set a fixed frame size for streaming? I know an mp4 has its index at the end of the file, and the encoder edits some bytes at the beginning of the file when it finishes, so streaming the current mp4 won't work. I worked in the past with someone who was sending an mp4 stream down to the device; we had fake headers to append to the beginning, and it used a fixed frame size, so it worked for playback without having an index or a full file.

 

Or, are there plans to support other formats?

BlackBerry Development Advisor
Posts: 668
Registered: 11-29-2011
My Device: developer
My Carrier: other

Re: Camera API NV12 frame to AVFrame (FFmpeg)

I haven't got into the meat of it just yet.  I doubt that "playback as an mp4 file" will be an early goal.  It will likely just be straight H264 frames.  We are looking to replace an existing internal implementation with an equivalent 3rd-party-usable API.  This would be the same interface used by the VideoChat app when finished.

 

Cheers,

Sean

Developer
Posts: 1,041
Registered: 07-16-2008
My Device: ಠ_ಠ

Re: Camera API NV12 frame to AVFrame (FFmpeg)


Okay, I gave it another shot and worked through some bugs I had, and here is the latest:

* I switched back to multithreading

* I copy the buffer in the callback and pass it to the background thread

* However, I'm using the code you posted and de-striding everything, so less data has to be copied

* I switched down to 288x512

 

With this I can get 30 FPS. Attached is the output video file. I'll commit the new changes to GitHub in a bit.

 

EDIT: MPEG1 works in QuickTime with the lower dimensions

EDIT: The sample code is updated here:

https://github.com/hardisonbrewing/HelloVideoCamera

BlackBerry Development Advisor
Posts: 668
Registered: 11-29-2011
My Device: developer
My Carrier: other

Re: Camera API NV12 frame to AVFrame (FFmpeg)

Awesome!  I'll take a look at it some time soon and see if there is any further tuning that can be done to achieve higher framerates.  On Dev Alpha, you should be able to enable NEON instructions in the codec, which should help performance as well.  Might be able to speed up the U/V copies too by optimizing for the size of the cache line.
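
For reference, the NEON-friendly shape of the U/V de-interleave would be something like this (a hedged sketch, not tested on Dev Alpha: vld2q_u8 loads 32 interleaved bytes and splits even/odd bytes into two registers, which matches NV12's UV layout; n would be width/2, the UV row length in pixels):

#include <arm_neon.h>
#include <stdint.h>

static void deinterleave_uv(const uint8_t *uv, uint8_t *u, uint8_t *v, uint32_t n)
{
    uint32_t i = 0;
    /* 16 U/V pairs (32 bytes) per iteration */
    for (; i + 16 <= n; i += 16) {
        uint8x16x2_t pair = vld2q_u8(uv + 2 * i);
        vst1q_u8(u + i, pair.val[0]); /* even bytes = U */
        vst1q_u8(v + i, pair.val[1]); /* odd bytes = V */
    }
    /* scalar tail for any leftover pixels */
    for (; i < n; i++) {
        u[i] = uv[2 * i];
        v[i] = uv[2 * i + 1];
    }
}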

Cheers,

Sean