09-04-2012 11:47 AM
I had built the FFmpeg library using the following configure script.
When I link this library with a simple program, I get SIGABRT with the message "Malloc Check Failed: :/builds/GR2_0_1-Worldbuild/latest/svn/lib/c/alloc".
Here's the debug stack from the IDE:
test [BlackBerry Tablet OS C/C++ Application]
Thread  (Suspended : Signal : SIGABRT:Aborted)
SignalKill() at 0x1f6c5d0
raise() at 0x1f5ca80
__malloc_panic_str() at 0x1f57b5c
_list_release() at 0x1f58d4c
__free() at 0x1f5a6ec
av_freep() at 0x788407f0
av_expr_free() at 0x7883af1c
av_expr_parse_and_eval() at 0x7883c244
set_string_number() at 0x78842068
This error occurs when I call avformat_find_stream_info(pFormatCtx, NULL).
Here is a portion of my app code:
int i, videoStream, rc;
const char *media_file = "shared/videos/rockon.AVI";
if (avformat_open_input(&pFormatCtx, media_file, NULL, NULL) != 0)
    return -1; // Couldn't open file
if (avformat_find_stream_info(pFormatCtx, NULL) < 0) // HERE ERROR COMES
    return -1; // Couldn't find stream information
09-04-2012 11:55 AM
You'll want to post to the FFmpeg devs about it. I've seen a malloc check failure with them before, when using the H.264 format: it failed when setting the defaults for the codec context. As far as I could tell the error was in FFmpeg, not the NDK. You can set breakpoints and step through to see exactly where it fails. Can't help you with more than that.
01-31-2013 06:27 AM
Although we didn't get the rebuilding of ffmpeg working, the base library is doing fine for us.
Anyone have any luck with audio and ffmpeg, playback especially?
05-23-2013 04:45 PM - edited 05-23-2013 04:49 PM
I've also been working through a lot of the same issues and this thread has been extremely helpful. My particular wrinkle is that I'm attempting to convert NV12 to RGB. Which is working... sort of. I've only been drilling on it for a day or so and plan on working on it some more, but I wanted to post a screenshot of the issue I've got in case someone has seen this before. The attached screenshot is me attempting to get a shot of the Z10 box :-)
05-23-2013 05:01 PM
cool! I bet you could sell this effect to Instagram!
since the luminance seems to be about right, my guess is that you aren't walking through the UV plane correctly.
Remember, in NV12 the UV plane is 2x2 subsampled. This means it is as many bytes wide as the Y plane, but the U and V values are interleaved, and it is 1/2 the height of the Y plane. Also be aware that the UV plane may be separated from the Y plane by an alignment gap: use the uv_offset value in the NV12 frame descriptor to figure out where the UV plane actually starts in memory. Finally, the UV plane stride may differ from the Y plane stride (see uv_stride in the NV12 descriptor).
so for any (x,y) pixel in the image, the Y value will be found at:
framebuf[y*stride + x];
the U value will be found at (integer division throughout):
framebuf[uv_offset + (y/2)*uv_stride + (x/2)*2]
and the V value will be found next to the U value, at:
framebuf[uv_offset + (y/2)*uv_stride + (x/2)*2 + 1]
I leave optimization as an exercise to the reader
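Putting the indexing above together, here's a minimal, unoptimized NV12-to-RGB888 sketch in plain C. The fixed-point BT.601 full-range conversion coefficients and the function name/parameters are my own assumptions for illustration, not something from the camera API:

```c
#include <stdint.h>
#include <stddef.h>

// Clamp an int to the [0, 255] byte range.
static uint8_t clamp8(int v) {
    return (uint8_t)(v < 0 ? 0 : v > 255 ? 255 : v);
}

// Convert an NV12 frame to packed RGB888.
// framebuf: Y plane at offset 0, interleaved UV plane starting at uv_offset.
// stride / uv_stride may be larger than width due to alignment.
void nv12_to_rgb888(const uint8_t *framebuf, size_t uv_offset,
                    int width, int height, int stride, int uv_stride,
                    uint8_t *rgb_out) {
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int Y = framebuf[y * stride + x];
            int U = framebuf[uv_offset + (y / 2) * uv_stride + (x / 2) * 2];
            int V = framebuf[uv_offset + (y / 2) * uv_stride + (x / 2) * 2 + 1];
            int d = U - 128, e = V - 128;
            // BT.601 full-range YUV -> RGB, fixed point (>>8 is /256).
            uint8_t *p = rgb_out + (y * width + x) * 3;
            p[0] = clamp8(Y + (359 * e >> 8));                   // R
            p[1] = clamp8(Y - (88 * d >> 8) - (183 * e >> 8));   // G
            p[2] = clamp8(Y + (454 * d >> 8));                   // B
        }
    }
}
```

A quick sanity check: a frame with Y = 128 and U = V = 128 everywhere should come out as uniform mid-grey (128, 128, 128).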
I should also mention that the camera's video viewfinder supports RGB mode now, in case you want to use that. It is not fully documented, as there are some gotchas involved (e.g. you cannot have a visible viewfinder window at the same time). An alternate method I could suggest would be to create a pair of screen pixmap buffers, one in NV12 space and one in RGB space. Then you can more or less memcpy() into the NV12 pixmap and use the GPU's colour-space conversion unit by just blitting the NV12 pixmap into the RGB pixmap.
In fact, you may find your performance is better if you memcpy() first anyways, since the camera buffers are mapped with caching disabled presently.
Question though: why do you need RGB? Just messing around?
05-24-2013 08:40 AM - edited 05-24-2013 05:12 PM
Thanks for the reply! The receiving end of what I'm trying to push data into is expecting RGB888. Also, not having the viewfinder window visible isn't going to be an issue, so that particular gotcha isn't a problem. Having the viewfinder pump out RGB data would be fantastic, as I wouldn't have to perform the expensive task of shuffling bits around. Which version of the runtime has this enabled? 10.0 or 10.1?
05-24-2013 10:52 AM
If you are using video viewfinder mode, you should be able to configure RGB8888 mode.
You'll need to disable the vf window, and also not bother passing in a window group & id.
Note that RGB is not reported as a valid frametype if you query for it -- this is mainly because we have not put in the necessary regression testing across all use cases.
The code below is used in my OpenGL spinning-cube texturing demo:
camera_set_videovf_property(handle,
    CAMERA_IMGPROP_CREATEWINDOW, 0,
    CAMERA_IMGPROP_FORMAT, CAMERA_FRAMETYPE_RGB8888,
    CAMERA_IMGPROP_FRAMERATE, 30.0,
    // note: native orientation gives best performance
    CAMERA_IMGPROP_ROTATION, orientation,
    // note: can use camera_get_video_vf_resolutions() to find your favorite
    CAMERA_IMGPROP_WIDTH, 480,
    CAMERA_IMGPROP_HEIGHT, 640);
-work in the lowest resolution you are okay with. 1080p textures are not going to upload fast enough to maintain high framerates.
-don't expect all resolutions to be available on all devices; use the appropriate query function.
-the orientation angle I am passing in here is the complement of the angle returned by camera_get_native_orientation(). This should bypass a rotation & blit step in the OS in some cases. Just be aware that the video will be sideways. To orient it correctly, you would rotate your texture clockwise by the value returned by camera_get_native_orientation().
-the colour format is called RGB8888, but on a little-endian device, you'll find that this is bytewise represented as BGRX. (essentially, whatever the native RGB format of the GPU is).
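To see what that byte ordering means in practice, here's a tiny sketch: reinterpret a packed 32-bit "RGB8888" value as the bytes actually laid out in memory. The pixel value and helper name are made up for illustration; on a little-endian device the bytes come out B, G, R, X:

```c
#include <stdint.h>
#include <string.h>

// Copy a packed 32-bit pixel (0xXXRRGGBB as an integer) into the four
// bytes as they sit in memory. On a little-endian device the
// least-significant byte (B) lands first, giving BGRX byte order.
static void pixel_to_bytes(uint32_t pixel, uint8_t out[4]) {
    memcpy(out, &pixel, 4);
}
```

So if the GPU hands you a buffer of these pixels, reading it byte-by-byte walks B, G, R, X for each pixel, not R, G, B.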
I have sample code which shows how to get the camera buffer into an opengl texture that I will be publishing in the next couple of weeks. I just have to clear it through our open-source folks...
RGB support should work in 10.0, if my memory serves me.