10-19-2011 08:50 AM
my question is how to record to a file with RIMM file format?
In the documentation provided on this site, there is a field in the file's header called "descriptor location" whose value is either 0 (if recording to a file) or 1 (if recording to a stream).
I have a piece of code that records video:
Player player = Manager.createPlayer("capture://video?encoding=video/3gpp");
player.start();
RecordControl recorder = (RecordControl) player.getControl("RecordControl");
recorder.setRecordStream(outputstream);
recorder.startRecord();
Using this configuration, I record to an OutputStream which I store in a file at the end. It contains data formatted according to the RIMM file specification with the descriptor location set to 1 (recording to a stream).
How can I make the recorder set this value to 0 (recording to a file)?
If I call setRecordLocation() instead of setRecordStream(), I get a file in standard 3GPP format without the RIMM structure.
I ask because I need to have the descriptor part before the data frames, not in the file's footer.
10-24-2011 09:54 AM
Did you manage to make it work?
I am facing the same issue here: when I record the output stream to a file, the player won't play it.
10-25-2011 05:35 AM
Unfortunately not... I had to temporarily give up my video project.
Btw.: has anyone been able to record and play audio/video streams simultaneously (e.g. on OS 7 devices)?
10-26-2011 11:20 AM
If a stream is being recorded by the device, the descriptor field will not be present in the header but rather in the footer. This is because it contains information such as duration and the number of audio/video frames that cannot be known in advance of the recording.
However, if the RIMM data is in a file that will be played back, this information should be in the header, since it is needed for playback. The descriptor location byte (byte 8 in the header) indicates where the descriptor lives, so try making sure byte 8 of your stream is set to 0 (file format, rather than 1 = stream format) and move the 75-byte descriptor block into the header (after byte 8, before the frame blocks) rather than leaving it after the frame blocks at the end of the file.
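For illustration only, that post-processing step could be sketched in plain Java roughly like this. It assumes the fixed header ends immediately after the flag byte at index 8 and that the 75-byte descriptor sits at the very end of the stream-format data; both assumptions should be verified against the actual RIMM documentation before use.

```java
public class RimmFixer {
    // Untested sketch. Assumed layout of a stream-mode RIMM recording:
    //   bytes 0..8  : fixed header, with byte 8 = descriptor-location flag
    //   bytes 9..   : frame blocks
    //   last 75 B   : descriptor block (stream mode puts it at the end)
    // HEADER_LEN is an assumption; the 75-byte descriptor size comes from
    // the discussion of the spec above.
    static final int HEADER_LEN = 9;
    static final int DESCRIPTOR_LEN = 75;

    static byte[] toFileFormat(byte[] streamData) {
        byte[] out = new byte[streamData.length];
        // copy the header and flip the descriptor-location flag to 0 (= file)
        System.arraycopy(streamData, 0, out, 0, HEADER_LEN);
        out[8] = 0;
        // move the trailing descriptor up to sit right after the header
        int descStart = streamData.length - DESCRIPTOR_LEN;
        System.arraycopy(streamData, descStart, out, HEADER_LEN, DESCRIPTOR_LEN);
        // the frame blocks then follow, unchanged
        System.arraycopy(streamData, HEADER_LEN, out, HEADER_LEN + DESCRIPTOR_LEN,
                         descStart - HEADER_LEN);
        return out;
    }
}
```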
10-31-2011 10:33 AM - edited 10-31-2011 10:37 AM
Let me throw some terms out that will help understand what's going on here:
A container is something that holds actual media content. It isn't the media data itself; rather, it holds information about the media, including things like metadata and information about how systems can understand and read the media content found within the container. What you're calling RIMM format is actually just a container; 3GPP is another example.
A codec defines how systems encode and decode the media data itself, including how to translate the data into sound or video (the term actually comes from a shortening of the words “compress” and “decompress”). The data encoded according to the codec specification is usually found within a container but isn’t necessarily required to be. Specifying an encoding of H.264 is essentially defining the media data found inside the container in your case.
So in summary, your RIMM container has H.264-encoded video inside it. You can convert this into another container (3GPP or MP4, for example) if you understand how those containers are defined. The RIMM 'Descriptor' block contains information including the # of a/v frames, the duration of audio and video, etc. that other containers compatible with H.264 would likely also need to have. The RIMM 'Frame' blocks are essentially just the H.264 a/v frames themselves, wrapped in a bit of extra data defining whether each frame is audio or video, its size, etc. Between these two it should be as simple as plugging the data into the new container format (again, provided you know what that format looks like).
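As an illustration of what reading those frame blocks might look like: the sketch below assumes each block is a 1-byte audio/video type flag followed by a 4-byte big-endian payload length and then the raw frame. That field layout is a hypothetical placeholder, not taken from the RIMM spec, so it must be adjusted to the real definition.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class FrameBlockReader {
    // Hypothetical frame-block layout (NOT from the spec):
    //   1 byte  : type flag (0 = audio, 1 = video)
    //   4 bytes : big-endian payload length
    //   n bytes : raw encoded frame
    public static class Frame {
        public final boolean video;
        public final byte[] payload;
        Frame(boolean video, byte[] payload) {
            this.video = video;
            this.payload = payload;
        }
    }

    public static List<Frame> parse(byte[] frameData) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(frameData));
        List<Frame> frames = new ArrayList<Frame>();
        while (in.available() > 0) {
            boolean video = in.readUnsignedByte() == 1; // assumed type flag
            int len = in.readInt();                     // assumed length field
            byte[] payload = new byte[len];
            in.readFully(payload);
            frames.add(new Frame(video, payload));
        }
        return frames;
    }
}
```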
Wikipedia has a good link that illustrates the difference.
The reason for using 3GPP when recording to a file and RIMM when recording to a stream is that some data, such as video duration, # of frames, etc., cannot be known in advance of the recording. In the former 'file' case, once the recording is committed it is possible to update the header at the start of the file with the missing information. In the latter streaming case that is not possible, so a format is required that allows the header information at the end of the stream, which RIMM does.
Hope that helps!
10-31-2011 10:43 AM - edited 10-31-2011 10:45 AM
Wow, thanks, this was really useful and opened my eyes to a lot of things that were unclear, and now I know why there is that difference you explained in the last paragraph.
One last (hopefully) question:
Since the frame blocks are actually the H.264 audio/video frames, it should be possible to parse the output stream while the camera is still recording, extract the data frame by frame, pack it into RTP packets (the structure of which is defined in http://www.ietf.org/rfc/rfc3984.txt), and send them over the network, i.e. something like a live video stream to another BlackBerry device, which would have to unpack them, wrap them into RIMM frames, and do the playback?
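i.e. the RTP wrapping could be sketched roughly like this, for single-NAL-unit mode only. The payload type (96) and SSRC are arbitrary example values, and real code would also need SDP signalling and FU-A fragmentation for NAL units larger than the MTU, per the RFC:

```java
public class RtpPacketizer {
    // Minimal RTP packetizer sketch for H.264 single-NAL-unit mode
    // (RFC 3984, packetization-mode=0), with the fixed 12-byte RTP
    // header from RFC 3550. Payload type 96 is just a common dynamic
    // value; the SSRC here is an arbitrary caller-supplied identifier.
    private int sequence = 0;
    private final int ssrc;
    private final int payloadType;

    public RtpPacketizer(int ssrc, int payloadType) {
        this.ssrc = ssrc;
        this.payloadType = payloadType;
    }

    public byte[] packetize(byte[] nalUnit, long timestamp, boolean lastOfFrame) {
        byte[] pkt = new byte[12 + nalUnit.length];
        pkt[0] = (byte) 0x80;                       // version 2, no padding/extension/CSRC
        pkt[1] = (byte) ((lastOfFrame ? 0x80 : 0)   // marker bit on last packet of a frame
                         | (payloadType & 0x7F));
        pkt[2] = (byte) (sequence >> 8);            // 16-bit sequence number
        pkt[3] = (byte) sequence;
        sequence = (sequence + 1) & 0xFFFF;
        pkt[4] = (byte) (timestamp >> 24);          // 32-bit 90 kHz timestamp
        pkt[5] = (byte) (timestamp >> 16);
        pkt[6] = (byte) (timestamp >> 8);
        pkt[7] = (byte) timestamp;
        pkt[8] = (byte) (ssrc >> 24);               // SSRC identifier
        pkt[9] = (byte) (ssrc >> 16);
        pkt[10] = (byte) (ssrc >> 8);
        pkt[11] = (byte) ssrc;
        // single NAL unit as the entire payload
        System.arraycopy(nalUnit, 0, pkt, 12, nalUnit.length);
        return pkt;
    }
}
```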
10-31-2011 10:52 AM
I can't think offhand why that wouldn't work... I do know of some developers successfully moving H.264-encoded video between RIMM and MP4 containers, so I know it's possible. I haven't tried it personally though, so I'm not sure I can offer any guidance on that, unfortunately.
10-31-2011 11:12 AM
Hi dx22 and BVP,
thank you guys for continuing this discussion. The last question above is actually the reason I started this post...
Yes, it does not seem to be a problem to extract every audio/video frame from the RIMM container in the output stream and send it as an RTP packet; any kind of piped stream or circular byte buffer can do the job.
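For example, a minimal hand-rolled circular byte buffer could look like the sketch below (java.io.PipedInputStream may not be available in the CLDC-based BlackBerry class library, so something like this may be needed on-device). One thread writes the recorder's output into it while another reads frames back out:

```java
public class CircularByteBuffer {
    // Simple bounded circular byte buffer sketch: one writer thread
    // (fed by the RecordControl's OutputStream) and one reader thread
    // (the frame parser / RTP packetizer). Blocking and wraparound are
    // handled with the object's monitor.
    private final byte[] buf;
    private int head = 0, tail = 0, count = 0;

    public CircularByteBuffer(int capacity) {
        buf = new byte[capacity];
    }

    public synchronized void write(byte[] src, int off, int len)
            throws InterruptedException {
        for (int i = 0; i < len; i++) {
            while (count == buf.length) wait(); // buffer full: block the writer
            buf[tail] = src[off + i];
            tail = (tail + 1) % buf.length;
            count++;
            notifyAll();
        }
    }

    public synchronized int read(byte[] dst, int off, int len)
            throws InterruptedException {
        while (count == 0) wait();              // buffer empty: block the reader
        int n = Math.min(len, count);
        for (int i = 0; i < n; i++) {
            dst[off + i] = buf[head];
            head = (head + 1) % buf.length;
            count--;
        }
        notifyAll();
        return n;
    }
}
```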
The real question is: how do we pass incoming RTP audio/video packets into the player and do the playback?
This is why I asked about RIMM's descriptor part and the difference between writing to a file and a stream.
My tests indicated the player will not start until it gets the descriptor block (which makes sense) with the exact number of frames, their sizes, etc. Of course, we cannot provide this information while streaming over the internet.
I thought I could create a RIMM file with a faked descriptor, with the maximum available number of frames, their sizes and so on. It didn't work for me, though...
The question remains open: how to pass audio/video frames from RTP packets into the player?