New Developer
lmuller

Porting openFrameworks (need help with the audio/QSA part)


Hi!

 

I'm currently porting the popular creative coding framework openFrameworks ( http://www.openframeworks.cc/ ) to the PlayBook and have so far managed to get the graphics and multitouch input systems working.

 

openFrameworks WIP

openFrameworks (oF) relies on libraries such as Boost, FreeImage, GLU (I'm using a patched version of this) and POCO. Details on how to compile those can be found on the openFrameworks forums ( http://forum.openframeworks.cc/index.php/topic,9189.0.html ). I plan to release the port (ofxQNX) as an addon on GitHub when I have completed the basics (video, audio and input).

 

Below is a screenshot of my work so far:

 

http://epicwindmill.com/blackberry/IMG_1620s.jpg

 

Audio

For the audio part, openFrameworks normally relies on RtAudio or PortAudio. Unfortunately, those libs only support regular ALSA. Since the PlayBook Native SDK uses the QNX Sound Architecture (QSA), I'm writing my own implementation to provide audio support (to play generated audio streams from, for example, Pure Data).

 

I've read the docs on the Native SDK website and also looked into the examples provided on the QNX website and the BlackBerry GitHub repo (especially the one from the SDL port).

 

After messing around a bit with the code, I managed to import one of the openFrameworks audio examples into a standalone project, which partially works. The code can be found at the end of this post.

 

Questions

When setting up and using the PCM device, I noticed a few things.

 

1. It seems that the PCM device ignores my requested buffer size (which I want to be as small as possible) and instead always uses the odd number 2820.


2. I'm able to write the buffer using snd_pcm_plugin_write and the audio seems to play (although the sound is initially a bit choppy).

 

3. The app lets you change the waveform by touching the screen; if you touch the left side of the screen, the volume should be higher on that side. After a few seconds, though, the audio dies and snd_pcm_plugin_write keeps returning 0.

 

4. In this example I've placed snd_pcm_plugin_write in my main loop. In openFrameworks, however, I will need to call updateQNXAudio() periodically (minus the while loop). When using openFrameworks with RtAudio or PortAudio, both implementations use some kind of callback. So I'm guessing I should put the audio part in a new thread, but how often should it call updateQNXAudio() and snd_pcm_plugin_write()? Should I place them in a while loop and call them as fast as possible, or do I need to add a sleep of X ms?

 

 

Hopefully an expert can take a look at the code, explain why points 1, 2 and 3 are happening, and perhaps give some advice on how to implement point 4 properly :smileyhappy:.

 


Debug output

The debug output when I launch the app:

 

Audio Example
Requested format: Signed 16-bit Little Endian
Requested sample rate: 44100
Requested buffer size: 512
Requested output channels: 2

 

Settings according to setup value: (Mixer Pcm Group [Wave playback channel])
Format: Signed 16-bit Little Endian
Frag Size: 2820
Total Frags: 85
Rate: 44100
Voices: 2

 

out_float_buffer size: 5640 (stores generated audio)
out_buffer size: 5640 (this is submitted to the pcm device)

 

Code

Complete code (needs the following libs when linking: bps, pps, screen, m and asound):

#include <assert.h>
#include <bps/bps.h>
#include <bps/event.h>
#include <bps/navigator.h>
#include <bps/screen.h>
#include <fcntl.h>
#include <screen/screen.h>
#include <input/screen_helpers.h>

#include <sys/asoundlib.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <math.h>

#define TWO_PI   6.28318530717958647693
static bool shutdown;


void onTouchInput(unsigned int id, int x, int y, int pressure);

static void
handle_screen_event(bps_event_t *event)
{
    int screen_val;
    screen_event_t screen_event = screen_event_get_event(event);
    screen_get_event_property_iv(screen_event, SCREEN_PROPERTY_TYPE, &screen_val);
    mtouch_event_t  mtouch_event;
    int touch_id;

    switch (screen_val) {
    case SCREEN_EVENT_MTOUCH_TOUCH:
    {
		// Handle a finger touch event here
		screen_get_mtouch_event(screen_event, &mtouch_event, 0);
		touch_id = mtouch_event.contact_id;
		onTouchInput(touch_id, mtouch_event.x, mtouch_event.y, mtouch_event.pressure);
    }
    break;

    case SCREEN_EVENT_MTOUCH_MOVE:
    {
        // Handle a finger move event here
		screen_get_mtouch_event(screen_event, &mtouch_event, 0);
		touch_id = mtouch_event.contact_id;
		onTouchInput(touch_id, mtouch_event.x, mtouch_event.y, mtouch_event.pressure);
    }
    break;

    case SCREEN_EVENT_MTOUCH_RELEASE:
        break;

    default:
        break;
    }
    fprintf(stderr,"\n");
}

static void
handle_navigator_event(bps_event_t *event) {
    switch (bps_event_get_code(event)) {
    case NAVIGATOR_SWIPE_DOWN:
        fprintf(stderr,"Swipe down event\n");
        break;
    case NAVIGATOR_EXIT:
        fprintf(stderr,"Exit event\n");
        shutdown = true;
        break;
    default:
        break;
    }
}

static void
handle_event()
{
    int rc, domain;

    bps_event_t *event = NULL;
    rc = bps_get_event(&event, 0); // -1);
    assert(rc == BPS_SUCCESS);
    if (event) {
        domain = bps_event_get_domain(event);
        if (domain == navigator_get_domain()) {
            handle_navigator_event(event);
        } else if (domain == screen_get_domain()) {
            handle_screen_event(event);
        }
    }
}

// Audio code

snd_pcm_t *pcm_handle;
snd_mixer_t *mixer_handle;

snd_pcm_channel_info_t pi;
snd_pcm_channel_params_t pp;
snd_mixer_group_t group;
snd_pcm_channel_setup_t setup;

int bufferSize		= 512;
int sampleRate 		= 44100;
int outChannels 	= 2;	// Stereo output

int rtn;
int card = -1;
int dev = 0;

int fragsize = -1;
int num_frags = -1;
int bsize;

float * out_float_buffer = NULL;	// Wave is created using floats
short * out_buffer = NULL;			// But our pcm device needs integers (so it will be converted)

// for the simple sine wave synthesis
float 	pan;
float 	volume;
float 	targetFrequency;
float 	phase;
float 	phaseAdder;
float 	phaseAdderTarget;

void setupWave()
{
	phase 				= 0;
	phaseAdder 			= 0.0f;
	phaseAdderTarget 	= 0.0f;
	volume				= 0.1f;

	// Initial values
	int x = 128;
	int y = 500;
	onTouchInput(0, x, y, 0);
}

// Set phase based on touch position
void
onTouchInput(unsigned int id, int x, int y, int pressure)
{
	// Setup our wave
	int width = 1024;
	pan = (float)x / (float)width;
	float height = 600;
	float heightPct = ((height-y) / height);
	targetFrequency = 2000.0f * heightPct;
	phaseAdderTarget = (targetFrequency / (float) sampleRate) * TWO_PI;
}

// Some magic to create a wave, fill it into the output buffer
void
audioOut(float * output, int bufferSize, int nChannels) {

	float leftScale = 1 - pan;
	float rightScale = pan;

	// sin (n) seems to have trouble when n is very large, so we
	// keep phase in the range of 0-TWO_PI like this:
	while (phase > TWO_PI) {
		phase -= TWO_PI;
	}

	phaseAdder = 0.95f * phaseAdder + 0.05f * phaseAdderTarget;
	for (int i = 0; i < bufferSize; i++) {
		phase += phaseAdder;
		float sample = sin(phase);
		output[i*nChannels    ] = sample * volume * leftScale;
		output[i*nChannels + 1] = sample * volume * rightScale;
	}
}

// Setup our PCM device (display requested settings and actually settings according to "setup")
int
openQNXAudio()
{
	fprintf(stderr, "Audio Example\n");
	fprintf(stderr, "Requested format: %s\n", snd_pcm_get_format_name (SND_PCM_SFMT_S16_LE));
	fprintf(stderr, "Requested sample rate: %d\n", sampleRate);
	fprintf(stderr, "Requested buffer size: %d\n", bufferSize);
	fprintf(stderr, "Requested output channels: %d\n\n", outChannels);

// Open PCM
	if ((rtn = snd_pcm_open_preferred (&pcm_handle, &card, &dev, SND_PCM_OPEN_PLAYBACK)) < 0)
	{
		fprintf(stderr, "device open");
		return -1;
	}

// disabling mmap is not actually required in this example but it is included to
// demonstrate how it is used when it is required.

	if ((rtn = snd_pcm_plugin_set_disable (pcm_handle, PLUGIN_DISABLE_MMAP)) < 0)
	{
		fprintf(stderr, "snd_pcm_plugin_set_disable failed: %s\n", snd_strerror (rtn));
		return -1;
	}

// get pcm info
	memset (&pi, 0, sizeof (pi));
	pi.channel = SND_PCM_CHANNEL_PLAYBACK;
	if ((rtn = snd_pcm_plugin_info (pcm_handle, &pi)) < 0)
	{
		fprintf(stderr, "snd_pcm_plugin_info failed: %s\n", snd_strerror (rtn));
		return -1;
	}

// setup pcm channel
	memset (&pp, 0, sizeof (pp));
	pp.mode = SND_PCM_MODE_BLOCK;
	pp.channel = SND_PCM_CHANNEL_PLAYBACK;
	pp.start_mode = SND_PCM_START_FULL;
	pp.stop_mode = SND_PCM_STOP_STOP;
	pp.buf.block.frag_size = bufferSize; 	// Trying to set it to 512

	// pi.max_fragment_size;

	if (fragsize != -1)
	{
		pp.buf.block.frag_size = fragsize;
	}
	pp.buf.block.frags_max = num_frags;
	pp.buf.block.frags_min = 1;

	pp.format.interleave = 1;
	pp.format.rate = sampleRate;
	pp.format.voices = outChannels;
	pp.format.format = SND_PCM_SFMT_S16_LE;	// Signed 16-bit Little Endian

	strcpy (pp.sw_mixer_subchn_name, "Wave playback channel");
	if ((rtn = snd_pcm_plugin_params (pcm_handle, &pp)) < 0)
	{
		fprintf(stderr, "snd_pcm_plugin_params failed: %s\n", snd_strerror (rtn));
		return -1;
	}

	if ((rtn = snd_pcm_plugin_prepare (pcm_handle, SND_PCM_CHANNEL_PLAYBACK)) < 0) {
		fprintf(stderr, "snd_pcm_plugin_prepare failed: %s\n", snd_strerror (rtn));
		return -1;
	}

// Check setup
	memset (&setup, 0, sizeof (setup));
	memset (&group, 0, sizeof (group));
	setup.channel = SND_PCM_CHANNEL_PLAYBACK;
	setup.mixer_gid = &group.gid;

	if ((rtn = snd_pcm_plugin_setup (pcm_handle, &setup)) < 0)
	{
		fprintf(stderr, "snd_pcm_plugin_setup failed: %s\n", snd_strerror (rtn));
		return -1;
	}

	if (group.gid.name[0] == 0)
	{
		fprintf(stderr, "Mixer Pcm Group [%s] Not Set \n", group.gid.name);
		return -1;
	}

	if ((rtn = snd_mixer_open (&mixer_handle, card, setup.mixer_device)) < 0)
	{
		fprintf(stderr, "snd_mixer_open failed: %s\n", snd_strerror (rtn));
		return -1;
	}

	fprintf(stderr, "Settings according to setup value: (Mixer Pcm Group [%s])\n", group.gid.name);
	fprintf(stderr, "Format: %s \n", snd_pcm_get_format_name (setup.format.format));
	fprintf(stderr, "Frag Size: %d \n", setup.buf.block.frag_size);
	fprintf(stderr, "Total Frags: %d \n", setup.buf.block.frags);
	fprintf(stderr, "Rate: %d \n", setup.format.rate);
	fprintf(stderr, "Voices: %d \n", setup.format.voices);

	bsize = setup.buf.block.frag_size;

	// prepare pcm buffers
	out_float_buffer = new float[bsize * outChannels];
	out_buffer = new short[bsize * outChannels];
	fprintf(stderr, "out_float_buffer size: %d\n", (bsize * outChannels));
	fprintf(stderr, "out_buffer size: %d\n\n", (bsize * outChannels));

	setupWave();

	return 0;
}

int
updateQNXAudio() {

	// Main loop (keep playing tone until user quits app)
	while (!shutdown) {
		// Handle user input
		handle_event();

		// collect synthesized audio data
		audioOut(out_float_buffer, bsize, outChannels);

		// convert float to int
		for(int i=0; i < bsize*outChannels ; i++) {
			float tempf = (out_float_buffer[i] * 32767.5f) - 0.5f;
			out_buffer[i] = tempf;
		}

		// Write audio to pcm device
		int written = snd_pcm_plugin_write (pcm_handle, out_buffer, bsize);
		fprintf(stderr, "written %d\n", written);
	}

	return 0;
}

void
closeQNXAudio(){
	fprintf(stderr, "closeQNXAudio\n");

	snd_pcm_plugin_flush (pcm_handle, SND_PCM_CHANNEL_PLAYBACK);
	snd_mixer_close (mixer_handle);
	snd_pcm_close (pcm_handle);

	// Free the buffers allocated in openQNXAudio()
	delete[] out_float_buffer;
	delete[] out_buffer;
}

int
main(int argc, char **argv)
{
    const int usage = SCREEN_USAGE_NATIVE;

    screen_context_t screen_ctx;
    screen_window_t screen_win;
    screen_buffer_t screen_buf = NULL;
    int rect[4] = { 0, 0, 0, 0 };

    /* Setup the window */
    screen_create_context(&screen_ctx, 0);
    screen_create_window(&screen_win, screen_ctx);
    screen_set_window_property_iv(screen_win, SCREEN_PROPERTY_USAGE, &usage);
    screen_create_window_buffers(screen_win, 1);

    screen_get_window_property_pv(screen_win, SCREEN_PROPERTY_RENDER_BUFFERS, (void **)&screen_buf);
    screen_get_window_property_iv(screen_win, SCREEN_PROPERTY_BUFFER_SIZE, rect+2);

    /* Fill the screen buffer with blue */
    int attribs[] = { SCREEN_BLIT_COLOR, 0xff0000ff, SCREEN_BLIT_END };
    screen_fill(screen_ctx, screen_buf, attribs);
    screen_post_window(screen_win, screen_buf, 1, rect, 0);

    /* Signal bps library that navigator and screen events will be requested */
    bps_initialize();
    screen_request_events(screen_ctx);
    navigator_request_events(0);

    int myret = 0;	// initialize so main() returns a defined value if audio setup fails

    if(openQNXAudio() != 0)
    {
    	fprintf(stderr, "Error: Could not open QNX audio\n");
    	myret = -1;
    }
    else
    {
    	if((myret = updateQNXAudio()) != 0)
    		fprintf(stderr, "Error in update loop\n");
    }

    closeQNXAudio();

    /* Clean up */
    screen_stop_events(screen_ctx);
    bps_shutdown();
    screen_destroy_window(screen_win);
    screen_destroy_context(screen_ctx);
    return myret;
}

 

 

 

BlackBerry Development Advisor (Retired)
epelegrillopart

Re: Porting openFrameworks (need help with the audio/QSA part)

Cool.  Keep us posted. I'll add you to the next "Elsewhere" post at Open BB News.

 

Are you going to BB10 Jam?  We should have several Open Source activities there...

BlackBerry Development Advisor
RSperanza

Re: Porting openFrameworks (need help with the audio/QSA part)

Hi.

 

I have not had a chance to build your sample app and do a deep dive just yet, but after examining sections of the code and reviewing the API documentation, I have some advice that may be helpful.

 

Answers to your questions:

 

1. Based on the API documentation, 2820 is the minimum fragment size supported by the audio hardware.  It is recommended that you call snd_pcm_channel_setup() to query the minimum frag_size supported by the driver before trying to configure it yourself.  Most developers will want a larger buffer size for smoother playback, and a larger buffer should be a multiple of the minimum size and less than the maximum size.
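
For illustration, here is a minimal, untested sketch of that idea, assuming the min_fragment_size and max_fragment_size fields of snd_pcm_channel_info_t as filled in by snd_pcm_plugin_info() (field names taken from sys/asoundlib.h), and reusing the pp, pcm_handle, sampleRate and outChannels names from the code above. For reference, if frag_size is in bytes, 2820 bytes is 705 stereo 16-bit frames, or roughly 16 ms at 44100 Hz.

// Sketch only: query the driver's fragment limits before snd_pcm_plugin_params().
snd_pcm_channel_info_t info;
memset (&info, 0, sizeof (info));
info.channel = SND_PCM_CHANNEL_PLAYBACK;
if (snd_pcm_plugin_info (pcm_handle, &info) < 0) {
	// handle the error
}

// Aim for ~0.25 s of 16-bit stereo at 44100 Hz (44100 bytes), rounded up to a
// multiple of the minimum fragment size and clamped to the maximum.
int desiredBytes = (sampleRate / 4) * outChannels * 2;	// bytes for ~0.25 s
int fragBytes = ((desiredBytes + info.min_fragment_size - 1) / info.min_fragment_size)
		* info.min_fragment_size;
if (fragBytes > info.max_fragment_size)
	fragBytes = info.max_fragment_size;

pp.buf.block.frag_size = fragBytes;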

 

2. I believe the choppiness is the result of your small buffer size, coupled with the fact that you are currently driving the audio from your event loop, which is not guaranteed to execute quickly all the time.  I recommend making some changes to the way you are processing the audio.  See my answer to question 4 for more details.

 

3. According to the documentation, it sounds like a channel underrun is occurring.  This is a result of the playback device running out of buffer data.  Apparently, the only way to correct it is to close and reopen the channel.  My suggestions in the answer for the next question may help resolve this as well.
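
As a rough, untested sketch of detecting that condition with snd_pcm_plugin_status() and the SND_PCM_STATUS_UNDERRUN status (names taken from sys/asoundlib.h); whether re-preparing the channel is enough, or a full close/reopen is needed as described above, may depend on the driver:

// Sketch only: check for an underrun when snd_pcm_plugin_write() writes less than expected.
snd_pcm_channel_status_t status;
memset (&status, 0, sizeof (status));
status.channel = SND_PCM_CHANNEL_PLAYBACK;
if (snd_pcm_plugin_status (pcm_handle, &status) == 0 &&
	status.status == SND_PCM_STATUS_UNDERRUN)
{
	fprintf(stderr, "underrun detected\n");
	// Try to re-arm the channel; if that is not enough,
	// close and reopen the PCM device as suggested above.
	snd_pcm_plugin_prepare (pcm_handle, SND_PCM_CHANNEL_PLAYBACK);
}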

 

4. Your guess is correct.  The audio playback loop should be placed in its own thread.  In fact, I suggest you do that with this sample app.  In addition to this change, I suggest the following:

 

a. Increase your buffer size so that it is a multiple of the minimum buffer size that represents approximately 0.25 seconds in duration.  If the maximum buffer size is less than this, use that as the buffer size.

b. In your touch handler, don't just recalculate the frequency.  You should also generate your integer sample points for the new frequency for the buffer at that time.

c. Write the whole buffer as often as possible in another thread with small sleeps, maybe 50 milliseconds in duration (a rough sketch of such a thread follows below).  Feel free to adjust it to tune audio quality.
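
A minimal sketch of such a playback thread using POSIX threads, assuming updateQNXAudio() is reworked to synthesize and write a single buffer per call (no internal while loop), and reusing the shutdown flag from the code above; the 50 ms sleep is just a starting point:

#include <pthread.h>
#include <unistd.h>

// Sketch only: dedicated audio thread, leaving the BPS event loop on the main thread.
static void *
audioThread(void *arg)
{
	while (!shutdown) {
		updateQNXAudio();	// fill one buffer and snd_pcm_plugin_write() it
		usleep(50 * 1000);	// ~50 ms; adjust to tune audio quality
	}
	return NULL;
}

// In main(), after openQNXAudio() succeeds:
//	pthread_t audioTid;
//	pthread_create(&audioTid, NULL, audioThread, NULL);
//	... run handle_event() on the main thread until shutdown ...
//	pthread_join(audioTid, NULL);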

 

Good luck and hopefully these suggestions are helpful.  Please feel free to post any additional questions that come up.

 

 

New Developer
lmuller

Re: Porting openFrameworks (need help with the audio/QSA part)


Thank you for your explanations and suggestions Roberto, I really appreciate it!

 

I will try out the things you listed and let you guys know about my progress.

In the meantime I have pushed my addon code + examples onto GitHub (developPlayBook branch) for those who might be interested:

https://github.com/falcon4ever/openFrameworks/tree/developPlayBook/addons/ofxQNX

 

 

Examples:
https://github.com/falcon4ever/openFrameworks/tree/developPlayBook/examples/qnx

In the addon directory the README.txt explains some details on how to import the project.

New Developer
lmuller

Re: Porting openFrameworks (need help with the audio/QSA part)

Another update: ofxQNX is almost working perfectly on the PlayBook platform. I still have to fix the audio issues, but all graphics-related examples are fully working.

 

Photos of the result:

 

qnxAudioOutputExample (audio is a bit broken)

 

qnxFontsExample

 

qnxGraphicsExample

 

qnxImageExample

 

qnxInputExample (4 touches max)

 

qnxPolygonExample

 

qnxTouchExample (accelerometer is fully working)

 

qnxVBOExample (runs at 60fps)

 

Real multitasking on the playbook

BlackBerry Development Advisor (Retired)
epelegrillopart

Re: Porting openFrameworks (need help with the audio/QSA part)

Very, very good.

 

I talked with our audio folks and they are ready to help.  I'll mail a bit later today (after my next meeting :smileyhappy:).

Trusted Contributor
sucroid

Re: Porting openFrameworks (need help with the audio/QSA part)

Very interesting work.  I would like to see PortAudio ported to the PlayBook.  Then all kinds of things like JACK, Audacity, etc. could be ported.

BlackBerry Development Advisor (Retired)
epelegrillopart

Re: Porting openFrameworks (need help with the audio/QSA part)

Willing to contribute some time?  I know somebody who is interested in JACK audio.

New Contributor
seths

Re: Porting openFrameworks (need help with the audio/QSA part)


We'd love to see PortAudio ported to the PlayBook as well.

 

lmuller and I were finally able to get some real-time audio synthesis working using the SDL library port: https://github.com/blackberry/SDL. We'll have some more news to post in the coming days. I imagine the PlayBook code from SDL could probably be integrated similarly into PortAudio.
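
For anyone curious what the callback-style setup looks like, here is a minimal sketch using the SDL 1.2 audio API from that port (fillAudio and startSdlAudio are just example names; error reporting omitted):

#include <SDL/SDL.h>
#include <string.h>

// SDL calls this from its own audio thread whenever it needs more samples.
static void fillAudio(void *userdata, Uint8 *stream, int len)
{
	memset(stream, 0, len);	// write 'len' bytes of signed 16-bit samples here (silence for now)
}

static int startSdlAudio()
{
	SDL_AudioSpec desired;
	memset(&desired, 0, sizeof(desired));
	desired.freq     = 44100;
	desired.format   = AUDIO_S16SYS;
	desired.channels = 2;
	desired.samples  = 1024;	// callback granularity in sample frames
	desired.callback = fillAudio;

	if (SDL_Init(SDL_INIT_AUDIO) < 0 || SDL_OpenAudio(&desired, NULL) < 0)
		return -1;

	SDL_PauseAudio(0);		// start the callback
	return 0;
}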

 

PortAudio would probably be better than SDL when it comes to audio, and it offers input/output (while SDL implements output only). In terms of our openFrameworks port, PortAudio would make things a lot easier, since openFrameworks is already built on PortAudio for sound.
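
For comparison, a rough sketch of the PortAudio callback model using the stock Pa_OpenDefaultStream() API (not something that runs on the PlayBook yet, of course; paCallback and startPortAudio are just example names):

#include <portaudio.h>
#include <string.h>

// PortAudio calls this from its audio thread; input and output buffers are
// delivered together, which is what makes full-duplex use possible.
static int paCallback(const void *input, void *output, unsigned long frames,
		const PaStreamCallbackTimeInfo *timeInfo,
		PaStreamCallbackFlags statusFlags, void *userData)
{
	memset(output, 0, frames * 2 * sizeof(float));	// 2 output channels, silence for now
	return paContinue;
}

static int startPortAudio()
{
	PaStream *stream = NULL;
	if (Pa_Initialize() != paNoError)
		return -1;

	// 1 input channel, 2 output channels, float samples, 44100 Hz, 512 frames per callback
	if (Pa_OpenDefaultStream(&stream, 1, 2, paFloat32, 44100, 512, paCallback, NULL) != paNoError)
		return -1;

	return (Pa_StartStream(stream) == paNoError) ? 0 : -1;
}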

 


Trusted Contributor
sucroid

Re: Porting openFrameworks (need help with the audio/QSA part)


epelegrillopart wrote:

Willing to contribute some time?  I know somebody who is interested in JACK audio.


A resounding "yes"!  I haven't looked at the JACK audio code yet, but I have started looking at PortAudio.  So many things use the PortAudio library.  CSound is something that I really want to see on the PB, though that sounds more like a dream at the moment.

 

BTW, I really hope to see USB MIDI supported natively by the Tablet OS.  The PlayBook is more powerful than the DAW I had 10 years ago, and I could record 16 tracks simultaneously at 44.1/16 with that dinosaur.  It would be nice to have a DAW in the PlayBook form factor.
