05-29-2012 08:32 AM
I am new to BlackBerry PlayBook native development, and I notice a difference in how touch events are handled compared to existing platforms like Android, iPhone, and Nokia. On those platforms you just register a callback for an event, and the callback gets called as soon as the event occurs; all of that happens in parallel. On BlackBerry, you have to query the device to find out whether an event is present, which is somewhat serialized, so users experience a delay before their input takes effect.
To handle this scenario I have decided to create a thread whose routine runs an infinite loop that keeps querying the device; as soon as an event is available, the code handles it.
Now I have two questions:
1. Is this the best way to handle this scenario? If not, how could I handle it efficiently?
2. If this method is the right way to handle it, then I am having trouble getting screen events in the child thread. I have created the screen context, initialized EGL, and requested screen events in the parent/main thread.
Please guide me.
06-01-2012 12:15 PM
Unfortunately, every situation is unique. One event-handling technique might be great for one application but terrible for another, so I'm not sure why you are seeing a delay; there could be a variety of reasons.
Might I ask: when you call bps_get_event(), do you set timeout_ms to something other than 0? If you set it to 0, bps_get_event() returns immediately even when no event is available; in that case event will be NULL, and there will be no delay.
If you do it this way, you can probably handle events on the main thread, and whenever bps_get_event() sets event to NULL you can do some other work.
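A minimal sketch of that non-blocking loop might look like the following. This is an assumption about how you would wire it together, not code from the samples: ctx is a screen context assumed to exist already, and handle_screen_event() and do_other_work() are hypothetical functions standing in for your own handlers.

```c
#include <bps/bps.h>
#include <bps/screen.h>

void main_loop(screen_context_t ctx) {
    bps_initialize();
    screen_request_events(ctx);   /* bind screen events to this channel */

    for (;;) {
        bps_event_t *event = NULL;
        /* timeout_ms = 0: return immediately; event stays NULL
         * when nothing is pending, so the loop never stalls. */
        bps_get_event(&event, 0);

        if (event != NULL) {
            if (bps_event_get_domain(event) == screen_get_domain()) {
                handle_screen_event(event);   /* your handler */
            }
        } else {
            do_other_work();   /* e.g. render the next frame */
        }
    }
}
```

With this structure there is no separate polling thread at all; input and rendering interleave on the main thread.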
If my explanation is not clear, you can have a look at our ndksamples in the BlackBerry section of GitHub to see how we handle events. For example, FallingBlock is quite responsive to touch.
Hope this helps,
06-01-2012 01:00 PM
I believe you will run into trouble if you start consuming BPS events on multiple threads. For example, if your main thread (A) is consuming screen events and another thread (B) also starts consuming screen events, then you have a race over which of those threads will receive which events.
The proper way to do this is to have a single lightweight event-consuming thread that wakes other threads when blocking or long-running calls need to be made, and then returns to waiting for more events as quickly as possible.
(my understanding of BPS is limited though, so I could be wrong and it could be thread-aware and keeping copies of events across multiple threads).
06-01-2012 01:23 PM
Actually, a bit of docs searching leads me to believe I am partly mistaken: you can fire up multiple BPS channels in different threads.
see the bps docs here:
It sounds like bps_initialize() creates a default channel on whatever thread it is called from. Calls to bps_get_event() will block waiting for events on the currently active channel. bps_create_channel() / bps_set_active_channel() can be used to hop between channels if desired, and bps_channel_push_event() can be used to pass events between channels as well.
When calling screen_request_events():
this binds the screen events to the currently active BPS channel. There is a note here stating that you cannot call this function more than once, nor can you use screen_get_event() while it is in use. This jibes with my assertion that if you consume an event on one channel (thread), it will not be available on another channel (thread).
So the trick is really to make sure that your various APIs are bound to unique channels: use bps_initialize() / bps_create_channel() / bps_set_active_channel() for channel management, and then call the appropriate _request_events() API (e.g. screen_request_events(), navigator_request_events(), etc.) to bind a specific service's events to a specific channel. Then you are free to block in bps_get_event() on multiple threads waiting for non-overlapping API events.
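As a sketch of what that division of labor might look like (this is my reading of the docs, not tested code): screen events bound to the main thread's default channel, navigator events bound to a second thread's default channel. g_screen_ctx is assumed to be created elsewhere, the thread creation is omitted, and error handling is left out for brevity.

```c
#include <bps/bps.h>
#include <bps/navigator.h>
#include <bps/screen.h>

extern screen_context_t g_screen_ctx;   /* created during app setup */

/* Second thread: owns navigator events on its own default channel. */
void *navigator_thread(void *arg) {
    (void)arg;
    bps_initialize();               /* default channel for this thread */
    navigator_request_events(0);    /* bind navigator events here */
    for (;;) {
        bps_event_t *event = NULL;
        bps_get_event(&event, -1);  /* block on this thread's channel */
        /* handle swipe-down, window-state changes, etc. */
    }
    return NULL;
}

/* Main thread: owns screen/touch events on its own default channel. */
void main_thread_loop(void) {
    bps_initialize();
    screen_request_events(g_screen_ctx);   /* bind screen events here */
    for (;;) {
        bps_event_t *event = NULL;
        bps_get_event(&event, -1);
        /* handle touch and other screen events */
    }
}
```

Because each service's events are requested on exactly one channel, the two bps_get_event() loops never compete for the same events.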
Hopefully this clears things up for you.