02-28-2013 11:53 AM - edited 02-28-2013 05:01 PM
Hi. I am seeing strange performance characteristics. How are people measuring their frame-time/frame-rate on real hardware with the native SDK?
I have tried using
now = clock();
but it gives rates that vary quite a lot. I have also tried the POSIX-style approach:
struct timespec ts;
int res = clock_gettime(CLOCK_REALTIME, &ts);
_Uint64t now = timespec2nsec(&ts);
float frame_time = (now - past) / (float)NSECS_PER_SECOND;
float frame_rate = 1.0f / frame_time;
past = now;
It gives more stable numbers, but they still vary much more than I would expect from a nanosecond-accurate clock. The frame *time* varies wildly in the second significant digit on a real PlayBook, even when it is doing the same nothing each frame. That is rather strange.
My program is OpenGL ES 2.0 based. It uses the EGL swap function with an interval of 1. It is my understanding that BlackBerry devices block within the EGL swap function to enforce the swap interval. In that case I should see a rock-solid frame rate that never varies (assuming my code runs faster than the swap interval).
Which function should I be using to get the time?
03-03-2013 04:09 PM
The clock resolution limits clock_gettime. On my Z10 the value it reports is pretty high (i.e., coarse).
To get this value, use: clock_getres(CLOCK_REALTIME, &res)
I never tried this, but I think that if you want to temporarily time your code, you can change the clock period to get a higher resolution.
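A hedged, untested sketch of what that could look like: QNX (which the PlayBook and BB10 devices run) exposes ClockPeriod() in sys/neutrino.h for changing the system tick. Shortening the tick should make clock_gettime()/clock_getres() finer-grained at the cost of more timer-interrupt overhead, so the old period should be restored afterwards. The 100 µs value here is an arbitrary example, not a recommendation:

```c
#include <stddef.h>
#include <sys/neutrino.h>   /* QNX-only: ClockPeriod(), struct _clockperiod */

static struct _clockperiod old_period;

/* Shrink the system tick for a timing run; saves the previous period. */
void set_fine_clock(void) {
    struct _clockperiod fine = { 100000, 0 };  /* 100,000 ns = 0.1 ms tick */
    ClockPeriod(CLOCK_REALTIME, &fine, &old_period, 0);
}

/* Put the tick back so the rest of the system isn't burdened. */
void restore_clock(void) {
    ClockPeriod(CLOCK_REALTIME, &old_period, NULL, 0);
}
```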
OpenGL ES 2.0 on the Qualcomm GPU may have some interesting performance and timing extensions. I know there was glBeginQuery(GL_TIME_ELAPSED_EXT) on NVIDIA desktop cards for timing, but I don't think there's an equivalent on the BlackBerry. But here are some OpenGL ES extensions that may be useful in your case and are supported on my Z10:
P.S. I use CLOCK_MONOTONIC so system-time changes don't affect my timer. (Not sure how changing the clock period would affect this one.)
03-03-2013 07:23 PM
I will try to put your suggestions into practice. Changing the resolution for testing does sound like a way forward (if the system will let me). Odd that the system seems to rely on counting high-level software interrupts for its timing. I was expecting it to access a counter internal to the processor, or nearby RTC hardware.
The resolution you quote certainly looks to be in the right range to cause the granularity issue. 60 fps equates to a frame time of about 0.0167 s. If the timers have a granularity of 0.001 s, then that is the second significant digit I see. It's a rather useless resolution for testing; I am surprised not to see lots of threads noting it on the forum, if this is the clock source people normally use.