06-10-2012 09:45 PM - edited 06-11-2012 11:50 AM
Hi everybody, I know this probably seems stupid but I really, really need help.
When you run the simulator there's an arrow. Does it simulate the position of your finger, so that clicking is like pressing the screen? Or does it act like a mouse connected to the PlayBook?
The arrow can move off the screen (into the black bezel area), which makes me think it's not a "mouse" but a simulated finger, and before I launch an app it works fine for everything.
BUT when I start an app, touch doesn't seem to work. I've tried the examples that come with the NDK (Gestures) and other examples I've found on the internet, but nothing happens when I click the screen.
I don't know if I'm missing something obvious, but I've been looking into this for a while and haven't found anything.
Greetings, and any help is greatly appreciated.
06-11-2012 11:40 AM
Sounds like you are on the right track with the simulator. You are correct: when you are in the simulator, the mouse arrow represents your finger on a real device. There are a couple of things you need to be aware of, though.
Because the simulator is a virtual machine, you first have to click on the simulator window to give it focus. This means your first click will not do anything in the simulator, but once you have done this the pointer will act like your finger on the device. It also means you have to press the CTRL + ALT keys to return control to the host computer, which can be an issue if you want to reach the scroll bars to move the simulator window up and down. We have found it much more intuitive to use a second monitor in portrait mode for the simulator window; that way, once you are in the simulator you can stay there until you need to go back to Momentics on the host computer, which beats wrestling with scroll bars.
The other thing to remember is that the black bezel area around the app is still touchable. For example, if you are in an app and want to close it, you can click on the black area at the bottom; the app will be minimised, exposing the X icon that closes it. Other parts of the black zone provide other functions, so it is important that the mouse can reach all those areas to replicate your finger.
Please let us know if this resolves your issues or if you have more questions.
06-11-2012 11:46 AM - edited 06-11-2012 12:09 PM
I already knew that you have to give the VM focus by clicking it once first, and about the black areas. I'll take your word on the second monitor; that could be helpful.
But my real question was how to get the point you are 'touching' in the program code, because it doesn't seem to be working. There's an example in Momentics that should print something to the console when it detects a gesture, for example a 'double tap'. However, when I run that example it doesn't matter how many times I double-click, swipe, or click: there's no console output.
06-11-2012 11:54 AM
I'm not familiar with that example; my understanding of the current status of gestures was that they were not available yet.
Certainly I am using sample apps such as WeatherGuesser or StampCollector with no issues at all.
06-11-2012 12:11 PM
When you say that gestures are not available yet, does that apply to a single touch as well?
How do you program for a device whose only input method is touch if you can't simulate that method?
06-11-2012 01:14 PM
Gesture support in the PlayBook native SDK is different from Cascades.
The simulator comments above still apply, but the PlayBook simulator is usually displayed in landscape, so it generally fits on people's screens as-is.
06-11-2012 01:16 PM
On something as basic as touch handling, the best thing is usually to try out the samples. Let us know if you still can't find the info you need.
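For reference, the touch path in those samples is essentially a BPS event loop that pulls Screen events and checks the event type. Below is a minimal sketch in C of what that loop looks like; it assumes the PlayBook NDK headers (bps/bps.h, bps/screen.h, screen/screen.h) and that the app has already put a window on screen, as the samples do with EGL. It only compiles against the NDK, so treat it as an outline rather than a drop-in app.

```c
#include <stdio.h>
#include <bps/bps.h>
#include <bps/screen.h>
#include <screen/screen.h>

int main(void) {
    screen_context_t screen_ctx;

    /* Create a native screen context and start BPS. */
    screen_create_context(&screen_ctx, SCREEN_APPLICATION_CONTEXT);
    bps_initialize();

    /* Ask BPS to deliver Screen (touch) events to this app. Touch events
     * only arrive if the app has a visible window; the NDK samples create
     * one with EGL before reaching this point. */
    screen_request_events(screen_ctx);

    for (;;) {
        bps_event_t *event = NULL;
        bps_get_event(&event, -1);   /* block until an event arrives */

        if (event != NULL && bps_event_get_domain(event) == screen_get_domain()) {
            screen_event_t se = screen_event_get_event(event);
            int type = 0;
            screen_get_event_property_iv(se, SCREEN_PROPERTY_TYPE, &type);

            if (type == SCREEN_EVENT_MTOUCH_TOUCH) {
                int pos[2];
                screen_get_event_property_iv(se, SCREEN_PROPERTY_SOURCE_POSITION, pos);
                printf("touch at %d,%d\n", pos[0], pos[1]);
                fflush(stdout);   /* make sure it shows in the Momentics console */
            }
        }
    }

    /* Unreachable in this sketch; a real app would clean up with
     * screen_stop_events(screen_ctx), bps_shutdown(), and
     * screen_destroy_context(screen_ctx). */
    return 0;
}
```

If even a loop like this prints nothing in the simulator, the input is never reaching the app at all, which points at a simulator or VM problem rather than at the gesture code.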
06-11-2012 03:40 PM
Thanks for the answer. I've tried the examples, but they don't work for me. First the 'Gestures' one, which is supposed to print to the console when it detects touch input; when I run it, it prints nothing.
Then the 'GoodCitizen' example, which I understand changes the colour of the displayed box with some buttons; when I run it there are no buttons, just the box, and I tried clicking around but nothing happens.
I'm running the PlayBook simulator 2.1 and Momentics version 2.0.0.
Where can I find a working example of touch handling?
06-12-2012 11:31 AM
GoodCitizen comes up with the rotating cube and no buttons. Click at the top (in the black area) to open the drop-down menu, which lets you select one of the four colour options for the cube. Clicking the main screen dismisses the colour menu, as intended.
I am running on a Windows 7 system with the current SDK and simulator from the website, and it all looks good to me. What platform are you running on?