04-11-2013 05:46 PM
Hi guys (and eyeing a particular person on the boards who helps everyone with his insight),
I have a big problem that is undermining the launch of my entire app due to performance issues (I'm not going to release an app that is really sluggish, especially if there is a way to fix it).
I've done various tests but here's the overall lowdown:
I have a main table (let's call it stores). It's a standard table with an INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL as the only index (set up automatically by SQLite for an integer primary key), and including the primary key it has 22 fields. Let's call the primary key StoreID for now.
The table currently has only 5 rows of data (for testing purposes), and I am getting VERY fast SELECTs from the table using the primary key (around 1k separate SELECTs in well under a second without issue).
Here's the problem I'm having:
Let's say the 10th field in the table is a field called employeecount (INTEGER NOT NULL).
When I try to run an update like:
UPDATE stores
SET employeecount = 15
WHERE StoreID = 3;
It takes a very long time to complete. I found this out by testing: running this update 100 times took well over a second. That makes absolutely no sense to me. You are talking about an ordered index scan of 5 elements, then an update of a non-indexed column with no growth of the data (especially on the other 99 updates). That's maybe 6-15 read ops max and a single write op per update (times 100, roughly 1500 reads plus 100 writes), which should be nothing close to a second on this hardware. This should NEVER take one second even on a Pentium 1! (OK, maybe, but I think even that could get it done in a second.)
I have turned off journaling completely to verify that it isn't the performance issue (it helped only marginally, with hardly any difference). So I need help, and maybe those with experience (jtegen, peering at you, hehe) can lead me the right way to a possible solution.
What I cannot do, first of all, is set the DB to async (this would cause stability issues all over the place). It should not be needed either, unless writes take an insane amount of time in general.
1. Since this other field is an int, is it possible SQLite unintelligently set up an index for this field? I could completely understand, if that is what is happening and it's rebuilding the index, why the update could be taking so much more time. How would I check to see which indexes are present on a particular table?
2. Is it possible that with 1024 as the page size I need to increase it because the rows are larger than that?
3. Do I need to reposition the integers that are more likely to change closer to the beginning of each row (i.e., put all text and variable-length fields last, and all int and fixed-size fields first)?
4. Or am I just spoiled by normal SQL DB (MySQL/MSSQL etc.) execution speeds, and this will be this slow regardless because of SQLite's technology?
Any help would be great. I'm heavily leaning towards SQLite putting on several indexes against my wishes and causing a massive slowdown on updates to the table. I'm really hoping this is the case, because (except for #2) it's by far the easiest thing to fix.
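(Edit for anyone following along: here is a sketch of how to answer question #1 from AS3, by querying sqlite_master for the table's indexes. I'm assuming a variable named conn holding an open, synchronous SQLConnection, and that the table really is named stores.)

```actionscript
// Sketch: list the indexes SQLite has on a table via sqlite_master.
// "conn" is a placeholder for an open, synchronous SQLConnection.
var stmt:SQLStatement = new SQLStatement();
stmt.sqlConnection = conn;
stmt.text = "SELECT name, sql FROM sqlite_master " +
            "WHERE type = 'index' AND tbl_name = 'stores';";
stmt.execute();

var result:SQLResult = stmt.getResult();
if (result.data != null)
{
    for each (var row:Object in result.data)
    {
        trace(row.name, row.sql); // one line per index on the table
    }
}
// Note: the implicit index behind an INTEGER PRIMARY KEY will NOT show up
// here, because that column is the table's rowid (the table b-tree itself),
// not a separate index structure.
```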
Thanks so much for the help everyone.
04-28-2013 07:01 PM
Hey Uberschall, I appreciate your help. Maybe I just lack an understanding of how fast SQLite can run on the PB/BB10 OS layers with Flash added on top. I have finally spent some time on this:
1. Currently, in order to test responsiveness to user changes in the application, I have created a routine that calls 1000 updates to determine whether any optimizations are cutting the severe lag down. Please keep in mind (VERY IMPORTANT): in normal use these updates would occur singly, or in at most 10 cases. I am simply putting them in a huge 1k loop to determine whether the changes I am making are working.
2. A primary index was created for the ID field. I read some material noting that these indexes are created automatically for the primary key field of each table, but I checked the index list (using get/load schema) and there were at least no user indexes (maybe SQLite keeps internal indexes as well). This did not help at all with the current problem (although I am sure it helped scans run faster).
3. BEGIN and COMMIT come back as invalid commands from Flash, via exceptions. I did, however, try SQLConnection.begin() and SQLConnection.commit(), and neither helped whatsoever. In addition, I could not figure out how to turn off journaling (just for testing purposes), as PRAGMA was also rejected by Flash with exceptions.
4. Changes in page size and table row count did not affect the update times whatsoever. I also removed all TEXT data types, because from my understanding they are variable-sized fields and could have large space allocated to them.
So the possible causes I'm left with:
1. Journaling is absolutely slaughtering my updates to death. (see below)
2. I am not calling the internal BEGIN and COMMIT commands properly. (see below)
3. SQLite is taking an enormous amount of time compiling my query into an execution plan and executing it.
4. The Flash layer is impacting the query with disastrous results.
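(Edit: on point 3, here is a sketch of what I understand reduces the compile cost in AS3: keep one parameterized SQLStatement and reuse it, so the SQL text is prepared once and only the parameter values change between executions. The conn variable is a placeholder for an open, synchronous SQLConnection.)

```actionscript
// Sketch: prepare the statement once, execute it many times.
// "conn" is a placeholder for an open, synchronous SQLConnection.
var update:SQLStatement = new SQLStatement();
update.sqlConnection = conn;
update.text = "UPDATE stores SET employeecount = :count WHERE StoreID = :id;";

// As long as .text is not reassigned, the runtime keeps the compiled
// statement, so repeated executions skip the parse/plan step entirely.
for (var i:int = 0; i < 1000; i++)
{
    update.parameters[":count"] = 15;
    update.parameters[":id"] = 3;
    update.execute();
}
```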
And my questions:
1. Is there a way IN AS3 to turn off journaling for testing purposes (or at least reduce its level)?
2. Is there a way in SQLite code to run the BEGIN and COMMIT statements? I'm using the code below and it's failing.
3. Is this just the natural slowness of SQLite on a PB or BB10 device?
Here is an example of the begin/commit code I tried:
BEGIN;
UPDATE stores
SET EMPLOYEES = 1
WHERE STORE_ID = 3;
COMMIT;
This does not work and gives: "BEGIN is not allowed in SQL."
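(Edit: from what I can tell, the AIR runtime rejects BEGIN/COMMIT as SQL text and expects transactions to go through the SQLConnection API instead, with every statement in the batch executed between the two calls. A sketch of that, again with conn as a placeholder for an open, synchronous SQLConnection:)

```actionscript
// Sketch: transactions via the SQLConnection API instead of SQL text.
// "conn" is a placeholder for an open, synchronous SQLConnection.
conn.begin();
try
{
    var stmt:SQLStatement = new SQLStatement();
    stmt.sqlConnection = conn;
    stmt.text = "UPDATE stores SET EMPLOYEES = :count WHERE STORE_ID = :id;";
    stmt.parameters[":count"] = 1;
    stmt.parameters[":id"] = 3;
    stmt.execute();

    conn.commit();   // one journal sync for the whole batch
}
catch (e:SQLError)
{
    conn.rollback(); // undo everything in the batch on failure
}
```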
I would really appreciate any help on any of the last bullet points if you have experience with these.