08-27-2013 02:28 AM
In my application I need to read data from an input stream. I have set the buffer size for reading to 1024. But I have seen that some Android applications keep the buffer size at 8192 (8 KB). Will there be any specific advantage if I increase the buffer size in my application to 8 KB?
Any expert opinion will be much appreciated.
08-27-2013 04:29 AM
Can we clarify the way that this buffer is being used? For example, are you reading from the SD card or the network? Once you have read the data, what happens to it?
08-27-2013 05:30 AM
I am reading from the network. Once I establish a socket connection with the server, the server sends me notifications one after the other. I need to read the notifications/data from the InputStream available from the socket connection. For this I have a background thread which checks whether anything is available in the InputStream and, if something is, reads it with the help of a buffer and then passes the read data to a StringBuffer.
I am not sure what the ideal size of this buffer should be. Is the 1024 I have currently given too small? Do I need to increase it?
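For reference, the read loop described above might look roughly like this. This is a simplified, hypothetical sketch in plain Java: it uses a ByteArrayInputStream to stand in for the socket's stream, and it reads until end-of-stream, whereas a real background thread on a live socket would keep looping as notifications arrive.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadLoopSketch {
    // Reads whatever the stream delivers into a StringBuffer, using a
    // fixed-size byte buffer (1024 here, the size in question).
    public static String drainToString(InputStream in) throws IOException {
        byte[] buffer = new byte[1024];
        StringBuffer result = new StringBuffer();
        int bytesRead;
        // read() returns -1 at end of stream; otherwise it returns how many
        // bytes were actually placed in the buffer (possibly fewer than 1024).
        while ((bytesRead = in.read(buffer)) != -1) {
            result.append(new String(buffer, 0, bytesRead));
        }
        return result.toString();
    }

    public static void main(String[] args) throws IOException {
        // Simulate the socket's input stream with an in-memory stream.
        InputStream in = new ByteArrayInputStream("notification-1".getBytes());
        System.out.println(drainToString(in));
    }
}
```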
08-27-2013 06:24 AM - edited 08-27-2013 06:25 AM
First thing to note is that the method "isAvailable()", in my experience, does not work correctly on OS 5.0 and earlier. It is fixed in OS 6 (at least from my testing).
Because isAvailable() was broken (and for other application reasons), what I have implemented for a socket connection is that each message is preceded by a length. So on the socket connection, I read the length of the next message, and then the actual data. This is done with no blocking - in other words I read the entire message, regardless of size. I recommend you do the same. The message must exist in full somewhere, so it makes no difference whether it is in memory managed by the socket connection or in memory managed by you.
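A length-prefixed scheme like the one above could be sketched as follows. The 4-byte big-endian length field is an assumption for illustration; the actual field width is a design choice. DataInputStream.readFully() loops internally until the whole message has arrived, so partial reads from the socket are handled for you (and DataInputStream is available in CLDC/BlackBerry Java as well as standard Java).

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class LengthPrefixedReader {
    // Reads one length-prefixed message: a 4-byte length followed by that
    // many bytes of payload.
    public static byte[] readMessage(InputStream in) throws IOException {
        DataInputStream din = new DataInputStream(in);
        int length = din.readInt();          // length of the next message
        byte[] message = new byte[length];   // buffer sized to this message
        din.readFully(message);              // waits until all bytes arrive
        return message;
    }

    public static void main(String[] args) throws IOException {
        // Build a fake wire stream: [length][payload].
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        DataOutputStream dout = new DataOutputStream(wire);
        byte[] payload = "notification".getBytes();
        dout.writeInt(payload.length);
        dout.write(payload);
        byte[] msg = readMessage(new ByteArrayInputStream(wire.toByteArray()));
        System.out.println(new String(msg));
    }
}
```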
Note also, until OS 5.0, when you did the read you would get all the data to fill the buffer you had - in other words it was blocking. After OS 6.0, the read can complete without giving you all the data.
In your case, you might be targeting OS 6.0 and later only, so you could use isAvailable() - create a buffer of that size, and read everything. I can't see that it makes any difference whether you have the bytes in memory managed by the socket or in memory managed by you.
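That behavioural difference matters when you size a buffer for a whole message: if a single read() is not guaranteed to fill it, the usual defensive pattern is to loop until the buffer is full. A generic Java sketch (not BlackBerry-specific code) of that loop:

```java
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class FillBuffer {
    // Keeps calling read() until the buffer is completely full, since a
    // single read() may return with only part of the requested data.
    public static void readExactly(InputStream in, byte[] buffer) throws IOException {
        int filled = 0;
        while (filled < buffer.length) {
            int n = in.read(buffer, filled, buffer.length - filled);
            if (n == -1) {
                throw new EOFException("stream ended after " + filled + " bytes");
            }
            filled += n;
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] buf = new byte[5];
        readExactly(new ByteArrayInputStream("12345".getBytes()), buf);
        System.out.println(new String(buf));
    }
}
```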
But in fact, I would argue that the best approach is the one that makes your processing simplest. So for example, if you know that the next message is 200 bytes, then read 200 bytes, and then process that message. Then read the next message.
You could spend a lot of time attempting to manage the buffers to match the underlying socket buffers. I don't know exactly how the underlying BlackBerry socket processing code works, but it doesn't put data directly into your buffers. So let it manage its buffer size to optimize the network, you manage your buffer size to optimize your processing. That will work best for everyone.
08-27-2013 07:30 AM
Thank you Peter for that useful information.
I would like to know if there is any relation between the buffer size and the bytes transferred. Normally I have seen buffer sizes in multiples of 1024 (1024, 2048, 3072...). What if I give a size like 1132 (which is the maximum size of data I am expecting through the input stream)?
08-27-2013 07:40 AM
Go for it.
If you are talking about communication streams, then AFAIK these are normally binary boundaries (2**10, 2**11, etc.) because the packet sets aside a specific number of bits for the data length, and/or there are specific components in the communication path that have memory limitations.
In older OSes, especially ones that 'paged', the page size was also usually a binary boundary.
But for Blackberry Java, which has a single tier memory model (no paging) and an abstraction from the communication layer, I can't see any advantage in trying to use a specific buffer size to match something in the underlying hardware or software. But I can see advantages in using a size that is easy for your application to process with.
08-28-2013 06:47 AM - edited 08-28-2013 06:50 AM
On re-reading one of my posts, I see I used the term 'blocking' in two different ways: to mean that the data is blocked (comes in chunks), and that the processing is blocked - in other words, stopped.
In addition, I was not clear on the OS levels I was referring to.
To remove one of these uses and the OS confusion, I would like to change this:
"Note also, until OS 5.0, when you did the read you would get all the data to fill the buffer you had - in other words it was blocking. After OS 6.0, the read can complete without giving you all the data."
to the following:
"Note also, until OS 6.0, when you did the read you would get all the data to fill the buffer you had - in other words it waited till the buffer was full. In OS 6.0 and later, the read can complete without giving you a full buffer."