Hello folks.
Small question that's been a real pain.
Anyway, the reason I've asked this question is this sample code.
Server side (the writer):
output.write(packet.getId());
output.write((packet.getBuffer().length >>> 24) & 0xFF);
output.write((packet.getBuffer().length >>> 16) & 0xFF);
output.write((packet.getBuffer().length >>> 8) & 0xFF);
output.write(packet.getBuffer().length & 0xFF);
output.write(packet.getBuffer());
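For what it's worth, those four manual shift-and-mask writes are the same big-endian four bytes that DataOutputStream.writeInt produces. A rough standalone sketch (the ByteArrayOutputStream stands in for the socket stream, and the id value 1 is just illustrative):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class WriteDemo {
    public static void main(String[] args) throws IOException {
        byte[] buffer = new byte[160291]; // payload of the size from my test

        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        DataOutputStream output = new DataOutputStream(sink);

        output.write(1);                // packet id (illustrative value)
        output.writeInt(buffer.length); // 4-byte big-endian length prefix
        output.write(buffer);           // payload

        // 1 id byte + 4 length bytes + payload
        System.out.println(sink.size()); // 160296
    }
}
```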
Packet is a wrapper I use to help me construct packets:
public Packet(int id, byte[] buffer);
To make sure I'm writing the correct number of bytes, I check the length inside the method:
System.out.println(packet.getBuffer().length);
Now an example of the client code:
System.out.println(input.available());
Here I get significantly smaller numbers in comparison. The server will write a byte array of 160291 bytes (confirmed by the printout), and the client will report something like 17391 as available.
*Other notes:
After I read off all the available bytes, the input stream fills up with another chunk, again substantially smaller than the buffer size. Eventually the lengths of all the incoming chunks add up to the number of bytes I sent in the first place.
This means I can work around this small pain, but I'm just curious why the input stream isn't filled with all 160291 bytes on the first read.
That's where my question lies.
All possible answers welcomed!
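Edit: in case it helps, here's a rough sketch of the keep-reading-until-complete workaround I mean, using DataInputStream.readFully, which blocks until the whole prefixed length has arrived instead of trusting available(). The wire data is simulated with a ByteArrayInputStream so the snippet runs standalone; the id value and sizes are just the ones from my test:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class ReadDemo {
    public static void main(String[] args) throws IOException {
        // Simulated wire data: 1 id byte, 4-byte big-endian length, payload.
        byte[] payload = new byte[160291];
        byte[] wire = new byte[5 + payload.length];
        wire[0] = 1; // packet id (illustrative value)
        wire[1] = (byte) (payload.length >>> 24);
        wire[2] = (byte) (payload.length >>> 16);
        wire[3] = (byte) (payload.length >>> 8);
        wire[4] = (byte) payload.length;

        DataInputStream input = new DataInputStream(new ByteArrayInputStream(wire));
        int id = input.read();        // packet id
        int length = input.readInt(); // the 4-byte length prefix
        byte[] body = new byte[length];
        input.readFully(body);        // blocks until all 'length' bytes are read

        System.out.println(id + " " + length); // 1 160291
    }
}
```

On a real socket, readFully simply waits across however many TCP segments the payload was split into, which is why the per-read available() counts never matter.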