I'm working on an application whose task is to download and display pictures. I'm planning to track the progress of the download and display it, so I've written the following code:
```java
// java.net.URL imageUrl initialization here
URLConnection connection = imageUrl.openConnection();
try (BufferedInputStream is = new BufferedInputStream(connection.getInputStream())) {
    int size = connection.getContentLength();  // Total size of the file, in bytes (-1 if unknown)
    int chunk = Math.max(size / 100, 1);       // We divide the file into 100 parts;
                                               // guard against 0 for files under 100 bytes

    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();  // Stream to write bytes to
    BufferedOutputStream os = new BufferedOutputStream(outputStream);  // Wrapper to increase efficiency

    // Standard process of reading from the input stream and writing to the output stream
    byte[] buffer = new byte[chunk];
    int read;
    while ((read = is.read(buffer)) != -1) {
        os.write(buffer, 0, read);
        // Progress tracking; it does not influence the download itself
        if (model != null)
            model.setOperationProgress(model.getOperationProgress() + 1);
    }
    os.flush();  // Push any bytes still buffered in os into outputStream

    // Converting the downloaded bytes to a BufferedImage and returning it
    ByteArrayInputStream stream = new ByteArrayInputStream(outputStream.toByteArray());
    BufferedImage image = ImageIO.read(stream);  // Returns null if no reader handles the format,
                                                 // so retrying in a loop would never terminate
    return image;
}
```
As you can see, I divide the download job into 100 chunks and process them sequentially, notifying the model in the process (the model is another part of the application; it's not related to the question, and the line referencing it can be ignored).
It works, but I wonder whether it's the most efficient way to do things. Does it lead to significant overhead during the download? The approach seems pretty "brute-force" — sizing the read buffer to one hundredth of the file and counting reads — so I have some doubts about it. Are there better solutions?
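For reference, the main alternative I'm comparing against would look something like the sketch below: a fixed-size buffer (independent of the file size), with the percentage computed from the running byte count instead of the number of reads. This is a sketch under my own assumptions — `readWithProgress`, the 8 KiB buffer, and the `IntConsumer` callback are names I made up for illustration, not part of my actual application:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.function.IntConsumer;

public class ProgressDownload {

    /**
     * Reads the whole stream into a byte array using a fixed-size buffer,
     * reporting progress as a percentage derived from bytes read so far.
     * totalSize may be -1 when the server does not send Content-Length;
     * in that case no progress is reported.
     */
    static byte[] readWithProgress(InputStream in, int totalSize, IntConsumer progress)
            throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[8192];  // fixed buffer, independent of file size
        int read;
        long totalRead = 0;
        int lastPercent = -1;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
            totalRead += read;
            if (totalSize > 0) {
                int percent = (int) (totalRead * 100 / totalSize);
                if (percent != lastPercent) {  // notify only when the value changes
                    progress.accept(percent);
                    lastPercent = percent;
                }
            }
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Exercise the method with an in-memory stream standing in for the network
        byte[] data = new byte[20000];
        byte[] result = readWithProgress(new ByteArrayInputStream(data), data.length,
                p -> System.out.println("progress: " + p + "%"));
        System.out.println("bytes: " + result.length);
    }
}
```

The difference I care about is that here the progress value is exact regardless of how many bytes each `read` call actually returns, whereas my version assumes every read fills the whole buffer.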