Originally Posted by MariaNiku
... fine-tuning if that is possible...
No, it's not a matter of fine-tuning. It's a matter of finding another algorithm. That's kind of the point of having a class in algorithms, right? Not optimizing a particular scheme for a particular compiler and a particular computer, but trying to find a better scheme.
Consider the following: Suppose you have an array of 999999 distinct integers in the range 1...1000000. (That's your problem for the value of biggest = 1000000.) You want to find the single missing integer in that range.
Now, here's a possibility:
Calculate the sum of the first million positive integers. There is a closed-form formula, N*(N+1)/2, said to have been discovered by Princeps mathematicorum Carl Friedrich Gauss when he was in elementary school (he was seven years old at the time). See, for example, Math Forum: Ask Dr. Math. You do not need a loop to find the sum of the first N integers. You do need a loop to calculate the sum of the integers in the array. Subtract the sum of the elements in the array from the sum of the first N integers, and you have the missing number at your fingertips.
Note that the sum of the first 1000000 integers is exactly 500000500000, so you will need a long to do the math in Java, since the largest int value is 2147483647.
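Here is a minimal sketch of that idea in Java. The method name and signature are my own invention for illustration; drop it into whatever class your assignment uses:

    // Returns the single missing value from a[], which holds
    // biggest - 1 distinct integers drawn from 1..biggest.
    static int findMissing(int[] a, int biggest) {
        long n = biggest;                 // promote to long before multiplying
        long expected = n * (n + 1) / 2;  // Gauss: sum of 1..biggest
        long actual = 0;
        for (int x : a) {                 // the one unavoidable loop
            actual += x;
        }
        return (int) (expected - actual); // the difference is the missing number
    }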
The method is still O(n) in time and O(1) in space. It might, indeed, be faster than the given algorithm, since there is only one loop rather than two, but the time complexity in Big-O notation is still O(n).
On the other hand, it might not be very much faster, since the loop involves adding into a long sum rather than just doing an integer exclusive-or of the individual elements.
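For what it's worth, if the original scheme is XOR-based, the same one-loop idea works there too, and it avoids long arithmetic entirely. This is only my guess at a variant, not the assignment's code:

    // One-loop XOR variant: XOR together every array element and every
    // value in 1..biggest; matched pairs cancel, leaving the missing number.
    static int findMissingXor(int[] a, int biggest) {
        int res = biggest;              // covers the value the loop never reaches
        for (int i = 0; i < a.length; i++) {
            res ^= a[i] ^ (i + 1);      // pairs each element with 1..biggest-1
        }
        return res;
    }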
Bottom line: Try it.
Then go back and compare the run time for this scheme against the run time for the original. On my 32-bit Linux system, it cut the run time for an array size of 999999 roughly in half (10-15 milliseconds versus 20-30 milliseconds for the original).
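If you haven't timed Java code before, a crude harness along these lines is enough for a comparison at this scale (this assumes an already-filled array a; JIT warm-up matters if you want anything more rigorous):

    long t0 = System.nanoTime();
    int missing = findMissing(a, biggest);   // or findMissingXor(a, biggest)
    long t1 = System.nanoTime();
    System.out.printf("missing = %d, took %.2f ms%n", missing, (t1 - t0) / 1e6);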
Now, whether it is sufficient to satisfy the inflexible requirements of RoboGrader, I have no idea.
I also have no further interest in the problem unless there is feedback from a real, live, knowledgeable human being who can offer some insight.
[/begin Rambling non-sequitur]
Interaction with RoboGrader is one-way (no opportunity for rebuttal), and there is no way to know what is required other than messing around and hoping to get lucky enough to stumble across the specific answer that was programmed into that genius. (How slow is "too slow"? How is "slowness" measured? If speed is measured for the three inputs used for evaluation, are you absolutely sure that the speed advantage scales consistently to a million inputs? Those are the kinds of things I would ask of a real, live, knowledgeable human.)
Unlike, say, Project Euler, where each problem is guaranteed to have exactly one correct answer, one-way responses from RoboGrader like "it's too slow" just leave me cold.
I mean, sometimes algorithms that are marvelously more efficient for 10000000 inputs are actually slower than "worse" algorithms when tested with fewer inputs. And vice versa.
Even a blind hog finds an acorn once in a while, but I don't particularly enjoy being treated like a blind hog. It helps that I'm not taking that course for credit. I have a choice. You probably don't.
[/end Rambling non-sequitur]
So: Good luck.
Cheers!
Z