I realize that the form factor is considerably different between the iPhone and the iPad. However, guessing that a not-too-power-reduced A4 chip will most likely be in the next iPhone this year, the performance increase in two and a half years is, as Hockenberry says, "Holy crap!". I would like to see how this compares with the Moore's-law performance increase in desktop computing over a similar period.
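As a rough illustration of that comparison, here is a back-of-envelope sketch (my own, not from Hockenberry's post). It takes the timings from the table below and sets the measured speedups against what a naive Moore's-law doubling would predict over the same span; the 18-to-24-month doubling period is an assumption.

```python
# Back-of-envelope sketch: measured iPhone -> iPad speedups vs. a naive
# Moore's-law projection over ~2.5 years. Timings (seconds) are copied
# from the table below; the doubling periods are assumptions.

timings = {
    "100,000 iterations":        (0.000035, 0.015),   # (iPad OS 3.2, iPhone OS 2.0)
    "10,000 divisions":          (0.000010, 0.004),
    "10,000 sin(x) calls":       (0.000012, 0.105),
    "10,000 string allocations": (0.004321, 0.085),
    "10,000 function calls":     (0.000338, 0.004),
}

YEARS = 2.5
# Moore's law is usually quoted as a doubling every 18-24 months.
for months_per_doubling in (18, 24):
    factor = 2 ** (YEARS * 12 / months_per_doubling)
    print(f"Moore's-law doubling every {months_per_doubling} months "
          f"over {YEARS} years: ~{factor:.1f}x")

for test, (ipad_secs, iphone_secs) in timings.items():
    print(f"{test}: measured speedup ~{iphone_secs / ipad_secs:,.0f}x")
```

Even with the generous 18-month doubling, Moore's law alone would suggest only a ~3x gain over that period, which is nowhere near the 12x to 8,750x figures in the table.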
More here:
Benchmarking in your lap by Craig Hockenberry
Native performance: Original iPhone vs. iPad
| Test | iPad (OS 3.2), secs. | iPhone (OS 2.0), secs. | Faster by |
|---|---|---|---|
| 100,000 iterations | 0.000035 | 0.015 | 428x |
| 10,000 divisions | 0.000010 | 0.004 | 400x |
| 10,000 sin(x) calls | 0.000012 | 0.105 | 8,750x |
| 10,000 string allocations | 0.004321 | 0.085 | 20x |
| 10,000 function calls | 0.000338 | 0.004 | 12x |
The most remarkable change is when you compare the original iPhone to the iPad. Using the numbers from my original tests and the results above reveals an improvement of several orders of magnitude in just over 2½ years. I believe the technical term for this is “Holy crap!”
Note: I don’t remember if the original tests were optimized builds, or if it was even possible to get gcc to do them with a jailbreak toolchain. Even if they weren’t optimized like the current tests, the performance increases are still stunning.
All-in-all, a remarkable achievement by Apple’s engineers, especially when you consider that the battery life of these devices has gone up, rather than down.