Saturday, 28 February 2015

Memory management myths in Apple circles

Nothing gets my hackles up more than people perpetuating memory management myths. Apparently there is a new trend in town thanks to Apple, who are deprecating their garbage collector on OS X in favor of Automatic Reference Counting (ARC).
News website Cult of Mac say:
“iOS is twice as memory-efficient as Android. Here’s why... According to Glyn Williams over on Quora, iOS devices run better than Android devices with twice the RAM because Android apps use Java, and need all the extra RAM to do something called garbage collection.”
Another news website, Redmond Pie, say:
“That was basically the same question put to Quora, the social website that gives people a way to ask questions and then have them answered by people who are experts in their respective field. The upvoting system adds a spot of authority tracking to the answers that are provided, and we have a clear winner as far as the question around why Android phones have so much more memory than iPhones.

Enter Glyn Williams.

The response, upvoted by over 2,600 people, included a handy graph and an explanation that involves garbage collection and Java. Basically, Android needs more memory because of the way it handles things.

You can head on over to the Quora question and check out Glyn’s explanation yourself, but what it boils down to is this: Android apps use Java, and as a result Android does something called garbage collection which involves memory being recycled once applications are finished with it. That’s all well and good, and actually performs really well when given plenty of memory to work with. The problems arise when the system is starved of memory.”
They are both referring to the same answer on Quora by a guy called Glyn Williams. His answer is as follows:
“Servicing RAM uses power. So more memory = more power consumption.
Android apps using Java, recycle released memory using garbage collection.

What this diagram shows is that garbage collectors are really awesomely fast if you have a relative memory footprint of 4 or 8.

In other words, you need four or eight times more memory, than you are actually using to be super efficient. But when the memory becomes constrained, that performance goes way down.

This is why Android devices have all that RAM.

iOS does not use this style of garbage collection and does not slow down in constrained memory environments.

So 1GB for iOS results in more performance than 3GB for Android.”

Some problems are immediately obvious with this.

Firstly, RAM typically accounts for only a fraction of a percent of a mobile phone's total power consumption, so power is no excuse for skimping on RAM.

Secondly, the graph is one of seven graphs from a ten-year-old research paper that compared various toy garbage collectors with an alternative scheme that used trial runs to deallocate memory aggressively. The other six graphs in the paper do not substantiate Glyn's claims, i.e. he cherry-picked his graph. Moreover, most of the garbage collectors measured (e.g. Cheney semi-space, stop-and-copy, non-generational mark-sweep) are not representative of anything used on Android. The most realistic collector on the graph is the generational mark-sweep collector, which outperformed all of the others, but even that is not as sophisticated as the concurrent garbage collector employed by Android's latest runtime, ART.

Thirdly, Glyn asserts that GCs must “have a relative memory footprint of 4 or 8” to be really awesomely fast when the graph clearly shows that a relative footprint of around 2.5 gives the only realistic GC on it the best possible performance.

Fourthly, Glyn implicitly assumed that ARC has optimal performance and memory overhead when, in fact, reference counting can be 10x slower than tracing garbage collection, and reference counts themselves take up a lot of room. Ten times slower is literally off the chart here.

Fifthly, Glyn asserts that garbage collection is the reason why Android devices have more RAM, but there is no evidence to support this.

Finally, Glyn asserts that garbage collection is the reason why a 3GB Android device performs like a 1GB iOS device when there is clearly no evidence to support that conclusion, which is, in fact, pure speculation.
I took the opportunity to ask Glyn himself what had given him the impression that Android needs more RAM to attain the same performance as iOS and why exactly he thought that was due to garbage collection. The only concrete evidence Glyn offered was this video showing two devices being switched by hand between a variety of different applications. Most of the applications are written in C++, and the Android device actually won the benchmark.
So this news-worthy gem turned out to be pure speculation.


Alessandro said...

A commenter on Reddit replied (not me, I just found it interesting):

troy harvey said...

You are kidding right? Reference counting essentially has no overhead.

Let's talk about how Apple's reference counting works:
First, the compiler statically analyzes when memory should be released, and inserts the equivalent of a "free()" into the code where needed.

Second, the "malloc()" function equivalent adds a small header to the memory object that counts the references: incremented on each copy (Ref++), and decremented on each free() (Ref--). It then frees the memory when the reference count hits zero.

That is it. There is no more overhead than manual memory management other than the one-cycle Ref++ counting. No memory scans, traces, etc. What you are thinking of as reference counting is not the same thing.