7 Mar 2012

Understanding and Contrasting the Android and iOS Rendering Models

Here's an interesting look at the details of the Android and iOS UI rendering models and how they differ. You should read this if you care about responsiveness and buttery smooth animation, or about the tradeoffs involved: doing work on the CPU vs. the GPU, smoothness vs. memory use, and simplicity of the programming model vs. all of the above.

Disclosure: I work for Google, and I'm more familiar with iOS than Android.



iOS Rendering Model

First, let's start with the iOS rendering model, since it's simpler to understand. Every view on screen, whether a button, a label, or a list, is represented by an instance of the UIView class. A UIView draws its contents in its drawRect: method. This method is invoked when your view is first shown on screen, and the pixels it produces are cached in a layer object. In other words, views are all drawn into offscreen buffers that persist as long as the view itself persists. Once drawn, the buffer is reused until the view's state changes; your drawRect: won't be invoked again.

Consider a toggle control on screen. Its layer is created when the control first appears, and is reused whenever the control needs to be drawn again, say because it's embedded in a view that scrolls. The GPU composites it for each frame of the animation, usually at or close to 60 frames per second (unless something goes bad [1]), keeping the CPU free. This is a big part of what makes iOS so smooth and responsive.

The downside is enormous memory consumption. If a 100x100 view consumes X amount of memory, a 100x100 view with a 100x100 subview consumes 2X. In a deep hierarchy, these backing stores can add up to 80-90 MB, enough to cause an out-of-memory crash.

So you're advised to keep your view hierarchy shallow to the extent possible. Doing so also speeds up the rendering since the GPU has fewer buffers to composite together. For more iOS rendering optimizations, see this post.

There's no way to tell the system to create a single layer for a given view and all its subviews together. After all, if you have a bunch of views that are never animated separately from each other, you don't need separate layers for each of them; a single combined buffer will do. But iOS doesn't give you that flexibility. You get a separate layer per view whether you like it or not.

Note that a view's layer does not include its subviews' contents. The GPU draws the view hierarchy by walking down the hierarchy and compositing each view's layer to the screen, parents before children.

In addition, iOS lets you rasterize a view, in effect creating a single buffer that includes the view and all its descendants. You do this by invoking [view.layer setShouldRasterize:YES]. This buffer is created in addition to the layer used by each view in the hierarchy rooted at the rasterized view. That is, the layers of all the views in question are blended together into one. This makes the memory situation worse, but can speed up rendering, since the GPU can composite one pre-blended buffer rather than compositing each view in the hierarchy. But don't rasterize a view that changes on every frame of an animation: the rasterized image has to be recreated each frame, which defeats the point of rasterizing and in fact makes things worse.

Android Rendering Model

There are two parts to the Android rendering model -- rendering a view, and caching the result in some form for later use.

Let's first look at rendering a view for the first time. Remember we said above that an iOS view is rendered by invoking its drawRect: method? Android has a similar draw() method which, like on iOS, does its job by ultimately invoking various graphics primitives: drawing rectangles, paths, circles, and so on. Except that Android gives you a choice -- you can have these primitives execute on the CPU or on the GPU. In other words, if your draw() function draws a 100x100 rectangle, you can have the CPU loop over all those pixels and set each one to the desired color, or you can have the system issue an appropriate OpenGL command to the GPU to do the task for you. In many cases the GPU can do the job faster and with lower energy consumption (important on phones), but not always, and it doesn't support all the drawing operations the CPU does. This is why Android gives you a choice of where the drawing should take place.
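
Here's a minimal sketch of what this looks like in practice. This is my example, not from the original post: in a custom view you typically override onDraw(), which the framework's draw() calls, and the same code runs against either a software or a hardware-accelerated canvas.

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.view.View;

// A hypothetical custom view. The exact same drawing code runs on
// either a software canvas (CPU) or a hardware-accelerated canvas
// (GPU), depending on how the window or view is configured.
public class BoxView extends View {
  private final Paint paint = new Paint();

  public BoxView(Context context) {
    super(context);
    paint.setColor(Color.RED);
  }

  @Override
  protected void onDraw(Canvas canvas) {
    // canvas.isHardwareAccelerated() tells you which path you got:
    // true means the GPU executes these primitives, false means the
    // CPU rasterizes them pixel by pixel.
    canvas.drawRect(0, 0, 100, 100, paint);
  }
}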

The second choice Android gives you is: once a view has been rendered, should the system cache the rendered image in some way? You have three options here (a code sketch of how you'd pick between them follows the list):
  1. Don't cache. This means that your draw() method will be invoked whenever your view needs to be drawn, including as it animates, so you'd better make sure you can render on the CPU at 60fps. That means draw() must complete in about 16ms, and god help you if the garbage collector kicks in while you're rendering.
  2. Cache as a display list. If you choose this option, the system records the graphics primitives you invoke -- not the output of the operations but the operations themselves, in a vector format. (Remember Windows metafiles, which contain instructions to, say, draw a rectangle, rather than the pixels that form the rectangle?) This list of operations is called a display list. Then, whenever the view needs to be rendered, the system executes the display list, which is an order of magnitude faster than running your drawing code again. In a way, this is the best of both worlds: you get caching and quick rendering like on iOS, without the huge memory cost. Note that a view's display list includes its descendant views' display lists.
  3. Cache as a layer. This is similar to the iOS model -- it tells the system to save the actual pixels of the view, so that it can be drawn much quicker: an order of magnitude quicker than executing a display list, which is itself an order of magnitude quicker than invoking your draw() method. Note that rendering can happen on the CPU or the GPU, as discussed above, so you can choose between a software and a hardware layer.
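
As a rough illustration (my sketch, not from the post; it assumes API 11+, where these constants live on View), the three options map onto View.setLayerType() like this:

import android.view.View;

// My mapping of the three caching options onto setLayerType().
// You'd pick exactly one per view, not call all three like this.
class CachingOptions {
  static void illustrate(View view) {
    // Options 1 and 2: keep no pixel cache. With hardware acceleration
    // off you get option 1 (draw() runs every time the view is drawn);
    // with it on, the framework records a display list (option 2).
    view.setLayerType(View.LAYER_TYPE_NONE, null);

    // Option 3, software flavor: cache the view's pixels in a bitmap
    // rendered by the CPU.
    view.setLayerType(View.LAYER_TYPE_SOFTWARE, null);

    // Option 3, hardware flavor: cache the view's pixels in a texture
    // on the GPU.
    view.setLayerType(View.LAYER_TYPE_HARDWARE, null);
  }
}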

Android layers differ from iOS layers in one important respect: in Android, a view's layer contains that view's image and all its subviews' images -- unlike iOS layers, which contain only that view's own image, and like rasterization on iOS. [2]

This means that if a view is animating in some way, you shouldn't enable layers for any of its ancestors (at least not during the animation). If you do, the layers of all its ancestors will have to be recreated every frame, which again defeats the point of layers and makes things worse. This is the exact same reason why, on iOS, you shouldn't rasterize a view whose descendants are animating in some way. Just keep in mind that an Android layer is the equivalent of an iOS rasterized image.
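
The flip side is a common Android idiom (my sketch, assuming API 11+): give the animating view itself a hardware layer for the duration of the animation, so its pixels are rendered into a texture once and each frame merely re-composites that texture.

import android.animation.Animator;
import android.animation.AnimatorListenerAdapter;
import android.animation.ObjectAnimator;
import android.view.View;

class LayerAnimation {
  // Back the animating view with a hardware layer while it animates:
  // the view is rendered into a GPU texture once, and each frame just
  // re-composites that texture at a new alpha, with no redrawing.
  static void fadeOutWithLayer(final View v) {
    v.setLayerType(View.LAYER_TYPE_HARDWARE, null);
    ObjectAnimator anim = ObjectAnimator.ofFloat(v, "alpha", 1f, 0f);
    anim.addListener(new AnimatorListenerAdapter() {
      @Override
      public void onAnimationEnd(Animator animation) {
        // Drop the layer when the animation finishes, so later content
        // changes don't pay the cost of re-rendering the texture.
        v.setLayerType(View.LAYER_TYPE_NONE, null);
      }
    });
    anim.start();
  }
}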


So, to summarize the whole discussion: Android lets you decide how to render a view for the first time (on the CPU or on the GPU) -- a choice iOS doesn't give you. Once the view has been rendered, you have the following choices regarding caching:
  1. Don't cache -- invoke draw() each time the view needs to be redrawn. This option is available only on Android.
  2. Cache it as a display list. This option is available only on Android.
  3. Cache an image of the view alone, excluding its subviews. You have no choice here -- Android does not support this option, and iOS forces it on you.
  4. Cache an image of the view and all its subviews together as a single buffer. This is optional on both platforms and, on iOS, is complementary to (3) above rather than exclusive.

Of course, some general advice applies to both platforms:
  • Keep hierarchies shallow.
  • Both Android and iOS are fill-rate limited, which means the bottleneck is the number of pixels the GPU can draw on screen per frame at 60fps. Translucent views need to be composited with the views underneath them, so each visible pixel on screen is produced by compositing multiple pixels, with the result that you burn through more of your fill-rate budget. Exceed your budget, and the GPU can no longer composite each frame in time, leading to jerky scrolling or other animations. (One Android-side mitigation is sketched after this list.)
  • For more iOS rendering optimizations, see this post.
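
As an example of protecting your fill-rate budget, one well-known trick on Android (my example, not from the post) is to drop the window's default background when your content view is opaque and covers the whole window, so the system never draws pixels that are guaranteed to be painted over:

import android.app.Activity;
import android.os.Bundle;

public class MainActivity extends Activity {
  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // R.layout.main is a hypothetical layout whose root view is opaque
    // and fills the window.
    setContentView(R.layout.main);
    // The window's default background would be drawn first and then
    // entirely painted over; removing it avoids burning fill rate on
    // pixels that are never visible.
    getWindow().setBackgroundDrawable(null);
  }
}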



Credits:
  1. WWDC 2011 talk on UIKit Rendering.
  2. Google I/O talk on Accelerated Android Rendering.
  3. Email discussions with Romain Guy.

[1] I've encountered a case of janky scrolling on the iPad. It turned out that some of the views were translucent, which meant they had to be blended with the views behind them on every frame of the animation. The iPad GPU wasn't up to the task, so the scrolling was jerky. The solution was to get rid of the translucency by manually blending the translucent view with its background ahead of time, producing a single opaque view to hand over to the system to display.

[2] This is because it's impossible in Android to ask a view to draw itself without drawing its children. This is a deliberate API design decision. A view can define:
@Override
protected void dispatchDraw(Canvas canvas) {
  drawChild(canvas, getChildAt(0), getDrawingTime());
  // ... draw some of the view's own stuff ...
  drawChild(canvas, getChildAt(1), getDrawingTime());
  // ... draw more of the view's own stuff ...
  drawChild(canvas, getChildAt(2), getDrawingTime());
}
