Update: Since writing this post, I’ve changed my mind and decided against immediate mode GUIs rather strongly.  At this point, I would not advise going down this path, but I’m going to leave the post here for posterity.

I just got this hot link from the c0de517e blog (now on my blogroll):


The video’s 40 minutes long, so—let’s be honest—you’re not going to watch it. I’ll try to give you my best explanation of what I heard, but I’ll probably butcher the idea.  However, I’m planning to try this in my next research project, so maybe I’ll have some more concrete experiences to write up in the future.

Ok, here goes. A traditional GUI system, such as Tcl/Tk, Qt, FLTK, Cocoa, Carbon, or the Windows GUI system, works something like the following: create a window object.  Then, create widgets (GUI elements/objects, e.g. buttons, text boxes, sliders), which are registered as children of the window.  Keep making more widgets, and organize them in a hierarchy.  Now, every time you create a widget, you have to do a bunch of things: (a) you have to construct the object in some fashion; (b) you have to set some parameters (e.g. button text, position, size); (c) you have to register a callback, i.e. code that is invoked when (for example) the button is clicked (note: I’ve been using callbacks for drawing the triangle in my GLUT/OpenGL triangle series); and finally, (d) you have to tell the widget to display itself.
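To make steps (a) through (d) concrete, here’s a minimal sketch of the retained-mode pattern in C. The toolkit and all the names here (`Button`, `dispatch_click`, `quit_clicked`) are hypothetical, not any real library’s API; a real toolkit’s event loop would do the hit-testing and invoke your callback for you.

```c
/* Hypothetical retained-mode sketch: construct the widget, set its
   parameters, register a callback, mark it shown. */

typedef struct Button {
    const char *text;
    int x, y, w, h;                         /* (b) parameters            */
    void (*on_click)(struct Button *self);  /* (c) registered callback   */
    int shown;                              /* (d) displayed or not      */
} Button;

/* Application state the callback mutates, far away from the code that
   created the button -- the "fragmented across scopes" problem. */
static int g_clicks = 0;

static void quit_clicked(struct Button *self) {
    (void)self;
    g_clicks++;   /* react to the click here, whenever the toolkit calls us */
}

/* The toolkit's event loop would call something like this on a mouse
   press: hit-test the widget's rectangle, then invoke its callback. */
static void dispatch_click(Button *b, int px, int py) {
    if (b->shown && b->on_click &&
        px >= b->x && px < b->x + b->w &&
        py >= b->y && py < b->y + b->h)
        b->on_click(b);
}
```

Notice that the code that creates the button, the code that reacts to it, and the code that routes events to it all live in different places, which is exactly the fragmentation described below.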

This is a lot of trouble and code. The rampant use of callbacks ends up fragmenting your UI code across multiple lexical scopes.  It can also introduce concurrency issues, since you have no control over when these callbacks get invoked.   Dynamically adapting your GUI to your user’s behavior (e.g. switching menu screens in a game) is difficult, since UI elements have to be created, destroyed, registered, unregistered, hidden, displayed and so on.  This proliferation of seemingly decoupled widgets can also encourage spreading your GUI code across many different files and modules—which can often be a detriment to understanding.

The key idea of immediate mode GUIs is to centralize all of your GUI code in a single DoGUI() function that gets called every frame.  DoGUI() works just like Draw() does right now: you, or some system, call DoGUI() as the first item in your render loop.  Inside DoGUI(), you call routines like DoButton(). Just like with a DrawTriangle(…) or DrawLine(…) function, you pass in the vital data for the button in question, and DoButton() returns a boolean telling you whether the button was clicked this frame or not.  Using a simple conditional, you can react to a button push: if(DoButton(…)) { handle button click }.  DoGUI() is just a list of all the GUI widgets you “do”.  It’s up to you how you choose to organize the data for the widgets; that data belongs to you and your application, not the GUI library.

This approach makes it really easy to remove or add GUI elements.  At about 22:30 into the video Casey (the Molly Rocket guy in the video) explains, “I can do culling… I don’t have to delete the item, I don’t have to disable it. I don’t have to hide it. I just simply don’t call it that frame. I don’t have to know how many of these there are. I don’t have to create [UI elements] for it ahead of time.”  That’s really cool. You can basically change the entire GUI from frame to frame without having to fight the GUI toolkit.
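In code, that “culling” is nothing more than an `if` around the call. A sketch, with a stub `DoButton` that just counts how many widgets were “done” this frame (again, hypothetical names):

```c
/* Stub standing in for "draw + hit-test this widget this frame". */
static int g_buttons_done = 0;

static int DoButton(const char *label) {
    (void)label;
    g_buttons_done++;
    return 0;
}

/* A widget only exists on frames where you call it. No create/destroy,
   no hide/show, no register/unregister. */
static void DoGUI(int show_debug_menu) {
    DoButton("Play");
    if (show_debug_menu)       /* cull by simply not calling it */
        DoButton("Dump state");
}
```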

Casey makes some more interesting points. This approach integrates well with 3d and components that don’t know about or care about UI stuff.  Take for example a rendering module/library. Talking about 3d objects, he points out that “almost always they have some way of doing ray picking. And they can always render themselves. That’s all you need to turn them into an immediate mode widget.” So now we can have 3d buttons and interface elements exposed via the same API as our 2d GUI elements.
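As a rough illustration of that point, here’s a hedged sketch: if an object can ray-pick itself and render itself, wrapping it as an immediate-mode widget is a few lines. The sphere stand-in, the `Do3dButton` name, and the pick-ray parameters are all my own assumptions, not anything from the video.

```c
typedef struct { float x, y, z; } Vec3;

/* Does a ray from `origin` along normalized direction `dir` hit a sphere?
   (Closest-approach test: project center onto the ray, compare distance.) */
static int RayHitsSphere(Vec3 origin, Vec3 dir, Vec3 center, float radius) {
    Vec3 oc = { center.x - origin.x, center.y - origin.y, center.z - origin.z };
    float t = oc.x * dir.x + oc.y * dir.y + oc.z * dir.z;
    if (t < 0)
        return 0;  /* sphere is behind the ray */
    float px = origin.x + t * dir.x - center.x;
    float py = origin.y + t * dir.y - center.y;
    float pz = origin.z + t * dir.z - center.z;
    return px * px + py * py + pz * pz <= radius * radius;
}

/* Same shape as a 2d DoButton: "draw" the object (stubbed here) and
   report whether it was clicked this frame via the pick ray. */
static int Do3dButton(Vec3 center, float radius,
                      Vec3 pick_origin, Vec3 pick_dir, int mouse_clicked) {
    /* a real version would render the object's mesh here */
    return mouse_clicked && RayHitsSphere(pick_origin, pick_dir, center, radius);
}
```

The 2d and 3d cases now share the same call-and-test-the-return-value pattern, which is the whole point.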

Something I did not know, but I’m very glad to know about now:  Speaking about traditional GUIs in modeling/design applications, Casey says “…to the point where even in professional 3d applications, often you have two totally separate systems: one for the 2d user interface, one for the 3d user interface. That’s crap.” Yes. Yes it is.

Immediate mode GUIs sound like a really good idea for real-time applications in particular (design/games), since you’re already redrawing every frame. You can do crazy stuff like floating buttons over characters etc.

I haven’t looked at it, but someone at Nvidia has made an immediate mode GUI library available that they use for their graphics demos:

So if you want to try it out, maybe that’s a good place to start looking.