Ben Blum-Smith asked the question I’ve been pondering since my last post:
I’m a little confused… if a real matrix is not symmetric, there is no guarantee of a real eigenvector (e.g. take a 2-d rotation matrix), so what did you mean “I expect that other arguments along similar lines are possible for non-symmetric real matrices”?
As it turns out, I’ve been spending quite some time repairing my own understanding of linear algebra. What I should have said is, I expect to find similar arguments for normal matrices.
Normal is a terrible name, of course. Are other matrices weird? Non-conformist?
Unfortunately, I haven’t figured out how to pin down a similarly elementary-flavored proof for normal matrices. Instead, I’m going to present an argument for another branch of the family tree (so to speak): orthonormal matrices.
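Ben’s rotation example is actually a handy sanity check on this family tree. A quick numeric sketch (using NumPy; this check is mine, not part of the original argument) confirms that a 2-D rotation matrix is orthonormal, and therefore normal, yet still has no real eigenvector:

```python
import numpy as np

# 2-D rotation by 30 degrees: Ben's example of a real matrix
# with no real eigenvector.
theta = np.pi / 6
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# A is normal: it commutes with its transpose.
print(np.allclose(A @ A.T, A.T @ A))   # True

# ...but it is not symmetric.
print(np.allclose(A, A.T))             # False

# Its eigenvalues are a complex-conjugate pair on the unit circle,
# so no real eigenvector exists.
print(np.linalg.eigvals(A))
```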
But first, I realized my last post might have jumped in a bit too deep all at once, so I’m going to try to err the other way and cover some more basic linear algebra this time as well. Bear with me and let me define orthonormal matrices before I talk about their eigenanalysis. I’ll give two definitions and suggest why they’re equivalent.
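The two definitions themselves are below the fold, but the standard pair for a matrix Q (its columns form an orthonormal set, i.e. QᵀQ = I, and equivalently Q preserves dot products) is easy to check numerically. This sketch assumes those are the two definitions in question:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random orthonormal matrix via QR factorization.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))

# Definition 1: the columns of Q are orthonormal, i.e. Q^T Q = I.
print(np.allclose(Q.T @ Q, np.eye(3)))          # True

# Definition 2: Q preserves dot products (hence lengths and angles).
# The two are equivalent because (Qx) . (Qy) = x^T (Q^T Q) y,
# which equals x . y exactly when Q^T Q = I.
x = rng.standard_normal(3)
y = rng.standard_normal(3)
print(np.allclose((Q @ x) @ (Q @ y), x @ y))    # True
```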
Read the rest of this entry »
I’m a firm believer that if you can’t understand a concept without resorting to complex numbers and invoking the fundamental theorem of algebra, then you don’t have a deep understanding of that concept. This is especially a problem in teaching undergraduate mathematics (particularly to scientists and engineers), since some of the most crucial techniques, eigenanalysis and Fourier transforms among them, make heavy use of complex numbers. The result of the “complex” arguments presented is that most students have a very difficult time with these subjects. Ultimately, many treat them as magical tools that are hardly worth the time invested to understand.
Such a perspective is of course hogwash.
I know for a fact that a much more geometric (and in my opinion illuminating) interpretation of eigenanalysis is possible. However, I had never seen a clear exposition from first principles that one could reasonably expect to present in a beginning linear algebra course. Every explanation of the geometric perspective that I’ve seen has been post hoc, or has relied on machinery like Lagrange multipliers (an unwelcome diversion).
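The geometric claim is easy to poke at numerically: for a real symmetric matrix, the unit vector stretched the most is an eigenvector, and plain power iteration finds it using nothing but repeated matrix-vector products and renormalization, with only real arithmetic throughout. A small sketch (my own illustration with an arbitrary example matrix, not the post’s proof):

```python
import numpy as np

# A small symmetric matrix with eigenvalues 1, 2, and 4
# (chosen for illustration).
S = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])

# Power iteration: repeatedly apply S and renormalize. For a
# symmetric matrix this homes in on the direction stretched the
# most -- the top eigenvector -- with no characteristic polynomial
# and no complex numbers.
v = np.ones(3)
for _ in range(100):
    v = S @ v
    v /= np.linalg.norm(v)

lam = v @ S @ v   # Rayleigh quotient: the stretch factor along v

print(round(lam, 6))                # 4.0, the largest eigenvalue
print(np.allclose(S @ v, lam * v))  # True: v is an eigenvector
```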
I would not be surprised if someone else has come up with this proof before. However, I was not able to quickly find a reference for it. It makes no appeal to (a) the characteristic polynomial, (b) matrix polynomials, (c) complex numbers, or (d) Lagrange multipliers. I have only treated the real-symmetric case, although I expect that other arguments along similar lines are possible for non-symmetric real matrices.
Read the rest of this entry »
Update: Since writing this post, I’ve changed my mind and decided rather strongly against immediate-mode GUIs. At this point, I would not advise going down this path, but I’m going to leave the post here for posterity.
I just got this hot link from the c0de517e blog (now on my blogroll):
The video’s 40 minutes long, so—let’s be honest—you’re not going to watch it. I’ll try to give you my best explanation of what I heard, but I’ll probably butcher the idea. However, I’m planning to try this in my next research project, so maybe I’ll have some more concrete experiences to write up in the future.
Read the rest of this entry »
Demetri Terzopoulos gave the opening keynote titled Biomechanical and Artificial Life Simulation of Humans for Computer Animation and Games, after which there were sessions on simulation, animation and GPU techniques.
I hadn’t heard of it before, but Demetri’s group recently released this ambitious paper: Comprehensive Biomechanical Modeling and Simulation of the Upper Body
Some particularly cool papers from day 1:
Jonathan Cohen, Sarah Tariq, and Simon Green. Interactive Fluid-Particle Simulation using Translating Eulerian Grids. link
The talk started with some very useful high-level observations and motivation. Jonathan emphasized that our goal should be scalable physics, not just fast, realistic physics. Sequential algorithms are insufficient. Parallel algorithms alone are also insufficient. We want parallel algorithms that produce better-looking results as hardware gets better. There were some cool ideas here: Use Galilean invariance to replace translations of the simulation grid with fluctuating boundary conditions (modeling the entrance/exit of matter from the volume). Apply the fluid simulation to particles with less influence as you move towards the edge of the grid. This gets around the fluid-in-a-box problem. Cool tidbit: they used this technique for jetpacks and one of the guns in the video game Dark Void.
Christian Miller, Okan Arikan and Donald Fussell. Frankenrigs: Building Character Rigs From Multiple Sources. link
(Full disclosure: I know all of the authors fairly well from my time at UT Austin.) This is an end-to-end system for (semi-)automatically rigging and skinning character meshes. I didn’t know this, but rigging and skinning are tedious for experts and nearly impossible for everyone else. The idea was to take a data-driven approach and leverage previously skinned and rigged character meshes (of which game and movie studios have plenty lying around). The input meshes are cut into limbs, and then appropriate limbs are matched to the target mesh and stitched together. Hence the name: Frankenrigs. One very interesting observation: the method handles non-manifold, non-closed meshes, which is the way almost all artists’ meshes come. Christian actually ran into this problem when comparing to previous work: he couldn’t, because the past work didn’t handle non-closed meshes.
Samuli Laine and Tero Karras. Efficient Sparse Voxel Octrees. link
This paper had a really killer demo. They managed to voxelize and render at reasonably high fidelity the NVIDIA street scene and the Sibenik cathedral scene. Along with the Gigavoxels paper, this makes it look like adaptive resolution voxel grids may be gaining a lot of traction. The contributions here were mainly an efficient ray marching algorithm, and effectively handling and filtering normals. The authors have open sourced the code base, so if you’re looking to get your hands dirty with some voxel rendering and/or research, this might be a good place to start.
I’m in Washington D.C. this weekend for Interactive 3D (I3D) 2010. I’ve promised myself to try and blog some comments on the whole affair. Hopefully I’ll be able to do this every night, but failing that, I’ll post a summary afterwards.
Also, there is a lot of snow here.
This is the fourth post in a series of OpenGL “tutorials” explaining how to get a triangle on your screen using as few lines of code, in as many languages, and on as many platforms as possible.
Sorry for the long wait, but I think I have something of a gem for you in this installment: Ocaml. There are actually two reasonable options out there for getting OpenGL up and running in Ocaml. I used a somewhat old library called lablGL. It provides GLUT bindings as well (yay!) but only exposes OpenGL 1.2 (boo!), so writing shader programs could be a tad difficult. The other option out there is called GLCaml, and it supports OpenGL up through version 2.1. However, it looks like the GLCaml interface was built by just automatically scraping some C files, which sounds a bit sketchy to me.
Read the rest of this entry »