Filed under SIGGRAPH
I started off the day in a technical papers session entitled “Image Collections & Video” which included four papers. This is probably my favorite session at SIGGRAPH…it always has new tech that will likely impact me in the near future.
The first paper was “Factoring Repeated Content Within and Among Images.” This is a new image compression technique that looks for repeating patterns in an image (an “epitome”) and only stores the pattern once. A “transform map” reconstructs the original image from epitomes.
Results of this technique compared to JPEG 2000 show significant improvements in image quality using the same amount of data (although the test images also contained significant repetition). This technique is also useful for minimizing RAM utilization, because you can render parts of an image without reconstructing the full image (see “Requires XXX for rendering” in photo above).
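To make the idea concrete, here is a toy sketch of the factoring described above. It is illustrative only, not the paper's actual codec: the image is stored as a small “epitome” holding each repeated pattern once, plus a “transform map” of per-block offsets into the epitome; reconstruction just copies blocks back out. The block size, layout, and values are all made up for the example.

```python
BLOCK = 2  # block size in pixels (hypothetical choice)

# A 2x4 epitome holding the two unique 2x2 patterns found in the image.
epitome = [
    [1, 2, 9, 8],
    [3, 4, 7, 6],
]

# Transform map: for each 2x2 block of the 4x4 output image, the
# (row, col) offset of its source block inside the epitome.
transform_map = [
    [(0, 0), (0, 2)],
    [(0, 2), (0, 0)],
]

def reconstruct(epitome, tmap, block):
    """Rebuild the full image by copying blocks out of the epitome."""
    rows = len(tmap) * block
    cols = len(tmap[0]) * block
    out = [[0] * cols for _ in range(rows)]
    for bi, row in enumerate(tmap):
        for bj, (sr, sc) in enumerate(row):
            for dr in range(block):
                for dc in range(block):
                    out[bi * block + dr][bj * block + dc] = \
                        epitome[sr + dr][sc + dc]
    return out

image = reconstruct(epitome, transform_map, BLOCK)
```

Note that because each block is looked up independently, you could decode just one block on demand — which is where the RAM savings mentioned above come from.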
Photo Tourism takes a large collection of photos (from Flickr, for example) of a subject and combines them to allow 3D interaction.
The Finding Paths paper presents more natural ways to interact with these photos.
The best navigation depends on the types of photos. If photos are taken from many locations of the same subject, then “orbit” is useful. If photos are taken from mostly a single location, of various subjects, then “panoramas” are used.
Path planning used for the transition between photos takes into account routes where photos exist. Thus, the transition from outside a building to inside doesn’t go through a wall, but instead follows the path where people took photos via walkways.
Appearance stabilization fixes color differences between photos of the same subject.
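The orbit-versus-panorama heuristic above can be sketched in a few lines. This is my own hypothetical illustration (the paper's actual decision logic is surely richer): if the camera positions are spread out around the subject, offer an orbit control; if they cluster at one viewpoint, offer a panorama. The threshold value is an arbitrary assumption.

```python
import math

def pick_navigation_mode(camera_positions, spread_threshold=0.5):
    """Return 'orbit' if camera positions are spread out around a
    subject, 'panorama' if they cluster at a single viewpoint."""
    n = len(camera_positions)
    cx = sum(x for x, _ in camera_positions) / n
    cy = sum(y for _, y in camera_positions) / n
    # Root-mean-square distance of the cameras from their centroid.
    spread = math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2
                           for x, y in camera_positions) / n)
    return "orbit" if spread > spread_threshold else "panorama"

# Cameras ringed around a statue -> orbit.
ring = [(math.cos(t), math.sin(t)) for t in (0.0, 1.5, 3.1, 4.6)]
# Cameras all standing on one rooftop -> panorama.
rooftop = [(0.0, 0.0), (0.05, 0.02), (0.02, 0.04)]
```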
Check out the video above to see this in action…it is amazing.
Last year at SIGGRAPH, a paper presented a technique called “seam carving” that retargets the dimensions of an image without distortion. The least “important” parts of the image were removed to shrink the image, or duplicated to grow the image.
This year, seam carving is applied to video in the paper called “Improved Seam Carving for Video Retargeting.”
With this technique, you always have video that fits your screen without black bars (letterbox) or distortion (scaling). I prefer letterbox on my TV, but on a small screen (like a cell phone), I can see where this technique would be useful.
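For reference, here is a minimal sketch of single-image seam carving, the 2007 technique this video paper builds on: compute a per-pixel “energy” (a simple gradient magnitude here), find the vertical path of least total energy by dynamic programming, and remove it to shrink the width by one pixel. The energy function is a simplification; the video paper's contribution (a graph-cut formulation that keeps seams coherent across frames) is not shown.

```python
def energy(img):
    """Simple gradient-magnitude energy (sum of absolute differences)."""
    h, w = len(img), len(img[0])
    e = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx = abs(img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)])
            dy = abs(img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x])
            e[y][x] = dx + dy
    return e

def find_vertical_seam(e):
    """DP: cost[y][x] = e[y][x] + min of the three neighbors above."""
    h, w = len(e), len(e[0])
    cost = [row[:] for row in e]
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(x - 1, 0), min(x + 2, w)
            cost[y][x] += min(cost[y - 1][lo:hi])
    # Backtrack from the cheapest bottom-row pixel.
    seam = [min(range(w), key=lambda x: cost[h - 1][x])]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(x - 1, 0), min(x + 2, w)
        seam.append(min(range(lo, hi), key=lambda c: cost[y][c]))
    seam.reverse()
    return seam  # seam[y] = column to remove in row y

def remove_seam(img, seam):
    return [row[:x] + row[x + 1:] for row, x in zip(img, seam)]

# On this hand-made energy map the middle column is cheapest:
seam = find_vertical_seam([[3, 1, 4], [5, 0, 6], [7, 2, 8]])  # -> [1, 1, 1]
```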
The last paper in this session was “Unwrap Mosaics: A New Representation for Video Editing.”
This technique captures a 2D texture representation of an object in a video (an “Unwrap Mosaic”).
Once you have the texture, you can update it to add/remove/change features. The modified unwrap mosaic is then reapplied to the video so that the changes look like they are part of the object. Check out the video clip above to see it in action.
These types of operations typically require recreating a 3D model to make the changes and then reintroducing them into the video, synchronized with the original object. Unwrap Mosaic greatly simplifies this process because all the work by the user is done in 2D using an image editor.
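A toy illustration of that workflow: edit the flat texture once, then push the edit back through a per-frame mapping so it appears on the object in every frame. The hard part the paper automates — recovering the pixel-to-mosaic mapping from the video — is simply hand-made data here.

```python
# mosaic: the object's texture, unwrapped to a flat 2D grid.
mosaic = [
    [1, 2],
    [3, 4],
]

# For each video frame, a mapping from frame pixel -> mosaic texel.
# (Hand-made for illustration; the paper recovers this automatically.)
frames = [
    {(0, 0): (0, 0), (0, 1): (0, 1)},  # frame 0 sees the top row
    {(1, 0): (1, 0), (1, 1): (1, 1)},  # frame 1 sees the bottom row
]

def render(mosaic, mapping, height=2, width=2, background=0):
    """Re-render one frame by looking each visible pixel up in the mosaic."""
    out = [[background] * width for _ in range(height)]
    for (fy, fx), (my, mx) in mapping.items():
        out[fy][fx] = mosaic[my][mx]
    return out

# One 2D edit to the texture...
mosaic[0][1] = 9
# ...propagates to every frame where that texel is visible.
edited = [render(mosaic, m) for m in frames]
```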
Coming to your favorite compositing package soon (hopefully!).
There is a lot of speculation about what Larrabee is and isn’t and how it may or may not change the graphics hardware industry.
From what I understand, Larrabee is a graphics card. But unlike all other graphics cards, its GPU is based on x86 CPU cores.
The advantage of Larrabee: software that works on a PC can be compiled to work on Larrabee unchanged. For GPUs (like Nvidia’s GeForce series), PC software must be rewritten to take advantage of the GPU or to avoid its limitations.
As you can see from the chart above, traditional GPUs (DX8–DX10) let you program three stages of the graphics pipeline (vertex shading, geometry shading, and pixel shading).
Larrabee, on the other hand, allows full programmability of the graphics pipeline or a completely different graphics pipeline.
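One way to picture that contrast (a loose sketch in Python rather than real shader or x86 code — the stage names and data are made up): on a traditional GPU you plug programs into fixed slots of a hard-wired pipeline, while on a fully software pipeline the pipeline itself is just a list of ordinary functions you can reorder, replace, or throw away.

```python
def vertex_stage(verts):
    # e.g. transform to screen space (here: a trivial scale)
    return [(x * 2, y * 2) for x, y in verts]

def pixel_stage(fragments):
    # e.g. shade each fragment (here: tag it with a color)
    return [(p, "red") for p in fragments]

# Traditional model: the stage set and order are fixed by hardware;
# only the bodies of the slotted stages are programmable.
FIXED_PIPELINE = [vertex_stage, pixel_stage]

def run(pipeline, data):
    for stage in pipeline:
        data = stage(data)
    return data

# Software model: the pipeline is just data, so a renderer that doesn't
# fit the standard stages (say, a ray tracer) supplies different functions.
def ray_trace_stage(rays):
    return [("hit", r) for r in rays]

CUSTOM_PIPELINE = [ray_trace_stage]
```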
Of course, everything sounds great in theory. It will be interesting to see how perception changes once we see some real hardware.
Here’s my prediction for Larrabee…
- GPUs will be faster than Larrabee for graphics that fit the standard graphics pipeline
- Larrabee will be faster than GPUs for tasks that don’t fit the standard graphics pipeline (like ray tracing)
- Larrabee will be faster than a CPU for highly parallel tasks
- Larrabee fits nicely between a GPU at one extreme and a multi-core CPU at the other
- Larrabee will matter most to people who currently use racks of PCs to do their work
Larry Seiler from Intel (pictured on the left) happened to sit at my table for lunch one day at SIGGRAPH. He presented the Larrabee paper at SIGGRAPH.
I asked him where the name “Larrabee” came from.
He said Larrabee was from the TV show “Get Smart.” The chief’s assistant, Larrabee, didn’t get much respect. The Larrabee project has been in the works at Intel for a while and didn’t get much respect initially.