Johannes Kopf, Chi-Wing Fu, Daniel Cohen-Or, Oliver Deussen, Dani Lischinski, Tien-Tsin Wong
http://johanneskopf.de/publications/solid/index.html
Solid Texture Synthesis from 2D Exemplars,
ACM Transactions on Graphics (Proceedings of SIGGRAPH 2007)
Wednesday, July 22, 2009
a) what did you find interesting or novel about the paper?
This effect is mind-blowing.
I enjoyed the histogram matching part, because I did some image histogram specification for a statistics project. It involved modifying an image's histogram to match a reference histogram from another image. Very simple compared to this.
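For anyone who hasn't seen histogram specification before, the basic operation the comment describes (remapping one image's values so its histogram matches a reference image's) can be sketched in a few lines of NumPy. This is just the classic CDF-mapping version, not the paper's histogram-matching step, which re-weights the synthesis optimization instead of remapping pixels directly.

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source values so their distribution matches the reference's."""
    src_vals, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    # Empirical CDFs of both images.
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source quantile, look up the reference value at that quantile.
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    return mapped[src_idx].reshape(source.shape)
```

Applied per channel, this is the "image histogram specification" idea from the statistics project mentioned above.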
b) what aspects of the paper were most difficult to understand?
Everything was extremely confusing. (Except the video)
c) was the paper well written?
It's hard to say. It seemed well-structured and concise, but was too far over my head to know if their explanations were effective.
d) could the methods have been improved?
The authors recognize that the search phase (finding similar neighborhoods in the example texture for x, y, and z) is the most time-consuming part of the algorithm. Improvements in the search would have the most significant impact on running time. They optimized it in ways I don't understand, but maybe it's possible to do better?
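One of the optimizations the comment alludes to is projecting the exemplar neighborhoods into a low-dimensional PCA basis before searching, so each distance comparison is cheap. Below is a minimal sketch of that idea only; the function names are mine, and the paper additionally uses approximate nearest-neighbor structures rather than the brute-force scan shown here.

```python
import numpy as np

def build_search_index(exemplar_neighborhoods, n_components=8):
    """Project flattened neighborhood vectors onto a PCA basis,
    so later distance comparisons happen in few dimensions."""
    X = exemplar_neighborhoods.astype(np.float64)
    mean = X.mean(axis=0)
    # Principal axes via SVD of the centered data matrix.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:n_components]
    return mean, basis, (X - mean) @ basis.T

def nearest_neighborhood(query, mean, basis, projected):
    """Return the index of the exemplar neighborhood closest to the
    query, measured in the reduced PCA space."""
    q = (query.astype(np.float64) - mean) @ basis.T
    dists = np.sum((projected - q) ** 2, axis=1)
    return int(np.argmin(dists))
```

Even this simple reduction cuts the per-comparison cost from the full neighborhood size down to a handful of coefficients, which is why improving the search phase pays off so much.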
e) what possible applications does this have?
This could be an excellent aid in creating destructible objects in games. As far as I know, games that have destructible objects (such as a piece of wood that breaks in half) simply have pre-created, pre-textured objects such as WOOD_FULL and WOOD_HALF. Each subdivision of the object has to have a mesh created and textured. The ability to apply a single texture to a single mesh, and define some rules about how it can be divided into pieces, would immensely speed up the process and reduce the amount of memory the game occupies (on disk, at least, perhaps not in RAM). Current games that have deformable terrain usually suffer from awkward texture stretching/seaming problems.
Non-divisible, transparent object texturing could also benefit greatly from this method.
--
mwc
Very interesting idea with impressive results. The methods described seem to do a very good job expanding a 2D image to a 3D texture, especially with the use of histogram matching. The solid textures take quite a while to compute, but I imagine this is far easier than attempting to create one by hand. Efficiency is always desired, but since it is a pre-rendered texture it's not critical for this application. The scalability to additional texture channels with such close correlation also produced impressive results.
The paper itself was well written. A lot of this is new to me, so parts of it took a while to understand, but the basic exemplar matching algorithm appears similar to papers we've looked at. The benefit of using histogram matching was immediately clear in the examples they gave. I was less clear on how the feature maps came into play, though they clearly had an effect.
The authors also had some good ideas for future developments. Using different kinds of histograms or adding slices could improve results or introduce novel controls. The idea of using this method to produce a set of Wang cubes is especially interesting, as it would then be possible to tile these into arbitrarily large solid textures with little visible periodicity.
Their method for generating textures that properly use all the features of an exemplar image is very impressive, especially considering that it produces a higher-quality result while speeding up the process. Their advances in 3D seem to rely on this and build on previous work.
I found myself on Google far too often looking up things like MRF and PCA. I found they did a fairly good job of explaining the variables they used in the equations, but beyond that you would really have to know more about the specifics to do anything with them.
I felt the paper was extremely well written, especially considering that none of the authors appears to be a native English speaker, and they come from varying parts of the world.
I believe their methods can be improved, and I even found some papers showing examples where they have been: one using Wang cubes, and one using 3D exemplars.
Unfortunately this really only applies to objects that are consistent throughout their interior. Honestly, that's not too many objects. Looking around my room I see wood as the only truly solid material, and even there the exterior is generally polished whereas the interior is not. Plastic and glass solids would be good objects for this technique.
I found the idea of forming solid textures from 2D samples interesting. When I looked at the pictures, the solid texture made the objects more realistic (for example, the wood texture on the carving).
The equations were difficult to understand, but I liked how the authors incorporated pictures with some of their explanations. The layout of the article was easy to follow, but I did have some trouble understanding the concepts.
This idea of creating 3D solid textures from 2D exemplars opens the door for programmers to use more realistic textures. This, in turn, will expand what programmers can do in image synthesis.
In this paper the authors present a new approach to texture synthesis that shows great promise not only in efficiency but also in better results. The method consists of an algorithm for synthesizing solid textures from 2D exemplars, which I consider novel even though the authors based their approach on Wexler's technique for non-parametric texture synthesis, a technique I am not familiar with.
It is interesting how they use histograms to statistically preserve the global match between the synthesized texture and the exemplar, combined with neighborhood matching techniques to make sure the texture locally matches the exemplar and avoid unrealistic effects. I am not too familiar with the techniques used by the authors, so it took me time to understand their ideas; however, I thought the document was well organized: they describe their steps/phases and include many images showing the (impressive) visual results of their method.
Some improvements can be made in this area. As the authors showed, there were cases in which their method did not reproduce the texture appropriately; this could be a field of future investigation, addressing the neighborhood matching phases used to preserve local similarity to the exemplar during synthesis.
Lastly, the most obvious applications of this technique are digital image processing/editing, video games, and film. Another field of application might be biomedicine, where computer graphics can be applied to map a texture onto a human/animal figure representation.
Of all the papers we've read, I found this one to be the most impressive. The authors' technique appears to be a powerful and flexible tool for achieving effects that are otherwise difficult to come by. I liked the way they were able to make complicated geometry out of simple textures, while still preserving the visual characteristics of the textures (e.g., Figure 1 middle, Figure 9).
I didn't quite understand how they got from two-dimensional to three-dimensional. It appears they had a synthesized texture along each of the three axes and they somehow mapped it onto the surface of the model, but beyond that I was lost. (Something about voxels, too.)
Which brings me to this: I wonder how limiting it is that their technique operates on voxel geometry? Rasterizing a triangle is one thing, but what is it like to "voxelize" a full model? (I suppose you could take slices, projecting and rasterizing them, and then stack the slices to get a voxel model.) One obvious limitation of their method is the running time: 10 to 90 minutes on a 2.4GHz machine. This precludes not only real-time construction of solid textures, but also casual "poking around" with solid textures in a modeling package like 3ds max. 3ds max could come preloaded with a set of solid textures (which look to be highly reusable), but it would not be convenient to casually make solid textures.
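The slice-and-stack idea in the parenthetical is easy to demonstrate on an implicit solid. The toy sketch below voxelizes a sphere by rasterizing each z-slice as a 2D mask and stacking the masks; a real mesh would need actual per-slice triangle rasterization, and the function name and parameters here are my own invention, not anything from the paper.

```python
import numpy as np

def voxelize_sphere(n=32, radius=0.8):
    """Build an n^3 boolean voxel grid by stacking 2D slices of a sphere."""
    coords = np.linspace(-1.0, 1.0, n)
    xx, yy = np.meshgrid(coords, coords, indexing="ij")
    volume = np.zeros((n, n, n), dtype=bool)
    for k, z in enumerate(coords):
        # Each slice is a 2D rasterization: the disk where plane z cuts the sphere.
        volume[:, :, k] = xx**2 + yy**2 + z**2 <= radius**2
    return volume
```

The same stacking loop works for any shape you can rasterize one cross-section at a time, which is presumably why the slicing approach is so commonly suggested.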
I thought the paper was reasonably well written. It seemed like they went beyond their goal to present a few other uses and effects their technique could achieve. Their images were good-looking and they knew it, so they included a lot of images. Perhaps it's because I watched the video first, but I didn't feel that all the images were necessary.
I found this to be a very interesting paper. Much like the paper about Interactive Rendering using the Render Cache, these authors took existing ideas and combined them to create something that performs significantly better than anything else in its field. The thing that really stood out was how they can take a cross section of a 3-D object and the texture will look correct, like with the rabbit made of stone.
There were a few concepts that they referred to which I am not familiar with, such as clustering. Also, the math equations appeared quite complex.
This may have been the best-written paper so far, mainly because they had so many supplementary materials. The video and the downloadable application really show off how good their approach is.
Seeing how good their results are, I don't see how they could improve their methods.
The most obvious application is 3-D rendering, like in 3-D Studio Max. Also, as hardware gets more powerful and rendering times decrease, this would be awesome for simulating nature, like rock slides. It could eventually be used in gaming; cloud computing could do the rendering quickly and send the images to the client.