WALTER, B. J., DRETTAKIS, G., AND PARKER, S. 1999. Interactive rendering using the render cache. In Rendering Techniques '99 (Proceedings of the 10th Eurographics Workshop on Rendering). Eurographics, Springer-Verlag, New York.
http://citeseer.ist.psu.edu/old/255560.html
Wednesday, July 15, 2009
It is really interesting how the authors propose an algorithm that combines ray tracing and path tracing to accomplish rendering at low computational cost, without 3D hardware, by placing the rendering process outside the feedback loop between the user and the image. They designed what they call the "display process," which efficiently handles several tasks; one of them is directing the renderer's future pixel sampling, so that the renderer can reproject cached results and estimate what the next image should look like. Their projection step uses z-buffering. What especially caught my attention was how they use depth culling and smoothing/interpolation to handle cases where points show through a surface that should occlude them. Their method of creating a "priority image" and growing the final image from it (interpolating with each pixel's neighbors) was also interesting.
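The reprojection-with-z-buffering step described above can be sketched roughly as follows. This is a minimal sketch under assumptions of my own, not the paper's actual code: cached points are world-space hit positions, the new camera is a 4x4 row-major view-projection matrix, and all names are illustrative.

```python
def reproject(points, colors, view_proj, width, height):
    """Reproject cached 3D hit points into a new view using a z-buffer.

    points:    list of (x, y, z) world-space hit points (assumed layout)
    colors:    per-point shading results cached from earlier frames
    view_proj: 4x4 row-major matrix for the new camera (assumed convention)
    """
    INF = float("inf")
    depth = [[INF] * width for _ in range(height)]   # z-buffer
    image = [[None] * width for _ in range(height)]  # reprojected colors
    for (x, y, z), color in zip(points, colors):
        # transform the point into clip space
        cx, cy, cz, cw = (
            sum(m * v for m, v in zip(row, (x, y, z, 1.0))) for row in view_proj
        )
        if cw <= 1e-6:
            continue                                 # behind the camera: drop it
        nx, ny, nz = cx / cw, cy / cw, cz / cw       # normalized device coords
        px = int((nx + 1) * 0.5 * width)             # NDC [-1, 1] -> pixel
        py = int((1 - ny) * 0.5 * height)
        if 0 <= px < width and 0 <= py < height and nz < depth[py][px]:
            depth[py][px] = nz                       # keep the nearest point
            image[py][px] = color
    return image, depth
```

Pixels left as `None` are the gaps the paper's interpolation and sampling passes then have to fill.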
I consider that the paper was well written; the authors methodically explain the steps of their algorithm. In general, the organization of the document made the information fairly easy to understand.
I would look for improvements in the heuristics they used to set each pixel's priority when generating the "priority image". The efficiency of the dithering algorithm used in this step is not clear to me.
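For readers unsure what the dithering step is doing, here is a rough sketch of the idea: the priority image is treated like a grayscale picture, and an error-diffusion dither turns it into a sparse, evenly spread set of sample requests. This uses generic Floyd-Steinberg weights, which may well differ from the exact dithering algorithm the paper references, and `budget` is an assumed knob.

```python
def select_samples(priority, budget):
    """Choose which pixels to request fresh samples for, via error diffusion.

    priority: 2D list of values in [0, 1]; higher = more in need of a sample.
    budget:   assumed fraction of pixels we can afford to re-render per frame.
    """
    h, w = len(priority), len(priority[0])
    err = [[priority[y][x] * budget for x in range(w)] for y in range(h)]
    chosen = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            v = err[y][x]
            chosen[y][x] = v >= 0.5          # quantize to sample / no sample
            e = v - (1.0 if chosen[y][x] else 0.0)
            # diffuse the quantization error to unvisited neighbors
            if x + 1 < w:
                err[y][x + 1] += e * 7 / 16
            if y + 1 < h:
                if x > 0:
                    err[y + 1][x - 1] += e * 3 / 16
                err[y + 1][x] += e * 5 / 16
                if x + 1 < w:
                    err[y + 1][x + 1] += e * 1 / 16
    return chosen
```

The appeal of dithering here over simple thresholding is that even a uniformly low-priority region still gets a scattering of samples rather than none at all.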
Besides the well-known applications of these methods (films, games, simulators), this algorithm provides a different approach that can be used in digital imaging and architecture, since it addresses scenes with camera motion (depth culling and interpolation for occluding surfaces).
I thought the paper was good introductory work in this area. At the time it was published, the techniques still needed improvement (the artifacts, though mitigated by their techniques, were still quite substantial). I wonder what advancements have been made in the past 10 years. Furthermore, I'm curious why we don't see this technique more in games, especially with graphics hardware becoming more general purpose.
It was interesting that the authors were trying to break out of the framerate paradigm (though apparently they weren't the first). After all, the idea of frames comes from physical film, where each frame must be projected in quick succession. However, computers need not work that way. Granted, the monitor has a certain refresh rate, but that doesn't mean the whole frame must be rendered for every monitor refresh. All in all, I like their asynchronous approach to rendering.
To me the paper was surprisingly readable and understandable. I usually zone out when someone starts talking about a cache, but I made it to the end of the paper appreciative of their approach. However, their results were somehow impressive and unimpressive at the same time: I was impressed by the real-time reflection and refraction in figure 8, but the artifacts and general image quality were disappointing. Granted, I suppose their results must be considered in the context of 1999.
I imagine interactive raytracing is one of the holy grails of game programming. That being said, if someone can improve, or has improved, the technique to yield better results at closer to 30 fps, there could be some pretty amazing looking games.
I think this work has become dated by the availability of high-performance multi-core processors and relatively inexpensive programmable graphics cards with hundreds of special-purpose processors. Real-time ray tracing has been demonstrated using three PS3s and on an 8-core Intel machine: http://www.youtube.com/watch?v=blfxI1cVOzU . From what I gather, their idea simply is not feasible in a real-world implementation where quality matters. It seems as if they tried to use the techniques that video codecs use to make files smaller and faster to process by only rendering change, but video has the benefit of knowing what's coming next and doesn't have to ask whether change has occurred.
That said, there are areas where quality doesn't matter as much, like level editors or scene editors, where you only want the general idea as you move things about, and the image improves as you leave the camera sitting in one place.
As far as the quality of the paper goes, I liked the flow of the paper. They left concepts that were difficult to explain simply as a reference, which I think greatly improves things: I don't sit here trying to comprehend a quick explanation of a topic that really requires some deep understanding. I do hate when proofreading fails, though. "To reuse previously results": someone either forgot to delete the "ly" or removed the word "rendered" from that line.
Their idea of "why think of things in fps?" is puzzling. For something to look realistic, the frame must be updated with current information at a rate fast enough to trick our eyes. I think they are saying you don't need to treat the whole frame as needing to be re-rendered; but even in that special case, the change still needs to be rendered fast enough to appear smooth, and this concept of gradually making the image correct just won't cut it for smoothness.
The interesting and novel topic presented in this article is the idea of rendering interactive scenes quickly. Rendering large scenes dynamically could run very slowly, so I find it interesting that these programmers have found a way to minimize the time it takes to render a dynamic scene.
The area that needs improvement is producing higher-quality images. Although users love speed, they also love higher-quality products. The programmers should find a way to maintain a high-quality image while still reducing the time it takes to render that image.
This article was well-written. It flowed smoothly and I was able to follow the authors' points, which made this article easier to understand. The terms that I wasn't familiar with were defined, and the processes used for the render cache were explained well. The authors also addressed how their algorithm could be improved.
First off, I thought the article was very well written (if not proofread). The topics flowed well in a logical order through the pieces of the system, and it was relatively easy to read. The one part that was really difficult to understand was the sampling system, since the dithering algorithm was never defined (it does provide a reference, so anyone wanting to implement it could find the details in the source paper).
The approach the paper describes is very interesting, and the resulting images they provide do look pretty good compared to the graphics of the time (I can think of several games from the mid to late nineties which were not fully 3D and looked only slightly better). This is especially true considering that they used software rendering only.
Most of the things I could think of to improve on the described methods were mentioned in the paper. A lot of the processing could be done with graphics hardware, especially since z-buffering, projection, etc. have dedicated hardware. The authors also mentioned leveraging SIMD instruction sets, though in the context of CPU SIMD extensions. With the growing trend of opening graphics architectures to general-purpose computing, far more powerful SIMD operations are available for tasks which can't be performed with the typical graphics pipeline.
While I was reading the Walter paper last night I tried to envision the effects of using a render cache on a slow machine. The transitional images they provided were decent enough, but it's much better to see these sorts of things in action. So I found this demonstration Java applet: http://www.graphics.cornell.edu/research/interactive/rendercache/ which was very helpful in seeing the effects of the render cache under various conditions (Phong shading, etc.).
The authors promoted the idea of using a render cache as an integral part of a 3D model editor, which I'll agree was probably the best foreseeable use at the time. However, the visual artifacts created while using a render cache when the camera position/angle changes rapidly could be considered a neat visual distortion effect in their own right.
The paper was incredibly easy to follow, using mostly simple English in its descriptions and explanations. Not once did I find the meaning to be obfuscated by esoteric maths. Any questions that might have arisen regarding their meaning seem to have been handled pre-emptively (in parentheses). For instance, I found the reminder on the definition of a bijection to be well placed.
JCS
I thought the idea of taking an old concept such as a cache and applying it to rendering was quite clever. Having done some video rendering in the past, I can see the application that this paper has. Allowing a user to change the camera's perspective while rendering without any performance hit is very useful, as the artist can make changes a lot faster.
The paper was not hard to understand. There were a couple of items about sampling and cache management that I didn't fully get, but I was able to get the general idea.
This was an extremely well written paper. Unlike the previous one, they did not rely on equations. They also had a good use of pictures to illustrate each method. The captions were very helpful as well.
I don't know enough about the subject to suggest any improvements, other than the ones they mentioned under future work, such as better anti-aliasing.
a) what did you find interesting or novel about the paper?
It's a clever method. The idea, boiled down, is simply to not recalculate points that don't /really/ need to be recalculated. This is done by reprojecting the points that are (likely) still within the field of view. Image quality (while in motion, or if the scene is changing) is sacrificed in order to drastically increase framerate.
b) what aspects of the paper were most difficult to understand?
I didn't find it very difficult to understand. When I finished the paper, I realized it hadn't used any mathematical symbols or equations to describe any of the concepts. It would have been nice if they'd included equations for their algorithms, without giving up the clear English descriptions.
c) was the paper well written?
There were a few minor typos (understandable), but still very clear.
d) could the methods have been improved?
It may be possible for a better heuristic to be developed for predicting which cached points are still in a new frame and not occluded, to cut down on the spotty gaps and 'old' pixels.
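One baseline such a heuristic would have to beat is a simple neighborhood depth test: compare each cached pixel's depth against the nearest finite depth in its 3x3 neighborhood and cull points lying well behind it, since they are likely showing through a closer surface that has no cached point at that pixel yet. This is a sketch in the spirit of the paper's depth culling, not its actual algorithm; `tolerance` is an assumed knob, not a value from the paper.

```python
def cull_exposed_points(depth, tolerance=1.25):
    """Flag cached points that are probably occluded in the new view.

    depth: 2D list of reprojected depths, with float("inf") at empty pixels.
    Returns a same-shaped 2D list of booleans (True = cull this point).
    """
    h, w = len(depth), len(depth[0])
    INF = float("inf")
    culled = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if depth[y][x] == INF:
                continue                      # empty pixel: nothing to cull
            # nearest depth in the 3x3 neighborhood (including self)
            nearest = INF
            for ny in range(max(0, y - 1), min(h, y + 2)):
                for nx in range(max(0, x - 1), min(w, x + 2)):
                    if depth[ny][nx] < nearest:
                        nearest = depth[ny][nx]
            # far behind the closest neighbor: likely wrongly exposed
            culled[y][x] = depth[y][x] > nearest * tolerance
    return culled
```

A better heuristic would presumably also use point age and motion, which a purely spatial test like this ignores.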
e) what possible applications does this have?
I'm curious if it would be possible to apply this, or something similar, to a framebuffer for rasterization. My guess is no... but perhaps if you created a "triangle-buffer"?
--
mwc