Hybrid Rendering for Interactive Virtual Scenes


Dan Morris, Stanford University
Neel Joshi, Stanford University

Stanford University Technical Report CSTR 2006-06


A screenshot demonstrating an OpenGL object rendered among raytraced points.


Abstract


Interactive virtual environments used in conjunction with haptic displays are often static-viewpoint scenes that contain a mixture of static and dynamic virtual objects. The immersive realism of these environments is often limited by the graphical rendering system, typically OpenGL or Direct3D. To present more realistic scenes for haptic interaction without requiring additional modeling complexity, we have developed a technique for co-locating a prerendered, raytraced scene with objects rendered graphically and haptically in real time. We describe the depth-buffering and perspective techniques that were necessary to achieve co-location among these representations, and we demonstrate real-time haptic interaction with a scene rendered using photon mapping.
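
The depth-buffering idea described above can be sketched with a short legacy-OpenGL program. The following is illustrative only, not the paper's implementation: it assumes a prerendered raytraced RGB image and a matching per-pixel depth map, already converted to window-space depth values in [0, 1] under the same camera the raytracer used. The color image is blitted into the framebuffer, the depth map is written into the depth buffer, and a dynamic object is then rendered with ordinary depth testing so the static scene occludes it correctly.

    /* Minimal sketch of the depth-buffering approach (illustrative only).
     * Assumes legacy OpenGL with GLUT, a prerendered raytraced RGB image,
     * and a matching depth map in window-space [0, 1] under the same camera. */
    #include <GL/glut.h>
    #include <vector>

    static const int kWidth = 640, kHeight = 480;
    static std::vector<unsigned char> g_color(kWidth * kHeight * 3);   // raytraced color (loaded offline)
    static std::vector<float>         g_depth(kWidth * kHeight, 1.0f); // raytraced depth (loaded offline)

    static void display()
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // Place the raster position at the lower-left corner of the window.
        glMatrixMode(GL_PROJECTION); glLoadIdentity();
        glMatrixMode(GL_MODELVIEW);  glLoadIdentity();
        glRasterPos2f(-1.0f, -1.0f);

        // 1. Blit the prerendered color image (no depth test, no depth writes).
        glDisable(GL_DEPTH_TEST);
        glDrawPixels(kWidth, kHeight, GL_RGB, GL_UNSIGNED_BYTE, g_color.data());

        // 2. Write the raytracer's depth values into the depth buffer, leaving the
        //    color buffer untouched, so the static scene can occlude dynamic geometry.
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_ALWAYS);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glDrawPixels(kWidth, kHeight, GL_DEPTH_COMPONENT, GL_FLOAT, g_depth.data());
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthFunc(GL_LESS);

        // 3. Render dynamic objects with the same projection the raytracer used,
        //    so OpenGL depths and prerendered depths are directly comparable.
        glMatrixMode(GL_PROJECTION); glLoadIdentity();
        gluPerspective(45.0, (double)kWidth / kHeight, 0.1, 100.0); // must match raytracer camera
        glMatrixMode(GL_MODELVIEW);  glLoadIdentity();
        gluLookAt(0.0, 0.0, 5.0,  0.0, 0.0, 0.0,  0.0, 1.0, 0.0);   // must match raytracer camera
        glutSolidSphere(0.5, 32, 32); // stand-in for a haptically manipulated object

        glutSwapBuffers();
    }

    int main(int argc, char** argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
        glutInitWindowSize(kWidth, kHeight);
        glutCreateWindow("hybrid rendering sketch");
        glutDisplayFunc(display);
        glutMainLoop();
        return 0;
    }

The key design point in this sketch is that the prerendered depth map and the live OpenGL projection must describe the same camera and depth range; otherwise the dynamic objects will not be occluded consistently by the static, raytraced scene.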


Paper

Adobe Acrobat PDF (140 KB)

Video

Real Video (211 KB)