VR then and now
My interest in panoramas, or 360-degree images, goes back a few years, and I am pretty sure it started way back when QuickTime VR was all the rage. QuickTime VR (also known as QuickTime Virtual Reality or QTVR) was an image file format developed by Apple in 1994, coincidentally the same year I started working in VR professionally, using head-mounted displays (HMDs) in Houston, Texas. Back then a good many things were called virtual reality, including navigating real-time 3D models and spinning 360 images around on your computer screen, i.e. QTVR. The latter was very similar to what you can do today on Facebook with 360 photos and videos. In 2017, however, simply looking at a 360 image or film on a screen does not constitute a VR experience. For the user to feel immersed in the environment, the image has to be viewed inside an HMD, and the view must be updated quickly enough to match the rotation of the user's head. The headset can be an expensive head-mounted display tethered to a PC or one made out of cardboard with a smartphone as the screen.
I got interested in panoramic photography, and in particular high-resolution panoramas, much later. Such photography allows you to capture natural and built surroundings by taking a lot of overlapping images in all directions and then stitching the photos into one image using special software. This technique was also the way to go if you wanted to create panoramas in 3D applications, and it was a cumbersome process. Today photographic panoramic images can easily be captured using a single fisheye lens or, if a higher resolution or a stereo image is needed, using specialized panoramic cameras or rigs that rotate the camera automatically. Because of the resurgence of VR, creating panoramas in CAD, BIM and 3D applications has also become much easier, which, coupled with inexpensive cardboard VR headsets, makes for a good entry point into the world of VR for architects, designers and artists - for anyone, really.
Viewpoints, not framing
Virtual panoramic photography is essentially what you do when rendering a 360 image from within 3D software. It differs from ordinary photography in that you are no longer concerned with creating a composition within a frame; the frame is gone when you are capturing everything you can see. That doesn't mean camera placement is unimportant, though. Just like in real life, finding an interesting spot - a viewpoint from which to see a space or a building - is key. Where you put the camera in relation to the ground also matters if you want to craft a good user experience in VR. While you can perfectly well render aerial panoramas, once you get close to the ground you likely want a human viewpoint, taken at the average eye level of a standing person. This lets the user feel grounded and makes the scale of the space feel right, which is of particular importance if you are creating a stereo panoramic image.
Let’s say you have a 3D model and want to render it as a panoramic image. For the image to contain everything visible from one vantage point, it has to be rendered and deformed onto a flat canvas, which can later be projected or wrapped back onto 3D geometry for viewing in VR or on a 2D screen. Two major types of projection are in use: equirectangular and cubic. If you are an architect like me, you are probably using one or more BIM and CAD applications such as Revit, SketchUp or Rhino, or, if you are into rendering, 3ds Max, Cinema 4D or Maya, to name a few. Luckily these applications, and more, allow for fairly easy 360 imaging workflows.
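The equirectangular projection is easy to picture: the yaw (longitude) and pitch (latitude) of every view direction map straight to the horizontal and vertical axes of a 2:1 canvas. As a minimal Python sketch of that mapping (the function name and the axis convention are my own assumptions, not tied to any particular renderer):

```python
import math

def direction_to_equirect(x, y, z, width, height):
    """Map a unit view direction (x, y, z) to pixel coordinates (u, v)
    on an equirectangular canvas of width x height pixels.
    Assumed convention: +z is up, yaw is measured from the +x axis."""
    yaw = math.atan2(y, x)                      # longitude, -pi .. pi
    pitch = math.asin(max(-1.0, min(1.0, z)))   # latitude, -pi/2 .. pi/2
    u = (yaw + math.pi) / (2 * math.pi) * width   # full 360 deg across the width
    v = (math.pi / 2 - pitch) / math.pi * height  # 180 deg top to bottom
    return u, v
```

Because the mapping covers 360° horizontally and 180° vertically, equirectangular panoramas are rendered at a 2:1 aspect ratio, for example 4096 x 2048 pixels; a cubic projection instead renders six square 90° views, one per face of a cube.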
Cloud powered panos
If you are using Revit, Autodesk has a cloud rendering feature called A360. It is as easy as selecting 'Render to Cloud', choosing 'all views', picking a view by hovering over it, setting 'Render-As > Stereo Pano' and clicking 'Render'.
If you are viewing this page on a smartphone and have a Cardboard here is a 360 stereo panorama example from Revit rendered by Grape Architects:
Rendering stereo 360 images is a tad more complicated than mono, although Autodesk has hidden the complexity from the user in A360. Using more advanced rendering engines like V-Ray, Corona, Maxwell or Arnold will let you create more realistic panoramic images, and most will leverage your GPU, your graphics card, to render images much faster. If you are using 3ds Max or Maya with V-Ray, Chaos Group has written a very handy guide that I contributed to. Even if you are not using V-Ray you will benefit from reading the guide, since it explains stereography for 360° images and video and the challenges and limitations of 360° imaging.
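To give a sense of why stereo 360 is trickier than mono: a common approach, omni-directional stereo (ODS), renders each vertical column of the panorama from a slightly different camera position, offset sideways from the panorama centre by half the interpupillary distance. A simplified Python sketch of that per-column eye placement (function name and conventions are my own, not any renderer's API):

```python
import math

def ods_eye_position(yaw, ipd=0.064, eye='left'):
    """For an omni-directional stereo panorama, return the (x, y) eye
    position (in metres, relative to the panorama centre) used to render
    the image column looking along `yaw`. The eye sits on a circle of
    radius ipd/2, offset perpendicular to the view direction.
    ipd defaults to ~64 mm, an average adult interpupillary distance."""
    sign = -1.0 if eye == 'left' else 1.0
    r = ipd / 2.0
    # View direction at this yaw is (cos yaw, sin yaw); the offset is
    # perpendicular to it, to the left or right of the line of sight.
    return (sign * r * math.sin(yaw), -sign * r * math.cos(yaw))
```

Since every column has its own eye position, a stereo panorama is only strictly correct for head rotation, not for head translation, which is one of the limitations the V-Ray guide discusses.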
In 2009 I gave a talk at SIGGRAPH titled Universal panoramas: narrative, interactive panoramic universes on the internet. I demonstrated how one could create narrative experiences of spaces using panoramas with simple interaction, sound and animated elements as a web experience. With VR we can now do the same, with the addition that it becomes an immersive experience using an HMD. However, unless you have a Gear VR, you likely want to use an app or a web-based service to view your panoramas on a smartphone with a cardboard. One cross-platform app is IrisVR Scope, and one web-based service is VRto.me; Chaos Group Labs has written an excellent post with videos on both solutions:
Paper or plastic?
The most challenging part of creating 360 VR is how to view your panoramas if you have a smartphone with a cardboard and not a professional mobile headset such as Gear VR or Daydream. There are literally hundreds of different viewer designs available, from the cheapest cardboard ones to fancy plastic devices with head straps. With the exception of those that have a button, they are basically all the same: a container for your smartphone with a pair of magnifying glasses. My current favourite is the tiny, foldable Homido Mini VR glasses that clip onto your phone. I am also a fan of the “I am cardboard” brand and have likely purchased a couple of dozen that I have given away.
There are in principle two different ways of creating content for VR from 3D applications, and I have touched on the inexpensive, easy-entry one: pre-rendered 360 imaging. To take VR in architecture, or any other field, to the next level you will need to create interactive real-time 3D models running on a game engine, using fairly expensive VR hardware. While I am going to discuss this in a later post, a lecture I crafted on design visualization workflows for Autodesk University in 2016 discusses the definition of VR, introduces which workflows are best suited to various practices and types of work, and covers the needs and requirements of a VR experience for different types of end users.
There is also a shorter version of this lecture made for Escape Technology in London:
Finally, there is my Autodesk Area article on choosing a VR headset when developing for architectural design and visualization, based on the previously mentioned AU talk:
Best of luck in creating your own architectural panoramic images!