SV – Creating Spherical 360 Video

I first saw spherical 360 video, which I’ll refer to as SV, in footage made by Jim Watters. It was an accidental online find; my original intent was to research a way to use two cameras to create 3D video.

He had built a device he called a multi-camera panoramic video rig: a cluster of inexpensive Mobius HD 1080p keychain cameras mounted and pointed at various angles. The rig used a total of 14 cameras with the lens housings removed, which allowed for easier operation of the digital cameras. Jim then used PTGui and VideoStitch to assemble the individual footage from each camera into a single SV. The results are impressive, and I consider him a pioneer for bringing this into the mainstream.

Since then, I have seen a variety of commercial shrink-wrapped products that do the same thing. Most, if not all, are too expensive to justify the cost. They are not marketed as novelty toys; they are targeted at professionals who will resell their SV-creation services. Because of this, the hobbyist, the tinkerer, and the merely curious are left out in the cold.

However, there is still a modest way into the fascinating world of SV. Anyone with two or more video cameras and some software can create SV. I’m going to use three inexpensive 808-16 keychain cameras that record in 720p; they come with a 120-degree fisheye lens already attached. I’ll review the video-stitching process using two different software programs: Autopano Video and VideoStitch Studio. I’ll be running them on a Windows 7 64-bit quad-core system with 8GB of RAM, a solid-state drive, and an Nvidia video card with 128 CUDA GPU cores. Let’s begin.

First, I’ll cover the hardware and some considerations for operating it. As I mentioned, anyone can create SV using two or more cameras; they should be identical in make and model. I’ll be using three 808-16 keychain cameras as a demonstration. The cameras should be positioned so that frames from adjacent cameras contain the same objects. The software needs this overlap in order to stitch the video together later on. I’m using a cutout of corrugated plastic board as a base, with the cameras attached by sticky-backed Velcro. It’s simple and quick to build. The positioning of the cameras is approximate, with some pitch differences, and that becomes visible when the stitching process completes. I’ve seen examples of rigs made from wood, cardboard, or 3D-printed plastic. Professional rigs can cost $500 or more; I’ll keep the $495 and do it on the cheap.

Operating the cameras requires interacting with each camera module: I have to turn each one on and then start each recording individually as well. A professional rig would centralize this control, and having a single point of control also lessens post-editing. I could get technical and set up an Arduino to send commands to the camera modules from a single input device, but maybe another time.

Once the cameras are on and recording, the software has two methods for time-syncing the video: sound or motion. I would recommend capturing both, in case one method doesn’t produce the expected results. You can also sync them manually, but I won’t cover that here.
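To make the audio method concrete, here is a minimal sketch (my own illustration, not either program’s actual algorithm) of finding the time offset between two cameras by cross-correlating their audio tracks. The signals here are synthetic; with real footage you would first extract mono audio from each file. NumPy is assumed.

```python
import numpy as np

def find_offset(ref: np.ndarray, other: np.ndarray) -> int:
    """Sample offset of `other` relative to `ref` (negative: other started late)."""
    corr = np.correlate(other, ref, mode="full")
    # Shift the argmax so an index of 0 means "already aligned".
    return int(np.argmax(corr)) - (len(ref) - 1)

# Synthetic "clap": low-level noise with one sharp spike, as heard by two
# cameras where the second one started recording 480 samples (10 ms at
# 48 kHz) after the first.
rng = np.random.default_rng(0)
clap = rng.normal(0, 0.05, 4800)
clap[1000] = 1.0                  # the clap itself
cam_a = clap
cam_b = np.roll(clap, -480)       # second camera misses the first 480 samples

print(find_offset(cam_a, cam_b))  # -480: drop 480 samples from cam_a to align
```

A sharp, distinctive sound like a clap gives the correlation a clear peak, which is exactly why it works well as a sync reference.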

Subjects close to the rig that move from one camera’s field of view to another’s can generate distortion. This is due in large part to lens distortions that the software has to compensate for, and it does a fairly good job. However, with more overlap and more cameras, this artifact is less likely to occur. On my rig, it will be noticeable.
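As a rough sanity check on overlap, you can estimate the nominal angular overlap per seam for a horizontal ring of cameras. These are my own back-of-the-envelope figures; they ignore lens distortion and vertical coverage.

```python
def seam_overlap_deg(n_cameras: int, hfov_deg: float) -> float:
    """Average angular overlap (degrees) shared by each adjacent camera pair
    in a single horizontal ring covering the full 360 degrees."""
    return (n_cameras * hfov_deg - 360.0) / n_cameras

# My rig: three 120-degree lenses give zero nominal overlap, which is why
# seam artifacts are so noticeable.
print(seam_overlap_deg(3, 120))   # 0.0
# A hypothetical fourth identical camera would buy 30 degrees per seam.
print(seam_overlap_deg(4, 120))   # 30.0
```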

Another thing to note is the stitch points. You have to think ahead about how the software will work with the scenes you shoot. When the rig is powered on, the beginning frames will be used as a reference for stitch points. Don’t block the cameras’ view, or the software will have a difficult time aligning. Also, try to set your overlap on well-defined objects: a building, a structure, or a large solid object like a boulder or tree trunk. Avoid clouds, tree branches, grass, or anything else that is fine-grained or prone to parallax distortion due to the differences in camera angles.

The cameras can be moved through the field, but doing so destabilizes the video and the end result is shaky footage. The software can correct this and anchor the footage; again, I won’t cover that here. For my test purposes, I’ll be shooting with the cameras stationary.

One other consideration is extreme lighting differences across the field of view. I’m choosing not to shoot in direct sunlight or other stark lighting conditions. The cameras I’m using auto-expose, changing the color and contrast, which creates another artifact the software can correct. Since this is an introduction, I’ll simply avoid that condition.

Lastly, my cameras’ settings can be programmed. Not all cameras allow this, but mine do, so I’ll verify that all camera modules have the same parameters. It’s always good practice to bench-test the rig before heading out to a location. I made the mistake of skipping that step, only to find out that my memory card was corrupt. This simple check will help you avoid wasted time. Now, let’s get some footage and stitch it together.

No matter how well you plan your shot, be prepared to pre-edit your video before attempting to stitch it. I use OpenShot to trim the beginning and end of my footage. The stitching programs use the initial frames for control points, and it’s common for the beginning of a recording to be shaky, noisy, and otherwise poor for use as a control-point reference. Make sure you have clean source footage from the start; once you do, begin the stitching tasks.

The first software program we’ll use is Autopano Video; you can find more information about it online. Here are the steps I used to get my final stitched video.

1. Open Autopano Video
2. File menu – Import Videos
3. Click Synchro on the toolbar
4. Set the synchronization method, audio or motion (mine was audio)
5. Click Apply
6. Click Stitch on the toolbar
7. Stitch with “Lens model”
8. Set the lens properties: 15mm, Fisheye lens type
9. Click Render on the toolbar
10. Set the size, output format, and path to save the file
11. Click Render
12. Rendering uses the CPU; processing time averages about 1 frame per second

The second program I’ll step through is VideoStitch Studio; you can find more information about it online. Here are the steps I used to get my final stitched video.

1. Open VideoStitch Studio
2. File menu – Open videos
3. Window menu – Synchronize
4. Press Audio, Motion, or Flash (I used Audio)
5. Window menu – Calibration
6. Set the lens type to fullframe fisheye, HFOV to 120 degrees
7. Calibrate geometry
8. Click the Process button
9. Set the output file
10. Set the width and height
11. Press the Process now button
12. Save the project
13. Processing occasionally locked up; I had to cancel and try again

My conclusions on the software are mixed. I’m not going to review and rate the different options; I’ll just point out some key differences you should take into consideration. VideoStitch has a processing advantage over Autopano Video because it utilizes my video card’s GPU cores. Here are the timing differences between the two programs.

Autopano Video – 2.13 seconds per frame
VideoStitch Studio – 0.24 seconds per frame

To put that in perspective, VideoStitch is over eight times faster than Autopano Video. However, you are dependent on a supported GPU architecture in order to use VideoStitch; it will not even install without GPU support. In that regard, Autopano Video offers broader platform support.
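Those per-frame figures translate into very different wall-clock times. A quick calculation for a hypothetical one-minute clip at 30 fps (the clip length and frame rate are my assumptions, not measurements):

```python
def render_minutes(sec_per_frame: float, clip_seconds: float, fps: float = 30.0) -> float:
    """Estimated render time in minutes for a clip of the given length."""
    return sec_per_frame * clip_seconds * fps / 60.0

print(round(render_minutes(2.13, 60), 1))   # Autopano Video: 63.9 minutes
print(round(render_minutes(0.24, 60), 1))   # VideoStitch:     7.2 minutes
print(round(2.13 / 0.24, 1))                # speedup factor:  8.9
```

Over an hour versus about seven minutes for a single minute of footage makes the GPU advantage very tangible.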

I’ve just skimmed the surface of 360 video; consider this an introduction to the subject. It has been interesting to revisit the topic from a creative point of view, and rewarding to write about it. I hope you have enjoyed this and found it useful.
