Presentations in 3D

Earlier in the week on Facebook, my cousin Berry posted a picture of the house he grew up in.  I recalled our Uncle Herb had built a model of the house, so I asked if they still had it.  Berry said he had it on his mantel and so I mentioned we could make a 3D model of it from pictures.  My sister Beth chimed in and asked if it could be printed in 3D.  That got the gears going and sure enough it can.

Autodesk’s 123D Catch is an online service that builds 3D models from a series of photos. The pictures get uploaded and processed by Autodesk’s cloud system, and when the processing is complete, the finished 3D render is made available for download. Autodesk has more details about the ins and outs of the process on the 123D website.

I think this will be a useful way to present complex items for analysis, since the visual data can be fairly rich in detail. So I took a stab at it and got this.

The devil is in the details, so here’s how it went. I took a series of images with an iPhone, encircling the helmet, then took another series from a higher angle, all from the same distance. I imported them into Autodesk 123D Catch and let their cloud services do all the number crunching. After a few moments, I got an email that the rendering was done, and I downloaded the finished file from their server. Next I cleaned up some background artifacts, then created an animation from the render, which you see above.
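
The cleanup can be done in a mesh editor or scripted. Here’s a rough sketch of the scripted route in Python with the trimesh library; the file names and bounding-box values are made up for illustration, so treat it as one possible approach rather than the exact steps I used.

```python
import numpy as np
import trimesh

# Load the mesh exported from 123D Catch (the file name here is a placeholder)
mesh = trimesh.load("helmet_capture.obj", force="mesh")

# An axis-aligned box that loosely encloses the subject; these values are
# illustrative and would be eyeballed from the actual model
lo = np.array([-0.3, -0.3, 0.0])
hi = np.array([0.3, 0.3, 0.5])

# Keep only faces whose vertices all fall inside the box, dropping the
# stray background geometry picked up during the capture
inside = np.all((mesh.vertices >= lo) & (mesh.vertices <= hi), axis=1)
keep = inside[mesh.faces].all(axis=1)

mesh.update_faces(keep)
mesh.remove_unreferenced_vertices()
mesh.export("helmet_cleaned.obj")
```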

The process has limitations. Some image conditions really throw off the processing. You can find more details on the do’s and don’ts on the Autodesk 123D website.

One of the biggest limitations, and one that isn’t documented, is how the images are gathered. I find it a bit awkward to hover around a subject snapping pictures. I can only imagine that I look like a foolish madman with a camera, or worse. So I thought that taking a video instead would be ideal: it takes less time, I don’t have to look as goofy, and I have more images to improve the rendering results.
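
Since 123D Catch wants still photos, the video has to be broken into individual frames first. Here’s a minimal sketch of one way to do that in Python with OpenCV, pulling roughly two frames per second; the clip name is just a placeholder.

```python
import os
import cv2

os.makedirs("frames", exist_ok=True)

# Hypothetical 20-second clip of walking around the subject
video = cv2.VideoCapture("helmet_walkaround.mp4")
fps = video.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if the FPS metadata is missing

step = max(int(fps / 2), 1)  # keep roughly two frames per second
index = saved = 0

while True:
    ok, frame = video.read()
    if not ok:
        break
    if index % step == 0:
        cv2.imwrite(f"frames/frame_{saved:04d}.jpg", frame)
        saved += 1
    index += 1

video.release()
print(f"Extracted {saved} frames")
```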

So I decided to do a side-by-side comparison to see if it would work; here’s what I found. It took almost 3 minutes to take still shots with the iPhone, whereas the video capture only took 20 seconds. If you want to be discreet, holding an iPhone in front of your head for 3 minutes in a public place isn’t going to cut it. The 808 seems better suited since it can be held more naturally.

I was surprised by the results. The artifacts in the video source were extensive, and the distortion of the subject made it difficult to recognize. A few factors led to the unexpected results. The video camera is equipped with a fisheye lens, and in the source video the field of view is distorted on the right side. Also, the helmet has a light mounted on it, which is visible throughout the shot. If both of these items were removed, the results might be more favorable. In contrast, the images shot with the iPhone and the resulting render are impressive.

So I felt compelled to do more testing. This time I did the same iPhone image rendering as before, but also did the iPhone video. I also removed the headlamp from the helmet camera and did a render from that source. The iPhone video results were a pleasant surprise, but the helmet render was grossly distorted. My conclusion is that the wide-angle lens is causing the issue. Detail is key with video rendering, so I’ll need to use at least a 1080p camera, like the 808 #26 or Mobius. Here is a comparison of the renders from the iPhone: the first render is from the image source, while the second is from the video source.

A secondary test was to use the helmet camera and do a drive-by shot of an object. Again, the wide-angle lens caused trouble for the rendering process. However, the results were good enough to demonstrate here. This next video shows the brief source footage followed by the render.
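
Since the wide-angle lens keeps coming up as the culprit, one possible workaround for a future test would be to undistort the frames before uploading them. The sketch below shows the general idea using OpenCV’s fisheye module; the camera matrix and distortion coefficients here are placeholders and would have to come from an actual calibration of the helmet camera.

```python
import cv2
import numpy as np

# Placeholder intrinsics; real values would come from calibrating the helmet
# camera against a checkerboard (these numbers are purely illustrative)
K = np.array([[600.0,   0.0, 640.0],
              [  0.0, 600.0, 360.0],
              [  0.0,   0.0,   1.0]])
D = np.array([-0.05, 0.01, 0.0, 0.0]).reshape(4, 1)  # fisheye coefficients k1..k4

img = cv2.imread("frames/frame_0000.jpg")
h, w = img.shape[:2]

# Build the undistortion remap once, then apply it to every extracted frame
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
undistorted = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)

cv2.imwrite("frame_0000_undistorted.jpg", undistorted)
```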
