If it seems like this series has been a bit unfocused, well, it has been. While we’re pretty good at running a project car series, we’re total amateurs at building a home machine shop. 

What 3D Scanning Can and Can’t Do

A few installments ago, we wrote, “There’s no print button for a part you’ve drawn on the computer.” 

And then we proved ourselves wrong by setting up a 3D printer. But while there’s an easy path out of the computer, there’s no easy path into the computer. 

Except, that is, for 3D scanning. The technology works exactly how it sounds: Point a magical device at your part, then watch it appear in a digital environment. In theory, hours of laborious measuring and modeling can be completely removed from the workflow.

Why would you want this? Because very few parts–especially very few parts built in a home shop–are designed in the computer with the necessary context around them. 

Think of how aerospace engineers design an airplane part, perhaps the landing gear. They don’t take a few rough measurements of the fuselage, draw a plain box in the computer, then design a folding wheel mechanism that’s about the right size and slap it in there. 

Instead, they have the entire plane already modeled in software from the start, which means the landing gear can be designed to fit perfectly, interact properly with the existing parts, be built with serviceability in mind and more.

We know, we know: You’re not building a landing gear. But you’re probably building alternator brackets, or turbo manifolds, or suspension parts, or even just planning an engine swap. 

And unlike those aerospace engineers, you can’t exactly open up a detailed CAD model of your car to see precisely how much room there is under the hood or in the wheel well. 

Unless, that is, you have a 3D scanner. In theory, all you have to do is point it at your existing parts, and as much context as you'd like will instantly appear in the computer. Or, if you're just copying existing parts, a 3D scanner promises to be a real-life copier, unlocking a world of cheap and easy replacement parts. 

But the reality isn’t quite this simple. 3D scanners don’t actually model real parts in the computer. Instead, they generate point clouds. And that’s a key limitation you’ll need to know about before going any further.

What’s the difference? Well, let’s take that landing gear as an example. If you were going to draw it from scratch, you’d tell the computer, “It’s a cylinder the diameter and thickness of a wheel, with a rotating joint connecting it to a second cylinder the diameter and thickness of a landing gear support.” 

CAD people: We know that’s an oversimplification. And plane people: Do you really think we’d spend all our free time dragging home old industrial equipment if we actually knew anything about planes? Just bear with us for the rest of this example.

So let’s say you wanted to take a 3D scan of that landing gear. You’d point your scanner at the part, then the computer would display: “It’s a cylinder the diameter and thickness of a wheel, with a rotating joint connecting it to a second cylinder the diameter and thickness of a landing gear support.” 

Right? Wrong. 

The computer would actually display a few thousand little dots, each at a specific coordinate in three dimensions. This is called a point cloud.

Zoom in, and it looks like somebody spilled marbles all over the floor. But zoom out, and you'll realize this cloud is in the exact same shape as that landing gear. And once the cloud is scaled to match reality, its points will almost perfectly correspond to the surfaces of the real part. 

Resolution, by the way, usually refers to the number of points and the distance between them. More points closer together is higher resolution. Fewer points farther apart is lower resolution.
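
For the programmers in the audience, here's a minimal sketch of what a point cloud actually is to the computer: nothing but a big array of XYZ coordinates, with "resolution" being the spacing between neighboring points. All the numbers here are made up for illustration.

```python
import numpy as np

# A point cloud is nothing but a big list of XYZ coordinates. Here we
# fake a "scan" of a 300mm wheel face by sampling points on a disc.
rng = np.random.default_rng(0)
n = 1000
radius = 150.0  # mm; hypothetical wheel radius
r = radius * np.sqrt(rng.uniform(0, 1, n))  # uniform spread over the disc
theta = rng.uniform(0, 2 * np.pi, n)
cloud = np.column_stack([r * np.cos(theta), r * np.sin(theta), np.zeros(n)])

# "Resolution" is roughly the spacing between neighboring points:
# for each point, find the distance to its nearest neighbor.
dists = np.linalg.norm(cloud[:, None, :] - cloud[None, :, :], axis=2)
np.fill_diagonal(dists, np.inf)  # ignore each point's distance to itself
print(f"mean point spacing: {dists.min(axis=1).mean():.1f} mm")
```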

But sadly, a point cloud isn’t nearly as useful as a real parametric model. Dimensions can be a bit fuzzy, interfaces between parts can be tough to algorithmically determine, and errors and noise can occur. So while this 3D scan would give you a really good idea of what that wheel looks like, it wouldn’t actually give you a perfect wheel model that you could send to the machine shop. To get that, you’d probably need to draw the wheel from scratch, using the point cloud as a guide to make sure you were drawing it in the correct location and at the right size. 
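
And here's roughly what "using the point cloud as a guide" can look like in practice: fitting clean geometry to noisy scan points. This sketch uses a simple least-squares circle fit (the Kasa method) on a hypothetical 2D slice of a wheel rim to recover a usable diameter; everything is simplified, but it's the general idea behind turning fuzzy dots into dimensions you can draw from.

```python
import numpy as np

# Fake a noisy 2D slice of a scanned 300mm wheel rim.
rng = np.random.default_rng(1)
true_radius = 150.0  # mm; hypothetical
theta = rng.uniform(0, 2 * np.pi, 500)
noise = rng.normal(0, 0.5, (500, 2))  # ~0.5mm of scanner noise, assumed
pts = true_radius * np.column_stack([np.cos(theta), np.sin(theta)]) + noise

# Least-squares circle fit (Kasa method): rewrite x^2 + y^2 = 2ax + 2by + c
# as a linear system, where c = r^2 - a^2 - b^2.
A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
rhs = (pts ** 2).sum(axis=1)
(cx, cy, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
radius = np.sqrt(c + cx ** 2 + cy ** 2)
print(f"fitted diameter: {2 * radius:.1f} mm")  # ~300mm despite the noise
```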

Look back to the 3D scan we did of our LS-swapped 350Z a few years ago for a real-world example. Though the scan came out great, it still required dozens of hours of modeling to turn that data into a usable model of our car.

So while 3D scans are great for context, they’re not actually the silver bullet the uninitiated often think they are. Still, though, they’re an immensely powerful tool. So here’s how to do it in your home garage for just a few bucks. 

Types of 3D Scanners

We don’t have the space for an exhaustive breakdown of every type of 3D scanner (see Wikipedia for that), but we’ll cover the basics. 

The most common type of 3D scanner you'd find in shops isn't what most people would even consider a scanner: Contact 3D scanners, like coordinate measuring machines, have been around for decades and record points by physically touching the part, one point at a time. 

A FaroArm is an example, and they’re great for answering questions like, “Where are the bolt holes for my bumper mount located?” But because points are collected one at a time, contact 3D scanners aren’t particularly useful for surfaces, especially complicated ones. You may be patient enough to scan your hood with a FaroArm, but it would probably take you years to scan your whole engine bay. 

Plus, there’s price: Budget many thousands of dollars for a new FaroArm. For those with the means, though, it’s an industry standard.

So let’s move on to the world of contactless 3D scanners, which are what most people picture when they hear “3D scanner,” and which work great for surfaces. At the consumer level, there are a couple of ways to determine the shape of an object without touching it: active scanners and passive scanners. And we’ll return to oversimplifying to explain what these terms mean. 

Let’s think about our landing gear again, and this time we’re not worried about scanning. We’re worried about two animals: a bat and a human–or at least the way we understand bats and humans. We’re not zoologists or doctors, but we did grow up watching Steve Irwin.

So let’s talk about how a bat sees that deployed landing gear. Basically, it screams into the void, then uses its ears to listen for the sound of its cries bouncing off obstacles. This process is called echolocation, and you understand the basics if you’ve ever clapped your hands in an empty building and heard the echo. 

The bat is determining the depth of its environment by sending out signals, then listening for how long it takes for them to bounce back–if they bounce back at all. That’s how it knows that there’s landing gear deployed from our plane but clear air on either side of it. 

Replace the bat with a laser or lightbulb, and the bat’s ears with a camera that can see the light you’re emitting, and congratulations: You’ve built a digital bat. Move the laser or light around fast enough, and you’ll be able to increase the resolution immensely, collecting more and more points. Congratulations: You’ve built an active 3D scanner, and this is how lidar sensors and most commercial 3D scanning tools work. 
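
The math our digital bat relies on is dead simple: distance is the round-trip time of the light multiplied by its speed, divided by two. Here's a tiny sketch of that time-of-flight calculation:

```python
# The "digital bat" math: an active scanner times how long its light
# takes to bounce back, then converts that round trip into distance.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance_m(round_trip_seconds: float) -> float:
    # The pulse travels out AND back, so divide the round trip by two.
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A return after 20 nanoseconds means the surface is about 3m away.
print(tof_distance_m(20e-9))  # ~2.998 meters
```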

But human airline workers aren’t walking around screaming at the landing gear. Or, if they are, it’s only because Bob didn’t put air in the tires like he was supposed to. Instead, they use their eyes to determine what the landing gear looks like. And notice we didn’t say “eye.” Sure, somebody with an eye patch will still see, but our depth perception comes from our pair of eyes spaced a few inches apart. They record the natural light bouncing off objects, then compare the two images so our brains can determine depth. 

Replace each eye with a camera, and your computer can compute depth just like your brain. Or you can just use one camera and move it around a little bit. You’ve built a digital eye with depth perception. But why stop at one eye? Computers are smart, so why not use 1000 eyes at once–or move your camera 1000 times around a stationary object? Congratulations: You’ve built a passive 3D scanner. This is how a process called photogrammetry works. 
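
The digital-eye math is nearly as simple: the same feature lands at slightly different pixel positions in the two views (the disparity), and depth falls out of similar triangles. A sketch with hypothetical numbers:

```python
# The "digital eyes" math: two cameras a known distance apart (the
# baseline) see the same feature at different pixel positions (the
# disparity). Depth = focal length * baseline / disparity.
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 1000px focal length, cameras 65mm apart.
# A nearby feature shifts a lot between views; a distant one barely moves.
print(stereo_depth(1000, 0.065, 50))  # 1.3m away
print(stereo_depth(1000, 0.065, 5))   # 13m away
```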

Building a 3D Scanner at Home

We’ve talked about why 3D scanning is useful and about a few theoretical ways to accomplish it, and we can hear you screaming, “Just tell me how to do it!” Fine, fine, here’s the answer:

Spend $20,000 to $50,000 on a 3D scanner. See? It’s that simple.

We kept encountering price tags like that as we shopped the market of commercial scanners. Even the low end of the market, with scanners aimed at hobbyists like us, started just under $1000 and only went up from there. Even at that price point, we didn’t see sufficient capability for the projects we wanted to tackle.


Different 3D scanners for different tasks. The FaroArm, popular in motorsports and other industries, can very accurately measure various points in space. But one isn’t in our budget. Photography Credit: Chris Tropea

Haven’t we scanned cars in the past? Yes. Under the direction of Morlind Engineering, we scanned our 350Z with photogrammetry. On our end, that meant shooting 1000 photos and uploading them to a server. On Morlind’s side, though, that required software that was, again, more expensive than our car. Another dead end. 

Clearly, we’d need to figure out our own 3D-scanning solution. So we raided our video game cabinet. Yes, seriously: Microsoft sold an accessory for years called the Xbox Kinect, which would track the player’s movements in the room. How? By constantly 3D scanning its environment with a depth sensor that works on the same time-of-flight principle as lidar. Perfect. 


What about building a low-buck 3D scanner? We tried that using an Xbox Kinect hooked to a laptop. Photography Credit: Tom Suddard

We paired our second-generation Kinect with a USB adapter, an extension cord, a Windows laptop in a backpack, and some free software from Microsoft, then walked around the garage and started scanning. 

Assuming you already have a Windows laptop, you could duplicate this setup for a few hundred dollars. But don’t, because it just didn’t work.

Over the course of a few days, we tried scanning big items like cars and small items like a power steering pump, and we just couldn’t consistently get usable files. 

We encountered two issues: On small items, the Kinect’s resolution just isn’t that great. Remember, this device was designed to map rooms, where a centimeter or two of error doesn’t matter. And on big items, the Kinect seemed to work fine, but our run-of-the-mill laptop couldn’t keep up, leading to frequent crashes and lost data. Connecting the Kinect to a faster desktop gaming PC with more cables helped, but that setup wasn’t portable enough to use in the garage. 
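
To make that resolution problem concrete, here's roughly how a depth camera like the Kinect turns each pixel of its depth image into a 3D point via the pinhole camera model. The intrinsics below are made-up placeholders, not real calibration values; the point is that every bit of depth noise rides along into the point cloud.

```python
import numpy as np

# Placeholder camera intrinsics (assumed, not from a real calibration).
fx, fy = 365.0, 365.0  # focal lengths in pixels
cx, cy = 256.0, 212.0  # optical center in pixels

def unproject(depth_m: np.ndarray) -> np.ndarray:
    """Turn an HxW depth image (meters) into an (H*W)x3 point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.column_stack([x.ravel(), y.ravel(), depth_m.ravel()])

# A flat wall 2m away with ~1cm of sensor noise: fine for mapping a
# room, hopeless for measuring a power steering pump.
depth = 2.0 + np.random.default_rng(2).normal(0, 0.01, (424, 512))
print(unproject(depth).shape)  # (217088, 3)
```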


We eventually got the Xbox Kinect to scan our Z, but it captured only part of the car before crashing–and without the high resolution we’d hoped for.

We’re pretty sure that with enough time and effort spent sorting out bugs, the Kinect could be a decent 3D scanner for applications where accuracy isn’t critical and you don’t mind dragging an unwieldy hardware stack around. But it became more and more obvious that this wasn’t the perfect solution we thought it would be.

Still thinking we needed a handheld lidar scanner, we tried another approach: throwing money at the problem. Our iPhone was due for an upgrade, anyway, so we drove to the Apple Store and gave them far too much money for the latest and greatest. 

For a few generations now, Apple’s flagship smartphone has included a lidar sensor, and there are a few third-party apps that leverage its abilities. We paired our $1200 phone with a free app called Polycam, then nearly sprinted to the garage with excitement. Finally, we had an accurate, affordable, easy-to-use 3D scanner. And we know, we know, it’s $1200, but odds are about 50/50 you’re reading this story while holding a modern iPhone anyway. 

Don’t do this, either, because it didn’t work. Like the Kinect, the iPhone/Polycam lidar combination just wasn’t that accurate. For big things it worked well–we got good enough scans of a car or two–but zooming in on the details, it was clear that the iPhone’s sensor was intended to scan rooms, not cars. 

Hey, at least the new phone’s camera was really good. Which got us thinking about photogrammetry again. Maybe, just maybe, we’d been barking up the wrong tree in our quest to find an accessible, active 3D scanner. Polycam has a toggle switch, and we swiped it from “Lidar” to “Photogrammetry” and started scanning. 

Yep, we’d finally solved the puzzle. It took even lighting and some sprayable chalk on reflective surfaces, but we were soon producing high-resolution, repeatable, usable 3D scans in our home garage with nothing more than an iPhone and a free app (though you’ll inevitably spend $17.99 per month or $99.99 per year for unlimited captures once you burn through the free tier). We’re not sponsored by Polycam or anything, so we’ll tell you that we only subscribe every few months when we have some scanning to do.


A more reliable solution: our iPhone plus an app called Polycam. Spray the car with removable chalk paint, scan with the phone, and you’ll get accurate 3D scans.

And even better, you don’t need the latest and greatest iPhone to do this at home. In fact, almost any smartphone, or indeed any camera, will work. You can upload photos at Polycam’s website if you don’t want to download the app. 

Just make sure you have even lighting (a few portable worklights are plenty) and order some washable sprayable sidewalk chalk via Amazon. Stick a piece of plastic or a drywall anchor on the end of each spray can, and you’ll have a dribbling, spitting colored liquid that will create a perfect pattern for scanning on anything you’d like. And we haven’t found anything it won’t wash off of yet–aside from convertible tops. 

Now that we’d finally cracked 3D scanning, a whole new world of fabrication was unleashed. But that posed another challenge: We were again able to design more complicated parts than we could manufacture. We’ll address that in the next installment of this series.


