I've been working on a homebuilt full-body 3D scanner.
Here you can play with a virtual version of me:
This is a low-poly 3D model. Raw scan meshes are usually too big for the web, so I decimated this one to around 4k vertices; the original had roughly 100 times as many.
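If you'd rather script that decimation step than do it by hand in MeshLab, here's a rough sketch of what it could look like with the Open3D Python library. Open3D, the file names, and the 100x target are just placeholders for illustration, not my exact pipeline:

```python
# Sketch: decimating a dense scan mesh for the web with Open3D.
# "scan_full.ply" and "scan_web.ply" are placeholder file names.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("scan_full.ply")
print(f"original: {len(mesh.vertices)} vertices")

# Quadric edge-collapse decimation, aiming for roughly a 100x reduction.
target_triangles = max(len(mesh.triangles) // 100, 1)
low_poly = mesh.simplify_quadric_decimation(
    target_number_of_triangles=target_triangles)
low_poly.compute_vertex_normals()
print(f"decimated: {len(low_poly.vertices)} vertices")

o3d.io.write_triangle_mesh("scan_web.ply", low_poly)
```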
My homebuilt 3D scanner consists of three Kinect sensors and two lighting poles. The poles are built from LED strips and provide even lighting all around the subject.
The scanner itself is kinda simple: just a bunch of sensors (Kinects, Intel RealSense cameras, or any other 3D sensor). The tricky part is processing the data you get from them. I've been experimenting with several programs for capturing and editing the model, but I haven't found any single tool that handles the whole process nicely.
After capturing the data, we end up with a point cloud for each scan, containing all the colored points captured by the sensors. Here's a video showing the point clouds for several scans of my head, each taken from a different angle.
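If you want to poke at a set of raw scans like these yourself, something along these lines loads a few clouds and throws them into one viewer. Again, this is a sketch assuming Open3D, and the file names are placeholders:

```python
# Sketch: loading several scan point clouds and viewing them together.
# The "scan_*.ply" file names are placeholders.
import glob
import open3d as o3d

clouds = [o3d.io.read_point_cloud(path)
          for path in sorted(glob.glob("scan_*.ply"))]

# Thin each cloud out a bit so the viewer stays responsive.
clouds = [pc.voxel_down_sample(voxel_size=0.005) for pc in clouds]

o3d.visualization.draw_geometries(clouds)
```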
Once you have the point clouds, you need to align and merge them, then convert the result into a single mesh, using the color data to create a texture. This is time-consuming, but some scanning software, like ReconstructMe (free for non-commercial use), takes care of it.
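ReconstructMe hides this step, but the rough idea behind it, pairwise alignment with ICP followed by a surface reconstruction such as Poisson, can be sketched with Open3D. The thresholds, Poisson depth, and file names below are illustrative guesses, not tuned values, and this only gets you a merged mesh, not the final texture:

```python
# Sketch: aligning two scans with ICP, then meshing the merged cloud.
# File names, the 2 cm ICP threshold, and depth=9 are placeholders.
import open3d as o3d

source = o3d.io.read_point_cloud("scan_front.ply")
target = o3d.io.read_point_cloud("scan_side.ply")
for pc in (source, target):
    pc.estimate_normals()

# Point-to-plane ICP refines the alignment of one scan onto the other.
result = o3d.pipelines.registration.registration_icp(
    source, target, 0.02,
    estimation_method=o3d.pipelines.registration.
        TransformationEstimationPointToPlane())
source.transform(result.transformation)

# Merge the aligned clouds and reconstruct a single surface from them.
merged = source + target
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    merged, depth=9)
o3d.io.write_triangle_mesh("merged_mesh.ply", mesh)
```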
However, you usually end up with an ugly 3D model after this process. I always have to go back to MeshLab or Autodesk MeshMixer to fix the mesh, or even use a 2D (raster) image editor (Photoshop, GIMP, etc.) to fix the textures (adjust lighting, hide seams, and so on). MeshLab in particular is a great piece of software: it includes a lot of algorithms to adjust meshes, close holes, and more.
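Some of the basic cleanup can also be scripted before the model ever reaches MeshLab. Here's a sketch of the kind of passes I mean, again assuming Open3D and placeholder file names; closing holes and the texture touch-ups still happen in MeshLab or an image editor:

```python
# Sketch: basic automated cleanup of a raw scan mesh before hand editing.
# "merged_mesh.ply" and "cleaned_mesh.ply" are placeholder file names.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("merged_mesh.ply")

# Drop the geometry that commonly makes a raw scan look "ugly".
mesh.remove_duplicated_vertices()
mesh.remove_degenerate_triangles()
mesh.remove_non_manifold_edges()
mesh.remove_unreferenced_vertices()

# Light Laplacian-style smoothing takes some scanner noise off the surface.
mesh = mesh.filter_smooth_simple(number_of_iterations=2)
mesh.compute_vertex_normals()

o3d.io.write_triangle_mesh("cleaned_mesh.ply", mesh)
```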
The plans for the scanner will, someday, be on my GitHub page.