I recently learned of Spherical Harmonics and immediately proceeded to render them using my path tracer. For each render, I used marching cubes to generate a mesh for ray-intersection tests, but I computed the surface normals from the gradient of the harmonic function for a nice, smooth appearance. The axes are modeled using a capsule-shaped signed distance function. I like how even the axes cast shadows.
I have yet to truly grok the applications of spherical harmonics. I really only learned enough to be able to render them.
Ever since getting into woodworking as a hobby about a year ago, I've wanted to build myself a new computer desk. It was a fairly ambitious project, certainly the most complex of my woodworking projects so far, so it took me a while to dive in. I had the perfect opportunity a few weeks ago when I was off work between jobs.
In college, I had a pretty nice desk that had drawers on both sides and a nice, big desk top with plenty of space. I wanted something like that again, so that was the inspiration behind the plans that I came up with.
I drafted the plans myself, mostly on paper, after taking some measurements of my current desk. I didn't fully plan out every detail, and some things evolved as the build took place.
I still need to apply polyurethane, at least to the top for protection, so it's not in my office quite yet. But otherwise it's a done project.
After working on the Grand Canyon visualization, I started wondering about 3D mesh simplification algorithms, because the naive approach to generating a mesh from height map data produces a very large number of triangles, even in areas with little detail.
So, I did some research and found what seemed to be a good algorithm for mesh simplification: Surface Simplification Using Quadric Error Metrics (SIGGRAPH '97). It took some time to wrap my head around it, but I was able to implement the algorithm in Go.
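The math at the heart of the paper is compact: each triangle's plane contributes a quadric p·pᵀ, a vertex accumulates the quadrics of its incident faces, and the cost of collapsing an edge to a point v is vᵀQv. Here's a minimal Go sketch of just that core (the names are mine for illustration, not the API of my library):

```go
package main

import "fmt"

// Quadric is a symmetric 4x4 matrix, stored in full for simplicity.
type Quadric [4][4]float64

// PlaneQuadric builds the quadric p*p^T for the plane ax+by+cz+d = 0,
// where (a, b, c) is a unit normal, as in the SIGGRAPH '97 paper.
func PlaneQuadric(a, b, c, d float64) Quadric {
	p := [4]float64{a, b, c, d}
	var q Quadric
	for i := 0; i < 4; i++ {
		for j := 0; j < 4; j++ {
			q[i][j] = p[i] * p[j]
		}
	}
	return q
}

// Add accumulates another quadric, e.g. one per face incident to a vertex.
func (q Quadric) Add(r Quadric) Quadric {
	for i := 0; i < 4; i++ {
		for j := 0; j < 4; j++ {
			q[i][j] += r[i][j]
		}
	}
	return q
}

// Error evaluates v^T Q v for v = (x, y, z, 1): the squared-distance cost
// used to rank candidate edge collapses.
func (q Quadric) Error(x, y, z float64) float64 {
	v := [4]float64{x, y, z, 1}
	var e float64
	for i := 0; i < 4; i++ {
		for j := 0; j < 4; j++ {
			e += v[i] * q[i][j] * v[j]
		}
	}
	return e
}

func main() {
	// For the plane z = 0, a point at z = 2 has squared distance 4.
	q := PlaneQuadric(0, 0, 1, 0)
	fmt.Println(q.Error(1, 5, 2)) // 4
}
```

The full algorithm then just keeps a priority queue of edges ordered by this error and collapses the cheapest one until the target triangle count is reached.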
My code provides a simple Go API for simplifying a mesh as well as a command-line binary for simplifying an STL file by some percentage.
The bunny example above shows a mesh with 270,000 triangles and a simplified version with just 2,700 triangles (1% of the original).
I still wonder if there's a more specialized approach to converting height maps to meshes that directly incorporates simplification...
This week, I attended the FOSS4G conference here in Raleigh. While there, I learned about Mapzen's new elevation tiles. These tiles come in a variety of formats including terrarium, normal map, geotiff, and skadi. This is really nice, not because this data wasn't available before, but because it's more easily accessible and easier to deal with in the form of tiles. So I decided to see what I could do with this new resource.
I already had some Go code that can download and stitch tiles from any tile server into a large image. I stitched together three different sets of tiles: terrarium, normals and satellite imagery. The terrarium data is basically height map data that I used to generate a 3D mesh. The normal map is used to provide higher resolution lighting on the lower resolution mesh. And the satellite imagery is used for color information on the terrain. I used MapQuestOpen.Aerial for the satellite imagery, found on the Leaflet providers site. Here are the three stitched images...
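The terrarium format packs elevation into the RGB channels of an ordinary PNG, so decoding it is a one-liner: elevation in meters is (R × 256 + G + B/256) − 32768. A small Go sketch of the decode step (the function name is mine):

```go
package main

import "fmt"

// TerrariumHeight decodes one RGB pixel of a Mapzen "terrarium" elevation
// tile into meters: (r*256 + g + b/256) - 32768.
func TerrariumHeight(r, g, b uint8) float64 {
	return float64(r)*256 + float64(g) + float64(b)/256 - 32768
}

func main() {
	// Sea level encodes as (128, 0, 0): 128*256 - 32768 = 0 meters.
	fmt.Println(TerrariumHeight(128, 0, 0)) // 0
}
```

Run that over every pixel of the stitched image and you have a height map ready to turn into mesh vertices.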
Again, the terrarium image was used to generate a 3D mesh that I saved as an STL file. You can see some of that code here. The other two images were used as textures for OpenGL. I used my Python OpenGL library, pg, that makes it really easy to put together simple OpenGL apps. I used it to create the video above.
While doodling and brainstorming, trying to think of something else to draw with my XY plotter, I thought about generating random eyes.
I started by generating a random ellipse for the eye outline and another random ellipse for the pupil. Of course, these all looked like boobs, so I decided to add a nose (yet another ellipse). It started looking better, so then I added an arc for the mouth.
I generated several such random faces and placed them in a grid. I'm quite pleased with the result and I'll probably try more variations on this theme soon.
Probably my favorite part is the "mistakes." Sometimes the pupils go outside of the eyes, and the eyes become more like a pair of eyebrows. Sometimes the mouth is so close to the nose, it looks more like a mustache. When the eyes intersect the nose, it looks like a pair of eyeglasses. So I left these quirks in intentionally.
I think this exercise also shows that even something quite simple can produce fun and interesting results.
A few days ago, @barrelshifter posted some screenshots of a sphere fractal from her ray tracer. I thought it looked cool, so I decided to replicate it in my path tracer. Here's how that turned out...
That was cool, but then I had the idea of animating it!
I used some easing functions to control the elastic bouncing as each sphere appears and the quick fade out at the end. Each sphere within a population appears with a random offset that follows a normal distribution. I manually specified the mean entry time for each population to make it feel less regular and more spontaneous.
Path traced images can appear very realistic, particularly with well-modeled scenes, as they simulate how light actually behaves in the real world. This results in realistic indirect lighting and shadows.
But it's also very slow. This animation took about 24 hours to render. Most of it was rendered on my home computer, but toward the end I launched an Amazon EC2 instance with 36 x 2.9 GHz cores to crank through the remaining frames that had a lot of spheres in them. Even with a k-d tree to accelerate ray intersection tests, the final frames with thousands of spheres were taking 12+ minutes each on my computer. The 36-core instance took 2-3 minutes each.
To make using EC2 easier, I used Fabric to automate setting up the instances and fetching rendered images from them. Here's my fabfile.py.
Of course, while creating the animation I was able to test it at a much smaller resolution and with far fewer samples per pixel. Then I could render the whole animation in a minute or two.
If you follow my Twitter feed, you know that I've been making heavy use of the Makeblock XY Plotter that I recently purchased.
Seeing as how I've done a lot of 3D graphics before, I wondered how I could draw some 3D scenes with the plotter. What I came up with mostly resembles a ray tracer, but it sort of works in reverse. Basically, the scene is made up of solid objects like spheres, cubes, cylinders or meshes. Each solid has some 3D polylines on its surface. The engine finely chops up these lines, and casts a ray from each point toward the camera. If nothing intersects the ray, then that part of the polyline is visible. The visible portions of the lines are then transformed to 2D space and the output of the engine is a set of 2D polylines which can be saved as SVG, rendered to a PNG, or sent off to the plotter. Cool!
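The visibility test at the heart of this is small enough to sketch. Assuming a hypothetical `Shape` interface with an `Intersect` method (not the engine's real API), a sampled point on a polyline is visible if a ray cast from it toward the camera reaches the camera without hitting anything first:

```go
package main

import (
	"fmt"
	"math"
)

type Vector struct{ X, Y, Z float64 }

func (a Vector) Sub(b Vector) Vector { return Vector{a.X - b.X, a.Y - b.Y, a.Z - b.Z} }
func (a Vector) Length() float64     { return math.Sqrt(a.X*a.X + a.Y*a.Y + a.Z*a.Z) }

// Shape is anything that reports the nearest hit distance along a ray,
// or ok=false on a miss. (Hypothetical interface for this sketch.)
type Shape interface {
	Intersect(origin, dir Vector) (t float64, ok bool)
}

// Visible casts a ray from p toward the camera and reports whether any
// shape blocks it before the ray reaches the camera.
func Visible(shapes []Shape, p, camera Vector) bool {
	dir := camera.Sub(p)
	dist := dir.Length()
	dir = Vector{dir.X / dist, dir.Y / dist, dir.Z / dist}
	// Nudge the origin along the ray so the point's own surface
	// doesn't register as a hit.
	origin := Vector{p.X + dir.X*1e-6, p.Y + dir.Y*1e-6, p.Z + dir.Z*1e-6}
	for _, s := range shapes {
		if t, ok := s.Intersect(origin, dir); ok && t < dist {
			return false
		}
	}
	return true
}

// Sphere implements Shape with a standard ray-sphere intersection.
type Sphere struct {
	Center Vector
	Radius float64
}

func (s Sphere) Intersect(origin, dir Vector) (float64, bool) {
	oc := origin.Sub(s.Center)
	b := oc.X*dir.X + oc.Y*dir.Y + oc.Z*dir.Z
	c := oc.X*oc.X + oc.Y*oc.Y + oc.Z*oc.Z - s.Radius*s.Radius
	disc := b*b - c
	if disc < 0 {
		return 0, false
	}
	t := -b - math.Sqrt(disc)
	if t < 0 {
		return 0, false
	}
	return t, true
}

func main() {
	camera := Vector{0, 0, 10}
	blocker := Sphere{Vector{0, 0, 5}, 1}
	fmt.Println(Visible([]Shape{blocker}, Vector{0, 0, 0}, camera)) // false
	fmt.Println(Visible([]Shape{blocker}, Vector{5, 0, 0}, camera)) // true
}
```

Running this test over every sampled point of every polyline, then projecting the surviving points to 2D, is essentially the whole pipeline.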
I based the code on my Go path tracer. I ripped out a bunch of stuff that was no longer needed and then added the code to do what I just described above.
It's pretty easy to use, and I put some good examples and docs up on GitHub. Check it out!
This is probably my favorite 3D drawing so far...