I only had time to do about half of what I’d envisioned, so when I look at it I just see everything missing. But what is there:
I knew I wanted it to go from and to a frame of white, and for the visuals & audio to be loopable. The first scene was actually the last one I made, and quite quickly, just as a perfunctory device to come out of the white frame.
I’d had an image in my head of an old prison door in an empty desert, then I kind of stumbled upon grass while experimenting so I joined the two together. I also wanted the challenge of creating a 3D scene with a lot of depth and a more or less infinite horizon line.
That certainly came from the many fragments of surreal images from my trips to, and photography from, Burning Man.
The ending going to the frame of white was a late-game compromise; I'd planned a much more involved and elegant route with other scenes.
How long did I spend?
It’s hard to say since I worked on it in pockets of time between sessions & bookings, at 3 facilities over a span of about 5 weeks. I guess if you stacked up all the bits of time it might be about 2 and a half weeks total?
With the 3D Action scenes, the base comps were fairly flat and simply lit, but the renders were very long, especially the grass, so I wanted to minimize the big 3D renders and do the creative work as post comps. So with each 3D scene I rendered out various Action outputs, including the flat base comp, object mattes, AO, ZDepth, Normals, Specs, Emissives, etc., and did the relighting & stylizing as post comps. The fire particles were also extremely crude, so I enhanced them in the post comp.
I did a lot of post comp heavy relighting & exposure adjustments like flickering, rolling exposure waves, pops, etc. This couldn’t have been done so easily before. We were living like BEASTS before floating point.
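If you want to play with that kind of float-space exposure idea outside Flame, here's a rough NumPy sketch of a rolling exposure wave applied to a linear float image. It's just an illustration, not anything from my actual setup, and the parameter names (`stops`, `speed`, `wavelength`) are made up:

```python
import numpy as np

def rolling_exposure(img, t, stops=0.5, speed=0.15, wavelength=400.0):
    """Apply a horizontal rolling exposure wave to a linear float image.

    img: float32 array (H, W, 3) in linear light
    t: frame number; stops: peak gain in photographic stops
    """
    h, w = img.shape[:2]
    x = np.arange(w, dtype=np.float32)
    # exposure offset in stops, rolling left to right over time
    wave = stops * np.sin(2.0 * np.pi * (x / wavelength - speed * t))
    gain = np.exp2(wave)                      # stops -> linear gain
    return img * gain[None, :, None]          # broadcast the gain per column

frame = np.full((4, 8, 3), 0.18, dtype=np.float32)   # flat grey plate
lit = rolling_exposure(frame, t=12)
```

The key point is the float pipeline: the gain can swing above 1.0 and nothing clips, which is exactly why this sort of thing was so painful before floating point.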
I did this last & only had a day & a half for it, so I just took a flat white surface, attached a hijacked animating position map to it for the spikes extrusion, and scaled it up in Z. I used a Substance Noise as the matte for the surface, so spikes have different levels of opacity. I used Atomize
with the same position map for the bits flying out, which I then ran through Stylize to give more of a twinkling debris feel, and ran that through Depth of Field with blooming for the glints.
This is where most of my time was spent. I 'stumbled onto grass' while experimenting with hijacked position maps (ask Philippe), and found that with a flat surface set at a high polygon resolution, with tiny matchbox dots mixed into the position map, you get lots of fine 3D spikes. I developed a 3D mohawk and some pretty decent fur, but could find no use for either in this spot, so I went with grass.
The background ‘hills’ were just deformed spheres. A render error (user error) for the submission made them more saturated, brighter and sharper than they should be, and that kills me to see.
The grass field is about 20 to 30 tiled slightly overlapping 4k flat green surfaces. The upstream position map had a bunch of modifiers to it. I used an extended bicubic on the standard pmap to give the flat surfaces a kind of ‘rolling hills’ feel, not just flat plains.
The trickiest part was getting a feeling of wind that wasn't completely even in the grass. Once I understood how the RGB color values in the pmap determine the XYZ values of the extruded surface polygons, I experimented & used a few CC nodes with very slight cycling animations to push the polygons to the right, down, etc. I used a cycling left-to-right wipe gmask to mix that in, so it feels like a wind force is moving through it. I also used a Substance Noise as another matte to make some patches of grass less tall, so there's light/shadow variation in the field. And I added a 'constant turbulence' cycling CC to the pmap so no blade of grass is ever completely static, even when not wiped by 'wind.'
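To give a rough idea of what those cycling CCs are doing to the pmap, here's a little NumPy sketch, my own reconstruction, not actual Flame nodes, and the layout and parameter values are just assumptions: the red channel (X position) gets a sideways push gated by a soft wipe that sweeps across the frame, plus a small constant sine wobble:

```python
import numpy as np

def wind_pass(pmap, t, push=0.03, wipe_speed=0.02, turb=0.004):
    """Offset a position map to fake uneven wind.

    pmap: (H, W, 3) float array; R, G, B are assumed to encode X, Y, Z.
    A soft left-to-right wipe gates a sideways push (the 'wind front'),
    and a small cycling sine keeps every blade slightly moving.
    """
    h, w = pmap.shape[:2]
    x = np.linspace(0.0, 1.0, w, dtype=np.float32)
    # cycling wipe: a soft band sweeping left to right, looping over time
    centre = (t * wipe_speed) % 1.0
    band = np.exp(-((x - centre) ** 2) / 0.01)        # gmask-like soft wipe
    out = pmap.copy()
    out[..., 0] += push * band[None, :]               # push X where the wipe passes
    out[..., 0] += turb * np.sin(0.5 * t + 6.0 * x)[None, :]  # constant turbulence
    return out

pmap = np.zeros((4, 16, 3), dtype=np.float32)
moved = wind_pass(pmap, t=7)
```

Only the red (X) channel moves here; in the real setup the CCs nudged more than one axis, but the principle is the same.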
The base 3D comp was intentionally pretty flat & minimal, just to get the heavy lifting done. With linked cameras & all the Action output prerenders, I added lots to it, including relighting with normals, heavy AO, background clouds, Action light rays & particle fog, Matchbox zfog, depth of field, averaging on the grass, motion blur, etc.
Action normals can cause flickering and aliasing since they don’t allow for filtering (which I consider a bug), so I made a Normals hack using 3 directional RGB lights attached to the camera pointing in X, Y & Z. It worked quite well & got rid of the horrible flickering in the door shapes.
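The idea behind the hack can be sketched numerically: with three pure-colour directional lights along the camera axes, Lambert shading of a white diffuse surface writes each clamped normal component into its own channel, and the result goes through ordinary filtering like any other render. A minimal NumPy illustration (my own reconstruction of the principle, not Flame code):

```python
import numpy as np

def normals_from_rgb_lights(normals):
    """Emulate the '3 directional RGB lights' normals hack.

    normals: (H, W, 3) unit camera-space normals. Each pure-colour light
    contributes Lambert shading max(0, N . L) into one channel, so a white
    diffuse surface effectively renders its (clamped) normal as RGB.
    """
    axes = np.eye(3, dtype=np.float32)        # light directions: camera X, Y, Z
    # stack the per-light Lambert terms into R, G, B
    return np.stack([np.clip(normals @ a, 0.0, None) for a in axes], axis=-1)

n = np.zeros((2, 2, 3), dtype=np.float32)
n[..., 2] = 1.0                               # surface facing straight at the camera
shaded = normals_from_rgb_lights(n)           # all the light ends up in blue
```

Note the clamp: each light only captures the positive half of its axis, which was fine for relighting purposes here.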
This was pretty basic, a flattened sphere with a couple of Substance Noises for the texture & displacement. I wanted to make better fire particles, but didn't have time, so they were just generic particle lines emitted from the sphere; no particle presets were used. In the post comp I ran those lines through Stylize to give them a different shape, added some averaging, then put them through Depth of Field to give them heavy blooming, and added motion blur.
The transition to the frame of white at the end was a compromised afterthought, and I ran it through motion blur on top of motion blur.
I definitely spent the majority of the time coming up with the name for the title. “Various Images with an Obnoxious Soundtrack”
(Ok, it was while exporting for delivery, and seemed the most accurate description.)
Biggest challenge was finding the time between paid bookings as a freelancer. It was hard to keep momentum and cohesion of vision when the work was so sporadic and done at 3 facilities across town. Second challenge was definitely making blowing grass in Flame.
I learned a lot on the technical side through this, but the most important things I learned were: a) I would never want to be a 3D artist. I love what I do.
b) I realize how much I enjoy working with clients & collaborators. Working in a void by myself made me appreciate how much I thrive on human interaction & give and take.
Most importantly, I now appreciate the power and creativity of Flame more than ever. Name one other piece of software that, on its own, could have generated this or any of the other submissions.
Hey there! I'm Lewis and I like nothing better than a hugely complex Batch tree, and here is one! It's all about trying to simulate fluids using common household objects, in this case PixelSpread and a lot of EXRs.
The last few years I've been mostly working at Oneofus (http://weacceptyou.com) doing pretty much everything related to getting a picture on a screen. Somewhere amongst the custom cameras, RAW imaging science, acoustics, VFX pipeline and a ton of shots I came across some tools by Theodor Groeneboom which seemed to do the impossible - make swirling fluids without using particles. I dug through a few headache-inducing SIGGRAPH papers and eventually whittled it down to some warping, some blurring, and a loop formed by Import and Export nodes.
It seems like simulation should be hard, but actually all you need to do is start with a frame of random motion vectors, warp it with itself using PixelSpread, write it out, then read it back in and warp it again, write it out, read it back... and you've got a moving fluid! The thing that makes it feel nice and physical is to keep it incompressible, or divergence-free: real liquids don't move through themselves and they react when you try to squash or stretch them. I'm doing that quite crudely by looking at the vectors a few pixels away, working out how much they're pushing towards the current vectors, and trying to push them back the other way.
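If you want to poke at the idea outside Flame, here's a tiny NumPy sketch of the same loop: nearest-neighbour self-advection standing in for the PixelSpread warp, plus a very crude divergence-damping nudge. This is an approximation of the principle, not my actual Batch maths, and the constants are arbitrary:

```python
import numpy as np

def advect(vel, dt=1.0):
    """Warp the vector field by itself (the PixelSpread step).

    Nearest-neighbour backward lookup: each pixel fetches the vector
    from where the flow says it came from."""
    h, w = vel.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    sx = np.clip(np.round(xs - dt * vel[..., 0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(ys - dt * vel[..., 1]).astype(int), 0, h - 1)
    return vel[sy, sx]

def damp_divergence(vel, k=0.25):
    """Crude incompressibility: measure local divergence from neighbouring
    vectors and nudge the field back against it (not a true projection)."""
    div = (np.roll(vel[..., 0], -1, 1) - np.roll(vel[..., 0], 1, 1)
         + np.roll(vel[..., 1], -1, 0) - np.roll(vel[..., 1], 1, 0)) * 0.5
    out = vel.copy()
    # push velocities towards squashed regions and away from stretched ones
    out[..., 0] += k * (np.roll(div, -1, 1) - np.roll(div, 1, 1)) * 0.5
    out[..., 1] += k * (np.roll(div, -1, 0) - np.roll(div, 1, 0)) * 0.5
    return out

rng = np.random.default_rng(0)
vel = rng.standard_normal((32, 32, 2)).astype(np.float32)
for _ in range(10):                 # the Export -> Import loop, ten frames
    vel = damp_divergence(advect(vel))
```

In Batch the "loop" is the Export and Import nodes writing and re-reading EXRs each frame; here it's just a `for` loop.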
It's fun to play with but pretty hard to direct to do what you want! Maybe someone out there can tame it but I don't think it'll be replacing RealFlow any time soon... enjoy the mess :)
The Birth of Venus is a perfect subject for my One Frame of White contest entry because she is being birthed into existence from white. I used a subject from classical art to make a connection between fine art painting and Flame. I come from a painting background and I'm used to starting from white. I like Flame for its immense range of tools and possibilities. Node trees and processes can get so complex that it becomes almost organic. Accidents happen, just like a paint splatter might accidentally make its way onto the canvas. Sometimes I leave those accidents in because it's way cooler than anything I could have done on purpose.
My technique was to use a series of projectors arranged around a basic geometry of Venus' pose. I used the timeline paint tool's Autopaint function to record and animate my paint strokes over time. One difficulty I had to overcome was that paint node renders get bogged down quickly. With the paint node set to "from frame" it starts to get slow after about 5 layers build up. Because I had thousands of layers, I had to use a series of pre-renders to keep things workable.
When an Action got too heavy, I would load the camera into a new Action and add layers on top of the render. My objective was to lose the "geometry" look. I wanted to hide the tool, so to speak, so people aren't thinking about how I made it while they are watching. In order to cover the technical/digital look of things, I layered on more and more drawing.
My inspiration is William Kentridge. See my previous film here: https://www.youtube.com/watch?v=Xvd8qB5roCU
And my works on paper and canvas: www.jakenelsonart.com
Thanks for watching.