“Scripted rendering” and linear workflow

I finally gathered enough courage to try alternative display drivers in 3Delight – in this context, a “display driver” basically means something a lot like “file export format”. Namely, I did what I had long wanted to do and set up a script to render to EXR – a high dynamic range format.

[Image: redjay_tm_bckgr]

I always use “gamma correction” when rendering – and it makes me very, very sad that a lot of DS users, including “published artists” (DS content providers for 3D shops like DAZ3D or Renderosity), are unwilling to adopt a linear workflow (which is what this whole gamma correction thing is about). I guess it’s somewhat personal for me, because it was a major revelation when I finally understood what it was all about, and it has made my rendering life much, much more rewarding.

But I am the sort who is okay with those moments when you realise you have been doing it all wrong before.

I have the seminal (IMO) articles about gamma correction and linear workflow linked in my tutorial collection – but in case there are lurkers here who would like to hear a bit more… Well, I am actually a scientist IRL. I spent six years studying applied physics and engineering in college, and I am now working towards my local equivalent of a PhD; I do research, I publish papers, and so on. So even though mathematics is not a language I speak particularly well, I understand it well enough – and you can’t fight mathematics. And mathematics says that if you feed your renderer textures made from photos of real-world objects (or painted to look, on screen, like photos of real-world objects), you are giving it the wrong kind of information to process.

It’s like using the correct units of measurement – some math models will only accept, say, the centimetre-gram-second system, and otherwise the results you get will be all wrong, not actually simulating the “real world” at all.

What DS’s “gamma correction” does is ensure that the textures are linearised, aka de-gamma’ed – converted to the linear space that the math model inside the renderer was designed to use.
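
To make that concrete, here is a minimal sketch of what “linearising” means numerically – assuming a plain 2.2 power curve rather than the exact piecewise sRGB function (close enough for illustration):

    # De-gamma: convert a stored (gamma-encoded) texture value to linear light.
    # Assumes a simple 2.2 power curve; real sRGB uses a slightly different
    # piecewise formula, but the idea is the same.
    def degamma(stored, gamma=2.2):
        return stored ** gamma

    print(degamma(0.5))   # ~0.218 - screen "mid grey" is much darker in linear light
    print(degamma(1.0))   # 1.0 - the endpoints don't move

That 0.5 → ~0.218 jump is exactly why un-linearised textures look “off” once the renderer starts doing physically plausible light math on them.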

But… How exactly does it do it?

As you may have noticed, 3Delight needs to run a special little process on every texture before it can use it at all (this is actually a very neat feature that produces tiled, mipmapped textures); this process is called tdlmake. There are special little switches you can pass to tdlmake so that it knows what sort of texture you’ve got there: whether it’s a photo-based colour map or a greyscale control map (say, for bump strength), or maybe something else (like an HDR colour map). And it will treat your texture accordingly, doing all the boring linearising work behind the scenes – cool, eh?
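
For the curious, this is roughly what such a call looks like when done by hand – note that I’m quoting the -gamma switch from memory of the tdlmake manual, and the file names are just made up for illustration, so double-check against tdlmake’s own usage printout on your installation:

    # Hypothetical by-hand texture conversion, mirroring what DS does for you.
    # The -gamma flag is quoted from memory of the tdlmake manual - verify it
    # on your install; the file names here are invented for the example.
    import subprocess

    # A photo-based colour map: tell tdlmake its encoding gamma so it can linearise.
    subprocess.run(["tdlmake", "-gamma", "2.2", "skin_colour.jpg", "skin_colour.tdl"])

    # A greyscale control map (bump, displacement...): already linear, no de-gamma.
    subprocess.run(["tdlmake", "bump_strength.jpg", "bump_strength.tdl"])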

So when you turn “gamma correction” on in the DS “Render options” (DS remembers whether it’s on or off per scene, and it retains the setting even when you render through scripts), DS starts calling tdlmake with those options. And you can make sure it does it the way you want: click the texture selection dropdown box in the “Surfaces” tab and go to “Image Editor” (not “Layered Image Editor” – that one will also do, but it loads slower, and the gamma setting is harder to locate). There you can correct DS when it makes a mistake about assigning gamma correction values (like every piece of software, it sometimes makes mistakes): for colour maps, it’s best to set the gamma to “0”, which makes DS ask tdlmake to linearise them; and for control maps, it must be “1” – meaning those maps should be treated as already linear (this becomes crucial for displacement and transparency maps!).
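
In pseudo-logic, that per-image tag works out to something like this – not DS’s actual code, just my mental model of what those Image Editor values mean:

    # My mental model of the per-image gamma tag - not DS's actual implementation.
    def linearise_for_render(texel, image_gamma_tag, render_gamma=2.2):
        if image_gamma_tag == 0:
            # 0 = "use the render gamma": de-gamma the map
            # (right for photo-based colour maps; same power curve as above)
            return texel ** render_gamma
        # An explicit value means "this map was encoded at that gamma";
        # 1 = already linear, so the value passes through untouched (control maps).
        return texel ** image_gamma_tag

So a bump map tagged “1” keeps its painted values exactly, while a colour map tagged “0” gets the de-gamma treatment.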

This setting is saved per-image, so don’t worry if you have a thousand surfaces all using the same texture – it is enough to set its gamma correction value once.

It gets a bit trickier when a designer uses the same map both as a colour map and as a strength map – not a very 21st-century thing to do, but, well, things happen. In that case you can set the correction value for one role, then open the “Layered Image Editor” and rename the texture (this creates a dynamic copy of it); then you can assign the correction value for its other role. You will have to do some manual reassigning (like pointing the bump channels at the copy), but thankfully it’s easy enough in the DS interface, because you can select multiple surfaces across multiple objects/figures at once.

And that’s basically it – all it takes to take advantage of a linear workflow. (Okay, DS most likely does not correct its colour pickers, but I’d say those are easy enough to eyeball.)

Generally, it also means it’s best to set the output gamma in the render settings to 2.2 (read the linked documents to see why – trust me, they don’t lie; maths and physics testify!). Then you’ll get a render that is closest both to how light behaves in the real world AND to how your eye sees it IRL (particularly important when using GI).

Okay, you could render at gamma 1 and then correct it in an image editor, but that’s not really a good idea for preview renders – let DS do the 2.2 thing for you.
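
And here is the other half of the round trip – encoding the linear render for a typical monitor, which is all that the output gamma 2.2 setting does (again a pure power-curve sketch):

    # Output encoding: linear render -> display-ready values, i.e. "output gamma 2.2".
    def encode_for_display(linear, display_gamma=2.2):
        return linear ** (1.0 / display_gamma)

    print(encode_for_display(0.218))  # ~0.5 - linear light lands back at screen mid-grey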

Why 2.2, you’re asking again?

Pleeeeeeease do read the relevant articles linked here!

And if even John Hable’s (particularly lucid, IMO) article doesn’t help you understand it, try this presentation, too:

http://www.pixsim.co.uk/downloads/The_Beginners_Explanation_of_Gamma_Correction_and_Linear_Workflow.pdf

Okay. I assume you’ve browsed the presentation. And you may have noticed it gives an option of “tonemap instead” at the end.

Those who use LuxRender know it lets you tonemap on the fly. 3Delight’s standalone i-display driver also lets you do some interesting things to your image while it’s rendering, but DS has its own display driver, which does not seem to be able to do anything like that. It just displays the render and saves it.

Besides, it only saves those boring 8-bit-per-colour-channel (thankfully, not 8-bit-per-pixel), low dynamic range images – while 3Delight can actually give you your render in HDR. Something to, actually, y’know, tonemap.

It turns out it’s easy enough to set up a render script that makes the built-in 3Delight engine render to file at full floating-point precision. The progress bar does not seem to display the percentage properly in this case, but that’s okay with me.
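
By the way, if you want to verify that the file you got really is full float, the classic OpenEXR Python bindings can show you the channel types – a quick sketch, assuming the OpenEXR module is installed and your render is called test.exr:

    # Peek at an EXR header to confirm the channel precision - assumes the
    # OpenEXR Python bindings are installed and "test.exr" is your render.
    import OpenEXR

    exr = OpenEXR.InputFile("test.exr")
    for name, channel in exr.header()["channels"].items():
        print(name, channel.type)  # FLOAT = 32-bit float, HALF = 16-bit float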

What DS render scripts lack is good documentation, so I still have not figured out a viable way to make them read atmosphere shaders from cameras – but I have managed to find an easy way to use those shaders as “Interior” volumes instead (the RiSpec says volume shaders can work as either, and – oh wow – DS complies!).

So there you go: the image in this post is a test render of a “nearly production” scene (okay, I probably shouldn’t be bandying around words like “production”, because I’m a lousy hobbyist, but, well, bad production is still production… LOL). It’s DM’s Lost Moments room with the UberSurface2 shader; G2M (Teen Jayden with some extra morphs transferred from Genesis and G2F, because I’m stingy) with my raytraced SSS shader; 3Dream&Mairy’s Orion hair with the UberSurface shader; and Mec4D’s AfterGym pants (which use UberSurface by default – I didn’t even mess with Cath’s settings much, but I did kill the reflection map, because those render differently in the raytracer). Oh yeah, and I’m using 3Delight’s dedicated raytracer. Haven’t you heard that 3Delight is actually two render engines in one? And I don’t mean that in the sense that one of the hiders is a hybrid REYES/raytracer – I mean that there are two hiders, and one of them is a full raytracer. The volume is Age of Armour’s EasyVolume shader (my favourite!), performing as an interior shader attached to a cube; the light is a very cool shader called PhysicalSun that comes with the 3Delight standalone installation, which I converted for DS via Shader Builder; and the very simple GI light is, again, mine. // I hope to be able to release my stuff this year – a kit with render scripts, that SSS shader, the GI shader and maybe something else //

Rendering this to EXR takes less than 5 minutes on my laptop (yeah, with GI, SSS and the volume – the volume only sees the sun, since AoA implemented light categories very well; light categories are one of the coolest features of RenderMan-compliant renderers, and sadly so few DS shaders actually utilise them). Okay, it’s noisy, because I’m only using 64 samples on the GI light, 4 samples on the PhysicalSun raytraced soft shadow, and default EasyVolume settings. It also has aliasing on the bright border of the specular highlight on his shoulder that I can’t seem to get rid of, whatever I try with filters and pixel samples – this sort of aliasing has plagued me for a long time, even with the REYES/hybrid hider.

But I still like it.

I tonemapped the EXR in Picturenaut and used Paint.NET to add an opaque layer behind the image (3Delight saves EXR with alpha, and there’s no geometry outside).
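
For anyone who’d rather script that last step than click through Picturenaut and Paint.NET, here is a rough Python equivalent – assuming imageio can read EXR on your system (e.g. via its FreeImage plugin), with Reinhard’s operator standing in for whatever tonemapper you prefer, and with made-up file names:

    # A scripted stand-in for the Picturenaut + Paint.NET step: tonemap the HDR
    # render, composite it over an opaque backdrop, encode for display.
    # Assumes imageio has EXR support and straight (un-premultiplied) alpha;
    # adjust the compositing if your EXR is premultiplied.
    import imageio.v3 as iio
    import numpy as np

    hdr = iio.imread("render.exr").astype(np.float32)
    rgb, alpha = hdr[..., :3], hdr[..., 3:4]

    rgb = rgb / (1.0 + rgb)                         # Reinhard tonemapping (linear in, linear out)
    background = np.array([0.1, 0.1, 0.12])         # arbitrary opaque backdrop colour (linear)
    rgb = rgb * alpha + background * (1.0 - alpha)  # alpha-composite over the backdrop
    rgb = rgb ** (1.0 / 2.2)                        # output encoding - the same 2.2 story as above

    iio.imwrite("render_tonemapped.png", (np.clip(rgb, 0, 1) * 255).astype(np.uint8))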
