picture me unwrapping the Dawn SE package

In the readme: yadda yadda yadda, 3Delight users might want to enable GC and set the output curve to ‘something like 2.2’ (c), yadda yadda.

Kettu: *hmmm promising*

In the surface tab, after applying a supplied 3DL mat: AoA Subsurface on everything; cornea with SSS ‘on’ but SSS strength 0 (here’s hoping there’s a check against that zero in the shader mixer spaghetti); pinked-out diffuse and similarly pinked-out SSS *blink blink wipe eyes blink blink*; param values with arbitrary precision (does a shading rate of 4 specified to two decimal places really make a difference?)…

Kettu: *what was that about that most common application of good intentions in civil engineering?*

No. Rly. Why.

It’s not even DAZ – there is no real need to force AoA Subsurface on users – it’s not a “Hivewire original”, y’know. Why not roll a custom network? Fresnel and all?

I’m not even going to look at the Iray preset. There might be all sorts of maps in the ‘translucency’ channel.

But Dawn has nice shoulders by default. And nice hip morphs. And, well, she comes with all those morphs, a bikini and a hairstyle. And she shares UVs with Dusk. The SE textures are also quite pretty by themselves (totally pwning the original ones).

There’s a lot of good things about Dawn.

Weird mats, well… It’s just… sad. Sad that most folks are so clueless.

It’s not like we’re talking something as self-contained and quirky as, say, the Carrara built-in renderer.


12 thoughts on “picture me unwrapping the Dawn SE package”

  1. It recently dawned on me (no pun intended) what an SSS scale value of 1 or more (in the existing Studio shaders) really means… depending on the units, it’s actually calculating SSS deep within, or even all the way through, the mesh! Far deeper than any light that isn’t going to cause physical damage (like burning out one’s retinas) is ever going to get…

    It’s no wonder we end up with settings that need all these extra maps and what not.

    1. Yeah… translucent jelly zombies.
      Did I mention I think I managed to get a more or less “visually plausible” SSS out of Iray? After setting up an interior with a camera based on that thread you linked to, and putting Jensen-ish values in (bluish actual ‘translucency’ and orange-ish transmission, with their scales in a similar ratio to what the Jensen paper says). The ratio turned out to be important. With it intact, the values can all be scaled up or down a couple of times over and still look okay. No maps needed, and the ears respond to geometric thickness in a coherent way.
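(For the curious, the “keep the ratio, scale both” trick boils down to something like this; the channel numbers below are made up for illustration, they are not the actual Jensen values, and only the ratio logic matters:)

```python
# Toy sketch of ratio-preserving SSS scaling: multiply both the
# scatter and transmission distances by the same factor k, and the
# per-channel ratio between them stays intact.
# The RGB triples here are illustrative placeholders, not real data.

def scale_sss(scatter_rgb, transmit_rgb, k):
    """Scale both parameter sets by k, keeping their ratio intact."""
    return ([c * k for c in scatter_rgb],
            [c * k for c in transmit_rgb])

scatter = [0.4, 0.5, 0.9]    # made-up bluish scatter values
transmit = [0.9, 0.5, 0.2]   # made-up orange-ish transmission values

s2, t2 = scale_sss(scatter, transmit, 2.0)

# Per-channel ratios are unchanged after scaling:
for a, b, a2, b2 in zip(scatter, transmit, s2, t2):
    assert abs(a / b - a2 / b2) < 1e-9
```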

      1. That is perfect…

        And proof that the models work. Geometry + accurate scale should equal realistic looking renders…without all the extra crap that people who can’t grasp that simple fact want to slap onto everything…IF the models are anything more than a WAG (wild ass guess).

        Almost got into a big argument with a PA the other day on whether omnifreaker’s SSS is ‘broken’ at a fundamental level, or whether it is just very finicky and very hard to control because it’s, well, not the most ‘logical’ way of doing things (yes, there are problems with it… but not of the ‘it just doesn’t work’ kind). AoA replaced UberSurface because US and US2 are ‘broken’… not because AoA is new and shiny (and supposedly more ‘artist friendly’… but that’s another long rant).

        More than three-quarters of the problems with US and US2 come from the absolutely idiotic settings that usually get slapped into them. The same mistakes are being repeated in AoA’s shader, in Iray, and in just about everything else.

        Folks just don’t think in terms of ‘scale’…a bigger number there must be better, right? Hell, all those great renders done in Poser…the figures that look like they’d melt under the lights needed to photograph them, if they were real…have to be copied and are ‘real’, right? That’s what SSS is supposed to look like…

        That’s one of the things about an SSS based on DMFP (diffuse mean free path)… it sort of accounts for scale. But other ways of doing it work too, if scale is accounted for.

        Something else I noticed when running cloth sims in Blender: getting everything into Blender scale first is faster. If the mesh is too tiny (Poser scale) the sim won’t run right, and if it’s in Studio scale (it comes in too large) it won’t run right either (a dress literally becomes a tent… and then the sim is trying to deal with a tent, not a dress)! So scale DOES matter.

        So when doing effects that are dependent on scale, knowing what it is and accounting for it is key.
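(If anyone wants to sanity-check their scales, here’s a toy converter. The unit conventions assumed below are the commonly cited ones, 1 Blender unit = 1 m, 1 DAZ Studio unit = 1 cm, and 1 Poser native unit of roughly 2.62 m; double-check against your own installs before trusting it:)

```python
# Rough unit-conversion helper for moving lengths between apps.
# Assumed conventions (commonly cited; verify for your own setup):
#   Blender:    1 unit = 1 m
#   DAZ Studio: 1 unit = 1 cm
#   Poser:      1 native unit ~ 2.62 m
UNIT_IN_METERS = {
    "blender": 1.0,
    "daz_studio": 0.01,
    "poser": 2.62128,
}

def convert(value, src, dst):
    """Convert a length from src app units to dst app units."""
    return value * UNIT_IN_METERS[src] / UNIT_IN_METERS[dst]

# A 1.70 m figure is 170 DS units but only 1.7 Blender units:
print(convert(170, "daz_studio", "blender"))
```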

        And on a side note, one of the Luxrender devs is pushing to do away with SSS in Luxrender, entirely. He wants to move to a pure volume based material model. With it, the scattering will be accounted for, automatically, within the material parameters. Nothing else will need to be done. You make a skin material and it will have volume and feed the proper data to accurately calculate the scattering. Unfortunately, he’s kind of being outvoted on the idea…

        1. Scale is everything. I don’t know what is being taught in physics lessons in high schools around the world (I heard it’s possible to opt out of physics in the US? And, like, choose biology INSTEAD?!), or at college level actually, but in the particular Russian tech university I graduated from, the real-world scale effect is HAMMERED into you. This is why you can’t build a tabletop-sized model of your mile-high structure, test it against some impact and just multiply the results up: the same materials respond VERY differently to all the interactions when you have a small object versus a scaled-up version of it.
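(The classic square-cube law in a few lines, for anyone who did skip those lessons:)

```python
# Square-cube law: scale a structure up by factor k and its volume
# (hence its weight) grows by k**3, while the load-bearing
# cross-sections grow only by k**2, so stress grows linearly with k.

def stress_ratio(k):
    """Relative material stress of a k-times scaled-up structure."""
    weight_ratio = k ** 3   # mass scales with volume
    area_ratio = k ** 2     # strength scales with cross-section area
    return weight_ratio / area_ratio

# A structure 100x the tabletop model carries 100x the stress per
# unit area, which is why you can't just multiply the test results up.
assert stress_ratio(100) == 100
```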

          Broken? Hard to control? I’m speechless. Okay, ‘artist friendly’ is damn subjective, but ‘broken’… Not US2. And ‘finicky-schminicky’, even the weirdness of the original UberSurface is not that hard to counter.
          I mean, what did I write that SSS tutorial for? It’s been free for, like, over a year. Are PAs so self-assured they won’t stoop to reading freebie stuff?
          Another thing that bothers me about those PAs is that they will often argue about things they don’t fully understand on a tech level… even my favourite PAs have done it.

          Have you done fluid sim in Blender? I haven’t gotten to it yet, but I read SickleYield’s tutorial on it and it puzzled me why she said she can’t run a sim of anything over 4 meters wide. She has that monster powerful computer. And she had to change water params, too. Could it all be an issue of forgetting to account for scale?

          Volume-based model is great, I think. Must be a beast to implement, though.
          Oh, fun stuff: there’s a lot of people around believing that bones influence skin scatter (not transmission like fingers/ears, just scatter). This actually makes me think they never bothered to look up how far light can actually penetrate…
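(A quick back-of-envelope with Beer–Lambert-style attenuation; the ~1 mm mean free path below is an assumed, skin-like order of magnitude, not a measured value:)

```python
import math

# Beer-Lambert-style falloff: I(d) = I0 * exp(-d / mfp).
# The 1 mm mean free path is an assumption, just a skin-like
# order of magnitude for this back-of-envelope estimate.

def transmitted_fraction(depth_mm, mfp_mm=1.0):
    """Fraction of light surviving to a given depth."""
    return math.exp(-depth_mm / mfp_mm)

# At 10 mm depth, still shallower than most bones sit, less than
# 0.005% of the light is left -- nowhere near enough to 'see' bone.
assert transmitted_fraction(10.0) < 5e-5
```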

          1. I’ve done two fluid sims in Blender… one worked nicely, and it was huge. The other was a pain, but I eventually got it behaving.

            If the light is strong enough to ‘see’ the bones…then why do we need x-rays when we break one?

            The entire omnifreaker line of products is not easy to get to behave properly, especially with the lack of solid documentation. But saying they are broken, the way so many seem to think they are, is like saying that 3Delight itself is broken. Just because these PAs have joined the ‘I love Iray’ fanboy club (because even a blind monkey can make something look halfway decent by pushing a button) doesn’t mean that everything else is ‘broken’.

            1. And you used the Blender scale for the fluids, right? Did you need to change water params?

              The key word is _halfway_ decent. I wish more took the time and effort to master something before doing all that bandwagon-hopping…

                  1. It’s fun. And pretty easy, too. Plus, if you are doing something like an ocean, you can freeze it and export it, like a cloth sim.

                    1. Sweet =) The only thing still missing is a true volumetric shader for water… One built for speed. I have no idea if 3DL extensions like the VDB dll or the VolumeTracer one are going to work with the DS built-in 3DL, so if they don’t, I’m thinking that it should be possible to write a placeholder ‘dummy’ shader (with matching input names), hook it up to DS, and then when the RIB is done, swap the dummy with the real thing and feed it to the standalone. What do you think?

                    2. On the RIB level… yeah, it’s possible… in fact very easy. But I think it’s not going to be much of a problem… as long as it can be ‘force-fed’ in DS, it runs pretty much the same as in the standalone. The hard part is getting the support scripts right, so it’s actually usable. I’ve fed hard-coded parameters (no user input) to DS to use precompiled shaders with minimal support scripts before. But for something like volumes that isn’t really an option.
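(Something like this on the RIB text, I mean; a minimal sketch of the dummy-swap idea, where the shader names are hypothetical placeholders and the dummy was built with matching parameter names so the parameter list can stay untouched:)

```python
import re

# Sketch of the 'dummy shader' swap: DS writes the RIB with a
# placeholder surface shader, and we rewrite the shader name to point
# at the real compiled shader before handing the file to the
# standalone renderer. "dummyVolume" / "VolumeTracer" are hypothetical.

def swap_shader(rib_text, dummy="dummyVolume", real="VolumeTracer"):
    """Replace only the shader name in Surface calls, leaving the
    parameter list (which the dummy was built to match) untouched."""
    return re.sub(r'(Surface\s+")%s(")' % re.escape(dummy),
                  r'\g<1>%s\g<2>' % real, rib_text)

rib = 'Surface "dummyVolume" "float density" [0.5]'
assert swap_shader(rib) == 'Surface "VolumeTracer" "float density" [0.5]'
```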

                    3. Scripts are kinda ‘easy’ to me now – now that I have spent so much time with them =) What makes me unsure about the DS build is that I didn’t find those standalone DLLs in the DS folders… but then, maybe they are precompiled into that linking DLL that is DS-specific.
                      I need to figure out the VDB shader.
