Carrara. ... and Unity?
Mistara
Posts: 38,675
The 2 Carrara hairs I've been waiting on didn't make the hair sale. Figures.
Under compatible software it says Carrara and Daz to Unity Bridge.
I've never heard of Unity.
Is it like a po' boy version of Carrara or Ray Dream Studio?
https://www.daz3d.com/dp-aiko-6-sport-dynamic-hair
https://www.daz3d.com/michael-6-sport-dynamic-hair
thanks.
Comments
Unity is a game engine. You can find examples of it being used as a real-time renderer for animation as well.
and the bridge can actually make carrara hair work in it?
wondering if Unity is worth investigating. I don't want to make games, but can it render AVI or MP4 files in FFVII Remake quality?
does this mean carrara physics works in unity?
oh, figures - wrong label. Spent the last few hours looking up Unity.
I didn't buy the hairs.
The way I read that label is that the product can be used in Carrara, or it can be used inside Unity via the Daz to Unity Bridge (from within Daz Studio). Those are two very separate use cases. As far as compatibility with game engines is concerned, you can use Carrara to prepare assets but your final output must satisfy the specifications of the game engine. That is to say, everything that works natively inside of Carrara doesn't necessarily translate directly over to a game engine, but that doesn't mean Carrara can't be part of the pipeline.
Daz (and Reallusion) are trying to make their content libraries available to a larger audience that either didn't take them seriously before or found the content too computationally expensive to utilize. That's why you're seeing these bridges to Unity and Unreal along with Maya, Max, and Blender. With the increased popularity, accessibility, and blurring of traditional workflows, that makes a lot of sense right now. It's a lot to learn, but if you're interested in animation, I would suggest at least keeping tabs on it until you find a comfortable entry point or something that inspires you to deep dive into learning more about it.
adding 2c,
unity and unreal are entire dev and presentation 3D world platforms that are intended for real-time game/interactive use by end-users.
One ruler to help measure their place relative to 'our' own 3D scheme, is that neither of those engines export squat (well, except live and captured video/images). They are the end point for their intentions, period. This is great, by the way, but simply where they fit.
Both environments (everything DS/Carrara do, and more) can be practically free to use/develop in, as they have indy-friendly business models - so we can all play and dream, hence the interest by DAZ/Reallusion in providing their content (and 'interactive licenses' - note the recent push on the DAZ main page? - yup...) to this marketplace. It's a prudent business direction/decision.
The unity and unreal world developer would typically create their mesh worlds from DAZ/Reallusion/SketchUp/TurboSquid/Rendo assets (and licenses), then add their figures (from the same sources) with animation/rigging (FBX/Collada/...) to those worlds. (Think actors and sets - much like the games I believe you already play.)
Then within their respective scripting contexts, developers map interactive user-controls to 'canned' figure animated events and/or they script logic to take commands from the users and the story logic/scripts to generate the reactions and results in an interactive context (if user does X, then figure does Y, and if figure A hits figure B with Axe, then activate gory death animation, etc.). Add camera controls, physics engines, lighting, atmosphere, rendered styles, etc. to add the magic that we hand-key in our animations...
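To make that "if user does X, figure does Y" idea concrete, here's a minimal sketch in plain Python (Unity actually scripts in C# and Unreal in C++/Blueprints, and every name below is invented for illustration, not any engine's real API):

```python
# Hypothetical sketch of mapping player commands to canned animation events.
INPUT_TO_ANIMATION = {
    "attack": "swing_axe",
    "jump": "jump_loop",
    "idle": "idle_breathe",
}

def on_player_input(command, target_hit=False):
    """Pick an animation clip for the figure based on the player's command,
    plus an optional reaction event driven by the story logic."""
    clip = INPUT_TO_ANIMATION.get(command, "idle_breathe")
    # Story logic layered on top: if figure A hits figure B with the axe,
    # activate the victim's gory death animation.
    reaction = "gory_death" if (command == "attack" and target_hit) else None
    return clip, reaction

print(on_player_input("attack", target_hit=True))  # ('swing_axe', 'gory_death')
```

In a real engine the same shape shows up as event handlers wired to input bindings and animation state machines; the point is just that the "magic" is preset clips plus conditional logic, not hand-keyed frames.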
It's all very-much a realtime interactive environment at the end, but uses a lot of preset elements that are generated using essentially the same 3D elements that we all work with. Hence the urge to find overlap. We also do this with tools like Marvelous Designer - very much the same, but different.
Take a look at the latest unreal-5 engine demo that Wendy posted a few weeks back and understand that the figure is being 'driven' through that world by a user with an Xbox-like controller, and while the figure is driven by the user, the world itself (lights, gravity, dust, bugs, rocks falling, etc.) is programmatically generated and reacting with the meshes and environment fed to the renderers (masterfully pre-set-up), and also responding to the user/figure as it interacts with their world(s).
unreal engine 5 demo:
Then, last week they released this stuff with interactive super-real people. Cool/humbling:
To be honest, everything that goes on in those demos makes me believe that *if* that can all happen in realtime on a high-end Intel processor, and a pair of NVIDIA 3070s (cavemen tools, relative to even human brains), that perhaps we *are* all living in a simulation of some sort :) If so, I really want to have a chat with the guy/gal that's got my controller in hand, heh.
Add to that, that the whole Oculus Rift immersive headset environments are doing all of this in realtime, all around you (picture 15 4K TVs' worth of imagery updating at 30-60 frames per second, in every direction around you - in 3D stereo, so that's x2 for each frame - jeez).
Check out the demo - unity has similar goals and strengths/differences, etc. - it's mind-blowing and humbling stuff, and hopefully, after seeing/reading this, the framing of all of this unity/unreal stuff will fall into place relative to Carrara/DS/Poser/C4D/Maya more readily. For animators, it just changes the game entirely, but the enjoyment of 'old school' still has its own value. Think Ferrari and convertible VW Thing, heh. I think I'll take the 'Thing' and some AM radio or 8-track for the win.
Personally, I'm obviously impressed, but kind of afraid of it all, so I'll go back to my humble DS/Carrara world and run another render or two, and be happy and impressed with my mastery (!) of this domain.
That the humble pencil and paper can still enable the most powerful of ideas to propagate, leaves me quite inspired with the tools I'm able to explore in the comfort of my own space. In both cases, it's safe to blame the operator.
best to you misty,
--ms
Yes, what PGre and mindsong said.
I was actually considering going this route to assist in what I'm doing - drawing on the AI routines I scripted years ago for Neverwinter Nights, which tell characters what to do according to where they end up in the world - for myself, on very small scales as far as game engines are concerned.
I'm beginning to have my doubts that this truly is where I want to go with this and, more times than not I'm finding myself just wanting to get back to doing pretty much everything directly in Carrara.
It's really cool stuff to play with and study - even experiment in (I haven't gotten there with a game engine just yet) - but I for one feel that I have to be careful not to get too drawn in unless I really plan to finally jump into it!
exactly - and the demos (and actual capabilities) of these tools are absolutely mind-bending, but I also believe that the scene-setup process for that demo's figure to walk through that world had to be weeks of work - the birds, the bugs, the dust - let alone the actual world - they don't just appear and 'do things' without setup and 'direction'. (the elusive "make art" button...)
The most likely path to my actually producing my 'vision' is certainly not chasing the latest silver-bullet "shiny" - not enough time, money, or brain - but it does include being aware of 'nearby' technologies that allow for faster renders, better sims, and similar tweaks that can complement an existing workflow toolset, and more importantly, the *skillset* that lets our learned habits turn 3D clay into Wallace and Gromit's most naturally.
Sure, once set up, the ability to shoot/direct an animation within the unity/unreal world is obviously far more 'creative' and interactive in realtime, but I also believe that a *lot* of energy has been spent completing a world that won't be seen for most of the shots, whereas the traditional animation process takes more forethought (ask sci-fi funk about that...), but is less cumbersome when only the world in the camera lens matters and needs to be built. My problem is I'm doing the worst of both - traditional animation for immersive/360-degree worlds. Go figure.
cool stuff,
--ms
It is a reasonable conclusion. Physicist Tom Campbell has written a lot on the subject, and has produced dozens of videos.
Keeping the illusion going requires enormous computing power. But even that power fails occasionally to cover all the bases, and we experience something that seems impossible to explain.
Sorry, that was truly out in left field. I now return this thread to its regularly scheduled reality. :)
The really cool thing about using game engine technology for animation, however, is that the gaming community is immense!
It's often not that hard to find scripts that are already written and ready to go for many ambient and hero actions - like scaring birds from the thicket, a puma stalking deer, people milling about town looking for where they left their shoes, etc.
These elements can be saved to libraries accessible any time we need them. We could even create a small scene of nothing but (to use a previous example) a bush and a waypoint with a script attached that summons (creates) x number of birds and has them rapidly fly out, swarm and flock and then fly away, and save that to the library. The script will cause a different result each time - and each one will be worth keeping if shot in video.
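A minimal sketch of that bush-and-waypoint idea in plain Python (not any engine's real API - the function and field names here are invented): a reusable routine that spawns x birds with randomized headings and speeds, so each invocation produces a different flock:

```python
import random

def scare_birds(count, seed=None):
    """Spawn `count` birds at a bush waypoint, each with a randomized
    initial heading, speed, and climb rate, so every run looks different.
    Pass a `seed` only when you want a repeatable result."""
    rng = random.Random(seed)
    birds = []
    for i in range(count):
        birds.append({
            "id": i,
            "heading_deg": rng.uniform(0.0, 360.0),  # scatter in all directions
            "speed": rng.uniform(4.0, 9.0),          # burst speed, m/s
            "climb": rng.uniform(1.0, 3.0),          # upward burst out of the thicket
        })
    return birds

flock = scare_birds(12)
print(len(flock))  # 12
```

Saved to a library, something like this is exactly the "different result each time" behavior described above - omit the seed and every shot of the bush gives you new, keepable footage.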
These AI-type scripts really had me doing some fun and crazy things back in the day!
But many of these sorts of things can likely already be found as script examples within the engine. The author will create many, many possibilities that aren't currently active (commented out), with full comments on their use and how to adjust them. Background folk ambient-behavior scripts are a real blast to read and set up!
As mindsong says, however: very time-consuming. It's different if you have a team to work with, but otherwise all of the time we spend working on scripting could have been spent on something else, and vice versa.
Heh - recent Tom Campbell interview planted that seed, and the Unreal demo was fertilizer... Interesting lens to look at it all through. I'm not a Dr. Who follower (maybe 10 episodes seen?), but were you to watch an episode, 'The Library' gets the gears going in the same direction/way.
Enjoyed the thread diversion :^)
--ms