Licensing Agreement | Terms of Service | Privacy Policy | EULA
© 2024 Daz Productions Inc. All Rights Reserved.
Comments
Reality has become completely malleable. Here we are creating fake people as art, and now people can create completely fake videos with celebrities and politicians. No one will know what's real and what isn't, which just seems like a perfect extension of everything about 2020... The human species is doomed.
...+1 (BTW love Jeff Goldblum [and hate the site software as I always have to log back in after being in the store])
...think I'll pass on this, I feel I've done well enough on my own
Greetings,
It's not quite as fatalistic as that. It's not terribly difficult to catch them right now, especially with certain motions. And while I'm idly interested in audio GANs, I'm not convinced anybody's got a good one yet.
Eventually it will be possible to make ones that are very hard to detect, but even then, it will be possible to prove that a person wasn't there when the recording supposedly happened, for example. It means that fact-checking is even more important.
There is already a significant underground of people putting famous actors and actresses into medium-length porn videos, but willfully suspending disbelief is a key part of that medium. Politics is much, much harder, because you have to make it plausible and difficult to disprove.
Nonetheless, we will survive when it's a little harder to tell if that outrageous thing is really something someone said, or if it's fake.
I prefer to focus on the use that makes it possible to create really interesting art, art that doesn't suffer from some of the uncomfortably 'not quite there' problems that too much 3D art does. If you've seen a program named "GauGAN" (put it in quotes, or you'll get the artist, who's pretty awful as a person), it provides a really interesting direction for some of this. You can sketch with 'concepts' and it draws them as things, based on its database of images of that concept. So a tree drawn in sketch form fills in as a realistic-looking tree in the output. If, instead of raw sketches, you could provide a render, the networks behind it could use a large sampling of images to 'improve' the image, in the same way you can 'unblur' an out-of-focus image using a GAN.
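For the curious, here is a loose toy of that "sketch with concepts" idea in plain Python/NumPy. To be clear, this is not a GAN at all: GauGAN proper uses a trained conditional network, while this toy just fills each concept region from made-up per-concept colour statistics, as if they had been estimated from a database of photos. The labels, colours, and sketch are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "database": per-concept colour statistics, standing in for what a real
# model would learn from many reference photos. (Numbers are made up.)
CONCEPT_STATS = {
    0: {"mean": np.array([110, 160, 235]), "std": 12.0},  # 0 = "sky"
    1: {"mean": np.array([ 40, 120,  45]), "std": 20.0},  # 1 = "tree"
}

def render_label_map(label_map):
    """Turn an integer concept map into an RGB image by sampling every
    pixel from that concept's colour distribution."""
    h, w = label_map.shape
    img = np.zeros((h, w, 3))
    for label, stats in CONCEPT_STATS.items():
        mask = label_map == label
        n = int(mask.sum())
        img[mask] = stats["mean"] + rng.normal(0.0, stats["std"], size=(n, 3))
    return np.clip(img, 0, 255).astype(np.uint8)

# A crude "sketch": sky everywhere, with a tree-shaped block at the bottom.
sketch = np.zeros((64, 64), dtype=int)
sketch[40:, 20:44] = 1
out = render_label_map(sketch)   # 64x64 RGB image
```

The real thing replaces the lookup table with a generator network, which is why its trees look like photographed trees instead of coloured noise; the interface idea (label map in, image out) is the same.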
I wish I had the time and math knowledge to try and build it myself...it'd make a hell of a DAZ plugin. ;)
-- Morgan
It'd be the ultimate plug-in! "Real-ify."
I don't have any real knowledge of GANs beyond a very basic idea of how they work. But it would seem to me this would be totally within their wheelhouse.
Mixed feelings like most everyone at the moment. Must note how darned beautiful she is! Yes, getting all that realism from direct renders would be nice. Fascinating for sure! This AI is quite intelligent and skilled.
https://github.com/iperov/DeepFaceLab/
That's the kind of scary part. You don't need that much math. The code libraries already exist. You could put together a GAN with a fairly basic level of coding skill (it might not be a great GAN, but as long as it worked at all it would still start training, and that's the whole point).
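To back that up, here is a minimal from-scratch sketch of the adversarial idea in plain Python/NumPy: a one-weight generator and a logistic-regression discriminator learning a 1-D Gaussian instead of images. Everything here (architecture, learning rate, step count, the target distribution) is an illustrative assumption, not a recipe, but it is a genuine GAN training loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the generator should learn to imitate: samples from N(4, 1.25).
def real_batch(n):
    return rng.normal(4.0, 1.25, size=(n, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: a single linear unit, z -> g_w*z + g_b, with noise z ~ N(0, 1).
# Discriminator: logistic regression, x -> sigmoid(d_w*x + d_b).
g_w, g_b = 1.0, 0.0
d_w, d_b = 0.0, 0.0
initial_mean = g_b              # generator's output mean before training
lr, batch = 0.02, 64

for step in range(3000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    z = rng.normal(size=(batch, 1))
    fake = g_w * z + g_b
    for x, label in ((real_batch(batch), 1.0), (fake, 0.0)):
        grad = sigmoid(d_w * x + d_b) - label    # dLoss/dlogit (cross-entropy)
        d_w -= lr * float(np.mean(grad * x))
        d_b -= lr * float(np.mean(grad))
    # Generator step: push D(fake) toward 1 (non-saturating GAN loss).
    z = rng.normal(size=(batch, 1))
    fake = g_w * z + g_b
    grad_logit = (sigmoid(d_w * fake + d_b) - 1.0) * d_w  # chain rule into G
    g_w -= lr * float(np.mean(grad_logit * z))
    g_b -= lr * float(np.mean(grad_logit))

# After training, the generator's samples drift toward the real mean of 4.0.
fake_mean = float((g_w * rng.normal(size=10000) + g_b).mean())
```

Neither network here is sophisticated, yet once they train against each other the generator's output moves toward the real distribution, which is exactly the "as long as it works at all, it starts training" point above. Image GANs swap in convolutional networks and a framework like PyTorch or TensorFlow, but the loop is the same shape.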
Three years ago my datacenter was mostly databases and websites. We had, IIRC, one cabinet doing AI training. Now it's more like half the center, and it's only half because we can't get the Quadros for more than that. GANs don't yet, to the best of my knowledge, have commercial applications, but other forms do (although I personally think the ones being used to "predict" the stock market are bogus).
Relying on videos to "fact check" stopped being an option about 20 years ago. And relying on social media to "fact check" never was an option.
Too bad that too many people still think it is...
Okay, okay, last one I promise.
This time she's Margot Robbie. Personally, I think she's prettier.
Fascinating work, Emoryahlberg, thanks for sharing.
It's kind of like the opposite of Hollywood stunt stand-ins. In that case, they film the stunt double and then map the star's face onto them. But here you've got a clip of a famous person doing something and are mapping a fictitious face onto the star. Interesting.
+1
This.
It's my opinion that we need to stop trying to slow the inevitable spread of the tools that spread misinformation and start addressing the "culture" of misinformation spurred on by social media and fake news.
This is cool. The tech isn't what scares us, but what people will do with it and (even more so) what they'll believe.
- Only two things are infinite, the universe and human stupidity, and I'm not sure about the first.
- Albert Einstein
Warning: when I tried to download that product, Windows Defender on Windows 10 blocked it, saying it could harm my computer. The massive 14.5 GB DeepFaceLab zip file from Mega that you can find on this page, however, triggers no such warning:
https://github.com/iperov/DeepFaceLab/
As I only have an nVidia GeForce GTX 1650 Super 4GB and 32GB of system RAM, I don't know if I'll mess with it anytime before next spring though.
OOOOH for DAZ, or better yet for Blender with Eevee!!!!!
+ 0.5
Why?
Technological changes and scientific advances always come with both a cost and a benefit.
It's up to us to mitigate the cost and emphasise the benefit.
For those who think this is a paid mobile-phone app or an nVidia-GPU-only option: at the DeepFaceLab link I posted above there is an OpenCL self-extracting EXE named DeepFaceLab_1.0_OpenCL_build_01_11_2020.exe in the "DeepFaceLab 1.0" folder that runs on AMD GPUs and, yes, slowly, but it would work on Intel GPUs as well. You want to use the nVidia version for nVidia GPUs, although technically OpenCL works there as well.
Also, to cut down on the 14.5 GB if you wanted to: you only need to download the zips inside the Facesets folder, the zip in the PretrainedModels folder, and one build, either the AMD/OpenCL version (DeepFaceLab_1.0_OpenCL_build_01_11_2020.exe) or the nVidia version (DeepFaceLab_NVIDIA_build_08_02_2020.exe, the latest available compiled nVidia version, Aug 02 2020).
If you compile it yourself, there are many auxiliary libraries and tools you have to compile and install. They are all free, but you will get an education in build chains.
Yeah, that doesn't always work out. In this case, there's not a lot of societal benefit compared to the potential societal cost.
One thing came to mind as I was browsing the forum... Apparently the technology uses thousand(s) of photos of the "victim" to make the illusion as good as possible at any angle and/or emotion. Couldn't the same kind of method be used to create a better likeness when building a 3D model of that "victim" as well?
I think that is typical, although with this software (which only creates GIFs), you only need one image.
Really cool. Surpasses all my expectations.
Sorry but fake people have been around for a long time. Although they weren't very deep. This is just going the last mile and bringing them into the digital world as the vanguard before we all move there.
Expect me to sit in my comfy chair and wave you all, those who go digital, a friendly bye-bye...
More bodiless digital humans in the digital world means more steak for me.
I thought we were going to Mars.
there is no spoon
Space may be the final frontier, but it was made in a Hollywood basement.
But then again... since everything you perceive through your sensors is transferred to your CPU as electrical signals, how do you know that those signals haven't been tampered with? Maybe they are completely artificial, generated by some pimple-faced alien playing with his imaginary friends.
Going down the rabbit hole, only one thing remains... "I think, therefore I am"... Somewhere in the universe exists an entity in some shape or form that can call him/herself "Me" due to the fact that it is capable of thinking, and everything else is just a figment of this being's imagination.
The steak you think you see is nothing but bait; take a nibble and see what the being has conjured up for you.
The thing is, the reason they look so much better is that the deepfake tech completely scraps the way DAZ figures make facial expressions; the DF techniques map a shape pattern over a face that is actually bending and posing in a completely natural manner. Unfortunately, that can't be piped back into Studio, because DAZ figures simply don't have the kind of facial flexibility to directly import those expressions, especially the Genesis 8 lineup with its over-reliance on facial bones; most real facial muscles create expressions via overlapping layers of influence rather than revolving around a fixed set of pivot points. Now, if you're willing to completely scrap all of the bones and instead import expressions that are primarily created by whole-face morphs, you might be able to get something similar.