AI is going to be our biggest game changer

Comments

  • algovincian Posts: 2,577

    generalgameplaying said:

    Diomede said:

    In my opinion, Adobe at least is making an effort to train its art AI with images and data it believes it has the legal rights to. 

    "The rights" ;). I see PR and opt-out trickery. "Be fast, (break things,) profit."  PR always believes they have the right to do something.

    In this particular case I don't know many details, so I don't want to take a pro/anti position ad hoc, but I also don't want to have to dig up things like "oh, opt-out with tricks, whatever tricks means" after reading "we're doing it in a responsible way". Ethically it reads like "more responsible than the other guy with a gun, with a gun". Really, I have only consumed a few bits and pieces of "news" and "comments", so maybe they took a couple of weeks to ask customers, to stay in the market, and so on. But this also means that artists who don't want to be surprised have to publish in places where the license is THEIR choice and the legislation prevents certain kinds of TOS/licensing tricks - or stay on guard on a [period of time granted by law] basis. Anyway, I just read PR when I read "effort", meaning opt-out via a TOS update. And "openly licensed" is not "we have any future rights to"; it's just foggy. It also means I have to dig up what Adobe trained with. Their PR statement did not contain a reference to a list of internet places they scanned, for instance (unless I overlooked it; feel free to correct me if I'm off anywhere).

    Maybe better than others, even likely. But green zone?

    "The first model of the Firefly family is, according to the company, trained on "hundreds of millions" of images from Adobe's Stock photo catalog, openly licensed content and stuff from the public domain, virtually guaranteeing the model won't result in lawsuits as StableDiffusion did with the Getty unpleasantness. It also helps ensure that Stock photographers and artists will be compensated for the use of their works in training these AIs."

    https://www.engadget.com/adobe-is-bringing-generative-ai-features-to-photoshop-after-effects-and-premiere-pro-130034491.html

    - Greg

     

  • generalgameplaying Posts: 506
    edited March 2023

    generalgameplaying said:

    Creativity isn't bound by much. Often being constrained to a set of assets, or tools, thinking of some crayons or maybe dirt and water, or wood and stone (...), leads to people becoming creative. The text-to-image suggests "everything" which seems the opposite of "confined", though of course it is confined in several ways. But look at it this way:

    To elaborate on the creativity question, consider these invented examples:

    1. You're confined to wooden sticks and sand. How will you model "the nose"?

    2. With a generator, letting it generate "the nose, made with wooden sticks and sand", what will it look like? A perfect nose consisting of millions of pieces of sand and sticks? Something resembling what someone else made with sticks and sand, even where theirs was different, or maybe a statistical middle ground of several such instances? Maybe tagging will become more precise and you could exclude "realistic" results, but would you?

    Of course the confinements are different for each case, and there is no way to compare it 1:1. Just a thought experiment on creativity in the context of constraints making a difference.

    Post edited by generalgameplaying on
  • generalgameplaying Posts: 506
    edited March 2023

    PixelSploiting said:

    I don't care about AI generated images, but it does look like some kind of bias is present. Legalities aside, because the AI might not be using a "legal" dataset, but... What if a render artist or a photographer retouches, let's say, 20% of their output, but an AI user retouches 30% of the image?

    Fair question!

    I'd say the AI does little by itself, though. In a way it does a lot, of course, but you have to push it to. Apart from a photographer re-using their own assets being based on their own IP deterministically, which is one potential difference, I see two areas of interpretation ad hoc:

    1. Repetition. That will certainly happen. But a photographer will have internalized the copyright question concerning other people's works, while the cloud AI generator will reproduce the same output for the same prompt until re-trained, or even forever. This may mean million-fold for different people, in the case of popular combinations, maybe with nuances of text leading to anything from subtly to wildly different results. Or the AI artist just reuses prompts or generated images to alter; there certainly is no difference in that action by itself.

    2. "Inspiration." The photographer makes a living, perhaps. Rehashing and reusing their own assets is fair; humans don't die from that. The AI being re-fed its own output means increasing entropy. Curated data sets may help, but think of a science paper written entirely with a language model: will it be filtered from the input? In case of indistinguishability, the large numbers will count. The consequence won't be the sudden death of the "AI", but it might produce increasingly less varied output. Maybe we'll be at a different type of "AI" by then and will look back at the simple language-model trickery as if it had been a fool's joke. Thinking of languages as somewhat alive, stagnation due to large-scale language-model use would improve the language of many people, but towards less variety in total. "Interesting".

    Post edited by generalgameplaying on
  • generalgameplaying Posts: 506
    edited March 2023

    algovincian said:

    "The first model of the Firefly family is, according to the company, trained on "hundreds of millions" of images from Adobe's Stock photo catalog, openly licensed content and stuff from the public domain, virtually guaranteeing the model won't result in lawsuits as StableDiffusion did with the Getty unpleasantness. It also helps ensure that Stock photographers and artists will be compensated for the use of their works in training these AIs."

    https://www.engadget.com/adobe-is-bringing-generative-ai-features-to-photoshop-after-effects-and-premiere-pro-130034491.html

    - Greg

    I think my first answer got removed or bugged.

    So in essence, that's no more than the official Adobe sites tell me. They will make up something for compensation later (see their FAQ); it's total fog, everything. Their "initiative" to get a "Do Not Train" tag seems valid if I look at it naively, but in reality it's another opt-out for the whole of the internet, leaving gaps everywhere, including in how the license is to be interpreted. Can I prevent training by setting one entry in robots.txt, or do I have to process all images on a server to update metadata tags? Do I need to write something in the license? Will their bot read and interpret the licenses of my images, e.g. if I run a small competitor gallery? (And so on.) All that looks good if you assume they're the wight knight. I don't assume they are, but I do see other white stuff: fog. Imposing opt-out on everyone isn't a "good guy" ploy either. Also consider the question of removed material and updated licenses, which immediately raises the question of "when" something has to be in place in order to be treated "in an ethical way". Illegally posted image? Trained! Removed? Still in the training data - it had an "open license"! There are only two valid approaches; no others are logical:

    1. Rescrape forever, and remove from the training data any content that has been removed or now carries a license that does not consent to training. Conventions have to be found for robots.txt, license texts, HTML and image metadata.

    2. A centralized database for consent, DMCA and legality questions for all images on the internet (one each for the US, BRICS, the EU?).
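    The robots.txt half of that question can at least be tested mechanically with the standard library. A minimal sketch, assuming a hypothetical crawler user-agent "ExampleAIBot" (an assumption; no vendor here has published a standardized training-crawler name):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt that opts a site out of a made-up AI-training
# crawler ("ExampleAIBot" is invented for illustration) while leaving
# ordinary indexing by everyone else allowed.
robots_txt = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The hypothetical training bot is blocked, a normal crawler is not.
print(parser.can_fetch("ExampleAIBot", "/gallery/image1.png"))  # False
print(parser.can_fetch("Googlebot", "/gallery/image1.png"))     # True
```

    Of course this only works if the crawler in question actually honors robots.txt and announces a documented user-agent, which is exactly the gap being complained about above.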

    Maybe they exclude apparently copyrighted material, whatever way they do that. That would be better than data sets that contain it. But the rest? There could be a foggy common set of material for which we have to assume tricks (opt-out, procedures, tricks) rather than fairness, with images used for training against the consent of the rights owners. I would not rely on the PR statement of a commercial player in order to feel better about using their services. This goes for all of them, and there needs to be some reliability between what they claim and what they do. But what if the claims are foggy from the start?

    Perhaps it's legally sound, but the PR texts leave quite an "open area" there, and legal doesn't necessarily mean "ethical".

    This needs to be elaborated by Adobe.

     

    ("wight knight" - real typo.)

     

    Post edited by generalgameplaying on
  • wolf359 Posts: 3,772
    edited March 2023

    Also means... i have to dig up, what adobe trained with. Their PR statement did not contain a reference to a list of internet places they scanned, for instance

    @generalgameplaying

    Except that in a reasonable society a private corporation typically requires a court order before giving some random person access to all of its internal data & resources, which is reasonable.

    You are applying a rather draconian standard to Adobe that, if applied to the average citizen,
    would result in all manner of protests & righteous indignation.

    Imagine if you publicly stated that all of your Daz content was legally purchased and not pirated,
    but Daz Inc. (or any random third party) insisted on being able to send a forensic data tech to your home and inspect all of your storage devices, and anything short of that is "proof" that YOU are most likely lying.

    Post edited by wolf359 on
  • generalgameplaying Posts: 506
    edited March 2023

    wolf359 said:

    Also means... i have to dig up, what adobe trained with. Their PR statement did not contain a reference to a list of internet places they scanned, for instance


    Except that in a reasonable society a private corporation typically requires a court order before giving some random person access to all of its internal data & resources, which is reasonable.

    Of course, i meant to point at the dilemma.

    But is it really that reasonable? The common interest is: has my stuff been stolen? Do I have to do a striptease in order to get that information?

    This question will likely be treated either by courts, or by lawmakers ahead of the courts. Imagine a world where you have to ask 10,000 corporations whether they used your image, or 10,000 of your images, ideally with one request per image and a wait time of one week each.

    (Because such applications factually pose as competition. Without a legally binding statement about the training data, where are we?)

    (Ad-hoc example for competition: questions about the harmlessness of data aside, take images of livers. Training an AI that detects cancer doesn't compete with the liver, nor with its, uh, holder. Is it a good example? Can you make a living selling images of your liver? Likely not; it's not creative. Competition could be argued, though.)

    ("Adobe’s first model, trained on Adobe Stock images, openly licensed content and public domain content where copyright has expired" - in fact they have used "openly licensed" external data. Obviously this could become a conflict of interest, e.g. if courts rule that CC-licensed content isn't automatically OK to train on, and if it turned out that Adobe trained its AI on such "openly licensed" content. Which license is which... not foggy? Even for future licenses that is too broad a term.)

     

    Post edited by generalgameplaying on
  • generalgameplaying Posts: 506
    edited March 2023

    MelissaGT said:

    Is a photographer not an artist? They don't create any of their own assets...but most do heavy amounts of postwork. 

    Developing skill and style for the photo-only part can also be part of artwork or art(-style). There can also be a lot of pre-work, and it's not certain that you find the tattooed model in a random place and get them to let you photograph them (with tattoos?), or maybe you use body painting instead. Or you're terribly good at communicating with birds, or vice versa, and you happen to have a style of hitting the right moment with your camera, resulting at least in artistic moments captured. Sometimes it's just luck: the special lighting, and you have a camera dangling from your shoulder, or you have the inspiration fitting the moment (and so on, or you work towards building some scene; it's just another type of tool). I would say there is plenty of room for artwork and art, including novelty. But I wouldn't look down on DAZ users either, not even "simplistic" users. For movie production, you previously wouldn't create all the art(work) as a producer or director; you bought the art(work) or bought/rented the artist, perhaps to create a work of works, which may be art as a whole. The only difference with asset stores like DAZ is that they make artwork accessible to many people, so it's less exclusive, but also less expensive. It doesn't even touch the high-priced segment(s) [assumption]. Maybe it creates too many creators, if you want to go there. With AI generators it seems to be a step further, but unfortunately there is also a decoupling from the creators of the art, also concerning the monetary aspect (technically not possible for huge numbers of creators; maybe as a lottery, but...).

    (We/you make art(work) happen in the first place, regardless of the tools. Exceptions may apply at random.)

     

    (Elaboration on "technically not possible": imagine you could know which images were used for an output, and to what extent. If you make the style and specifics of a living artist easily accessible, you potentially have them stalked; if you don't, how can you have "accountability"? Or you don't make it accessible to the user, but then what significance does it have, and how does that type of accountability even make sense (grandma's legs?)? How is it transparent to the creators? If it can't be seen, how can a creator be sure not to get ripped off (e.g. if they never receive money)? Similarly with detecting similarities via an extra algorithm, resulting in too few or too many matches, i.e. random or meaningless assignments. The next problem is creators adapting to the algorithm. Think "SEO". This looks like a potential dilemma or even an uncertainty principle. Can there be a meaningful balance?)

    Post edited by generalgameplaying on
  • Artini Posts: 8,889
    edited March 2023

    Have watched the official Adobe Firefly demo on YouTube, and looking at the comments, most people there like it.

    Adobe Firefly: Family of New Creative Generative AI Models

     

    Post edited by Artini on
  • Artini Posts: 8,889

    Reading Adobe Firefly FAQ - https://firefly.adobe.com/faq

    I have encountered information about Content Credentials:

    Content Credentials are an implementation of a developing open standard
    for adding provenance data to digital content,
    led by the Coalition for Content Provenance and Authenticity (C2PA).
    This implementation is led by Adobe’s Content Authenticity Initiative.
    https://c2pa.org/
    https://contentauthenticity.org/

    What are Content Credentials?

    Content Credentials are sets of editing, history, and attribution details
    associated with content that can be included with that content at export or download.
    By providing extra context around how a piece of content was produced,
    they can help content producers get credit and help people viewing the content
    make more informed trust decisions about it.
    Content Credentials can be viewed by anyone when their respective content
    is published to a supporting website or inspected with dedicated tools.
    https://verify.contentauthenticity.org/
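    The idea behind such credentials can be sketched in miniature: bind a content hash and an edit history to a signature, so tampering with either becomes detectable. The toy below uses an HMAC where real C2PA manifests use X.509 certificate chains and JUMBF containers; the key, field names, and history strings are all invented for illustration:

```python
import hashlib
import hmac
import json

# Toy provenance credential: a signed claim over a content hash plus an
# edit history. Purely illustrative; not the real C2PA wire format.
SECRET = b"issuer-private-key"  # placeholder, not a real signing key

def make_credential(content: bytes, history: list) -> dict:
    claim = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "history": history,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return claim

def verify_credential(content: bytes, cred: dict) -> bool:
    # Recompute the signature over everything except the signature itself,
    # then check both the signature and the content hash.
    claim = {k: v for k, v in cred.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, cred["signature"])
            and claim["content_sha256"] == hashlib.sha256(content).hexdigest())

image = b"fake image bytes"
cred = make_credential(image, ["created", "exported"])
print(verify_credential(image, cred))          # True
print(verify_credential(image + b"!", cred))   # False: content changed
```

    The point of the design is that the credential travels with the content and anyone can check it, but altering either the pixels or the stated history invalidates the signature.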

    It will be interesting to see how it evolves and whether it will be brought into the jurisdiction of selling the content.

     

  • WendyLuvsCatz Posts: 37,910

    sadly those that want to hate will still hate regardless of what companies try to do

    I have given up on trying to be a people pleaser

    I am told eating chocolate is evil as largely third-world countries grow cacao etc

    at least a puppy doesn't die every time I prompt an AI art generation

    (and caved and bought more 3D content too)

  • generalgameplaying Posts: 506
    edited March 2023

    wolf359 said:

    You are applying a rather draconian standard to Adobe that, if applied to the average citizen,
    would result in all manner of protests & righteous indignation.

    Imagine if you publicly stated that all of your Daz content was legally purchased and not pirated,
    but Daz Inc. (or any random third party) insisted on being able to send a forensic data tech to your home and inspect all of your storage devices, and anything short of that is "proof" that YOU are most likely lying.

    I'm applying the words from Adobe's PR statement, namely "ethical" and "accountability", and maybe "initiative" if you will. It sounds so much greater, but what's under the hood?

    DAZ content can be subject to DMCA, prosecution of uploaders and so on. There is 1:1 accountability by design with asset stores, also because things are visible and can be bought, or you have ways to check in case of potential infringement. DAZ can notify buyers or make an agreement with the IP owners.

    That's not possible with training data for "something with AI" in general.

     

    Adobe is implying claims about "ethics", while all they actually say is that they are making it safer, legally, to use their product. All the other stuff appears to be pretty foggy.

    Initially I didn't intend to take such a position, and I am still just waiting for them to clarify; currently I simply deem this reading closest to reality. The abusive part can be with users, using AI to "stalk", but it mainly resides with the use of training data and how creators and rights owners are treated. So just by the physical setup, Adobe is potentially on the abusive side by design (of the situation, not necessarily theirs), and now tries to clarify its approach with an announcement. Should I blindly believe that potentially unfounded, foggy statements will hold true in the way Adobe profits most from? Don't get me wrong, it's probably one of the least abusive approaches I've seen so far, but I don't typically trust "ethical" when larger corporations use the word. At least not without actual evidence.

     

    Especially with AI, you'd need secret services to monitor corporations, if we really want to go there. But that is not what I am saying. Adobe further claims "accountability", which in the context of AI means that you know exactly which training data was used and which decisions were made for a resulting output. But in the case of Adobe Firefly, will "accountability" survive an actual analysis? Some of their training data was likely taken from the internet or other sources, which in the case of "openly licensed" could turn out to be infringing, but might also "just" mean that they took it against the (explicit) consent of the creators or rights owners. The training data does not belong to Adobe; their licenses and TOS stated something else, and especially "openly licensed" content from random other places isn't simply "theirs". So a closer look actually is necessary to even be able to judge it. I understand that the selection of training data can be a business secret, but if Adobe doesn't clarify in a dependable way, I clearly see a conflict evolving around "accountability" and "ethics".

    How to resolve these questions? Head in the sand? Adobe is not and will not be alone with some of them; some apply to other projects likewise. But especially since they advertise with "ethics", "accountability" and "compensation", they might at some point clarify?

     

    MAYBE they only took "licensed for AI" content from the internet, so that potential issue doesn't even exist. MAYBE they didn't walk over stock photos with hard-to-do opt-outs; maybe they don't keep training data under random licenses they found on the internet, but instead keep rechecking, e.g. to prevent illegally posted content from ending up in their training data. I could even imagine this. But that's not what their announcement states, so... someone lift the fog, perhaps?

     

    This isn't about lies, it's about clarification. What's "openly licensed" for Adobe in this context? What's ethical, in the context of how you interpret "openly licensed"? What's ethical in case some of the rumors concerning opt-out are true (yes, this is for people other than Adobe to substantiate with data)? What's ethical about the "initiative" for opt-out for all content on the internet, imposing tags for images, with the unfounded PR theory that "metadata stays with images throughout the internet"? The latter case obviously is foggy too, because a pirated image, stripped of its metadata, will then not carry the opt-out tag, and thus becomes training data - q.e.d. So if you thought my statement about "not training on removed images" and the like was draconian or somehow wild, you would be missing one if not the fundamental point.
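    The "stripped of metadata" argument can be demonstrated concretely. The sketch below builds a minimal PNG in pure Python carrying a hypothetical "DoNotTrain" tEXt tag (the tag name is made up, not any real standard) and shows that one generic re-encoding pass keeping only critical chunks silently erases the opt-out:

```python
import struct
import zlib

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    """Build one PNG chunk: length, type, data, CRC over type + data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

# Minimal 1x1 grayscale PNG carrying a hypothetical "DoNotTrain" tEXt tag.
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)  # 1x1, 8-bit grayscale
idat = zlib.compress(b"\x00\x00")                    # filter byte + 1 pixel
png = (b"\x89PNG\r\n\x1a\n"
       + png_chunk(b"IHDR", ihdr)
       + png_chunk(b"tEXt", b"DoNotTrain\x00true")   # invented opt-out tag
       + png_chunk(b"IDAT", idat)
       + png_chunk(b"IEND", b""))

def strip_ancillary(data: bytes) -> bytes:
    """Keep only critical chunks (uppercase first type letter) - i.e. drop
    all metadata, which is what many re-encoders and pirate rehosts do."""
    out, pos = data[:8], 8
    while pos < len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        chunk = data[pos:pos + 12 + length]
        ctype = data[pos + 4:pos + 8]
        if ctype[0:1].isupper():  # IHDR / IDAT / IEND survive, tEXt does not
            out += chunk
        pos += 12 + length
    return out

stripped = strip_ancillary(png)
print(b"DoNotTrain" in png)       # True
print(b"DoNotTrain" in stripped)  # False: opt-out gone, image still valid
```

    The stripped file is still a well-formed PNG, so any scraper seeing it finds no opt-out at all, which is exactly the gap in a metadata-only scheme.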

    This is all intertwined, and needs clarification, not just remaining fog. Of course you could take the stance that they needn't clarify until sued, or until they lose at some court, but rest assured that we'd then have less of "ethical" and less of "accountable" if we keep going there. Their "initiative" would also take on another color. WOULD... it could be clarified, adjusted, corrected, put in context by court rulings, etc.

    To elaborate on "draconian" and "legally binding": isn't there a need for a mechanism to verify the soundness of training data sets, especially for cases where the creators of the training data are thrown into competition with an "AI"? I'd say that this is a societal conflict per se, and it would not go away by declaring current practices legal. The only way to resolve the conflict would be to go for the rather scientific meaning of accountability in the context of AI, resulting in some kind of transparency and/or independent oversight for such training data sets.

    Post edited by generalgameplaying on
  • generalgameplaying Posts: 506
    edited March 2023

    (Snip - let's be positive!)

    Examples of stirring around in the fog: 

     

    (1) https://techcrunch.com/2023/03/22/adobes-thoughts-on-the-ethics-of-ai-generated-images-and-paying-its-contributors-for-them/

    Specifically: "When I think about a forward-looking [compensation] model, it’s style."

    For one thing, that could give some relief in the most blunt cases of copying a style with AI. Apart from that, hardly anybody except a few of the most popular flagship artists will get anything substantial for their style. All others essentially are just having their images fed to an AI for free. This example from the article seems to be a decoy, because it is a new feature in terms of "copy style of X", while the ordinary function is not covered by this compensation scheme. So in essence, they are still elaborating, but there are "very interesting" examples on the table. Imagine the style thing happening: doesn't it threaten to create a new form of "copyright"? How will other creators react to the most popular styles? Will they mimic them too, just with their own skills, in order to profit? If Adobe then prevents that, i.e. implements the "prevent too similar images" feature which I mentioned throughout the discussion in a similar context, they will have imposed a new balance of power, next to copyright, protecting their business model. That's a hypothetical development, yet it seems hard to imagine how compensation is going to be fair in any way, and even if it is, how compensation will ever be more than negligible for anyone other than the few on the flagship side.

    (2) https://www.reddit.com/r/StableDiffusion/comments/120jt1a/adobe_firefly_vs_stable_diffusion_ethics/

    No notification for opt-out? May be a common thing in the US, but is it ethical?

     

    (3) 90% of the search results feel like repetitions of the ads. Is this an "ethics hype"? (I'm not good at internet search.)

     

    For me that's enough on this other new thing for the moment. Maybe there will be elaboration/clarification; more details on the opt-out question could also be interesting. Currently I don't see how ethics, accountability and compensation are going anywhere (significant). Perhaps they'll elaborate during the beta phase...

    Post edited by generalgameplaying on
  • DustRider Posts: 2,692

    WendyLuvsCatz said:

    sadly those that want to hate will still hate regardless of what companies try to do

    I have given up on trying to be a people pleaser

    I am told eating chocolate is evil as largely third worlld countries grow cacao etc

    at least a puppy doesn't die every time I prompt an AI art generation

    (and caved and bought more 3D content too devil)

    I just wanted to say thanks for your posts. I'm still on the fence about AI art (mainly because of the IP issues), but your posts help bring some balance. You have also brought to light that you can run it on your own computer and use your own images for training. Also thanks for posting the Firefly info. It looks to me like at least one company is taking IP rights seriously. 

    I must admit that looking at all the AI stuff at DA is a bit depressing.  It takes a lot of work and many many hours for me to get something even close to as "good". If all the issues of training data using works without the IP holders consent get worked out (or system requirements are a little less ridiculous for making decent sized images on your own system) I might give it a try.

    On the plus side of things, I did try searching for several of my images in the training data, and didn't find any. So not every image on the web has been scraped to train the AI engines. But watermarks and other identifying IP text showing up in the images is still a bit concerning. 

  • WendyLuvsCatz Posts: 37,910

    on topic different media

     

    as a user of VSTi and Myriad Virtual Singer for 12+ years this excites me

  • generalgameplaying Posts: 506
    edited March 2023

    DustRider said:

    On the plus side of things, I did try searching for several of my images in the training data, and didn't find any. So not every image on the web has been scraped to train the AI engines. But watermarks and other identifying IP text showing up in the images is still a bit concerning.

    Did you use a search interface for source images contained in a training data set, where you input the tags? Such interfaces exist for some data sets. Or did you try to "find" your image by using the generator?

    Watermarks showing up means they trained on something that contained them, with pretty much 100% certainty. Trying to "find" source images by using the generator may be difficult or even impossible. The system might have been trained on differently tagged copies of your image, or there are many other images with similar descriptions, which either makes it necessary to distinguish somehow, or even impossible to isolate, depending on what exactly happened in the model for the data in question. Not being able to find your image using the generator wouldn't mean the system can't reproduce parts of images, or almost whole ones (to some extent ~ resolution, ...); that is well within the range of capabilities these systems have. In theory they could also have trained on meta sets of data, where images have been processed by another system, e.g. split into features, extracted text and so on. This is probably not what they did, but it can't be fully excluded if they scraped the internet like madmen.
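    For context on how such search interfaces can work at all: near-duplicate lookup usually rests on perceptual fingerprints rather than exact file matches. A toy average-hash sketch, purely illustrative (no claim that any particular dataset index uses exactly this), operating on tiny grayscale pixel grids:

```python
# Average hash: fingerprint a grayscale image with 1 bit per pixel,
# set when the pixel is at or above the image's mean brightness.
# Retouched copies keep almost the same fingerprint; different images don't.
def average_hash(pixels: list) -> int:
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing fingerprint bits between two hashes."""
    return bin(a ^ b).count("1")

img = [[10, 200], [220, 30]]      # "original" 2x2 grayscale image
near = [[12, 198], [210, 35]]     # slightly retouched copy
other = [[200, 10], [30, 220]]    # different image entirely

print(hamming(average_hash(img), average_hash(near)))   # 0 (near-duplicate)
print(hamming(average_hash(img), average_hash(other)))  # 4 (all bits differ)
```

    Real systems hash a downscaled (e.g. 8x8) version of the image and treat a small Hamming distance as a probable match, which is why an exact copy of your file is findable but a heavily transformed derivative may not be.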

     

    The good thing with Stable Diffusion is that the data set is your choice, though training your own data sets either means high cost, or smaller amounts of data to start with, and it may take some time for more models to pop up. Running a model on your own machine also is a great plus per se. I'd bet that there will be more types of data sets, e.g. explicitly tagged for AI use, or at least responsibly monitored (important for legal reasons for removal, but also giving people a choice, e.g. to opt out with a delay). Monitoring for changed tagging and removed images is the only responsible way if you want to progress fast-ish, because that is the only way an opt-out for random content from the internet can work (in a responsible way), unless you can wait for a widely accepted standard for "opt in" to establish itself. I hope the direction of actually accessible science (plus something, if needed) like Stable Diffusion will play out grandly, so we don't see another phase of hardware lock-ins and cloud-only services. Likely the commercial generators like Firefly will be greater/better, but "good enough" and free will pose valid competition (and also enable smaller players).

    Post edited by generalgameplaying on
  • generalgameplaying Posts: 506
    edited March 2023

    WendyLuvsCatz said:

    on topic different media

     

    as a user of VSTi and Myriad Virtual Singer for 12+ years this excites me

    Singing is very hard to do, though, because humans are strongly attuned to the voice, and the tiniest nuance will matter. Having some experience both with singing/singers and with using/testing some VSTi libraries, I'm also looking forward to the improvements that generative technology is supposed to bring.

    In fact I rather hope for updated versions of choir and singing libraries, improved by this - think of that EastWest choir being phonetically programmable, but now with legato? I'd probably consider a purchase then. Of course VSTis could profit from generated vowels, consonants, transitions and so on. I'd add that I assume those will profit most and fastest, in theory, because they are limited in expression anyway, thus likely simpler to improve by magnitudes, and some are already very good. The human voice is so subtle because we perceive it with such a high bandwidth in comparison; they'll never get the interface right to model a skilled singer with "full control". Until they brain-interface (dangerous), or let you describe how exactly to sing, perhaps while also "showing" it, i.e. giving imperfect examples of parts of the expression, without being bound to deliver all parameters at the same time during input. E.g. you just mimic an aspect, roughly, with your voice or hands or face(?), and the system adapts the passage or a note or a transition towards that. Or maybe they'll just take your imperfect "singing", ignoring pitch and whatever parameters you choose, and work it into the notes/performance that has been set up. A generic or like-X pop song thing, and of course all that distorted "any voice and text works" stuff, could be done in a blink by comparison. Capability-wise, we'll probably see interfaces where you choose or "program" a style and then can run a song pretty easily, with just a few hints to add. But OK... we'll see. I could also imagine simulated instruments (as opposed to mostly sampled ones) getting a boost from this. Oh, and will we see a fair-use debate concerning AI training on the raw data of VSTis, in the absence of any contracts with the makers?

    Post edited by generalgameplaying on
  • generalgameplaying Posts: 506
    edited March 2023

    (We will also see attacks on AI, e.g. recreating images of certain artists in distorted or style- or otherwise modified ways that render their tagging less effective. Probably simpler: tagging bombs? Of course such an effort is expensive at the scale of the whole internet, but who knows what you could achieve with such approaches. Curated data sets might be the only way to go at some point, and they also make sense if you want to separate genres anyway.)

    (Perhaps don't think of attacking artists, but the abstraction and tagging itself. E.g. several complementary versions of "a hand" could render the abstraction or denoising kaputt, i.e. attack the construction or innards of those systems themselves, leading to a higher percentage of nonsensical output despite the source images "looking good". The source images would probably look like fractal/psychedelic art, with some shapes more or less structurally inverted or complemented. Disguised as art...)

    Post edited by generalgameplaying on
  • Torquinox Posts: 2,648
    edited March 2023

    The AI makers harvested our art without consent or compensation. The AI makers harvested the materials they used from the internet under the auspices of not-for-profit companies gathering the materials for "research" and "fair use." Now they're turning around and selling the data they gathered for profit to the public. Well, where's my compensation? Here's a nice article on that very topic: https://www.theatlantic.com/technology/archive/2023/03/open-ai-products-labor-profit/673527/

     

     

    Post edited by Torquinox on
  • Torquinox Posts: 2,648
    edited March 2023

    I will point out that, for many DS users, getting good results out of DS is a lot of work. To say that art made in DS is the same as art made with an AI program is a false equivalence.

    Post edited by Torquinox on
  • Wonderland Posts: 6,742

    I'm thinking AI will bring whole new levels of creativity especially when you combine various apps. I tried selfie enhancer apps on DAZ renders. It's fun and creative. Just morphing or changing textures in 2D vs 3D. When I do a prompt on Midjourney, I usually have to do many, many versions to get close to what I want then bring it into Photoshop and like 10 other apps to make them my own. It's just more tools to play with. Some people just like whatever AI spits out which is OK but usually needs a lot of improvement. But if you look at book cover designers or graphic designers for print/online ads, they are mostly just using stock photos or photo manipulation. So what is the difference really? Better to have a face of someone who doesn't exist than a stock photo of a real person used by anyone. The only thing that concerns me are deep fakes where real people can be depicted doing or saying things they never did, especially with all the conspiracy people out there. For art purposes, I think it's a new creative tool. 

  • generalgameplaying Posts: 506
    edited March 2023

    @Torquinox I think the comparison to DAZ was made with a supposed "average DAZ user" in mind, who "just" uses pre-made scenes and lighting and clicks render. I'd say there is still a good deal of complexity and potential for variation in that, even if you "only" use premade poses and so on. I'm not sure what the average DAZ user does.

     

    @Wonderland "Mostly", though... Many book covers are still made by professionals or artists, e.g. for children's books; it may depend strongly on the genre. Concerning existing faces, I wouldn't be sure that these systems always generate faces that don't exist. Has anyone ever checked? (A book cover, OK - but a villain in a movie, in times of people becoming more and more stupid?) Concerning postwork and the like: postwork is postwork; whatever it adds to or changes in an image, that step probably doesn't distinguish the method the image was created with, though of course there can be significant differences. (On DAZ, see above.)

     

    What one could be mildly concerned about, concerning creativity, is that too many outputs will end up resembling each other too much, and in "the machine way", not just because many people have similar ideas. Previously each user might have interacted with someone (or with stock photos?), and the result was partly based on that interaction, with some brain/creativity potentially involved. The variety was likely greater than it will be with massive use of the same generative AI models.

    There may be ways around it, e.g. an interface that asks back, resulting in higher complexity to start from. But the question remains where variety will come from, if "the average prompter" doesn't use an interactive or otherwise inventive way to feed the prompt, or if the underlying training data gets less diverse, either through flooding with images that can't easily be excluded as AI-made, or because fewer and fewer genuine images fit for training are created at all. Maybe an abundance of tools, also for environments like DAZ Studio, could in the end produce enough training data.

    Right now the models might even be deceptive concerning quality: the synthesis that is claimed to happen will look much the same for millions of users - deceptive if you only look at the output you generate yourself, without access to the thousands of other generated images. This may or may not be the case, but the possibility is usually not mentioned. In a way, this could be a card trick played with a deck consisting of aces! Well, maybe...

     

    Post edited by generalgameplaying on
  • WendyLuvsCatz Posts: 37,910
    edited March 2023

    assumptions are being made based on personal usage

    there definitely are people who prefer click and load and render readymade scenes 

    I have encountered them many times wanting to know exactly what was used in promotion images down to the expressions and hand poses

    there are also AI art users who tweak and redo generations many times: inpainting, resubmitting the results to img2img, editing in an image editor, combining elements, painstakingly removing individual pixels (me, in Gimp) before loading it into Crazytalk and replacing the inner mouth and eyes with edited CT textures to match

    loading a depth map of the image into Zbrush as an alpha and converting alpha to mesh

    Zremeshing and reuvmapping that to fit the image

    rigging bits of the mesh and weightpainting in Carrara and using soft selection to create morph targets

    yes I actually do all that

     

    so no, one size does not fit all

    I show the wireframe in Carrara at the end of the video 
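    The depth-map-to-alpha-to-mesh step described above can be sketched in code: each pixel's brightness becomes a displacement on a regular grid. This is a minimal illustration assuming Pillow is available; the function name is made up, and ZBrush's alpha-to-mesh plus ZRemeshing of course does far more (adaptive topology, smoothing, UVs).

    ```python
    # Sketch: turning a depth map into a displaced grid mesh, roughly the
    # idea behind "load a depth map as an alpha and convert alpha to mesh".
    # Assumes Pillow; depth_map_to_obj is a hypothetical helper, not a real API.
    from PIL import Image

    def depth_map_to_obj(depth_img, scale=1.0):
        """Build a simple Wavefront OBJ string: one vertex per pixel,
        displaced along Z by pixel brightness, two triangles per grid cell."""
        img = depth_img.convert("L")          # greyscale: 0 = far, 255 = near
        w, h = img.size
        px = img.load()
        lines = []
        for y in range(h):
            for x in range(w):
                z = px[x, y] / 255.0 * scale  # displacement from brightness
                lines.append(f"v {x} {y} {z:.4f}")
        for y in range(h - 1):
            for x in range(w - 1):
                a = y * w + x + 1             # OBJ indices are 1-based
                b, c, d = a + 1, a + w, a + w + 1
                lines.append(f"f {a} {b} {d}")
                lines.append(f"f {a} {d} {c}")
        return "\n".join(lines)
    ```

    The raw grid would still need the retopology and rigging steps described above before it is usable; this only shows why a depth map is enough to recover relief geometry.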

    Post edited by WendyLuvsCatz on
  • jdavison67 Posts: 587

    The way to cope with AI is to use it, incorporate it, and master it.

    Make it work for you, and not against you.

    DAZ is a great tool for setting up scenes and characters.

    Add a bit of AI, and you can come up with interesting results.

    It's not for everybody, but it is not going away.

    JD

  • generalgameplaying Posts: 506
    edited March 2023

    jdavison67 said:

    The way to cope with AI is to use it, incorporate it, and master it.

    Make it work for you, and not against you.

    DAZ is a great tool for setting up scenes and characters.

    Add a bit of AI, and you can come up with interesting results.

    It's not for everybody, but it is not going away.

    JD

    First off, I don't want to discourage anyone from using "AI". Checking out where and how which type of system can be used absolutely makes sense. I wouldn't lump all types of systems together as one "AI", though.

     

    "The way to cope with AI is to use it, incorporate it, and master it."

    For the general point, just throwing in this: https://www.theguardian.com/technology/2023/mar/29/elon-musk-joins-call-for-pause-in-creation-of-giant-ai-digital-minds

    This may seem like an example from another realm compared to "just image generation" or "some AI tool", but it shows that the "AI topic" can't easily be judged in general. Shaping any such transition may remain crucial for society. Mastering abuse, manipulation and misinformation most likely can't be done with (the same kind of generative) AI, and believing so might be a fatal misjudgement. I mean mastering the defence, not just the attack.

    So concerning coping... in the case of competition for faster image creation, this probably means you may end up doing something different than before, possibly faster and more restlessly, and getting less per image, perhaps less in total. A pity or not - that's a different question. Maybe "thanks to AI" there will be better things to do. But seriously: training on images against the explicit consent of the creators, with tricks like "opt-out", for systems that compete with the very people who created those images? The reality remains: what's been done so far isn't even necessary for scientific gain; it's commercial exploitation of the early market, because science and hardware meet at a sweet spot right now.

     

    "Make it work for you, and not against you."

    On the one hand: of course! There will be useful tools. On the other hand, text-to-image generators pose direct competition to image creators, and might have similar effects on asset creation in general. To what extent remains to be seen, but for swift image creation you can be sure that things get faster and people get paid less per image, if anything at all. Maybe things get more uniform too, with post-work as the "rescue". AI isn't one thing, though; there will be all sorts of other tools besides the magic text-to-image ones, and not all of them will cannibalize what they use for training data.

     

    "DAZ is a great tool for setting up scenes and characters. Add a bit of AI, and you can come up with interesting results."

    Agreed, though if I could "add a bit of AI" to scenes, would I need AI in the first place? Text-to-image "AI" replaces the rendering step for most people; that's where that type of "AI" is making the huge difference. For all the rest you'll have tools for specific tasks: post-work, animation, posing, texturing, what not. That's not so much of an issue. Currently many of those tools demand specific devices (iPhone) or cloud services to work, which makes them less interesting for me. I'd rather wait for Mozilla/Blender/Stable Diffusion/me to come up with something, unless I had to do something specific fast right now.

     

     

     

    Post edited by generalgameplaying on
  • MelissaGT Posts: 2,610
    edited March 2023

    jdavison67 said:

    The way to cope with AI is to use it, incorporate it, and master it.

    Make it work for you, and not against you.

    DAZ is a great tool for setting up scenes and characters.

    Add a bit of AI, and you can come up with interesting results.

    It's not for everybody, but it is not going away.

    JD

    Y'all aren't the least bit concerned about uploading your Daz work into the "AI ether", where you could potentially lose copyright on it and/or any output? Yeah, no.

    Post edited by MelissaGT on
  • Jabba Posts: 1,458

    OK, my experiments with Stable Diffusion are leaving me frustrated for the most part...

    ...Absolutely amazing for wide, sweeping landscapes, but when I have a precise idea for vehicle detail or fine detail in people... hopeless - it takes too long to refine and is too random.

    Sure, it will partly be my approach that is making things worse, but it's just not fitting in the way I would like it to, and it has zero therapeutic effect because you're not actually sketching or painting anything; you're typing all day.

     

    Improvements are happening all the time, and by the time I get a new PC, I'll have had time for a good rethink and may well opt to install a standalone AI app instead of using Playground AI.

  • JoeQuick Posts: 1,700
    edited March 2023

    I've cranked out a lot of concepts in Midjourney since I stumbled on it. I've used exactly one of them so far, and it's not an item that has made it into the shop yet.

    More recently I've been using stable diffusion to play with old promos, making the kinds of adjustments that avid photoshoppers may have done in post back in the day.

    Rather than use inpainting, I just composite the two images in Photoshop, layering the SD render over the DS one and erasing all the SD bits I don't want. The goal is making everything feel more tonally coherent and atmospheric, mood-wise.
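    That layer-and-erase composite has a direct programmatic equivalent: a mask selects, per pixel, between the DS and SD renders. A minimal sketch assuming Pillow; the function and file roles are illustrative, and Photoshop's eraser is of course interactive rather than mask-driven.

    ```python
    # Sketch: SD render laid over the DS render, "erased" wherever a mask is
    # black. Assumes Pillow; composite_with_mask is a hypothetical helper.
    from PIL import Image

    def composite_with_mask(ds_render, sd_render, mask):
        """Keep SD pixels where the mask is white, DS pixels where it is
        black (grey values blend) - like erasing on a Photoshop layer."""
        base = ds_render.convert("RGB")
        over = sd_render.convert("RGB").resize(base.size)
        m = mask.convert("L").resize(base.size)
        # Image.composite picks from `over` where m=255, from `base` where m=0
        return Image.composite(over, base, m)
    ```

    Feathering the mask (e.g. a Gaussian blur on it) would give the soft-brush transitions an eraser produces.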

    [Attached images: New Beaver Main.jpg, New Snowman Popup 3.jpg, New Frog Main.jpg, New Duckie Main.jpg, New Turtle Main 2.jpg, New Axo Main.jpg - each 1000 × 1300]
    Post edited by JoeQuick on
  • Wonderland Posts: 6,742
    edited March 2023

    This is not one-click AI but many adjustments in a selfie app. I took one of Mousso's promos (already beautiful!) and just played around to see what can be done. It's scary that people are doing this to their own faces, but it's great for DAZ renders; especially with the limitations in makeup for G9, you can easily change lip colors and shine, eye color, skin, face shape, everything in a simple iPad/phone app!


     

    [Attached images: two 1454 × 1935 JPEGs]
    Post edited by Wonderland on
  • Artini Posts: 8,889

    JoeQuick said:

    I've cranked out a lot of concepts in midjourney since I stumbled on it.  I've used exactly one of them so far, and it's not an item that's made it in the shop yet.

    More recently I've been using stable diffusion to play with old promos, making the kinds of adjustments that avid photoshoppers may have done in post back in the day.

    Rather than use inpainting, I just composite the two images in photoshop, layering the SD render over DS and erasing all the SD bits I don't want.  The goal is making everything feel more tonally coherent and atmospheric mood wise.

    Awesome results. Thanks for posting.

     

  • Faeryl Womyn Posts: 3,324
    edited March 2023

    Here's some information that may be useful to some of you. This is a US-based rules guide, so I would assume it's different in other countries. Below is the full text he mentions at the beginning.

    The full text is here...

    https://www.federalregister.gov/documents/2023/03/16/2023-05321/copyright-registration-guidance-works-containing-material-generated-by-artificial-intelligence

    Post edited by Faeryl Womyn on
This discussion has been closed.