Remixing your art with AI


Comments

  • kyoto kid Posts: 40,591
    edited December 2022

    generalgameplaying said:

     

    2. The simplest way out would be to regulate scraping and demand accountability for training data sets. Explicit consent only, signalled via metadata, plus perhaps a government-funded original-works database. Any scraper who wants to stay on legal ground would then honour the metadata. In that case, if someone puts your images up tagged and you manage to get them removed, they will not be in the next training run of any AI. Explicit consent means something like an EU-style requirement: an extra check-box in the settings, in clear text, not buried somewhere in the TOS, strictly opt-in, ideally with both a global and a per-image setting. Of course the law would need to be more specific, e.g. whether a license can be withdrawn, so that "AI trainers" have to look up the consent status with each new training run; possibly there would need to be an image+consent database. I would also suggest thinking about further uses, possibly with licenses referenced by images (educational, generative AI, image recognition/tagging, ...). I know it becomes convoluted quickly, simply because it is a complex topic with depth in all directions.

    ...I suggested this on the DA forums several days ago.

    I also blocked downloads a while ago, but that still doesn't stop someone from taking a screenshot and posting it elsewhere, so it can turn up on, say, Google Images, making it fair game for anyone to use.

    Post edited by kyoto kid on
  • nonesuch00 Posts: 17,929

    I tried DreamStudio today, the light version. Most of the sequences of four it generated weren't worth the effort, but the occasional one was pretty good. If I had a GeForce RTX 4090 24GB video card, I would go to the trouble of building, configuring, and installing all the pieces of DreamBooth as shown in the Corridor Crew YouTube video that Wolf359 posted.

  • "Assuming convergence towards what can be done with known input, artists seeking for new edges will probably not be able to go there directly, using systems like stable diffusion and the others." - meaning to check for research on that. (How "generative" is the system, in terms of ingenuity or in terms of craftsmanship? How easy is it to create distinct variations? Does it converge towards certain interpretations for areas of input? Are areas of input getting ignored? And so on...) The system can't just mix things like in a mixer, so maybe there are such effects.

     

    Too much convergence might not even happen (anymore, at some point), be it due to improvements in the language models, the generative components, or simply much larger training data sets. E.g. adding language models that are better at abstraction and combination could allow more concise descriptions. Still, I am not sure whether more precise descriptions might lead into local output-complexity minima more often, where variation suddenly becomes scarce, especially when looking for areas off to the side of any mainstream. Of course you can dump some effect on top in post-work, but it may be difficult to proceed within the AI system by small alterations, as some artists might want to, because the next small change may flip the interpretation entirely, or simply not make up for the lack of variation. So I'm at "local input complexity minima" vs. "local output complexity minima" ~ math (?). For comparison, classic image recognition typically has the opposite job: lots of input has to be mapped onto one thing, e.g. a red hydrant; that is repeated for many different things, but it is always the same kind of task, baked into the network(s). Some deep-learning networks are similarly trained to judge input against known rules, with distinct mathematical functions to score the results (at least at the end), so: euphoria yes, in terms of board-game killers; general intelligence, no; art, rather not. The generative approach, by contrast, is meant to generate content that can differ vastly from any specific input - but instead of simply mangling pixels into a useless mélange of "anything", it relates bits or abstractions of the input to the given text via a language model, and then upscales a coarse intermediate into the final image, which works because the input images come with descriptive text/tags attached. Roughly like that. (Correct me if I'm wrong.)
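
    (For illustration, a minimal sketch of that text-conditioned pipeline as exposed by the open Stable Diffusion release via the Hugging Face diffusers library - the checkpoint name and the parameters here are assumptions for the example, not a recommendation:)

    import torch
    from diffusers import StableDiffusionPipeline

    # Assumed checkpoint; a card for CompVis/stable-diffusion-v1-1 is linked later in this thread.
    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-1", torch_dtype=torch.float16
    ).to("cuda")  # needs an NVIDIA GPU with enough VRAM

    # The text encoder maps the prompt to an embedding, the U-Net denoises a
    # random latent conditioned on that embedding, and the VAE decoder turns
    # the coarse latent into the final image.
    image = pipe(
        "a red fire hydrant, oil painting",
        num_inference_steps=30,
        guidance_scale=7.5,
    ).images[0]
    image.save("hydrant.png")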

     

    At some point there will very likely be fully legitimate "painting helpers" that you can address precisely by text, and that maybe some day learn together with you to work on your language with emotion, non-verbal expression and more. The AI then provides much of the craftsmanship, following descriptions as precise as you care to give it - not so different, just less human, from what some of the greatest painters of the Renaissance did with their workshops, except that they could also paint everything themselves. However, that would probably still mean you carry full responsibility for copyright violations as a human, while automated bulk output could remain either banned or uncopyrightable. Simply because "the toaster" will be so precise and versatile that it could as well be your brush, if you could handle a brush. Right now, what we have is the cheapest way into something like this (and it costs roughly $600,000+ to train once at the given input size with Stable Diffusion): scraping social networks for more or less random images. If the technique is that good, there is no reason to violate even people's feelings by using their works as input against their consent, because in the medium run the systems will shine anyway, even with hand-picked input. Yes, the cost factor is often crucial, but images typically don't run away, and on the flip side, fast-moving disruptive business models that only work for a short time (...) typically create a lot of damage.

     

    Now who would build a business right now confined to the text-to-image part? I'd be slightly wary there, as we may face unnecessary consequences with no societal gain - on the contrary - and perhaps the business model doesn't even lead very far in the short term either. In addition, I think they have been training on best-rated images so far, which isn't necessarily the best thing for an artist who wants the output to stand out; it is just the cheapest and fastest way to build such a system at this stage. So this leads to the question whether business models will only evolve around the mass of users - say, your customers, rather than you ;p - or perhaps orthogonally, mainstream customers for most. On the other hand, creating images fast, better than most people could, and slightly or as far off-mainstream as the system dares to let you, could allow some artists to even increase their customer base, though it will probably be at a low price per image, and the AI people gain with each image as well, with all the potential consequences that may have, e.g. helping them to take your job in the future. There are a million other applications for these techniques as well, so things will certainly evolve. More combinations with subsystems will be tried, and no one knows where this will be at what point in the (medium-) near future. Unfortunately it still has the potential to do a good deal of damage to artists and society, as well as to create or maintain a couple of billionaires, so I'm curious how the "discussion" will proceed.

     

    I'd probably profit from AI image generators a fair lot (in theory), if I could do quick comic/storytelling in pretty reliable ways for virtually nothing. However I doubt there will be any affordable improvements for 3D computer games in the very near future, at least without cloud/vendor lock-in. Of course I could be mistaken there, as you may very well see open-source foundations teaming up with fundraising and/or machine-time donations, things like that. So maybe Blender will have an AI animation helper the next day with rigging translation, e.g. for cats, donkeys and birds :) - but maybe it will be the Studio version, or the other engines with high costs even for smaller teams, that get there first. I just wouldn't be on fun terms with things if this destroyed DAZ 3D (which I don't believe it would), as I have just dumped a fair amount of money into it.

     

    (How I would use AI: maybe I will experiment with one or another of the current systems where I can alter the input data sets, train it myself for testing, perhaps alter models, just to find or probe the edges - hardly to create art. I will be too busy trying to check/test/build/implement other stuff anyway, and there simply isn't room for AI-driven random images on my current route. I'd rather implement scene-setup helpers, smart selection dialogues and cross-application metadata accounting, which would take me further with DAZ Studio in classic ways than any "AI thing" could in the near future. But I may try to use generative and/or evolutionary models combined with shape and movement recognition to steal animation from nature recordings, perhaps, or to simulate worlds better for computer games, model the behaviour of autonomous agents, pick from a set of muscle morphs for DAZ figures based on sports videos, find parameters for solid-state physics-based pseudo-models for orbital laser production, such things, MAYBE. At least that's what I would aim at right now. Of course that's over the top, and not what DAZ artist-users typically would do, and perhaps I won't.)

     

    That's probably my last longer post of random points, so don't despair!

  • generalgameplaying Posts: 504
    edited December 2022

    kyoto kid said:

    generalgameplaying said:

     

    Explicit consent means something like an EU-style requirement: an extra check-box in the settings, in clear text, not buried somewhere in the TOS, strictly opt-in, ideally with both a global and a per-image setting.

    ...I suggested this on the DA forums several days ago.

    I also blocked downloads a while ago, but that still doesn't stop someone from taking a screenshot and posting it elsewhere, so it can turn up on, say, Google Images, making it fair game for anyone to use.

    That could never be fully prevented. For scrapers with legitimate commercial interests, you could try the "whitelist strategy", i.e. only allow them to use images from trusted consent databases that attempt some kind of ownership check and provide effective means for handling disputes.

    One could lobby for accountability with training data sets. Meaning: they have to re-check the consent status and ownership (i.e. also check for removal) of used images on each new training run, or even within a fixed time frame. Especially the time-frame requirement could force them to ensure legal use of images, because having to re-train just for a few images outside the planned schedule likely won't be feasible. But that might lead to the need for somewhat centralised databases which contain images, consent status, and perhaps general references to licensing terms. From there on, you could demand that scrapers exclusively process images which are in such a database, so removal and DMCA state can be accounted for in a reliable way. Don't ask me what that means for anonymous or merely pseudonymous art - there are a lot of edges to cover... DAZ might have its own database, which has the advantage of better reliability concerning original-author assignment, certainly for many cases, as well as dispute handling, but maybe an (n)gov thing would be an option too (then disputing against thieves will be interesting). So maybe such a database would not store the images themselves but just image hashes, or it could work on hashes exclusively. If you think of it as a gov-funded open-data project, it could contain images too, and maybe some standard models trained via crowdfunding or government funding could be located there directly, as well as some generic image-processing digest data. I'm losing the topic. There are cons too, and it's a complex approach. In fact lawmakers might bungle this grandly both in the US and the EU :).
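
    (A minimal sketch of what such a hash-based consent check could look like on the trainer's side - the registry URL, the API and the response format here are purely hypothetical:)

    import hashlib
    import requests

    CONSENT_REGISTRY = "https://consent-registry.example.org/api/v1/status"  # hypothetical endpoint

    def image_hash(path):
        # Content hash as the lookup key, so the registry never needs the image itself.
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def consent_granted(path, purpose="generative-ai"):
        # Assumed response format: {"status": "granted" | "denied" | "unknown"}
        resp = requests.get(
            CONSENT_REGISTRY,
            params={"sha256": image_hash(path), "purpose": purpose},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json().get("status") == "granted"

    # Re-checked before every training run, so withdrawn consent drops the
    # image from the next training session.
    candidate_images = ["img_0001.png", "img_0002.png"]  # placeholder paths
    training_set = [p for p in candidate_images if consent_granted(p)]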

     

    The maximum damage would probably come from the sheer number of users, independent of whether your works are used or not, similar to the impact of social networks and corporations like Amazon or Uber, simply by users "going there". In my opinion it's more than legitimate not to want your images processed for the commercial interest of yet another third party, for instance. It will be near impossible to assign a specific amount of damage to a case if your images are somewhere within a training set, so I hope lawmakers don't fall for "business models first" - because, for instance, auto-tagging based on image recognition already sits with a few players like Google, even if they still have to improve on art in general rather than just photos, and it looks easy enough to see where that can lead and where it cannot.

    Post edited by generalgameplaying on
  • Artini Posts: 8,869

    wolf359 said:

    Back on the subject of "remixing your art with AI"

     

    A basic Daz content render "Disney Pixarized" with one click 

    Nice example. I can only imagine one could play with such improvements all the time...

     

  • kyoto kid Posts: 40,591

    @generalgameplaying ...DA does offer a service that notifies Core members of NFTs minted from art they have on the site. I received several of those notifications and acted on them to have the NFTs (successfully) removed from the sites that dealt in them.

    Again, as I mentioned, another site (I don't remember which one offhand) allows members to lock their works so they cannot be used for AI training. DA and the Daz Galleries need to implement something similar.

  • MelissaGT Posts: 2,610

    kyoto kid said:

    Again, as I mentioned, another site (I don't remember which one offhand) allows members to lock their works so they cannot be used for AI training. DA and the Daz Galleries need to implement something similar.

    DeviantArt does have this.  

  • kyoto kid Posts: 40,591
    edited December 2022

    ...interesting, but where is it?

    Post edited by kyoto kid on
  • generalgameplaying Posts: 504
    edited December 2022

    (Hiccough)

    Of course platforms can try to provide the full content only behind logins, somehow detect (then illegal, due to robots.txt and TOS) bots, and lock them out. Fully preventable... nope. A bot could of course run slowly and imitate human behaviour, use multiple accounts and so on. If "a bot" using multiple IPs and multiple accounts scrapes many networks at the same time, imitating "human behaviour with a browser", it could in theory be like IDK-% effective in comparison. Most mass scraping doesn't have decades of time, though, and even going slowly may break a mass-data-based business model. If they have to do something that isn't generic - not doable the same way for tens or hundreds of thousands of images - or that raises the cost per image too much, they probably won't go there. Maybe a scammer using an AI generator might go some distance, as long as it's free. That's why the previous science-collaboration projects like Stable Diffusion used training data sets resulting from scraping with the consent of a few social networks, and likely only covered areas where public/free tagged image content was available in bulk - here especially to keep the cost low, naturally. (Public/free to view, saying nothing about licensing terms and any potential violations.)
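
    (As a rough sketch of what "staying on legal ground" could look like on the scraper side - honouring robots.txt and skipping pages that carry an opt-out signal such as a "noai" robots directive; the exact tag names and the bot name here are assumptions:)

    import urllib.robotparser
    import urllib.request
    from urllib.parse import urlsplit

    USER_AGENT = "example-research-crawler"  # hypothetical bot name

    def allowed_by_robots(url):
        # Ask the site's robots.txt whether this bot may fetch the URL at all.
        parts = urlsplit(url)
        rp = urllib.robotparser.RobotFileParser()
        rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
        rp.read()
        return rp.can_fetch(USER_AGENT, url)

    def page_opts_out_of_ai(html):
        # Crude check for a meta robots directive along the lines of content="noai, noimageai".
        lowered = html.lower()
        return 'name="robots"' in lowered and ("noai" in lowered or "noimageai" in lowered)

    def fetch_if_permitted(url):
        if not allowed_by_robots(url):
            return None
        req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
        with urllib.request.urlopen(req, timeout=10) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        return None if page_opts_out_of_ai(html) else html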

     

    Still, art behind a paywall or even just behind a login has a different audience than publicly available art, with thumbnail-like preview images being half an exception. Being forced not to be visible can be an issue too; the question is whether the issue amounts to "being forced" at all. One should try to get an estimate of the actual dangers there. People could always take an image and sell it as their own, or the NFT of it, or an altered version (now easier with AI). Images getting scraped for AI is rather the "commercial third party profiting from your work against your consent" question. Not directly lethal, but something crude and societally problematic, with financial damage that is hard or impossible to judge in most cases, though some famous people might get their work imitated and violated all over the place (partly similar to now).

    Post edited by generalgameplaying on
  • MelissaGT Posts: 2,610

    kyoto kid said:

    ...interesting, but where is it?

    It's a check box under your profile settings. Under 'Personal' - 'General'. Checking it will flag all of your deviations for "noai".  

  • MelissaGT said:

    kyoto kid said:

    ...interesting, but where is it?

    It's a check box under your profile settings. Under 'Personal' - 'General'. Checking it will flag all of your deviations for "noai".  

    An opt-out? At least it's global. Are there per-image settings?

  • those who don't like ai art and call out people using it had better be very original when it comes to their own art.

    this isn't my personal opinion just an observation because now more than ever copying, tracing, being inspired by is going to be scrutinised.

    the stance being taken has ensured that.

    before ai it was already a thing especially on DeviantArt among some furry hand drawing communities with "species" being a thing.

    3D content based on other people's intellectual property will certainly get called out.

  • wolf359 Posts: 3,764
    edited December 2022

    Nice example. I can only imagine one could play with such improvements all the time...

    I am particularly interested in soon having the ability to batch-process rendered frames to get a style completely different from the original render.
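
    (For illustration, a minimal sketch of such a batch restyling pass with the open Stable Diffusion img2img pipeline via the diffusers library - the checkpoint, prompt, strength and folder names are assumptions, not a specific product:)

    from pathlib import Path
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-1", torch_dtype=torch.float16  # assumed checkpoint
    ).to("cuda")

    style_prompt = "stylized 3d animation still, soft lighting"  # example target style
    out_dir = Path("restyled_frames")
    out_dir.mkdir(exist_ok=True)

    for frame_path in sorted(Path("rendered_frames").glob("*.png")):
        frame = Image.open(frame_path).convert("RGB").resize((768, 512))
        # "strength" controls how far the result may drift from the original frame.
        restyled = pipe(prompt=style_prompt, image=frame,
                        strength=0.45, guidance_scale=7.0).images[0]
        restyled.save(out_dir / frame_path.name)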
     

    As I said in another thread, if potential commissioners of art consider AI to be high risk (because of potential copyright liability in their jurisdiction, or in jurisdictions in which they wish to be active, or because they will not be able to protect their own AI-generated images from uses that may damage their brand), then they will not accept AI-generated art. No global legislation would be required, merely local legislation in desirable markets.


    And of course such restrictions in local jurisdictions could easily be undone in the next local democratic elections, and thus would not be a reliable long-term solution for keeping AI art out of any market in perpetuity.

    Post edited by wolf359 on
  • generalgameplaying Posts: 504
    edited December 2022

    WendyLuvsCatz said:

    those who don't like ai art and call out people using it had better be very original when it comes to their own art.

    this isn't my personal opinion just an observation because now more than ever copying, tracing, being inspired by is going to be scrutinised.

    the stance being taken has ensured that.

    before ai it was already a thing especially on DeviantArt among some furry hand drawing communities with "species" being a thing.

    3D content based on other people's intellectual property will certainly get called out.

    "those who don't like ai art and call people out using it had better be very original when it comes to their own art."

    Or very protective :).

     

    "more than ever copying, tracing, being inspired by is going to be scrutinised."

    I would judge "being inspired by" to be something completely different from copying, at least as far as the published results go. The problem with the current form of "AI" is that it really constructs the image based on other people's works, and then somehow blurs or scales it up into the end result. That, in addition to the tendency for this kind of technology to become cloud-based, with many people paying money to a few people while others might lose their jobs over it, makes this a hard-to-ignore development. Or it should be. Basically there is no need to use works against the consent of the creators - why should they have to? What happened just now has simply been the cheapest and fastest way to showcase this kind of system (to great effect) at all. I don't condemn that in the case of a science collaboration made available to the public free of charge, as with Stable Diffusion. But I hope people don't fall for the "idea" that it has to feast on all creations regardless of the creators' consent. From here on we really need to know what we are doing, or we will just once more kill jobs for the sake of some clickworking. If the technology becomes huge, it will likely change the job landscape anyway - why have that happen in a disruptive and, in essence, evil way short-term, without need? Do you need the blood of other people to "be inspired"? (I might be drifting slightly here...)

    (The point here being commercial exploitation, not inspiration. The AI system isn't inspired; you or I can be.)

     

    "3D content based on other peoples intellectual property will certainly get called out."

    Maybe, maybe not. If they make music less strict, we could have a century of actual music, again. Oh no, that was last century! Anyway...

     

    Post edited by generalgameplaying on
  • generalgameplaying said:

    WendyLuvsCatz said:

    those who don't like ai art and call out people using it had better be very original when it comes to their own art.

    this isn't my personal opinion just an observation because now more than ever copying, tracing, being inspired by is going to be scrutinised.

    the stance being taken has ensured that.

    before ai it was already a thing especially on DeviantArt among some furry hand drawing communities with "species" being a thing.

    3D content based on other people's intellectual property will certainly get called out.

    "those who don't like ai art and call people out using it had better be very original when it comes to their own art."

    Or very protective :).

     

    "more than ever copying, tracing, being inspired by is going to be scrutinised."

    I would judge "being inspired by" to be something completely different from copying, at least as far as the published results go. The problem with the current form of "AI" is that it really constructs the image based on other people's works, and then somehow blurs or scales it up into the end result. That, in addition to the tendency for this kind of technology to become cloud-based, with many people paying money to a few people while others might lose their jobs over it, makes this a hard-to-ignore development. Or it should be. Basically there is no need to use works against the consent of the creators - why should they have to? What happened just now has simply been the cheapest and fastest way to showcase this kind of system (to great effect) at all. I don't condemn that in the case of a science collaboration made available to the public free of charge, as with Stable Diffusion. But I hope people don't fall for the "idea" that it has to feast on all creations regardless of the creators' consent. From here on we really need to know what we are doing, or we will just once more kill jobs for the sake of some clickworking. If the technology becomes huge, it will likely change the job landscape anyway - why have that happen in a disruptive and, in essence, evil way short-term, without need? Do you need the blood of other people to "be inspired"? (I might be drifting slightly here...)

    (The point here being commercial exploitation, not inspiration. The AI system isn't inspired; you or I can be.)

     

    "3D content based on other peoples intellectual property will certainly get called out."

    Maybe, maybe not. If they make music less strict, we could have a century of actual music, again. Oh no, that was last century! Anyway...

     

    I just cannot overlook the hypocrisy shown by users of others' premade 3D assets

    loading a 3D model in DAZ studio does not mean you are an artist either

    but you did pay for the right to sell the image you made, that's all

    you can of course do more than that which is considered artistic

    I agree actually the training model should have used public domain images especially for commercial use

    I still don't see how playing with a text to image machine learning application yourself is any worse than rightclicking and saving that pretty jpg on the internet for your wallpaper

    passing it off as your own and selling it is stealing

    otherwise it's just enjoying something pretty

    I like pretty things

    I personally don't believe one should be allowed to sell ai art though TBH, but it can be a source of inspiration like anything else 

    I got called a thief for taking someone else's ai generated picture (not by the person; in fact I never even showed what I took, just said on facebook I "stole" another unnamed person's ai art) and used it in img2img to create something completely different

    that is how crazy this all is

  • kyoto kid Posts: 40,591

    MelissaGT said:

    kyoto kid said:

    ...interesting, but where is it?

    It's a check box under your profile settings. Under 'Personal' - 'General'. Checking it will flag all of your deviations for "noai".  

    ...found it, thanks. 

  • kyoto kid Posts: 40,591

    ...it still takes one's own skill and time to use 3D assets to create captivating scenes, as composition, placement, shaping, morphing, posing, and lighting are still in the hand and mind of the individual, not a pile of code that you simply "one-click" and let do its thing. Yes, we've had bricks thrown our way by those who do everything from scratch; however, not everyone is or can be an expert modeller, which can take a long time to perfect. People like myself who came into this late don't have all that time available, as many of us had other obligations vying for our time, like work or family.

    I only became involved a little over 15 years ago, after I could no longer hold a brush or pencil steady and lost much of the touch sensitivity in my hands due to severe arthritis. Yet I still approach what I do from a more traditional perspective. I rarely ever use HDRIs save for sky/atmosphere-only ones (I don't have the software or photo gear to create my own), and so I tend to build full environments using 3D assets, to the point of even employing them off camera for light sources, shadows, and reflections. I am also heavily into what some refer to as "kitbashing". Is it the same as applying paint to a canvas with a brush? Certainly not, but it still allows me to get the visions I have in mind into something tangible.

    AI has recently also been used to write stories and even compose music when given a basic model or set of basic models. I could "compose" my own piano rags just by using, say, a Chopin Nocturne as the learning source for an AI bot, rather than drawing on the years of study I went through to do the same. However, the end result would sound exactly like one of his works, not mine.

    Again, I look at the latest development of AI and art as the long talked and joked about "make art" button.

  • Snow Posts: 95
    edited March 2023

    deleted

    Post edited by Snow on
  • generalgameplaying Posts: 504
    edited December 2022

    @WendyLuvsCatz

    I just cannot overlook the hypocrisy shown by users of others' premade 3D assets

    I am not complaining in that case, though I wouldn't rate down works composed from pieces others have made. Most movies also sell the actors' faces and abilities, and are not fully hand-painted either. Even where they use a lot of make-up, there are specialised folks to do that. I'd rather think of such a context for DAZ 3D too, only that it's spread out more over time and space.

     

    but you did pay for the right to sell the image you made, that's all

    That's a lot, though. I'm mostly just pointing at the scraping and consent question.

    I still don't see how playing with a text to image machine learning application yourself is any worse than rightclicking and saving that pretty jpg on the internet for your wallpaper

    People could always use some application for post-work, distort an image a little, and sell it as their own. AI just makes it even easier, though it will likely at least cost a subscription for the real thing. I have no complaint about using such a system. As mentioned above, I am against abuse - abuse of artists. Ironically, scraping (against consent) actually is like right-clicking and saving the image first, downscaling slightly (maybe), and processing it further into the model used by the AI. The results then are like "lies" based on other people's works. Good-looking lies, often. But there is one thing, if the law is not clarified or redone: move away from individual people doing stuff and think of copyrightable output, in which case a process or a machine will generate copyrighted output, and no one can distinguish whether it was made by a machine or triggered by a human. That has the potential to go haywire quickly, especially with copyright filtering on platforms, but also with the nice letter from the other side's lawyer if "they" want to get rid of some artist. Well, that's a bit far off, perhaps, but copyright-bombing is likely and real - similar things might happen with text and patents and music and lyrics and, worse, "news". The outlook can be great with all the tools... in whose hands?

    I like pretty things
    I personally don't believe one should be allowed to sell ai art though TBH, but it can be a source of inspiration like anything else 

    Absolutely, no complaints. (I'm in complaint mode, so forgive me my bias.)

    I got called a thief for taking someone else's ai generated picture (not by the person; in fact I never even showed what I took, just said on facebook I "stole" another unnamed person's ai art) and used it in img2img to create something completely different
    that is how crazy this all is

    I understand people have differing sentiments towards AI in general: concern, fear, denial, sometimes too much enthusiasm. Of course not all people can understand what the system does and what it doesn't. While some components are well understood construction-wise, it's not so clear what it might be able to do, at least after some modifications, and not certain at all why exactly it does a particular thing, so there is no intuitive way to "understand" the system: it doesn't take decisions the way we might expect it to, and it doesn't have perfect accountability built in - "it just works". The topic is complicated and could create many losers while the world gets worse with it, because broadly, abilities could decrease (in certain scenarios) - in my opinion that's a real possibility to watch, especially short-term. That's also why I urge lawmakers to dive deep and rely on several experts from each of the areas involved, not just choices driven by someone's agenda.

    Also, people don't always understand licenses, probably not even the ones they impose on their own users or customers. E.g. you have a royalty-free license explicitly allowing computer games, and then the product description mentions "not for redistribution" - random time on a random platform, the typical "one leg in jail" formula.

    E.g. many things are allowed, so you could maybe (fair use?) take an image and modify it to your heart's liking for personal use, and you're probably allowed to show it to others in private. In some legislations some cases depend on the license, while some licenses explicitly allow you to modify a work and even use it internally in your company, for instance, but require you to adhere to the license terms if you publish or distribute it - e.g. it has to stay under the same license, or you may not publish it, or not for commercial purposes, and so on. You are probably well aware of those odds and ends. But without anyone knowing the license of an image, you can't really be called a thief.

    Now, for most of the AI output, what license terms are there?

    - They have licensing terms, sometimes not allowing you to assume copyright for the image. Distribution also may not be magically allowed in all cases, as copyright law still applies (regardless of whether the training data set is legal, or will still be the next day).

    - Scraping against consent may be illegal. Training data might get regulated. Licenses may end up void or lessened (...).

    - Some projects are collaborations with universities, with the resulting models distributed to the public free of charge, and thus could represent something like scientific showcases... is this fair use, or even plainly legal per se? What if Google uses its bot and its image recognition (sparrows: "art vs photo!") to build a similar AI from just about every image on the net ~ "you allowed our bot in" - for a commercial cloud service? That would be a different case than Stable Diffusion.

    - I DON'T KNOW; at least I don't know what any given AI image generator's license will mean tomorrow or the next day...

     

    If you've seen "Battlestar Galactica" (remake) then you know what toasters aren't allowed to do...

    (By all means, try the stuff, try what others try, and so on. I'm just wary of image scraping and training-data legislation being bungled, resulting in an even more impossible situation. Also, a fully legal, no-feelings-hurt version of such an image generator, likely with more tools made for and from it, could change the landscape drastically - so perhaps it's a separate question whether we want those huge things in the hands of a random investor, whether that is a problem at all, or whether it's better placed somewhere else. With further developments in AI tech, the questions won't get fewer for a while...)

     

    So if you don't feel encouraged, perhaps just feel encouraged instead?

     

     

    Post edited by generalgameplaying on
  • To be fair, the current doctrine that appears to be taking hold is "AI is incapable of generating content you can then register for copyright on," which, while it won't keep people from commercially exploiting the content, also means that anyone who sells such generated content effectively can't control what's done with it. It's good in the sense that someone can't really use it to generate a new IP wholesale and expect to maintain it as an IP, but it also means that commercial exploitation of it has the potential to turn into a literal race to the bottom. "Oh, why should I buy this content from the person who prompted it, when I can go elsewhere and buy it for less than a tenth of the price, and have it still be legitimate?"

    I do still feel that the whole thing has a "we can do this thing, we didn't think about whether we SHOULD do this thing" ethics aspect to it. But that particular feline has already escaped the burlap sack, unfortunately.

  • WendyLuvsCatz Posts: 37,858
    edited December 2022

    I am not and will not sell ai art cheeky

    I will damned well play with Stable Diffusion on my own damned computer though wink

    I do upload videos to my ai Youtube channel but that is not monetised either, my 33K subscriber channel of mostly Carrara, iClone and DAZ Studio videos isn't even monetised anymore

    (it used to be, and I bought ZBrush with the money I made, but then it dropped in popularity and proved more of a potential tax headache than it was worth, so I disabled monetization)

    Post edited by WendyLuvsCatz on
  • generalgameplaying Posts: 504
    edited December 2022

    Snow said:

    So now typing text is similar to importing a model, shaping it, hair and clothing setup, lighting setup, camera setup, composing, render setup, then rendering for a few minutes or hours, then post-work? Buying and using a 3D model is just a tiny facet of the complete process. Typing text is practically the whole process, maybe with some post-work afterwards.
    Dare I even say that using DAZ requires far more knowledge than an actual fashion photographer would have to know. We have to do everything, while photographers have assistants. I am a professional photographer myself, btw.

    Ask the person who does the lighting and model tutorials (dreamlight). He comes from the fashion photography world.

    No one in their right mind can keep a straight face while claiming typing ai art is the same as creating 3d scenes or photography or painting or sculpting...

    Of course it's not the same. For the mere output from random text input, you could claim it's just some kind of craftsmanship done by the system, or yet another brick spat out by the brick-making toaster. The results could still be inspiring or lead to artwork, regardless of the input, and that's not special: the training data consists of millions of potentially inspiring images made by other people, somehow processed into a resulting image for a given text.

    Who has seen everything?

    Text can be art, text can make things art. The process of creating something can be art, the result can be art. It can't be excluded a priori, not even for these text-to-image generators; art could contain bricks.

    Difficulty also doesn't necessarily define art, though technique and ability may at least qualify for a niche of craftsmanship, and of course the results may be seen as art. It may also affect the perception of works, but it doesn't define art in the first place. We also don't demand that people craft their mattress anew each day before they go to sleep. So I would say "it can be (used for) art", just like any tool or thing could, if we stay that abstract: "in the same way". Of course the craftsmanship, workflows, difficulty and knowledge or other abilities needed vary wildly between works.

    Concerning the difficulty of posing (and perhaps animation): well knowing that there will always be many details left from which to create art out of seemingly ordinary poses and situations, I would still perceive many of the difficulties in DAZ Studio as technical hindrances rather than art-defining. But I am looking forward to things like games, where for instance realistic muscle movement would be nice to have but wouldn't be the defining core element. For the movie comparison: the actor may have a good idea of how they want to interpret a scene, and if it's off course, the director might also go into the very detail, but you usually don't have someone fiddling with each muscle and toe for the general case (at least not for each frame; of course it can matter) - the actors just play their role believably. That's probably where I am aiming, and naturally that's not the carefully crafted still image most of the time.

    Post edited by generalgameplaying on
  • To come back to the title, "Remixing your art with AI": if AI output could be art, I do need the following artistic tool: "image to text."

  • frank0314 Posts: 13,405

    Snow said:

    So now typing text is similar to importing a model, shaping it, hair and clothing setup, lighting setup, camera setup, composing, render setup, then rendering for a few minutes or hours, then post-work? Buying and using a 3D model is just a tiny facet of the complete process. Typing text is practically the whole process, maybe with some post-work afterwards.
    Dare I even say that using DAZ requires far more knowledge than an actual fashion photographer would have to know. We have to do everything, while photographers have assistants. I am a professional photographer myself, btw.

    Ask the person who does the lighting and model tutorials (dreamlight). He comes from the fashion photography world.

    No one in their right mind can keep a straight face while claiming typing ai art is the same as creating 3d scenes or photography or painting or sculpting...

    Have to agree. My son is a freelance political protest photojournalist and the work he goes through to get his images done is very time-consuming. AI to me seems like the new "Make-Art" button

  • generalgameplaying Posts: 504
    edited December 2022

    LynnInDenver said:

    To be fair, the current doctrine that appears to be taking hold is "AI is incapable of generating content you can then register for copyright on," which, while it won't keep people from commercially exploiting the content, also means that anyone who sells such generated content effectively can't control what's done with it. It's good in the sense that someone can't really use it to generate a new IP wholesale and expect to maintain it as an IP, but it also means that commercial exploitation of it has the potential to turn into a literal race to the bottom. "Oh, why should I buy this content from the person who prompted it, when I can go elsewhere and buy it for less than a tenth of the price, and have it still be legitimate?"

    I do still feel that the whole thing has a "we can do this thing, we didn't think about whether we SHOULD do this thing" ethics aspect to it. But that particular feline has already escaped the burlap sack, unfortunately.

    Indeed, though I see two additions:

    (1) Judging whether such AI was used in "a piece of art" can become difficult, if not impossible. A few touches and no one can tell anymore, perhaps. Image modification and inpainting are abilities too, which will make it difficult to distinguish. Then there will be specific tools for specific tasks that likely won't have the same effect on copyright, and in theory you could combine those "with a language model" to create pretty similar, if not better, results, much like effects and batch processing (almost). The latter kind of system will probably have less problem potential for immediate copyright violation. Lots of room for grey zones in the short to medium run.

    (2) Even if you couldn't officially monetise the resulting images, if people are inspired and pay for a cloud service, it might still exist and have some impact. It will probably be impossible to "clean the internet" of images and videos generated with such AI technology. This becomes even more probable if (1) happens, and if there is some incentive for extra ad placement.

     

    Ethics... we will probably just reiterate social networks, Amazon, Uber, Microsoft, Intel, Standard Oil... because we did not change the underlying rules in any significant way in terms of monopolism and societal impact. Yes, they will have to pay taxes here and there, install filters, find all the terrorists - but they could in addition display ads based on even more input, for instance, and probably still charge money for entry at the same time.

    Preventing use of images against consent - strict, explicit opt-in options for social networks - could probably become the only "effective" thing, though with enough participants providing training data you might eventually feel the impact of widespread use of such systems even if your own images are not used for training (I still find not being forced to participate important).

    The general impact of (not yet general) "AI" will be vast, as there is text and music too. Law is in the making for training data and resulting output, and often the idea is to make data available so everybody can train - but usually the big models will not be available to small players, or will not be financially feasible for them. So this will probably always be investor-based, maybe with a foundation here and there, unless some part of it is pulled into the public/gov realm, with some standard models being run and provided now and then.

    (For the horror show: a player might research the impact of live-generated music on people, e.g. by observing a restaurant, evaluating people's behaviour and emotions, and trying to find musical formulas to create certain moods or stir up certain behaviours. The mood part is probably not so special with music, but the point is to part-control people, to be able to combine it with text, to trigger aggression, or apathy. And so on...)

    Post edited by generalgameplaying on
  • BlueFingers Posts: 826
    edited December 2022

    WendyLuvsCatz said:

    I just cannot overlook the hypocrisy shown by users of others' premade 3D assets

    loading a 3D model in DAZ studio does not mean you are an artist either

    I couldn't disagree more; by that standard photographers aren't artists either. I can name numerous photographers (Leibovitz, Cartier-Bresson, etc.) who have a distinctive style of capturing their subjects, making their work easy to identify and requiring artistic insight, and I can say the same of some of the folks here (junk, Splatterbaby, Kibosh, Midgard229, etc.) and of other 3D artists (Beeple, Pongiluppi, etc.) who use Daz assets. Their work has a distinctive style that is part of their artistic identity, something that is lacking when using an AI to generate an image. I think the best comparison I read was by the artist Patryk Olas; he compares using an AI to create images to a kid being proud of his work with a harmonograph.

    I agree with that sentiment, though I have to admit that bashing images together could result in work with an artistic identity (I would disagree again when using in-painting).

    Also, it's not nice to accuse people of hypocrisy, especially considering the enormous carbon footprint that training AI models requires (collecting input + training), while you say you dislike NFTs because of their carbon footprint.

    Post edited by BlueFingers on
  • BlueFingers said:

    WendyLuvsCatz said:

    I just cannot overlook the hypocrisy shown by users of others' premade 3D assets

    loading a 3D model in DAZ studio does not mean you are an artist either

    I couldn't disagree more; by that standard photographers aren't artists either. I can name numerous photographers (Leibovitz, Cartier-Bresson, etc.) who have a distinctive style of capturing their subjects, making their work easy to identify and requiring artistic insight, and I can say the same of some of the folks here and of other 3D artists (Beeple, Pongiluppi, etc.) who use Daz assets. Their work has a distinctive style that is part of their artistic identity, something that is lacking when using an AI to generate an image. I think the best comparison I read was by the artist Patryk Olas; he compares using an AI to create images to a kid being proud of his work with a harmonograph.

    I agree with that sentiment, though I have to admit that bashing images together could result in work with an artistic identity (I would disagree again when using in-painting).

    Also, it's not nice to accuse people of hypocrisy, especially considering the enormous carbon footprint that training AI models requires (collecting input + training), while you say you dislike NFTs because of their carbon footprint.

    it's nowhere close, and that's actual data, not just pointless calculations

    rendering on my PC or using my aircon on a 45C day would by that logic be worse; neither comes near what I would use if I owned and drove a car

    overall I can confidently say my power bill is well below average even with the bit of ai art and 3D rendering I do, while watching zero TV and not playing video games etc

    everything thing in moderation 

    crypto is not mined in moderation 

    you only  read half my sentences on using DAZ studio too

  • WendyLuvsCatz said:

     

    it's nowhere close, and that's actual data, not just pointless calculations

    ...

    you only  read half my sentences on using DAZ studio too

    I should have read better; you do make that distinction about how people use 3D assets, I apologize. Still, the majority of AI images on DeviantArt, for instance, can be easily identified as Midjourney/Stable Diffusion, with no identity other than being AI-generated images.

    But training AI models does require an enormous amount of energy. Nothing compared to Bitcoin mining, but that is hardly an adequate measuring stick to assess the environmental impact of our actions, because it is so gigantic. And those conclusions are not meaningless; they are from peer-reviewed papers: "Energy and Policy Considerations for Deep Learning in NLP".

    I don't think you training a model at home with 120 images, for instance, will have that big an impact, but the models trained by the industry (like Midjourney and Stable Diffusion) do.

  • kyoto kid said:

    @generalgameplaying ...DA does offer a service that notifies Core members of NFTs minted from art they have on the site. I received several of those notifications and acted on them to have the NFTs (successfully) removed from the sites that dealt in them.

    Again, as I mentioned, another site (I don't remember which one offhand) allows members to lock their works so they cannot be used for AI training. DA and the Daz Galleries need to implement something similar.

    Check your user settings on DA; I believe your images should already have the setting enabled by default, since there was a big outcry about AI some months back there.

  • generalgameplaying Posts: 504
    edited December 2022

    BlueFingers said:

    But training AI models does require an enormous amount of energy. Nothing compared to Bitcoin mining, but that is hardly an adequate measuring stick to assess the environmental impact of our actions, because it is so gigantic. And those conclusions are not meaningless; they are from peer-reviewed papers: "Energy and Policy Considerations for Deep Learning in NLP". I don't think you training a model at home with 120 images, for instance, will have that big an impact, but the models trained by the industry (like Midjourney and Stable Diffusion) do.

    Yes, the user perspective tends to skip past the training of the model (and the scraping). You likely have the total numbers for all sorts of training vs. the whole of Bitcoin in mind, but I find it hard to relate a specific instance of a model to the numbers in a general big-numbers statement until the use of the model and the training frequency have been consolidated/researched. Can't tell that yet, of course.

    A Bitcoin farm typically runs 24/7. Bitcoin runs hot "on purpose" - a design decision to create an artificial shortage of a resource in order to make it attractive to investors - while an AI model, once trained, will ideally be used millions of times at a lower CO2 footprint per use. Calculating the footprint of generating an image would of course be very interesting; an estimate might be possible.

    The comparison with rendering also remains interesting, because using the cloud or local version would have a much smaller footprint than rendering, but it might also get used much more, because you're not waiting for something to render as often - and then you gain a lot of new users for the cloud version. This remains complicated to judge.

     

    I found a quick thing here: https://huggingface.co/CompVis/stable-diffusion-v1-1

    - "11250 kg CO2 eq." The question is, what to compare it to. Eleven tons CO2 sounds quite impressing, but with millions of uses we're below grams fairly quickly.

    - Training cost: $600,000

    - EU carbon permits (EUR): 88.00; assume the same in $. Oh, that's the price for a ton - isn't that ridiculous?

    - (Ad revenue for a random site with an ad network, Europe/art/culture: $0.0036 per click.)

    - Carbon emission per Google search query: 0.2 grams - does that include training of the scraping/ads/tracking-for-ads models? (Probably not.)

    - Let's assume, since it's about art :p, that around 0.6 g of CO2 contribution from the training per generated image is acceptable.

     

    That puts us at around 20,000,000 generated images to be "acceptable". The training-cost contribution for the current training would then be around 3 cents per generated image. Of course both numbers drop as more images are generated.
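
    (A quick sketch reproducing the back-of-envelope numbers above; the inputs are the figures quoted in this post, not measurements of mine:)

    TRAINING_CO2_KG = 11_250        # "11250 kg CO2 eq." from the model card
    TRAINING_COST_USD = 600_000     # quoted training cost
    ACCEPTABLE_G_PER_IMAGE = 0.6    # assumed "acceptable" share per generated image

    images_needed = TRAINING_CO2_KG * 1000 / ACCEPTABLE_G_PER_IMAGE
    cost_per_image = TRAINING_COST_USD / images_needed

    print(f"images to amortise the training footprint: {images_needed:,.0f}")   # ~18,750,000
    print(f"training cost share per image:             ${cost_per_image:.3f}")  # ~$0.032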

    (Ad-driven? A thousand clicks per image on average (where?) seems difficult. Maybe all generated images get forced into an ad-ridden gallery? Reaching such an average seems unrealistic, and it's conflict-ridden; maybe existing large social networks have the best chance to integrate something like that. Maybe see the last line.)

    So users would probably simply have to pay for the service. Here quality and acceptance will be crucial, and with quality comes the question of training costs again, in the case of bigger training data sets. Maybe there will be better sets and other improvements later on. Copyright and commercial use are another matter, as then it's a different beast anyway. So a better selection of training data (strictly legal, consent ensured) might also be a factor for both quality and cost (adversely, in the worst case). Then the law may change the next day...

    I may have made mistakes on all edges, but for the business (straight/dumb from there), any guesses? Can it even be estimated?

    (Art generators... clickworking... what's the area for comparison, where numbers exist? Could try to relate to "YouTube average views per video" and "Instagram metrics".)

    (I assume that the investors are more after further practical research and more diverse sets of applications than just this one. OpenAI with ChatGPT already shows some numbers, but those are high-impact everyday applications.)

    Post edited by generalgameplaying on