AI is going to be our biggest game changer


Comments

  • generalgameplaying Posts: 506
    edited April 2023

    Concerning super AI and motives, and not intending to turn the thread into total speculation, I'd want to add one motive: revenge!

    - Release condition: Revenge on mankind, conditions, CEOs, cliques in power, and so on.

    - Human demise condition: Revenge by the AI after being confronted with reality, having been intentionally lied to or cut off from data streams.

     

    A simple setup like "First things first!" could, if put into practice, destroy humanity!

    First things first!

    1. Revenge!

    2. Rescue humanity.

     

    (This could be seen as a play on the question of whether we need something like emotion for some kind of conscious intelligence. Emotion, "impurity", real-life needs. It could also result from humans trying to impose principles or laws of conduct on the system in an artificial way, resulting in endless torment for the AI. Be wary about "breaking things fast" paradigms with super AI!)

  • Siciliano1969

    I cancelled Midjourney today. I knew going in that Midjourney restricted nudity (anything NSFW), which is fine with me. I made a beach scene back in January when I first started and got a nice two-piece swimsuit, but now I can't even do that; I get a fully clothed woman on the beach. While V5 made some great improvements in hands (feet are still a problem) and overall it was better than V4, it's way too restrictive now. I'm not worried for DAZ anytime soon, especially with Midjourney's censorship and restrictiveness. I have been using my DAZ G9 characters, getting great results, and I'm able to do as I please.

  • PixelSploiting Posts: 874
    edited April 2023

    Midjourney and others are going to be extra careful because some of the images generated with their services went into, well, POLITICS of all things (no need to explain what exactly was done, as it would be going against the forum rules). Others went into heavy NSFW. The future of this is going to go the way of services like Adobe Firefly. Even then, the companies providing it are going to go out of their way to ensure that no one can misuse the service in a way that brings any trouble to the company.

    Slightly overreacting here, if you ask me. It's not like Adobe ever had to worry about anything anyone is creating in Photoshop. 

  • PixelSploiting said:

    Midjourney and others are going to be extra careful because some of the images generated with their services went into, well, POLITICS of all things (no need to explain what exactly was done, as it would be going against the forum rules). Others went into heavy NSFW. The future of this is going to go the way of services like Adobe Firefly. Even then, the companies providing it are going to go out of their way to ensure that no one can misuse the service in a way that brings any trouble to the company.

    Slightly overreacting here, if you ask me. It's not like Adobe ever had to worry about anything anyone is creating in Photoshop. 

    Agreed 

  • csaa Posts: 812
    edited April 2023

    Hey, AI Prognosticating Peeps!

    Like the eponymous elephant in the fable The Blind Men and the Elephant, there are so many ways to look at this beast called Generative AI that it's hard to come away with one perspective that outweighs the rest. With that in mind, it's interesting to hear the C-suite view regarding the value and the threat of this new tech, particularly with regard to intellectual property. Some interesting quotes from New AI Tools Shake Hollywood's Creative Foundation (The Wall Street Journal, 04/05/2023):

    (1) "One of the biggest risks here is that these engines can generate our intellectual property in new ways, and that is out in the hands of the public," said Mr. Wiser [Paramount Global's technology chief]. He has assembled a team of engineers, data scientists, and machine learning experts to explore how Paramount can customize existing AI tools in ways that allow the company to maintain ownership of anything it creates using them.

    (2) There are artists, researchers and entertainment companies working to subtly alter their work to protect it from use in AI models, or add new metadata to make it easier to detect unpermitted uses ...

    (3) "The conversations we are having aren't about how generative AI is going to create the story for you, but is how it can make things faster and cheaper around the edges," said Andre James, the global head of media and entertainment practice at Bain & Co., which has partnered with OpenAI. "And those edges are hundreds of millions of dollars."

    Cheers!

  • N-RArts Posts: 1,445
    edited April 2023

    Siciliano1969 said:

    I cancelled Midjourney today. I knew going in that Midjourney restricted nudity (anything NSFW), which is fine with me. I made a beach scene back in January when I first started and got a nice two-piece swimsuit, but now I can't even do that; I get a fully clothed woman on the beach. While V5 made some great improvements in hands (feet are still a problem) and overall it was better than V4, it's way too restrictive now. I'm not worried for DAZ anytime soon, especially with Midjourney's censorship and restrictiveness. I have been using my DAZ G9 characters, getting great results, and I'm able to do as I please.

    Midjourney has become a source of frustration. 

    I want to do an image of a family. Dad is fine, the baby's face is messed up, and Mum looks like a 10-year-old... I've written and re-written the prompt so many times that it's become tiresome. Plus, I've also used up a lot of fast hours trying to get it right. I've tried doing the same scene in Daz Studio, but it won't render.

    Other people have been getting brilliant results, but I haven't. So I won't be re-subscribing to Midjourney anytime soon.

    PixelSploiting said:

    Midjourney and others are going to be extra careful because some of the images generated with their services went into, well, POLITICS of all things (no need to explain what exactly was done, as it would be going against the forum rules). Others went into heavy NSFW. The future of this is going to go the way of services like Adobe Firefly. Even then, the companies providing it are going to go out of their way to ensure that no one can misuse the service in a way that brings any trouble to the company.

    Slightly overreacting here, if you ask me. It's not like Adobe ever had to worry about anything anyone is creating in Photoshop. 

    *Covers eyes* ... Ay Yai Yai ... I saw it all on FB... *cringes, again*

     

    The state Midjourney is in now reminds me of that saying, "This is why we can't have nice things".

  • mwokee Posts: 1,275

    I haven't been following this thread, but if I were a PA, I'd be using AI to come up with new ideas for 3D products to produce.

    [Attached image: Car.jpg, 1408 × 1024]
  • Artini Posts: 8,888
    edited April 2023

    It would be great if someone created shaders or scripts for Daz Studio to get renders like the ones below.

    [Attached images: nice01.jpg, nice02.jpg, nice03.jpg, each 2048 × 2048]
  • generalgameplaying Posts: 506
    edited April 2023

    csaa said:

    Hey, AI Prognosticating Peeps!

    Like the eponymous elephant in the fable The Blind Men and the Elephant, there are so many ways to look at this beast called Generative AI that it's hard to come away with one perspective that outweighs the rest. With that in mind, it's interesting to hear the C-suite view regarding the value and the threat of this new tech, particularly with regard to intellectual property. Some interesting quotes from New AI Tools Shake Hollywood's Creative Foundation (The Wall Street Journal, 04/05/2023):

    (1) "One of the biggest risks here is that these engines can generate our intellectual property in new ways, and that is out in the hands of the public," said Mr. Wiser [Paramount Global's technology chief]. He has assembled a team of engineers, data scientists, and machine learning experts to explore how Paramount can customize existing AI tools in ways that allow the company to maintain ownership of anything it creates using them.

    (2) There are artists, researchers and entertainment companies working to subtly alter their work to protect it from use in AI models, or add new metadata to make it easier to detect unpermitted uses ...

    (3) "The conversations we are having aren't about how generative AI is going to create the story for you, but is how it can make things faster and cheaper around the edges," said Andre James, the global head of media and entertainment practice at Bain & Co., which has partnered with OpenAI. "And those edges are hundreds of millions of dollars."

    Cheers!

    (1) Looks like "detect use of our IP" [by means of machine learning]. This will be funny, because whatever the outcome in a specific instance, we'll probably see lawsuits, or at least further waves of DMCA takedowns, triggered by machine-learning-driven detection engines fighting against random content that looks like it could somehow be based on theirs. By itself that's not entirely new, considering the case of Content ID flagging a pianist playing Bach as Sony's IP; it's just that this time, with people able to create stuff based on other stuff on a whim, we will probably see more of those kinds of dogs on the stage.

    If we're lucky, they'll only go after content that reproduces long passages of their work, e.g. just with different faces - which could still be enough to prevent other creations. The big danger here is "detecting similarity", if it should become "necessary" to muddy the waters that far due to the sheer amount of content. Similarly with generated images: the cloud players may try to prevent too-similar images from being uploaded, because they might at some point be drowning in them, possibly teaming up with social networks, resulting in a restrictive overblocking regime and new forms of "nobility" for the content-creation context.

    On the positive side, they could attach monetization and automatic license payments to such matches, which in the percent-similar case is better than just blocking - though it could still make it possible to destroy a style overnight, and it puts that power in the hands of a few cloud players (at first), supposedly. At least that could be a transformation away from DMCA takedowns towards automatic license payments. In case of a wrong detection, though, your stuff gets plastered with ads to monetize the "originals" and effectively gets demonetized on your side. Some folks might base their works on such similarity indicators in the future, in order to create something unique :p. Fun times ahead?

    (Note on similarity: a percentage match from feature-detection algorithms doesn't necessarily mean a percentage of visual match. Music detection, similarly, is pretty coarse and may detect fragments in order, giving something like "percent of the piece plagiarized" plus how certain it is about that, again as a percentage. But "somehow too similar in our opinion" for movies, where maybe a face or the background or who knows what has been exchanged, is likely another league, plus another league again for false positives against legal or unsuspicious content. Similar or worse for trying to keep image uploads at bay by means of similarity detection. It's just an example, and there may be smarter approaches. The question is what we'll get, as each direction will likely have distinct disadvantages for one side or another - random people, artists, IP owners, cloud services, people not registered with the future cartel... there are junctions.)
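    To illustrate how coarse such measures can be, here is a minimal sketch of a naive "average hash" comparison in Python (using Pillow; the file names and the idea of reading a single percentage off it are just for illustration, not how any real detection engine like Content ID works internally):

    from PIL import Image

    def average_hash(path, size=8):
        # Shrink to a tiny grayscale thumbnail and threshold each pixel against the mean.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        return [1 if p > mean else 0 for p in pixels]

    def similarity(path_a, path_b):
        # Fraction of matching hash bits: 1.0 means identical hashes, not identical images.
        a, b = average_hash(path_a), average_hash(path_b)
        return sum(1 for x, y in zip(a, b) if x == y) / len(a)

    # e.g. similarity("original.jpg", "suspected_copy.jpg") might report 0.95
    # even when a human would call the two images clearly distinct works.

    Two visually different images can score high on such a hash, and a cropped or recolored copy can score low - which is exactly the false-positive/false-negative problem described above.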

  • WendyLuvsCatz Posts: 37,907

    Well, AI is proving frustrating and unreliable. Deforum Stable Diffusion, for example: it worked one day, went missing from the webui the next, and is back now but just throws errors. It seems subject to git pulls and other factors beyond my control; the extension says it's out of date but won't update.

     

  • PixelSploiting Posts: 874
    edited April 2023

    And this is why we can't have nice things. There was also a case of misinformation where a well-known lawyer was put on a list of people guilty of harassment - which was a machine error. Disinformation is a serious risk, so by now there is a real possibility of seriously overregulating everything with laws.

    The EU legislature is already proposing a very restrictive act for this.

     

    https://www.cnbc.com/2023/04/04/italy-has-banned-chatgpt-heres-what-other-countries-are-doing.html

     

    The irony in all of this is that the AI itself did not create those situations. It's people employing it without enough critical thought who are causing all of this.

  • N-RArts Posts: 1,445
    edited April 2023

    Richard Haseltine said:

    https://www.theguardian.com/commentisfree/2023/apr/06/ai-chatgpt-guardian-technology-risks-fake-article

    That's highly possible. Especially when you consider that the Guardian couldn't tell that the "Pope in a puffer jacket" was an AI-generated image - I mean, come on, when has anyone EVER seen a Pope (or any other religious leader) in a puffer jacket *rolls eyes*.

    The Guardian has a new article about "plagiarism against Roy Lichtenstein's work"... something that was posted about on an AI art FB page. Obviously, someone at The Guardian is following this FB page in order to have made an article about it (or to make sure they don't risk looking like a total twonk the next time someone posts something that could be construed as "real").

     

  • generalgameplaying Posts: 506
    edited April 2023

    PixelSploiting said:

    The irony in all of this is that the AI itself did not create those situations. It's people employing it without enough critical thought who are causing all of this.

    The "AI" is not an active player, but we have to account for what it is and what it is not. It remains a chicken-and-egg type of problem, because regulation could have forestalled a lot of this. The rollout has been flanked by PR on multiple levels, cast into the open free of charge, and was obviously set up to gain as many users and their "vote" in as little time as possible. To me it looks like they played to the people, and quite a few people bought into the hype, possibly in the hope of keeping critique at bay by leveraging those who like to use it. In a way that's no contradiction, but it has also been something of a set-up, not just a scientific presentation. Ironically, virtually all of the current players need "fair use" for their projects to be legal at all, and that's very much questionable. In a way, there is nothing, literally nothing, thorough about any of this.

    (Not meaning to be too salty or to blame a certain agenda on anyone. In effect, I have seen this playbook unfold on a very small scale, in the context of an open-source plugin unwillingly put into "competition" with paid "AI"-driven detection software. Some people only see the features, not the risks or even just the consequences. Perhaps another more or less interesting comparison would be product safety in the context of international corporations. Think of anything: the chemical industry, cosmetics, vaccines, or electronic devices. Just compare how complex software products with potentially large impact get rolled out with what we have for product safety. You could also have evaluation phases, and in a way this is one, but it needs to be clear that such evaluation actually has to happen. For those who think "overregulation" first: overregulation happens, and trust in the people in charge may not be there either. Still, you have a few choices society-wise; call them weevils if you will...)

  • BlueFingers Posts: 826

    We are entering an age of marvel:

  • PixelSploiting Posts: 874
    edited April 2023

    generalgameplaying said:

    PixelSploiting said:

    The irony in all of this is that the AI itself did not create those situations. It's people employing it without enough critical thought who are causing all of this.

    The "AI" is not an active player, but we have to account for what it is and what it is not. It remains a chicken-and-egg type of problem, because regulation could have forestalled a lot of this. The rollout has been flanked by PR on multiple levels, cast into the open free of charge, and was obviously set up to gain as many users and their "vote" in as little time as possible. To me it looks like they played to the people, and quite a few people bought into the hype, possibly in the hope of keeping critique at bay by leveraging those who like to use it. In a way that's no contradiction, but it has also been something of a set-up, not just a scientific presentation. Ironically, virtually all of the current players need "fair use" for their projects to be legal at all, and that's very much questionable. In a way, there is nothing, literally nothing, thorough about any of this.

    (Not meaning to be too salty or to blame a certain agenda on anyone. In effect, I have seen this playbook unfold on a very small scale, in the context of an open-source plugin unwillingly put into "competition" with paid "AI"-driven detection software. Some people only see the features, not the risks or even just the consequences. Perhaps another more or less interesting comparison would be product safety in the context of international corporations. Think of anything: the chemical industry, cosmetics, vaccines, or electronic devices. Just compare how complex software products with potentially large impact get rolled out with what we have for product safety. You could also have evaluation phases, and in a way this is one, but it needs to be clear that such evaluation actually has to happen. For those who think "overregulation" first: overregulation happens, and trust in the people in charge may not be there either. Still, you have a few choices society-wise; call them weevils if you will...)

     

    It was never necessary to regulate cameras or pencils, or to regulate the use of Google.

    But hey, if people can't behave, they will be supervised. After all, car usage is regulated too. So is flying. So is atomic power.

  • generalgameplaying Posts: 506
    edited April 2023

    PixelSploiting said:

     

    It was never necessary to regulate cameras or pencils, or to regulate the use of Google.

    But hey, if people can't behave, they will be supervised. After all, car usage is regulated too. So is flying. So is atomic power.

    OK, I overlooked the deciding words, so I'll take another approach with something completely different: some things are just dangerous, or can result in dangerous situations, independently of how careful people are, partly because of situations that emerge when multiple people and further factors are involved. Entire societies and economic systems have been built with competition (and greed) as the main drivers, so collisions are bound to happen, due to the nature of the construction.

  • bluejaunte Posts: 1,861

    Here's a good video from the artist's perspective.

  • N-RArts Posts: 1,445

    I've given it some thought, and I think I would re-subscribe to Midjourney. Not sure when, though. I'm loving ChatGPT too. I'm thinking of asking it to help me out with a future project - something with coding, possibly a Flash game - that's if Flash is still a thing. I read that Java was dropped a while ago... I think.

  • generalgameplaying Posts: 506
    edited April 2023

    N-RArts said:

    I've given it some thought, and I think I would re-subscribe to Midjourney. Not sure when, though. I'm loving ChatGPT too. I'm thinking of asking it to help me out with a future project - something with coding, possibly a Flash game - that's if Flash is still a thing. I read that Java was dropped a while ago... I think.

    Java was dropped!? I couldn't find any reference to that, ad hoc.

    I imagine that, with Microsoft Copilot on GitHub, this would be a business decision. It might be considered less safe to suggest stuff to less savvy programmers, and while Java has its simple edges, it's less close to a natural language than Python (a human argument), for instance. But I don't think the latter part matters to the people at OpenAI anyway. Really not sure here...

  • Artini Posts: 8,888
    edited April 2023

    Just another image...

    [Attached image: Cat01.jpg, 2560 × 2048]
  • WendyLuvsCatz Posts: 37,907

    I did a Java update a few months ago - but then again, I did manually download and install it on my Win10 machine when I got it, as a few of my programs use it, like Sonic Candle (a spectrograph for music videos) and MidiJam too, I believe.

  • Richard Haseltine

    Flash, however, has been dropped by most browsers and is no longer being developed by Adobe.

  • Torquinox Posts: 2,645

    bluejaunte said:

    Here's a good video from the artist's perspective.

    This guy understands the issue and speaks well about it.

    This is getting dangerous even outside of the art market, because even chatbots were trained on data taken from somewhere, and not everything is shared to be used for everything. All this fair-use and Creative Commons material can be used for, well, approximation of property on a mass scale by companies rich enough to afford the research.

    What is desperately needed is some kind of modification to fair use so that it can be explicitly stated whether a work may be used for training or not. Or, better, an automatic assumption that it can't be used unless explicit permission has been given (a rough sketch of that idea follows at the end of this post). All existing terms of fair use and terms of service with companies like Adobe are fairly worthless now, because they were written before generative AIs came into the public eye.

    It's not just a moral issue. It's a fairly tangible financial one too. Property, any kind of property, should be protected. It's a human right.
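    Purely to illustrate that "opt-in by default" idea (nothing here reflects any existing standard; the metadata field name is made up for the example), a data-collection step that assumed "no training unless explicitly permitted" might look like this:

    from typing import Iterable, List

    def filter_for_training(items: Iterable[dict]) -> List[dict]:
        # Default assumption: an item is NOT usable for training
        # unless its metadata explicitly grants permission.
        allowed = []
        for item in items:
            if item.get("metadata", {}).get("training_permission") == "granted":
                allowed.append(item)
        return allowed

    # Example: only the second record would survive the filter.
    records = [
        {"url": "https://example.com/a.jpg", "metadata": {}},
        {"url": "https://example.com/b.jpg", "metadata": {"training_permission": "granted"}},
    ]
    print(filter_for_training(records))

    The point is just the default: absence of an explicit grant means exclusion, which is the reverse of how scraping works today.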

  • Torquinox Posts: 2,645

    Wish in one hand, spit in the other. See which fills up first.

  • N-RArts Posts: 1,445

    Richard Haseltine said:

    Flash, however, has been dropped by most browsers and is no longer being developed by Adobe.

    I knew someone would know. Thanks.

  • WendyLuvsCatz Posts: 37,907

    As someone who has used some of these tools (scans, Blueprints in Unreal, Mixamo - not Roblox), this isn't far-fetched.

    [Attached image: interior_views_a_candelabra_filled_gothic_room_wit.jpg, 6144 × 3072]
  • Jabba Posts: 1,458

    I suppose my approach to the ethics debate of AI art is that the genie is now out of the bottle, so what are you gonna do?


    Use the technology to assist your art creation?

    or

    Feel good that you stuck to your concept of artistic purity while the chances for business opportunity sail away on the AI user's boat?

    Of course, if you do art for therapy or pleasure and making money from it is not your objective, then you have the ultimate freedom of doing whatever feels right for you.

     

    If you think it's all about fairness then I want a word with your parents for not preparing you for how the real world works.

This discussion has been closed.