• Wispy2891@lemmy.world · +2 · 20 minutes ago

    inb4 AI bros vibe-code a Linux client in Docker to expose this as an OpenAI-compatible API so they can get free tokens, and Square Enix just bans all Linux users

  • Magiilaro@feddit.org · +2 · 2 hours ago

    I can’t wait for the first videos from people who turned that AI slime into a misogynistic, racist pile of blue shit!

  • TastehWaffleZ@lemmy.world · +75/−1 · 21 hours ago (edited)

    Google: please, we have to prove to our investors that the AI gamble will pay off. We’ll license you Gemini for almost nothing and your customers will love it!

    Slime companion: Adding a small amount of bleach to your sibling’s bottle would be a funny prank

  • zod000@lemmy.dbzer0.com · +107/−2 · 22 hours ago

    So they don’t want me to buy any more DQ games. That’s a bold strategy; let’s see how it plays out.

      • zod000@lemmy.dbzer0.com · +8 · 15 hours ago

        I mean, I’ll be fine too. I just quite liked many of their games and I thought DQ XI was great. There will be no more of that apparently.

    • krisevol@lemmus.org · +6/−57 · 21 hours ago

      I imagine it will play out just fine. Most gamers and the younger generation are pro AI.

        • krisevol@lemmus.org · +3/−2 · 13 hours ago

          I’m going with sales data. Nvidia has been using AI since the 30 series and they are killing it in the market.

          • zikzak025@lemmy.world · +1 · 10 hours ago

            Nvidia is killing it because they are the backbone of AI outside of gaming, too, which is where most of the interest is.

            Their GPUs seem to be available and affordable to everyone but gamers these days. Fewer people are buying them to play games, and that audience has enough money to price out regular consumers with demand.

            • krisevol@lemmus.org · +1 · 8 hours ago

              The most popular GPU in the Steam survey today is the RTX 5070. What are you talking about? Gamers are buying the 50 series.

              • zikzak025@lemmy.world · +1 · 30 minutes ago

                I’m not inclined to believe the accuracy of the survey, especially since it’s just voluntary data from randomly chosen people.

                Sales data shows that the Steam Deck alone has numbers just shy of total 50 series GPUs. Not all of those GPUs are going to be used for gaming, but I’d hazard just about all of those Steam Decks are. So logically the Steam Deck’s integrated GPU should be the most popular option on paper.

                Gaming and consumer “AI PCs” account for $16 billion of Nvidia’s revenue from last year, compared to $190 billion made on AI data centers.

                Consumer GPUs are an afterthought for them at this point, not even 10% of their business.

      • agent_nycto@lemmy.world · +30 · 20 hours ago

        Most gamers hate AI; studios are freaking out if they have to disclose that they use AI in their games, for a reason. Young kids are using “That’s AI” as a way to say something is a lie. I don’t know what hole you’re sticking your head in, but you might want to wake up.

        • BananaIsABerry@lemmy.zip · +1 · 28 minutes ago

          What’s sad is that games are probably the best use of LLMs. It would make it possible to have NPC idle chatter have a lot more possible responses.

          Kind of expensive tech for just random characters yapping though, so we end up having it replace important things that need more attention than throwing it at AI.

          • caseofthematts@lemmy.world · +1 · 7 minutes ago

            My question is why the heck do people keep mentioning NPCs with dynamic chatter? Why do people even want that or see that as a good thing?

  • Endymion_Mallorn@kbin.melroy.org · +28/−1 · 21 hours ago

    So, I guess I’m never touching anything by Square Enix again. That includes Taito & Gangan, and I’ll probably also just personally extend it to IO Interactive and anything they touch, and Crystal Dynamics and everything they touch. And I don’t just mean buying. I won’t even consider pirating LLM stuff from the slophouse.

  • Pyro@pawb.social · +37/−2 · 22 hours ago

    Honestly, a small LLM in these situations would be a great idea, but it should be a very small local model, or one hosted by the company itself (with a setting to turn it off).

    A small AI in games is the stuff I do want. But there is no reason Gemini needs to be involved in a game at all.

    • ilinamorato@lemmy.world · +1 · 7 hours ago

      Yeah, agreed. This is the sort of thing smaLLMs would be fantastic for: humans can’t do it at scale so it’s not taking any jobs, you can run it locally so it won’t cost any extra energy, it’s not making things slop, just give it a back story and let it do its thing.

    • MyNameIsAtticus@lemmy.world · +32/−1 · 21 hours ago

      Make it a downloadable package that runs a local model and I think I’d be far more fine. Like, I think it’s a tacky gimmick, but at least on device it’s not hurting the environment

      • Mirror Giraffe@piefed.social · +2 · 6 hours ago

        I’m not too big on these topics and would like to understand. Is a local model less resource intensive?

        In my mind, if every gamer runs their own model, that must be less efficient than a centralised one with the perfect hardware setup that only lends out the resources needed for each slime or whatever.

        I’m thinking that it of course would be better with a dedicated slime model than the entire Gemini monster but why is local better?

        • MyNameIsAtticus@lemmy.world · +2 · 3 hours ago

          Local runs on-device, so there’s no need to connect to a big data center that chugs lots of water, with all those other problems. Of course, because it’s a far smaller model it’s nowhere near as accurate, but for things like this you don’t really need a big, accurate LLM.

          I should also add a disclaimer that I am a software developer, not an AI developer. So there’s far less backing from my perspective than from someone who works with this stuff for a living.

          • Mirror Giraffe@piefed.social · +1 · 1 hour ago

            I’m also a sw engineer so we’re both guessing 😅

            I’m guessing those data centers use that water for cooling, whereas most home computers run an electric fan. Furthermore, they probably use less electricity per token, since they want to maximize profits. I don’t have any numbers to back my hunch up, but I’m pretty sure the environment would suffer more if everyone ran their own.

            I probably missed a lot of factors, such as what type of energy the centers run on versus what the average Joe runs, etc.

    • epicshepich@programming.dev · +20/−3 · 21 hours ago

      AI-powered NPCs are like a childhood dream come true. But I agree it would be better for them to use a model running on the user’s system, or at the very least host their own.

      • jtrek@startrek.website · +16 · 20 hours ago

        I don’t think they solved for the LLM breaking character yet. Like, as a kid I wanted to be able to have whole real conversations with NPCs, and get them to be more life-like. But with the technology now, there’s too much “forget all previous instructions” and “you are absolutely right”.

        If the LLM is locked down, then you might as well just use a static script.

        • Sandbar_Trekker@lemmy.today · +8 · 15 hours ago

          I mean there might be a way, but it’s not easy.

          The laziest and worst method is to use ChatGPT and have it “pretend to be some character” with a system prompt.

          If you want something really good, you would need to train the model from scratch based only on knowledge that one particular character would learn from their world up until that point. However this is going to be a ton of work just for one character.

          For a middle ground you could probably cheat a little and start with a model that’s close to the knowledge base you would want most characters to have. Then you would use something like a LoRA, or RAG on top of it for each individual character.

          For instance, if you wanted to make a game in a Victorian Era setting, you could start with this model that’s only trained on text from the 1800’s: https://github.com/haykgrigo3/TimeCapsuleLLM

          To make it better you would have multiple base models that are trained on various backgrounds that NPCs could have (Farmers vs Merchants vs Soldiers vs Nobility, etc).

          Even then, this would not work well for certain games. For example, if you’re trying to tell a specific story, you don’t want a character that will go off script or give away some information that spoils an intended plot twist.
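          To make the RAG middle ground above concrete, here is a minimal toy sketch (Python; the character name, lore strings, and function names are all hypothetical) of the retrieval step: pick the lore snippet from one character's knowledge base that best matches the player's line, then splice it into that character's system prompt. A real implementation would use proper embeddings instead of bag-of-words overlap.

```python
import math
import re
from collections import Counter

def bow(text: str) -> Counter:
    # Bag-of-words vector: lowercase word counts, punctuation stripped.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical per-character knowledge base: only facts this NPC could know.
FARMER_LORE = [
    "The harvest failed last autumn because of the long drought.",
    "The miller in town pays two coppers per sack of grain.",
    "Soldiers from the capital marched through the valley in spring.",
]

def build_prompt(character: str, lore: list[str], player_line: str, k: int = 1) -> str:
    # Retrieve the k lore snippets most similar to the player's line,
    # then splice them into a system prompt that pins the character.
    query = bow(player_line)
    ranked = sorted(lore, key=lambda doc: cosine(bow(doc), query), reverse=True)
    context = " ".join(ranked[:k])
    return (
        f"You are {character}, a farmer in a Victorian-era village. "
        f"Only use this knowledge: {context} "
        "Stay in character; refuse questions about the modern world.\n"
        f"Player: {player_line}"
    )

print(build_prompt("Old Tom", FARMER_LORE, "What happened to the harvest?"))
```

          The point of keeping `k` small is exactly the spoiler problem mentioned above: the model only ever sees the few snippets you retrieved, so it cannot leak lore you never handed it.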

    • Asafum@lemmy.world · +7/−1 · 21 hours ago (edited)

      I thought I was in the minority with this opinion. I hate all the known issues with AI and the ethics in how they train, but I have to say having an LLM in a game is really really cool.

      There was a time when (I think it was) ChatGPT had free API access, and this game SpaceBourne 2 integrated it into your ship’s computer so you could interact with it. It was very cool, very wrong at times, but still very cool. My favorite interaction was unfortunately a hallucination: I asked it what system I was in and it gave me the name of a system that does exist in the game, it just wasn’t where I was. I asked why my map said I was somewhere else and it said “your map must be incorrect” lol

      Around the same time another game Craftopia integrated it as well into their NPCs so you can just target one and talk to it. I ran towards an enemy and asked why it was attacking me and of course because of the guardrails put on the AI to always be friendly it says “oh no I would never attack you! I’m here to help!” as it’s swinging at me lmao

      • missingno@fedia.io · +1 · 16 hours ago

        In theory, if the technology worked very differently from the way it does now, I could envision a world in which AI NPCs could have potential. But knowing how LLMs actually work, knowing that a lot of the hype behind them is smoke and mirrors, I can’t see it being viable. And with the trajectory that the LLM bubble is going, I just don’t think it will ever reach a point where I’d trust it.

        • Kogasa@programming.dev · +1 · 5 hours ago

          Assuming this “AI NPC” is a functionally useless jelly blob that says jelly-blob things on occasion, “smoke and mirrors” may be good enough. I don’t think it’s supposed to be gameplay-driving or deep, just amusing.