Altman’s remarks in his tweet drew an overwhelmingly negative reaction.

“You’re welcome,” one user responded. “Nice to know that our reward is our jobs being taken away.”

Others called him a “f***ing psychopath” and “scum.”

“Nothing says ‘you’re being replaced’ quite like a heartfelt thank you from the guy doing the replacing,” one user wrote.

    • rodneylives@lemmy.world · 3 hours ago

      Assuming that’s true, and that’s a BIG assumption… what makes you think that would matter? AI has no interiority; it isn’t a thinking blob, it’s a text generator. Think of it as a fancy Markov chain.

      Even if it were true, where in the chain do new principles, new techniques, new concepts enter into it? All these forms of generative AI can do is regurgitate what’s been fed into them. The worst thing you can train an AI on is AI-generated output.
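The “fancy Markov chain” analogy in the comment above can be sketched in a few lines. This is a deliberately toy illustration, not how a real LLM works: a word-level Markov chain only conditions on the single previous word, whereas LLMs condition on long contexts, but the core loop of predicting the next token from observed text is the same.

```python
# Toy word-level Markov chain text generator, illustrating the analogy
# above. All names here are illustrative.
import random
from collections import defaultdict

def train(text: str) -> dict:
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def generate(chain: dict, start: str, length: int = 10) -> str:
    """Walk the chain, picking a random observed successor each step."""
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:  # dead end: no observed continuation
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the model predicts the next word and then the next word again"
chain = train(corpus)
print(generate(chain, "the"))
```

Note the commenter’s point in this framing: the generator can only emit words that already appeared after the same predecessor in its training text, so nothing genuinely new enters the chain.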

      • village604@adultswim.fan · 1 hour ago

        They used the word future for a reason. The technology is still being developed, so basing predictions about the future on its current state is silly.

      • Jakeroxs@sh.itjust.works · 3 hours ago

        Ever heard of skills? You can essentially “teach” it new things that are not directly available in its model. It’s still pretty early, but to me it feels like quite a leap compared to model-only usage.

        It’s by no means perfect, but I don’t think we’re even close to scratching the surface of what can be done with the tech.

        I would bet that people at the advent of computers would have dismissed many of the things computers do now as fantasy.

        Edit: Right now, context size is a limiting factor, but you can do things like assign sub-agents to specific tasks/skills and have the overall agent call the sub-agent to complete the task. That reduces the context the original agent needs for the skill; the main agent sort of acts as a mediator. Of course, you still need to document what does and doesn’t work, and keep that available for future tasks in the same vein, so it doesn’t repeat mistakes.
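The sub-agent/mediator pattern the commenter describes can be sketched roughly as follows. All class and method names here are hypothetical, not any real agent framework’s API; the point is only the shape of the design: skill documentation is loaded inside each sub-agent, and only the short result flows back into the orchestrator’s context.

```python
# Hypothetical sketch of the sub-agent pattern described above.
# The orchestrator keeps its own context small by delegating each task
# to a sub-agent that alone holds the (potentially large) skill docs.

class SubAgent:
    def __init__(self, skill: str, skill_docs: str):
        self.skill = skill
        self.skill_docs = skill_docs  # loaded only inside this agent

    def run(self, task: str) -> str:
        # A real sub-agent would prompt a model with skill_docs + task;
        # here we just simulate a completed result.
        return f"[{self.skill}] completed: {task}"

class Orchestrator:
    def __init__(self):
        self.sub_agents = {}
        self.notes = []  # record of what did/didn't work, for future runs

    def register(self, skill: str, docs: str):
        self.sub_agents[skill] = SubAgent(skill, docs)

    def delegate(self, skill: str, task: str) -> str:
        result = self.sub_agents[skill].run(task)
        # Only the short result re-enters the orchestrator's context,
        # never the sub-agent's full skill documentation.
        self.notes.append((skill, task, result))
        return result

orch = Orchestrator()
orch.register("sql", "(long SQL skill docs live here, inside the sub-agent)")
print(orch.delegate("sql", "summarize weekly sales"))
```

The `notes` list stands in for the commenter’s point about documenting what does and doesn’t work so later runs don’t repeat mistakes.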

        On your point about the underlying model it was trained on: I imagine at some point there will be a breakthrough where it becomes more dynamic, and I think skills are a stepping stone to that. Maybe instead of models being gigantic, data gets broken down into individual skills that are called to inform specific actions, and those skills can already be made dynamic.