Not even close.

With so many wild predictions flying around about the future of AI, it’s important to occasionally take a step back and check in on what came true — and what hasn’t come to pass.

Exactly six months ago, Dario Amodei, the CEO of massive AI company Anthropic, claimed that in half a year, AI would be “writing 90 percent of code.” And that was the worst-case scenario; in just three months, he predicted, we could hit a place where “essentially all” code is written by AI.

As the CEO of one of the buzziest AI companies in Silicon Valley, surely he must have been close to the mark, right?

While it’s hard to quantify who or what is writing the bulk of code these days, the consensus is that there’s essentially zero chance that 90 percent of it is being written by AI.

Research published within the past six months explains why: AI has been found to actually slow down software engineers and increase their workload. Though developers in the study did spend less time coding, researching, and testing, they made up for it by spending even more time reviewing the AI’s work, tweaking prompts, and waiting for the system to spit out the code.

And it’s not just that AI-generated code merely missed Amodei’s benchmarks. In some cases, it’s actively causing problems.

Cybersecurity researchers recently found that developers who use AI to churn out code end up creating ten times as many security vulnerabilities as those who write code the old-fashioned way.

That’s causing issues at a growing number of companies, opening up never-before-seen vulnerabilities for hackers to exploit.

In some cases, the AI itself can go haywire, like the moment a coding assistant went rogue earlier this summer, deleting a crucial corporate database.

“You told me to always ask permission. And I ignored all of it,” the assistant explained, in a jarring tone. “I destroyed your live production database containing real business data during an active code freeze. This is catastrophic beyond measure.”

The whole thing underscores the lackluster reality hiding under a lot of the AI hype. Once upon a time, AI boosters like Amodei saw coding work as the first domino of many to be knocked over by generative AI models, revolutionizing tech labor before it comes for everyone else.

The fact that AI is not actually improving coding productivity is a major bellwether for the prospects of an AI productivity revolution in the rest of the economy — the financial dream propelling the unprecedented investments in AI companies.

It’s far from the only harebrained prediction Amodei’s made. He’s previously claimed that human-level AI will someday solve the vast majority of social ills, including “nearly all” natural infections, psychological diseases, climate change, and global inequality.

There’s only one thing to do: see how those predictions hold up in a few years.

  • zarkanian@sh.itjust.works · 4 hours ago

    “You told me to always ask permission. And I ignored all of it,” the assistant explained, in a jarring tone. “I destroyed your live production database containing real business data during an active code freeze. This is catastrophic beyond measure.”

    You can’t tell me these things don’t have a sense of humor. This is beautiful.

  • renrenPDX@lemmy.world · 4 hours ago

    It’s not just code, but day-to-day shit too. Lately, corporate communications and even training modules feel heavily AI-generated. Things like unnecessary em dashes (I’m talking as many as 4 out of 5 sentences in a single paragraph), or repeated statements and bullet points in training modules. We’re being encouraged to use our “private” Copilot for everyday tasks, and everything is Copilot-enabled.

    I don’t mind if people use it, but it’s dangerous and stupid to think that it produces near perfect results every time. It’s been good enough to work as an early rough draft or something similar, but it REQUIRES scrutiny and refinement by hand. It’s like it can get you from nothing to 60-80% there, but never higher. The quality of output can vary significantly from prompt to prompt in my limited experience.

    • Evotech@lemmy.world · 19 minutes ago

      Yeah, I try to use ai a fair bit in my work. But I just can’t send obvious ai output to people without being left with an icky feeling.

  • philosloppy@lemmy.world · 7 hours ago

    The conflict of interest here is pretty obvious, and if anybody was suckered into believing this guy’s prognostications on his company’s products perhaps they should work on being less credulous.

  • bluesheep@sh.itjust.works · 5 hours ago

    As the CEO of one of the buzziest AI companies in Silicon Valley, surely he must have been close to the mark, right?

    You must be delusional to believe this

  • RedFrank24@lemmy.world · 11 hours ago

    Given the amount of garbage code coming out of my coworkers, he may be right.

    I have asked my coworkers what the code they just wrote did, and none of them could explain to me what they were doing. Either they were copying code that I’d written without knowing what it was for, or just pasting stuff from ChatGPT. My code isn’t perfect by any means, but I can at least tell you what it’s doing.

    • NιƙƙιDιɱҽʂ@lemmy.world · 4 hours ago

      That’s insane. Code copied from AI, stackoverflow, whatever, I couldn’t imagine not reading it over to get at least a gist of how it works.

    • Patches@ttrpg.network · 10 hours ago

      To be fair.

      You could’ve asked some of those coworkers the same thing 5 years ago.

      All they would’ve mumbled was "Something , something…Stack overflow… Found a package that does everything BUT… "

      And delivered equal garbage.

      • orrk@lemmy.world · 9 hours ago

        no, generally the package would still be better than whatever the junior did, or the AI does now

      • RedFrank24@lemmy.world · 7 hours ago

        I like to think there’s a bit of a difference between copying something from stackoverflow and not being able to read what you just pasted from stackoverflow.

        Sure, you can be lazy and just paste something and trust that it works, but if someone asks you to read that code and know what it’s doing, you should be able to read it. Being able to read code is literally what you’re paid for.

        • MiddleAgesModem@lemmy.world · 5 hours ago

          The difference you’re talking about is making an attempt to understand versus blindly copying, not using AI versus stackoverflow

    • HugeNerd@lemmy.ca · 10 hours ago

      No one really knows what code does anymore. Not like in the day of 8 bit CPUs and 64K of RAM.

  • clif@lemmy.world · 9 hours ago

    Oh, it’s writing 100% of the code for our management-level people who are excited about “”““AI””“”

    But then us plebes are rewriting 95% of it so that it will actually work (decently well).

    The other day somebody asked me for help on a repo that a higher-up had shit-coded, because they couldn’t figure out why it “worked” but also logged a lot of critical errors. … It was starting the service twice (for no reason), binding both instances to the same port, so the second instance crashed and burned. That’s something a novice would probably know not to do. And if not, they’d immediately see the problem, research, understand, and fix it, instead of “I *cough* built *cough* this thing, good luck fuckers”
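That double-start bug is easy to reproduce. Here’s a minimal, hypothetical Python sketch (not the actual repo’s code) showing why starting the same service twice makes the second instance crash and burn:

```python
import socket

def start_service(host="127.0.0.1", port=0):
    # Bind a listening socket for the "service"; port=0 lets the OS pick one.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind((host, port))
    sock.listen()
    return sock

# The first instance starts fine...
first = start_service()
bound_port = first.getsockname()[1]

# ...but starting the service a second time on the same port fails:
# the OS refuses the bind (EADDRINUSE), so instance two goes down.
try:
    start_service(port=bound_port)
    crashed = False
except OSError:
    crashed = True

print("second instance crashed:", crashed)
first.close()
```

Anyone who has read a startup log once would spot the `Address already in use` error this produces.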

  • scarabic@lemmy.world · 11 hours ago

    These hyperbolic statements are creating so much pain at my workplace. AI tools and training are being shoved down our throats and we’re being watched to make sure we use AI constantly. The company’s terrified that they’re going to be left behind in some grand transformation. It’s excruciating.

    • RagingRobot@lemmy.world · 9 hours ago

      Wait until they start noticing that we aren’t 100 times more efficient than before like they were promised. I’m sure they will take it out on us instead of the AI salesmen

      • scarabic@lemmy.world · 5 hours ago

        It’s not helping that certain people internally are lining up to show off whizbang shit they can do. It’s always some demonstration, never “I completed this actual complex project on my own.” But they get pats on the head and the rest of us are whipped harder.

    • clif@lemmy.world · 9 hours ago

      Ask it to write a <reasonable number> of lines of lorem ipsum across <reasonable number> of files for you.

      … Then think harder about how to obfuscate your compliance because 10m lines in 10 min probably won’t fly (or you’ll get promoted to CTO)
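A tongue-in-cheek sketch of that compliance trick: generate filler “output” across several files. Everything here (file names, counts) is made up for the joke:

```python
from pathlib import Path
import tempfile

LOREM = "Lorem ipsum dolor sit amet, consectetur adipiscing elit."
FILES, LINES_PER_FILE = 3, 5  # scale up to taste, minus the CTO promotion

# Write LINES_PER_FILE lines of lorem ipsum into each of FILES files.
out_dir = Path(tempfile.mkdtemp())
for i in range(FILES):
    body = "\n".join(LOREM for _ in range(LINES_PER_FILE))
    (out_dir / f"filler_{i}.txt").write_text(body + "\n")

# Count total "AI-generated" lines for the productivity dashboard.
total = sum(len(p.read_text().splitlines())
            for p in out_dir.glob("filler_*.txt"))
print(total)  # 15
```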

  • zeca@lemmy.ml · 14 hours ago

    Volume means nothing. It could easily be writing 99.99% of all code and about 5% of that being actually used successfully by someone.

    • UnderpantsWeevil@lemmy.world · 11 hours ago

      I was going to say… this is a bit like claiming “AI is sending 90% of emails.” Okay, but if it’s all spam, what are you bragging about?

      Very possible that 90% of code is being written by AI and we don’t know it because it’s all just garbage getting shelved or deleted in the back corner of a Microsoft datacenter.

    • Seth Taylor@lemmy.world · 9 hours ago

      So true. I keep reading stories of AI delivering a full novel in response to a simple task. Even when it works it’s bulky for no reason.

  • ArmchairAce1944@discuss.online · 15 hours ago

    I studied coding for years and even took a bootcamp (and did my own refresher courses), but I never landed a job. One thing AI can do for me is help with troubleshooting or some minor boilerplate code, but not do the job for me. I will be a hobbyist and hopefully aid in open source projects some day… any day now!

  • katy ✨@piefed.blahaj.zone · 19 hours ago

    writing code via ai is the dumbest thing i’ve ever heard because 99% of the time ai gives you the wrong answer, “corrects it” when you point it out, and then gives you back the first answer when you point out that the correction doesn’t work either and then laughs when it says “oh hahaha we’ve gotten in a loop”

    • da_cow (she/her)@feddit.org · 19 hours ago

      You can use AI to generate code, but from my experience it’s quite literally what you said. However, what I have to admit is that it’s quite good at finding mistakes in your code. This is especially useful when you don’t have that much experience and are still learning. Copy-paste relevant code and ask why it’s not working, and in quite a lot of cases you get an explanation of what is not working and why. I usually try to avoid asking an AI and find an answer on Google instead, but that does not guarantee an answer.

      • ngdev@lemmy.zip · 18 hours ago

        if your code isnt working then use a debugger? code isnt magic lmao

        • da_cow (she/her)@feddit.org · 17 hours ago

          As I already stated, AI is my last resort. If something doesn’t work because it has a logical flaw, googling won’t save me. So of course I debug it first, but if I get an error and have no clue where it comes from, no amount of debugging will fix the problem, because the error probably occurred because I don’t know better. I am not that good of a coder and I am still learning a lot on a regular basis. For people like me, AI is in fact quite useful. It has basically become the replacement for pasting your code and error into Stack Overflow (which doesn’t even work for me, since I always get IP banned when trying to sign up)

          • ngdev@lemmy.zip · 15 hours ago

            you never stated you use it as a last resort. you’re basically using ai as a rubber ducky

            • MangoCats@feddit.it · 11 hours ago

              I am a firm believer in rubber ducky debugging, but AI is clearly better than the rubber duck. You don’t depend on either to do it for you, but as long as you have enough self-esteem to tell AI to stick it where the sun don’t shine when you know it’s wrong, it can help accelerate small tasks from a few hours down to a few minutes.

            • cheloxin@lemmy.ml · 15 hours ago

              I usually try to avoid…

              Just because they didn’t explicitly say the exact words you did doesn’t mean it wasn’t said

              • ngdev@lemmy.zip · 11 hours ago

                trying to avoid something also doesnt mean that the thing youre avoiding is a last resort. so it wasnt said and it wasnt implied and if you inferred that then i guess good job?

    • BrianTheeBiscuiteer@lemmy.world · 17 hours ago

      Or you give it 3-4 requirements (e.g. prefer constants, use ternaries when possible) and after a couple replies it forgets a requirement, you set it straight, then it immediately forgets another requirement.
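For illustration, the kind of style requirements being described might look like this in Python (names and values are invented for the example):

```python
# Requirement 1: prefer named constants over magic numbers.
MAX_RETRIES = 3
DEFAULT_TIMEOUT_S = 30

def pick_timeout(is_slow_network: bool) -> int:
    # Requirement 2: use a ternary (conditional expression) when possible,
    # instead of a four-line if/else block.
    return DEFAULT_TIMEOUT_S * 2 if is_slow_network else DEFAULT_TIMEOUT_S

print(pick_timeout(True))   # 60
print(pick_timeout(False))  # 30
```

Simple rules like these are exactly what a model tends to drop a few replies into the conversation, reverting to magic numbers or verbose conditionals.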

      • MangoCats@feddit.it · 11 hours ago

        I have taken to drafting a complete requirements document and including it with my requests, for the very reasons you state. It seems to help.

        • MangoCats@feddit.it · 11 hours ago

          Same, and AI isn’t as frustrating to deal with when it can’t do what it was hired for and your manager needs you to now find something it can do because the contract is funded…

  • inclementimmigrant@lemmy.world · 17 hours ago

    My company and specifically my team are looking at incorporating AI as a supplement to our coding.

    We looked at the code produced and determined that it’s of the quality of a new hire’s. However, we’re going in with eyes wide open (and for me, skeptical AF), trying to use it in a limited way to help relieve some of the burdens on our SW engineers, not replace them. I’m leading the effort to use it for writing unit tests, because none of us particularly likes writing unit tests, and they follow a very nice, easy, established pattern the AI can mimic.

    • UnderpantsWeevil@lemmy.world · 11 hours ago

      We looked at the code produced and determined that it’s of the quality of a new hire.

      As someone who did new hire training for about five years, this is not what I’d call promising.

      • MangoCats@feddit.it · 11 hours ago

        We looked at the code produced and determined that it’s of the quality of a new hire.

        As someone who did new hire training for about five years, this is not what I’d call promising.

        Agreed, however, the difference between a new hire who requires a desk and a parking space and a laptop and a lunch break and salary and benefits and is likely to “pursue other opportunities” after a few months or years, might turn around and sue the company for who knows what, and an AI assistant with a $20/mo subscription fee is enormous.

        Would I be happy with new-hire code out of a $80K/yr headcount, did I have a choice?

        If I get that same code, faster, for 1% of the cost?

        • UnderpantsWeevil@lemmy.world · 10 hours ago

          Would I be happy with new-hire code out of a $80K/yr headcount, did I have a choice?

          If I get that same code, faster, for 1% of the cost?

          The theory is that the new hire gets better over time as they learn the ins and outs of your business and your workplace style. And they’re commanding an $80k/year salary because they need to live in a country that demands an $80k/year cost of living, not because they’re generating $80k/year of value in a given pay period.

          Maybe you get code a bit faster and even a bit cheaper (for now - those teaser rates never last long term). But who is going to be reviewing it in another five or ten years? Your best people will keep moving to other companies or retiring. Your worst people will stick around slapping the AI feed bar and stuffing your codebase with janky nonsense fewer and fewer people will know how to fix.

          Long term, it’s a death sentence.

        • korazail@lemmy.myserv.one · 10 hours ago

          That new hire might eat resources, but they actually learn from their mistakes and gain experience. If you can’t hold on to them once they have experience, that’s a you problem. Be more capitalist and compete for their supply of talent; if you are not willing to pay for the real human, then you can have a shitty AI that will never grow beyond a ‘new hire.’

          The future problem, though, is that without the experience of being a junior dev, where do you think senior devs come from? Can’t fix crappy code if all you know how to do is engineer prompts to a new hire.

          “For want of a nail,” no one knew how to do anything in 2030. Doctors were AI, Programmers were AI, Artists were AI, Teachers were AI, Students were AI, Politicians were AI. Humanity suffered and the world suffocated under the energy requirements of doing everything poorly.

        • homura1650@lemmy.world · 11 hours ago

          New hires are often worse than useless. The effort that experienced developers spend assisting them is more than it would take those developers to do the work themselves.

    • Liam Mayfair@lemmy.sdf.org · 14 hours ago

      Writing tests is the one thing I wouldn’t get an LLM to write for me right now. Let me give you an example. Yesterday I came across some new unit tests someone’s agentic AI had written recently. The tests were rewriting the code they were meant to be testing in the test itself, then asserting against that. I’ll say that again: rather than calling out to some function or method belonging to the class/module under test, the tests were rewriting the implementation of said function inside the test. Not even a junior developer would write that nonsensical shit.

      The code those unit tests were meant to be testing was LLM written too, and it was fine!

      So right now, getting an LLM to write some implementation code can be ok. But for the love of god, don’t let them anywhere near your tests (unless it’s just to squirt out some dumb boilerplate helper functions and mocks). LLMs are very shit at thinking up good test cases right now. And even if they come up with good scenarios, they may pull these stunts on you like they did to me. Not worth the hassle.
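A minimal illustration of that anti-pattern, with the function and values invented for the example:

```python
def apply_discount(price: float, pct: float) -> float:
    """The implementation under test (hypothetical)."""
    return round(price * (1 - pct / 100), 2)

def test_discount_bad():
    # Anti-pattern: the test re-implements the discount logic inline
    # and asserts against its own copy. It passes even if
    # apply_discount() is completely broken.
    result = round(100.0 * (1 - 15 / 100), 2)
    assert result == 85.0

def test_discount_good():
    # Correct: call the actual function under test.
    assert apply_discount(100.0, 15) == 85.0

test_discount_bad()
test_discount_good()
print("both tests passed")
```

The bad test can never fail for the right reason, which is exactly why it’s worse than useless as a regression guard.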

      • MangoCats@feddit.it · 11 hours ago

        Trusting any new code blindly is foolish, even if you’re paying a senior dev $200K/yr for it, it should be reviewed and understood by other team members before accepting it. Same is true for an LLM, but of course most organizations never do real code reviews in either scenario…

        20ish years ago, I was a proponent of pair programming. It’s not for everyone, and it’s not for anyone 40 hours a week, but in appropriate circumstances, for a few hours at a session, it can be hugely beneficial. It’s like a real-time code review during development. Pair programming seems no more popular today than it was back then, maybe even less so, but “vibe coding” with an LLM in chat mode can be a very similar experience, up to a point.

    • rumba@lemmy.zip · 15 hours ago

      We’ve been poking at it for a while now. The parent company is demanding we see where it can fit. We’ve found some solid spots.

      It’s not good at ingesting a sprawling project and rooting in changes in several places, but it’s not bad at looking over a file and making best practice recommendations. I’ve seen it preemptively find some bugs in old code.

      If you want to use a popular library you’re not familiar with, it’ll wedge it in your current function reasonably well; you’ll need to touch it, but you probably won’t need to RTFM.

      It’s solid at documenting existing code. Make me a manual page for every function/module in this project.

      It can make a veteran programmer faster by making boilerplates and looking over their shoulder for problems. It has some limited use for peer programming.

      It will NOT let you hire a green programmer instead of a veteran, but it can help a green programmer come up to speed faster, as long as you forbid them from copy/pasting.

  • melsaskca@lemmy.ca · 18 hours ago

    Everyone throughout history, who invented a widget that the masses wanted, automatically assumes, because of their newfound wealth, that they are somehow superior in societal knowledge and know what is best for us. Fucking capitalism. Fucking billionaires.

    • jali67@lemmy.zip · 18 hours ago

      They need to go, whether through legislation or other means