“No Duh,” say senior developers everywhere.

The article explains that vibe code is often close to functional, but not quite, requiring developers to go in and find where the problems are, resulting in a net slowdown of development rather than productivity gains.

    • Jankatarch@lemmy.world · 2 hours ago

      Writing new code is easier than editing someone else’s code, but editing a portion is still better than writing the entire program again from start to end.

      Then there are LLMs, which force you to edit the entire thing from start to end.

  • chaosCruiser@futurology.today · 3 hours ago

    About that “net slowdown”. I think it’s true, but only in specific cases. If the user already knows well how to write code, an LLM might be only marginally useful or even useless.

    However, there are ways to make it useful, though it requires specific circumstances. For example, if you can’t be bothered to write a simple loop, you can use an LLM to do it. Give the boring routine to an LLM, and you can focus on naming the variables in a fitting way or adjusting the finer details to your liking.

    Can’t be bothered to look up the exact syntax for a function you use only twice a year? Let an LLM handle that, and tweak the details. Now, you didn’t spend 15 minutes reading Stack Overflow posts that don’t answer the exact question you had in mind. Instead, you spent 5 minutes on the whole thing, and that includes the tweaking and troubleshooting parts.
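
    (The sort of twice-a-year syntax worth delegating, as a concrete illustration; the timestamp and format below are made up:)

        from datetime import datetime

        # The one-liner nobody memorizes: strptime format codes.
        ts = datetime.strptime("2025-03-14 09:26:53", "%Y-%m-%d %H:%M:%S")
        print(ts.isoformat())  # 2025-03-14T09:26:53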

    If you have zero programming experience, you can use an LLM to write some code for you, but prepare to spend the whole day troubleshooting something that is essentially a black box to you. Alternatively, you could ask a human to write the same thing in 5-15 minutes depending on the method they choose.

    • BilboBargains@lemmy.world · 5 hours ago

      This is a sane way to use LLMs. Also, pick your poison: some bots are better than others for a specific task. It’s kinda fascinating to see how other people solve coding problems, and that is essentially on tap with a bot; it will churn out as many examples as you want. It’s a really useful tool for learning the syntax and libraries of unfamiliar languages.

      At one extreme of the LLM debate there is insane hype, and at the other a great pessimism, but in the middle is a nice labour-saving educational tool.

  • Lettuce eat lettuce@lemmy.ml · 13 hours ago

    You mean relying blindly on a statistical prediction engine to attempt to produce sophisticated software without any understanding of the underlying principles or concepts doesn’t magically replace years of actual study and real-world experience?

    But trust me, bro, the singularity is imminent, LLMs are the future of human evolution, true AGI is nigh!

    I can’t wait for this idiotic “AI” bubble to burst.

  • altphoto@lemmy.today · 16 hours ago

    It’s great for stupid boobs like me, but only to get you going. It regurgitates old code; it cannot come up with new stuff. Lately there have been fewer Python errors, but again, the stuff you can do is limited. At least for the free stuff you can get without signing up.

    • Terrasque@infosec.pub · 9 hours ago

      “It regurgitates old code, it cannot come up with new stuff.”

      The trick is, most of what you write is basically old code in new wrapping. In most projects, I’d say the new and novel part is maybe 10% of the code. The rest is things like setting up db models, connecting them to base logic, setting up views and API endpoints, decoding the message on the UI side, displaying it to the user, handling input back, threading things so the UI doesn’t hang, error handling, input data verification, basic unit tests, setting up settings and supporting reading them from a file or env vars, making the UI look not horrible, adding translatable text, and so on and on and on. All of that has been written in some variation a million times before. All of it can be written (and verified) by a half-asleep competent coder.

      The actual new, interesting part is gonna be a small, small percentage of the total code.
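
      (As a sketch of what that routine 90% looks like, here is the kind of endpoint that has been written a million times before; the framework choice and names are mine, not the commenter’s:)

          from flask import Flask, jsonify, request

          app = Flask(__name__)

          # Decode input, validate it, handle the error case, return JSON:
          # pure routine, no novelty anywhere.
          @app.route("/items", methods=["POST"])
          def create_item():
              data = request.get_json(silent=True)
              if not data or "name" not in data:
                  return jsonify(error="missing 'name'"), 400
              # ...the new, interesting 10% would live here...
              return jsonify(id=1, name=data["name"]), 201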

      • altphoto@lemmy.today · 6 hours ago

        I totally agree with this. However, you can’t get there without coding experience and knowledge of the problem, as well as education in computer science or experience in the field. I’m a generalist, and I’m loving what I can do at home. But I still get the runaround using AI. I have to read and understand the code to try to nudge the AI in the right direction, or I’ll end up going in circles.

    • Corhen@lemmy.world · 14 hours ago

      Yea, I use it for Home Assistant. It’s amazingly powerful… and so incredibly dumb.

      It will take my if statements and shrink them to a third of the length while making them twice as robust… while missing that one of the arguments is entirely in the wrong place.
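
      (A hypothetical illustration of that failure mode, with invented names and stubs so it runs:)

          class Sensor:
              state = "on"

          def set_brightness(light_id, value):
              print(f"light {light_id} -> {value}")

          sensor, light_id = Sensor(), "kitchen"

          # Original: verbose but correct.
          if sensor is not None and sensor.state == "on":
              set_brightness(light_id, 255)

          # LLM rewrite: shorter and None-safe... but the arguments to
          # set_brightness are now silently swapped.
          if getattr(sensor, "state", None) == "on":
              set_brightness(255, light_id)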

  • Aljernon@lemmy.today · 19 hours ago

    Senior Management in much of Corporate America is like a kind of modern Nobility in which looking and sounding the part is more important than strong competence in the field. It’s why buzzwords catch like wildfire.

    • Jankatarch@lemmy.world · 2 hours ago

      Lmao, calling them nobility would imply we can’t vote on our senior management, and that it often ends up being whoever “the king” wants, or one of the king’s children.
      Wait…

  • Chaotic Entropy@feddit.uk · 20 hours ago

    Are you trying to tell me that the people wanting to sell me their universal panacea for all human endeavours were… lying…? Say it ain’t so.

    • SparroHawc@lemmy.zip (OP) · 19 hours ago

      I mean, originally they thought they had come upon a magic bullet. Turns out it wasn’t the case, and now they’re going to suffer for it.

  • Sadness Nexus@lemmy.ml · 18 hours ago

    I’m not a programmer in any sense. Recently, I made a project where I used Python and a Raspberry Pi and had to train some small models on the KITTI data set. I used AI to write the broad structure of the code, but in the end it took me a lot of time going through the Python documentation, as well as the documentation of the specific tools/modules I used, to actually get the code working. Would an experienced programmer get the same work done in an afternoon? Probably. But the code the AI output still had a lot of flaws. Someone who knows more than me would probably input better prompts and better follow-up requirements, and probably get a better structure from the AI, but I doubt they’d get complete code. In the end, you have to know what you’re doing to use AI efficiently, and you still have to polish the code into something that actually works.

    • Spice Hoarder@lemmy.zip · 15 hours ago

      From my experience, AI just seems to be a lesson in overfitting. You can’t use it to do things nobody has done before. Furthermore, you only really get good responses from prompts related to JavaScript.

  • MrSulu@lemmy.ml · 18 hours ago

    Perhaps it should read: “All AI is overhyped, overdone, and we should be over it.”

  • badgermurphy@lemmy.world · 24 hours ago

    I work adjacent to software developers, and I have been hearing a lot of the same sentiments. What I don’t understand, though, is the magnitude of this bubble then.

    Typically, bubbles seem to form around some new market phenomenon or technology that threatens to upset the old paradigm and usher in a new boom. Those market phenomena then eventually take their place in the world based on their real value, which is nowhere near the level of the hype, but still substantial.

    In this case, I am struggling to find examples of the real benefits of a lot of these AI assistant technologies. I know that there are a lot of successes in the AI realm, but not a single one I know of involves an LLM.

    So, I guess my question is, “What specific LLM tools are generating profits or productivity at a substantial level well exceeding their operating costs?” If there really are none, or if the gains are only incremental, then my question becomes an incredulous, “Is this biggest in history tech bubble really composed entirely of unfounded hype?”

    • brunchyvirus@fedia.io · 8 hours ago

      I think right now companies are competing until there are only 1 or 2 that clearly own the majority of the market.

      Afterwards they will devolve back into the same thing search engines are now: a cesspool of sponsored ads and links to useless SEO blogs.

      They’ll just become gatekeepers of information again, and the only ones that will be heard are the ones who pay a fee or game the system.

      Maybe not though, I’m usually pretty cynical when it comes to what the incentives of businesses are.

    • JcbAzPx@lemmy.world · 19 hours ago

      This strikes at one of the greatest wishes of all corporations: a way to get work without having to pay people for it.

    • SparroHawc@lemmy.zip (OP) · 23 hours ago

      From what I’ve seen and heard, there are a few factors to this.

      One is that the tech industry right now is built on venture capital. To survive, companies need to act like they’re at the forefront of the Next Big Thing to keep investment money coming in.

      Another is that LLMs are uniquely suited to extending the honeymoon period.

      The initial impression you get from an LLM chatbot is significant. This is a chatbot that actually talks like a person. A VC mogul sitting down to have a conversation with ChatGPT, when it was new, was a mind-blowing experience. This is a computer program that, at first blush, appears to be able to do most things humans can do, as long as those things primarily consist of reading things and typing things out - which a VC, and mid/upper management, does a lot of. This gives the impression that AI is capable of automating a lot of things that previously needed a live, thinking person - which means a lot of savings for companies who can shed expensive knowledge workers.

      The problem is that the limits of LLMs are STILL poorly understood by most people. Despite constructing huge data centers and gobbling up vast amounts of electricity, LLMs still are bad at actually being reliable. This makes LLMs worse at practically any knowledge work than the lowest, greenest intern - because at least the intern can be taught to say they don’t know something instead of feeding you BS.

      It was also assumed that bigger, hungrier LLMs would provide better results. Although they do, the gains are getting harder and harder to reach. There needs to be an efficiency breakthrough (and a training breakthrough) before the wonderful world of AI can actually come to pass because as it stands, prompts are still getting more expensive to run for higher-quality results. It took a while to make that discovery, so the hype train was able to continue to build steam for the last couple years.

      Now, tech companies are doing their level best to hide these shortcomings from their customers (and possibly even themselves). The longer they keep the wool over everyone’s eyes, the more money continues to roll in. So, the bubble keeps building.

      • badgermurphy@lemmy.world · 20 hours ago

        The upshot of this and a lot of the other replies I see here and elsewhere seems to be that one big difference between this bubble and past ones is that so much of the global economy is now tied to the fate of this bubble that the entire financial world is colluding to delay the inevitable, given the expected severity of the consequences.

    • leastaction@lemmy.ca · 21 hours ago

      AI is a financial scam. Basically companies that are already mature promise great future profits thanks to this new technological miracle, which makes their stock more valuable than it otherwise would be. Cory Doctorow has written eloquently about this.

    • TipsyMcGee@lemmy.dbzer0.com · 23 hours ago

      When the AI bubble bursts, even janitors and nurses will lose their jobs. Financial institutions will go bust.

  • arc99@lemmy.world · 24 hours ago

    I have never seen AI-generated code that is correct. Not once. I’ve certainly seen it broadly correct and used it for the gist of something. But normally it fucks something up: imports, dependencies, logic, API calls, or a combination of all of them.
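
    (A sketch of that failure mode; the hallucinated module and the URL below are invented, which is exactly the point:)

        import requests

        # What the model emitted, kept as comments ('json_utils' does not exist):
        #   import json_utils
        #   data = json_utils.parse(resp.text)

        # The boring, real API it should have used:
        resp = requests.get("https://api.example.com/data", timeout=10)
        data = resp.json()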

    I sure as hell wouldn’t trust it without reviewing it thoroughly. And anyone stupid enough to use it blindly through “vibe” programming deserves everything they get. Most likely that will be a massive bill and code which is horribly broken in some serious and subtle way.

    • bountygiver [any]@lemmy.ml · 9 hours ago

      For me it typically doesn’t cause syntax errors; the main thing it fucks up is what you specifically told it to do, where the output straight up does not perform the way your specification requires. If it were just syntax errors, at least the compiler could catch them; this you won’t even know about if you don’t bother testing the output.
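
      (A sketch of that kind of spec-level failure: everything compiles and runs, and only a test catches it. Say the spec demanded round-half-up:)

          import math

          def round_half_up(x: float) -> int:
              # What the spec asked for.
              return math.floor(x + 0.5)

          # Python's built-in round() uses banker's rounding, so generated code
          # that reaches for it passes every syntax check and still breaks the spec.
          assert round(2.5) == 2          # what the generated code did
          assert round_half_up(2.5) == 3  # what the spec required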

    • Terrasque@infosec.pub · 19 hours ago

      I’ve used Claude code to fix some bugs and add some new features to some of my old, small programs and websites. Not things I can’t do myself, but things I can’t be arsed to sit down and actually do.

      It’s actually gone really well, with clean and solid code: easily readable, correct, with error handling and even comments explaining things. It even took a GUI stream-processing program I had and wrote a server/webapp with the same functionality, and was able to extend it with a few new features I’ve been thinking of adding.

      These are not complex things, but a few of them were 20+ files big, and it managed not only to navigate the code, but to understand it well enough to add features with changes touching multiple files (model, logic, and view layers, for example, or refactoring a too-big class and updating all references to use the new classes).

      So it’s absolutely useful and capable of writing good code.

      • chicagohuman@lemmy.zip · 19 hours ago

        This is the truth. It has tremendous value but it isn’t a solution – it’s a tool. And if you don’t know how to code or what good code looks like, then it is a tool you can’t use!

    • ikirin@feddit.org · 22 hours ago

      I’ve seen and used AI for snippets of code and it’s pretty decent at that.

      With my colleagues I always compare it to a battery powered drill. It’s very powerful and can make shit a lot easier. But you’d not try to build furniture from scratch with only a battery powered drill.

      You need the knowledge to use it - and also saws, screws, the proper bits for those screws and so on and so forth.

      • setsubyou@lemmy.world · 20 hours ago

        What bothers me the most is the amount of tech debt it adds by using outdated approaches.

        For example, recently I used AI to create some Python scripts that use polars and altair to parse some data and draw charts. It kept insisting on bringing in pandas so it could convert the polars dataframes to pandas dataframes just for passing them to altair. When I told it that altair can use polars dataframes directly, that helped, but two or three prompts later it would try to solve problems by adding the conversion again.
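
        (Roughly the difference, sketched with made-up data; recent altair versions accept polars dataframes directly:)

            import altair as alt
            import polars as pl

            df = pl.DataFrame({"x": [1, 2, 3], "y": [4.0, 2.5, 3.1]})

            # What the model kept generating: a pointless round-trip through pandas.
            #   chart = alt.Chart(df.to_pandas()).mark_line().encode(x="x", y="y")

            # What altair accepts directly, with no pandas dependency at all:
            chart = alt.Chart(df).mark_line().encode(x="x", y="y")
            chart.save("chart.html")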

        This makes sense too, because the training material, on average, is probably older than the change that enabled altair to use polars dataframes directly. And a lot of code out there just only uses pandas in the first place.

        The result is that in all these cases, someone who doesn’t know this would probably be impressed that the scripts worked, and just not notice the extra tech debt from that unnecessary dependency on pandas.

        It sounds like it’s not a big deal, but these things add up, and eventually our AI-enhanced code bases will be full of additional dependencies, deprecated APIs, unnecessarily verbose or complicated code, etc.

        I feel like this is one aspect that gets overlooked a bit when we talk about productivity gains. We don’t necessarily realize immediately how much of that extra LoC/time goes into outdated code and old-fashioned verbosity. But it will eventually come back to bite us.

    • Auli@lemmy.ca · 22 hours ago

      Eh, I had it write a program that finds my PC’s IP and sends it to the UniFi gateway to change a rule. Worked fine, but I guess technically it is mostly using Go libraries written by someone else.
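
      (The “find my PC’s IP” half is itself a stock idiom; a sketch in Python rather than the Go used here. The connect() sends no actual traffic:)

          import socket

          # Open a UDP socket toward a public address and read back which
          # local interface address the OS picked for the route.
          s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          s.connect(("8.8.8.8", 80))
          print(s.getsockname()[0])
          s.close()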

    • hietsu@sopuli.xyz · 23 hours ago

      How is it not correct if the code successfully does the very thing that was prompted?

      F.ex. in my company we don’t have any real programmers but have built a handful of useful tools (approx. 400-1600 LOC, mainly Python) to do some data analysis, regex stuff to clean up some output files, index some files and analyze/check their contents for certain mistakes, dashboards to display certain data, etc.

      Of course the apps may not have been perfect after the very first prompt, or even compiled, but after iterating on an error or two, and explaining an edge case or two, they’ve started to perform flawlessly, saving tons of work hours per week. So how is this not useful? If the code creates results that are correct, doesn’t that make the app itself technically ”correct” too, albeit likely not nearly as optimized as the equivalent human code would be?
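
      (The regex-cleanup tools described are exactly the kind of thing this works for; a sketch with a made-up file name and rules:)

          import re
          from pathlib import Path

          # Hypothetical cleanup pass: strip trailing whitespace and
          # collapse runs of blank lines in an output file.
          text = Path("output.log").read_text()
          text = re.sub(r"[ \t]+$", "", text, flags=re.MULTILINE)
          text = re.sub(r"\n{3,}", "\n\n", text)
          Path("output.clean.log").write_text(text)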

      • maskofdaisies@lemmy.dbzer0.com · 19 hours ago

        To add on to what others have said, vibe coding is ushering in a new golden age for black-hat hackers. If someone relies entirely on AI to generate code, they likely don’t understand what the code they have is actually doing. This tends to lead to an app that works correctly for what the prompt specified but behaves badly the instant it has to handle anything outside of the prompt, like a malformed request or data outside the prompted parameters. As a result these apps tend to be easy for malicious actors to exploit, often in ways the original prompter never thought of.
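
        (A sketch of that classic gap; the endpoint and parameter names are invented. The happy path from the prompt works either way; only the guard handles everything else:)

            from flask import Flask, jsonify, request

            app = Flask(__name__)

            @app.route("/lookup")
            def lookup():
                user_id = request.args.get("id")
                # Without this guard, a missing or malformed id (or one aimed
                # at raw SQL further down) sails straight through the happy path.
                if user_id is None or not user_id.isdigit():
                    return jsonify(error="id must be a positive integer"), 400
                return jsonify(id=int(user_id))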

        • korazail@lemmy.myserv.one · 16 hours ago

          I think this is what will kill vibe coding, but not before there’s significant damage done. Junior developers will be let go and senior devs will be told they have to use these tools instead and to be twice as efficient. At some point enough major companies will have had data breaches through AI-generated code that they all go back to using people, but there will be tons of vulnerable code everywhere. And letting Cursor touch your codebase for a year, even with oversight, will make it really tricky to find all the places it subtly fucked up.

      • arc99@lemmy.world · 22 hours ago

        If the code doesn’t compile, or is badly mangled, or uses the wrong APIs/imports, or forgets something really important, then it’s broken. I can use AI to inform my opinion and sometimes make use of what it outputs, but critically, I know how to program and I know how to spot good and bad code.

        I can’t speak for how you use it, but if you don’t have any real programmers and you’re iterating until something works, then you could be producing junk and not know it. Maybe it doesn’t matter in your case if it’s a bunch of throwaway scripts and helpers, but if you have actual code in production where money, lives, reputation, safety, or security are at risk, then it absolutely does.

      • LaMouette@jlai.lu · 23 hours ago

        It’s not bad for your use case, but going beyond that without issues, and without actual developers to fix the vibe code, is not yet possible for LLMs.

    • Alaknár@sopuli.xyz · 23 hours ago

      There already are. People all over LinkedIn are changing their titles to “AI Code Cleanup Specialist”.

    • aidan@lemmy.world · 1 day ago

      I mean, largely for most of us, I hope. But I feel like the tech sector was oversaturated because of all the hype about it being an easy get-rich-quick job. Which for some people it was.

  • Deflated0ne@lemmy.world · 20 hours ago

    According to Deutsche Bank, the AI bubble is the pillar of our economy now.

    So when it pops, I guess that’s kinda apocalyptic.

    Edit - strikethrough

    • ceiphas@feddit.org · 24 hours ago

      I am a software architect, and I mainly use it to refactor my own old code… But I am maybe not a typical architect…

      • JackbyDev@programming.dev · 21 hours ago

        I don’t really care if people use it, it’s more that it feels like a quarter of our architect meeting presentations are about something AI related. It’s just exhausting.