• eronth@lemmy.dbzer0.com · 14 hours ago

    Honestly just funny to see. It makes perfect sense, based on how they made the site hostile to users.

  • Wispy2891@lemmy.world · 14 hours ago

    Even before the LLMs, posting there was my last resort, the desperation move. It was too toxic, and I would always get pissed when my question got closed for being too similar, too easy, or whatever. Hey, I wasted 15 minutes typing that; if the other question had solved my problem, I wouldn’t have posted at all…

    In the beginning it wasn’t like that…

    I went back to look at my Stack Overflow account, and the first questions I posted (the ones that earned me 2000 karma) would almost all have been rejected and removed today.

  • nutsack@lemmy.dbzer0.com · 19 hours ago

    I’ve posted questions, but I usually don’t need to because someone else has asked them before. That’s probably the reason AI is so good at answering these types of questions.

    The trouble now is that there’s less of a business incentive to have a platform like Stack Overflow, where humans share knowledge directly with one another, because the AI is just copying all the data and delivering it to users somewhere else.

    • rumba@lemmy.zip · 16 hours ago

      Works well for now. Wait until there’s something new that it hasn’t been trained on. It needs that Stack Exchange data to train on.

      • nutsack@lemmy.dbzer0.com · 8 hours ago

        Yes, I think this will create a new problem: new things won’t be created very often, at least not by small shops or independent developers, because there will be this barrier to adoption. Corporate-controlled AI will need to learn them somehow.

      • cherrari@feddit.org · 16 hours ago

        I don’t think so. All AI needs now is formal specs of some technical subject, not even human readable docs, let alone translations to other languages. In some ways, this is really beautiful.

        • 123@programming.dev · 11 hours ago

          Technical specs don’t capture the bugs, edge cases and workarounds needed for technical subjects like software.

        • SoftestSapphic@lemmy.world · 15 hours ago

          Lol no, AI can’t do a single thing without humans who have already done it hundreds of thousands of times feeding it their data

          • okmko@lemmy.world · 12 hours ago

            I used to push back but now I just ignore it when people think that these models have cognition because companies have pushed so hard to call it AI.

        • skisnow@lemmy.ca · 9 hours ago

          The whole point of StackExchange is that it contained everything that isn’t in the docs.

        • rumba@lemmy.zip · 16 hours ago

          It can’t handle things it’s not trained on very well, or at least nothing substantially different from what it was trained on.

          It can usually apply rules it’s trained on to material within its training data: “give me a list of female YA authors”, say. But when you ask it for something further afield (like how many R’s there are in certain words) it often fails.

          • webadict@lemmy.world · 15 hours ago

            Actually, the Rs issue is funny because it WAS trained on that exact information, which is why it says strawberry has two Rs. So it’s actually more proof that it only knows what it has been given data on. The thing is, when people misspelled strawberry as “strawbery”, others would naturally respond, “Strawberry has two Rs.” The problem is that LLM learning has no concept of context, because it isn’t learning anything; the reinforcement mechanism is whatever the majority of its data tells it. It regurgitates that strawberry has two Rs because that’s what its dataset reinforced.

            • rumba@lemmy.zip · 14 hours ago

              Interesting story, but I’ve seen the same thing work with “how many ‘ass’ in assassin”.

              You can probe the stuff it’s bad at, and a lot of it doesn’t line up well with the story that it comes from how people were corrected.

              • webadict@lemmy.world · 11 hours ago

                But that’s exactly how an LLM is trained. It doesn’t know how words are spelled because words are turned into numbers and processed. But it does know when its dataset has multiple correlations for something. Specifically, people spell out words, so it will regurgitate to you how to spell strawberry, but it can’t count letters because that’s not a thing that language models do.

                Generative AI and LLMs are just giant reconstruction bots that take all the data they have and reconstruct something. That’s literally what they do.
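The counting point above is easy to demonstrate in ordinary code, where a word is a sequence of characters rather than tokens. A minimal sketch using only the Python standard library (the "two Rs" answer is the commonly reported LLM failure; this snippet just shows how trivial the task is when you can see the letters):

```python
# An LLM consumes token IDs, so it never "sees" individual letters.
# Plain code operates on characters directly, so counting is deterministic.
word = "strawberry"
r_count = word.count("r")
print(f"'{word}' contains {r_count} r's")  # prints: 'strawberry' contains 3 r's
```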

                Like, without knowing what your answer is for assassin, I will assume the question was probably “How many asses are in assassin?” But, like, that’s a joke. Assassins only have one ass, just like the rest of us. And nobody would ever spell assassin as “assin”, so why would it learn that there are two asses in assassin?

                I’m confused where you are getting your information from, but this is not particularly special behavior.

    • GamingChairModel@lemmy.world · 17 hours ago

      The hot concept around the late 2000’s and early 2010’s was crowdsourcing: leveraging the expertise of volunteers to build consensus. Quora, Stack Overflow, Reddit, and similar sites came up in that time frame where people would freely lend their expertise on a platform because that platform had a pretty good rule set for encouraging that kind of collaboration and consensus building.

      Monetizing that goodwill didn’t just ruin the look and feel of the sites: it permanently altered people’s willingness to participate in those communities. Some, of course, don’t mind contributing. But many do choose to sit things out when they see the whole arrangement as enriching an undeserving middleman.

      • rumba@lemmy.zip · 16 hours ago

        Probably explains why Quora started sending me multiple daily emails about shit I didn’t care about and removed the unsubscribe buttons from the emails.

        I don’t delete many accounts… but that was one of them.

    • Gsus4@mander.xyzOP · 19 hours ago

      What we’re all afraid of is that cheap slop will make Stack Overflow go broke, close, get bought, or go private, and then it’ll be removed from the public domain… then they’ll jack up the price of AI slop once the alternative is gone…

      • NιƙƙιDιɱҽʂ@lemmy.world · 18 hours ago

        I do wonder, then, as new languages and tools are developed, how quickly AI models will be able to parrot information on their use if sources like Stack Overflow cease to exist.

        • Gsus4@mander.xyzOP · 17 hours ago

          I think this is a classic privatization of the commons: nobody can compete with them later without free public datasets…

        • rumba@lemmy.zip · 16 hours ago

          It’ll certainly be of lesser quality, even if they take steps to address it.

          Good documentation and ported open projects might be enough to give you working code, but it’s not going to be able to optimize that code without being trained on tons of optimization data.

        • Gsus4@mander.xyzOP · 18 hours ago

          But can anyone train on them? What happens to the original dataset?

          • falseWhite@lemmy.world · 17 hours ago

            There are open-weight models that users can download and run locally. Because the weights are open, they can be customised and fine-tuned.

            And then there are fully open-source models, which publish everything: the model with open weights, the training source code, and the full training dataset.

  • BackgrndNoize@lemmy.world · 17 hours ago

    Even before AI, I stopped asking questions on that website (or answering them, for that matter) within the first few months of using it. It just wasn’t worth the hassle of dealing with the mods and the neckbeard-ass users, and I didn’t want my account suspended over some BS in case I really needed to ask an actual question in the future. Now I can’t remember the last time I visited any Stack site, and it no longer shows up in my Google search results. They dug their own grave.

    • Buddahriffic@lemmy.world · 17 hours ago

      I stopped using it once I found out their entire business model was basically copyright trolling on a technicality: anyone who answers a question gives them the copyright to the answer, which they used in code audits to go after businesses that had copy/pasted code. It left a bad taste in my mouth, and I stopped using it even for work, even though I wasn’t copy/pasting code.

      And even before LLMs, I found that ignoring Stack Exchange results in a search usually still got me to the right information.

      But yeah, it also had a moderation problem. Give people a hammer of power and some will go searching for nails, and now you don’t have anywhere to hang things because the mod was dumber than the user they thought they needed to moderate. Meanwhile, Google can figure out that my question is different from the supposed duplicate it was closed as: it sends me to my closed question, not the tangentially related one the dumbass mod thought was the same thing. Similar energy to people who go to help forums and reply with useless shit like RTFM. They aren’t really upset at “having” to take time to respond; they’re excited about a chance to act superior to someone.

    • JackbyDev@programming.dev · 22 hours ago

      The humans of StackOverflow have been pricks for so long. If they had fixed that problem years ago, they would have been in a great position with the advent of AI. They could’ve marketed themselves as a site for humans. But no, fuckfacepoweruser found an answer to a different question that he believes answers yours, so he marked your question as a duplicate, and fuckfacerubberstamper voted to close it in the queue without thinking critically about it.

      • theolodis@feddit.org · 11 hours ago

        I used to moderate and answer questions on SO, but stopped because at some point you see the 500th question about how to use some JavaScript function.

        Of course I flagged them all as duplicates and linked them to an extensive answer about the specific function, explaining all aspects and edge cases, because I don’t think there need to be 500 similar answers (who’s going to maintain them?).

        But yeah, sorry that I didn’t fix YOUR code sample and you had to actually do your homework by yourself.

        • JackbyDev@programming.dev · 6 hours ago

          My questions weren’t homework problems with 500 duplicates. Maybe that type of shit being the most common in the vote to close queue is why fuckfacerubberstamper can’t be bothered to actually think about what they’re closing as dupes.

      • ramjambamalam@lemmy.ca · 21 hours ago

        If the alternative is the cesspit that is Yahoo Answers and Quora, I’ll take the heavy-handed moderation of StackOverflow.

            • JackbyDev@programming.dev · 6 hours ago

              Like Lemmy? The site we’re all using?

              But no my point wasn’t about a specific site, it’s about the moderation approach. Do you really think there’s no middle ground in approach to moderation between Yahoo Answers and StackOverflow?

    • kazerniel@lemmy.world · 19 hours ago

      Hear hear, it was the hostile atmosphere that pushed me away from Stack Exchange years before LLMs were a thing. You get the very clear impression that the site doesn’t exist to help specific people but some vague public audience, and the treatment of every question and answer is subordinated to that. Since then I just ask and answer questions on platforms like Lemmy, Reddit, Discord, or the Discourse forums run by various organisations; it’s a much more pleasant experience.

    • dgmib@lemmy.world · 18 hours ago

      The stupidest part is that their aggressive hostility against new questions means that the content is becoming dated. The answers to many, many questions will change as the tech evolves.

      And since AI’s ability to answer tech questions depends heavily on a similar question being in the training dataset, all the AIs are going to increasingly give outdated answers.

      They really have shot themselves in the foot for at best some short term gain.

    • THE_GR8_MIKE@lemmy.world · 18 hours ago

      This was my issue. The two times I posted real, actual questions that I needed help with, I tried to provide as much detail as possible while admitting I didn’t understand the subject.

      I got clowned on, immediately downvoted into the negatives, and got no actual help whatsoever. Now I just hope someone else has had a similar issue.

  • melfie@lemy.lol · 1 day ago

    This is not because AI is good at answering programming questions accurately; it’s because SO sucks. The graph shows its growth leveling off around 2014 and then starting to decline around 2016, which isn’t even temporally correlated with LLMs.

    Sites like SO where experienced humans can give insightful answers to obscure programming questions are clearly still needed. Every time I ask AI a programming question about something obscure, it usually knows less than I do, and if I can’t find a post where another human had the same problem, I’m usually left to figure it out for myself.

    • vane@lemmy.world · 22 hours ago

      2016 is probably when they removed freedom by introducing aggressive moderation to remove duplicates and ban people

      • skisnow@lemmy.ca · 8 hours ago

        It was a toxic garbage heap way before 2016. I remember creating an account to try building karma there back in about 2011 when doing that was seen as a good way to land senior job roles. Gave up very quickly.

  • MehBlah@lemmy.world · 15 hours ago

    Good. That site has been a toxic hole in the ground for a decade or more.

  • perry@aussie.zone · 1 day ago

    I post there every 6-12 months in the hope of receiving some help or intelligent feedback, but usually just have my question locked or removed. The platform is an utter joke and has been for years. AI was not entirely the reason for its downfall imo.

    • BrianTheeBiscuiteer@lemmy.world · 1 day ago

      Not common I’m sure, but I once had an answer I posted completely rewritten for grammar, punctuation, and capitalization. I felt so valued. /s

      • SleeplessCityLights@programming.dev · 1 day ago

        The last time I asked a question, I followed the formatting of a recent popular question/post. Someone did not like that and decided to impose their own formatting, then proceeded to dramatically change my posts and updates. Also, people kept giving me solutions to problems I never included in my question. The whole thing was ridiculous.

      • poopkins@lemmy.world · 24 hours ago

        As a mod, this is all I ever did on the platform. Thanks for the appreciation!

      • kazerniel@lemmy.world · 19 hours ago

        haha I ran into this too. Someone changed the title of my question on one of their non-programming boards, and I was so pissed I never went back to that particular board (it was especially annoying because it was quite a personal question).

    • chrischryse@lemmy.world · 23 hours ago

      I used to post and had the same thing happen. Then people would insult me for not knowing, like, “why do you think I’m asking?”

  • micka190@lemmy.world · 1 day ago

    According to a Stack Overflow survey from 2025, 84 percent of developers now use or plan to use AI tools, up from 76 percent a year earlier. This rapid adoption partly explains the decline in forum activity.

    As someone who participated in the survey, I’d recommend everyone take anything regarding SO’s recent surveys with a truckful of salt. The recent surveys have been unbelievably biased, with tons of leading questions that force you to answer in specific ways. They’re basically worthless as statistics.

    • chaosCruiser@futurology.today · 1 day ago

      Realistically though, asking an LLM what’s wrong with my code is a lot faster than scrolling through 50 posts and reading the ones that talk about something almost relevant.

      • Rob T Firefly@lemmy.world · 1 day ago

        It’s even faster to ask your own armpit what’s wrong with your code, but that alone doesn’t mean you’re getting a good answer from it.

        • MagicShel@lemmy.zip · 1 day ago

          If you get a good answer just 20% of the time, an LLM is a smart first choice. Your armpit can’t do that. And in my experience it’s much better than 20%, though it really depends a lot on the code base you’re working on.

          • chaosCruiser@futurology.today · 1 day ago

            Also depends on your level of expertise. If you have beginner questions, an LLM should give you the correct answer most of the time. If you’re an expert, your questions have no answers. Usually, it’s something like an obscure firmware bug edge case even the manufacturer isn’t aware of. Good luck troubleshooting that without writing your own drivers and libraries.

            • SkunkWorkz@lemmy.world · 23 hours ago

              Yeah, but in that edge case SO wouldn’t have helped either, even before the current crash, unless you were lucky. I find LLMs useful for pushing me in the right direction when I’m stuck and the documentation isn’t helping, not necessarily for giving me perfectly written code. It’s like pair programming with someone who isn’t a coder but has somehow read all the documentation and programming books. Sometimes the left-field suggestions it makes are quite helpful.

              • chaosCruiser@futurology.today · 22 hours ago

                I’ve found some interesting and even good new functions by moaning my code woes to an LLM. Also, it has taken me on some pointless wild goose chases too, so you better watch out. Any suggestion has the potential to be anywhere from absolutely brilliant to a completely stupid waste of time.

            • MagicShel@lemmy.zip · 1 day ago

              If you’re writing cutting edge shit, then LLM is probably at best a rubber duck for talking things through. Then there are tons of programmers where the job is to translate business requirements into bog standard code over and over and over.

              Nothing about my job is novel except the contortions demanded by the customer — and whatever the current trendy JS framework is to try to beat it into a real language. But I am reasonably good at what I do, having done it for thirty years.

              • chaosCruiser@futurology.today · 1 day ago

                Boring standard coding is exactly where you can actually let the LLM write the code. Manual intervention and review is still required, but at least you can speed up the process.

                • Aceticon@lemmy.dbzer0.com · 1 day ago

                  Code made up of several parts with inconsistent styles of coding and design is going to FUCK YOU UP in the medium and long term, unless you never have to touch that code again.

                  It’s only faster if you’re doing small enough projects that an LLM can generate the whole thing in one go (so, almost certainly, not working as professional at a level beyond junior) and it’s something you will never have to maintain (i.e. prototyping).

                  Using an LLM is like handing the work to a large group of junior developers, where each time you assign a task a random one picks it up and you can’t actually teach any of them. Even when it works, what you get is riddled with bad practices and design errors that aren’t even consistent between tasks. So when you piece the software together, it is, from the very start, the kind of spaghetti mess you normally only see in a project with years in production, maintained by lots of different people who didn’t even try to follow each other’s coding style. And since you can’t teach them things like coding standards or designing for extensibility, it will always be just as fucked up as it was on day one.

          • Avid Amoeba@lemmy.ca · 1 day ago

            How do you know it’s a good answer? That requires prior knowledge that you might not have. My juniors repeatedly demonstrate they’ve no ability to tell whether an LLM solution is a good one or not. It’s like copying from SO without reading the comments, which they quickly learn not to do because it doesn’t pass code review.

            • MagicShel@lemmy.zip · 1 day ago

              That’s exactly the question, right? LLMs aren’t a free skill up. They let you operate at your current level or maybe slightly above, but they let you iterate very quickly.

              If you don’t know how to write good code then how can you know if the AI nailed it, if you need to tweak the prompt and try over, or if you just need to fix a couple of things by hand?

              (Below is just skippable anecdotes)


              Couple of years ago, one of my junior devs submitted code to fix a security problem that frankly neither of us understood well. New team, new code base. The code was well structured and well written but there were some curious artifacts, like there was a specific value being hard-coded to a DTO and it didn’t make sense to me that doing that was in any way security related.

              So I quizzed him on it, and he quizzed the AI (we were remote so…) and insisted that this was correct. And when I asked for an explanation of why, it was just Gemini explaining that its hallucination was correct.

              In the meanwhile, I looked into the issue, figured out that not only was the value incorrectly hardcoded into a model, but the fix didn’t work either, and I figured out a proper fix.

              This was, by the way, on a government contract which required a public trust clearance to access the code — which he’d pasted into an unauthorized LLM.

              So I let him know the AI was wrong, gave some hints as to what a solution would be, and told him he’d broken the law and I wouldn’t say anything but not to do that again. And so far as I could tell, he didn’t, because after that he continued to submit nothing weirder than standard junior level code.

              But he would’ve merged that. Frankly, the incuriousity about the code he’d been handed was concerning. You don’t just accept code from a junior or LLM that you don’t thoroughly understand. You have to reason about it and figure out what makes it a good solution.


              Shit, a couple of years before that, before any LLMs I had a brilliant developer (smarter than me, at least) push a code change through while I was out on vacation. It was a three way dependency loop like A > B > C > A and it was challenging to reason about and frequently it was changing to even get running. Spring would sometimes fail to start because the requisite class couldn’t be constructed.

              He was the only one on the team who understood how the code worked, and he had to fix that shit every time tests broke or any time we had to interact with the delicate ballet of interdependencies. I would never have let that code go through, but once it was in and working it was difficult to roll back and break the thing that was working.

              Two months later I replaced the code and refactored every damn dependency. It was probably a dozen classes not counting unit tests — but they were by far the worst because of how everything was structured and needed to be structured. He was miserable the entire time. Lesson learned.

            • mcv@lemmy.zip · 23 hours ago

              This is the big issue. LLMs are useful to me (to some degree) because I can tell when its answer is probably on the right track, and when it’s bullshit. And still I’ve occasionally wasted time following it in the wrong direction. People with less experience or more trust in LLMs are much more likely to fall into that trap.

              LLMs offer benefits and risks. You need to learn how to use them.

          • PlutoniumAcid@lemmy.world · 1 day ago

            Also depends on how you phrase the question to the LLM, and whether it has access to source files.

            A web chat session can’t do a lot, but an interactive shell like Claude Code is amazing - if you know how to work it.