You can take “justifiable” to mean whatever you feel it means in this context, e.g. morally, artistically, environmentally, etc.

  • fork@feddit.online · 1 hour ago

    It’s never justifiable because it can and will output incorrect information. It’s made my job worse because it means confidently incorrect people bug me when it’s wrong and I have to explain why it’s wrong.

  • HubertManne@piefed.social · 1 hour ago

    It’s the next abstraction of search. A search doesn’t necessarily answer a question correctly either. It’s pretty much not going to stop, the same way you couldn’t get people to stop searching online and go back to newspapers, encyclopedias, and reference texts. Energy-wise, if people are entertaining themselves with text and not generating images, it’s preferable to streaming video, if that’s what it replaces. The scariest part is it being used ineffectively and people not realizing it. I sometimes feel we are in a new dark age of bloodletting, trepanning, and curing demon possession.

  • agamemnonymous@sh.itjust.works · 49 minutes ago

    I think it’s useful as a starting point for a lot of things. I can ask AI a question about a topic I know nothing about, which will typically give me some context on the topic and the terminology to do further research.

  • tomiant@piefed.social · 3 hours ago

    1. The sciences obviously

    2. For me personally, data collation

    3. Learning

    4. Assisting with Linux sysadmin stuff (a “how do I X” used to mean hours of scouring online forums and asking questions that might get deleted because of draconian forum rules, or get answered weeks later if at all; now I can get shit done in minutes)

    5. I also use it a lot to explore ideas and arguments, like a sort of metaphysical sparring partner.

  • GreenKnight23@lemmy.world · 2 hours ago

    no, never.

    AI is fascism. it all supports fascist companies that wish for nothing more than to enslave you.

    why anyone would want to support the thing that wants you enslaved or dead is beyond me.

    • thinkercharmercoderfarmer@slrpnk.net · 2 hours ago

      AI isn’t fascism, it’s a tool. Fascists use art in their propaganda, does that make art fascist? No. Fascism doesn’t create, it just corrupts. The problem we have, that we have always had, is with people, not the tools they use. The tools may be terrifying in their hands, but we can’t just wish them gone any more than we can wish nuclear weapons away. We have to figure out how to live with them.

      • GreenKnight23@lemmy.world · 53 minutes ago

        AI is fascism. it’s a tool being employed to circumvent democracy and attack our personal freedoms.

        no AI company is in existence that isn’t desperately attempting to enslave us all.

        I will never “live with it”. much the same way I’ll never “live with” rapists or child molesters.

        • thinkercharmercoderfarmer@slrpnk.net · 31 minutes ago

          I still think you’re mistaking the murder weapon for the murderer. AI is just a program, it can be used for whatever purpose the mind can devise. If someone uses an airplane to traffic children, I don’t think a reasonable response is to say that airplanes are child traffickers.

          Also I don’t mean “live with” to imply a surrender to how other people (including fascists) use AI. We should do everything in our power to build the world we want to live in, and that means dismantling the power structures of those who abuse them. I mean accepting that AI tools exist and then planning from there. Wishing that they had never been invented is a perfectly fine thing to do, they are something of a headache at the moment, but they’re here and can’t be un-invented. We can either find a comfortable existence in this reality and strive for that (perhaps by limiting their use), or resign ourselves to the doom we find ourselves in.

  • entropiclyclaude@lemmy.wtf · 2 hours ago

    I think we should be building localized, smaller, more finely-tuned LLMs.

    1. They wouldn’t require data centers.
    2. They would be forced to become more energy-efficient and resource-aware, because their costs come straight out of an organization’s profit margins, forcing innovation and creativity instead of throwing data centers and billionaires at the problem. (A minimal sketch of running such a local model follows below.)
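
    A minimal sketch of the local-model idea, using the Hugging Face transformers library; the checkpoint named below is just an example of a small open model, not a specific recommendation:

    ```python
    # Minimal sketch: a small, local text model instead of a hosted one.
    # The checkpoint below is only an example; pick whatever fits your hardware.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="Qwen/Qwen2.5-0.5B-Instruct",  # ~0.5B parameters, runs on a CPU
    )

    prompt = "In two sentences, explain what a reverse proxy does."
    result = generator(prompt, max_new_tokens=100, do_sample=False)
    print(result[0]["generated_text"])
    ```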

    I used AI to help with debugging and coding, as well as to explore a theory I came up with a long time ago. With my framework, notes, research papers, and everything else I’d collected to support that theory, I was able to put it into practice in the AI cybersecurity product I’ve developed.

    We’ve created 26,000 new cyber-threat datasets because I had access to an LLM that could help me work through the frameworks, notes, and research I’d gathered; within a couple of months I had something that blew my prototype out of the water.

    There is a lot of value in these LLMs. What I’ve been exploring is on-hardware AI. Not a friend. Not a chatbot. A program that does what it’s supposed to and that’s it.

    My startup is in cybersecurity. We use less than 1 GB of RAM and, at peak, maybe 30% of a single CPU core, and it was built with ethics and safeguards in mind. Not an LLM, but real machine learning plus reinforcement learning.

    To me ethics also meant resource awareness. If I’m poisoning the planet and the people then it’s not a good product.

    Building smaller, more specialized local models is not only better from a cybersecurity perspective, but smaller local LLMs mean new startups to build them, a race to innovate and improve resource usage, more data privacy, smaller attack surface, no obscenely expensive API calls and overage fees…

    What we should have is a symbiotic approach to AI, a partnership sort of understanding.

    LLMs helped me with debugging and putting this research and theory together. And in a fraction of the time it took me to build the framework.

    I pushed autonomous operation because I felt it was about giving people their time back. Providing freedom. If my cybersecurity can take care of 94.1% of all threats before they reach an analyst, that analyst doesn’t have to wake up at 2 AM to sift through 10,000 false positives. We do it.

    Now that analyst can do what they got a degree to do - actually defend a network. Build and explore threat research and databases. Find their purpose again.

    We require that a human is always in the loop, and we help protect cybersecurity jobs by ensuring that human input is always the final decision. Let our AI do the heavy lifting so you can take care of the shit that matters and what you really want to do.
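
    A toy sketch of that score-then-triage pattern with ordinary machine learning; the features, thresholds, and data are invented for illustration and are not anyone’s actual product:

    ```python
    # Illustrative sketch of score-then-triage alert handling. Feature columns
    # and thresholds are invented for the example:
    # [failed_logins, megabytes_out, off_hours_flag, known_bad_ip_flag]
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    X_train = np.array([
        [0, 1, 0, 0], [1, 2, 0, 0], [2, 1, 1, 0],          # benign noise
        [30, 500, 1, 1], [12, 900, 1, 1], [50, 50, 0, 1],  # real incidents
    ])
    y_train = np.array([0, 0, 0, 1, 1, 1])
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

    def triage(alert, close_below=0.1, page_above=0.9):
        """Auto-close obvious noise; everything else goes to a human."""
        p_threat = clf.predict_proba([alert])[0][1]
        if p_threat < close_below:
            return "auto-close"        # never reaches the analyst
        if p_threat > page_above:
            return "page analyst now"  # urgent, but still a human decision
        return "analyst review queue"

    print(triage([1, 3, 0, 0]))      # expected: auto-close
    print(triage([40, 700, 1, 1]))   # expected: page analyst now
    ```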

    Sorry, I think my ADHD took control of this conversation.

  • If you made all the training data yourself, or ethically acquired it, then go nuts, do whatever you want with it. See Corridor Digital’s AI chroma key thingy.

    If the training data isn’t ethically sourced, then it gets iffy.

    I use AI primarily for my own entertainment. None of it is anything I’d want to share with the world. Is “dicking around” justifiable? Eh. I eat meat and shop at Amazon, both of which some people would find “not justifiable”, so someone is going to be upset with me no matter what.

    Artistically, I don’t take offense at AI tools being part of a process; what’s important to me is that the AI isn’t the entire process. You wouldn’t go to a cinema, record the movie with your phone camera, then post it online saying “look at what I made”. That’s nonsense. But if you took clips of that same movie and rearranged and dubbed over them, creating a new, unique work, you could post that online and say “look what I made”. Whatever the AI outputs, no matter how detailed your prompt, should be treated as being made by someone else. You don’t get to say “look what I made” unless you actually do something with it.

    Another use case is summarizing conversations and compiling notes. This is another one that I do often. I could go on for hours about a subject (usually while drunk) and at the end I tell it “compile a detailed report on everything discussed, be verbose and leave out no details” or something similar, and that output goes into my notes documents. It’s fine to copy pasta that, because it’s not going to be anything that anyone ever actually sees.

  • Pinetten@pawb.social · 7 hours ago

    I think anything with text generation is fine. Your multiple Google searches are highly likely to eat more resources than that. Also, fuck Google, use Ecosia. But when I suspect an answer isn’t one quick search away, I’d happily rather use Le Chat for answers than give Reddit traffic, or have to wade through the shite that is Fandom, Wikia or whatever. Not to mention that using AI gets me past having to check multiple sites for an answer, just to find that the answer is “Google it” or “Nvm, solved it”. Some of you fuckers did this.

    However, people need to understand that an AI is exactly as fallible as any person. Yes, it has access to and can handle way more data, but between trying to please you and just getting its wires crossed, it’s going to make mistakes. YOU need to be able to assess the accuracy of the output. The more important the topic, the more careful you need to be, and always assume the possibility of error is there no matter how hard you try - JUST LIKE WITH ANY BIT OF INFORMATION.

    I see so many people cite academic articles like they prove whatever claim they are making, just to see that the study in question was funded by The Company That Wants to Prove The Claim and the sample size was 3 people who work for The Company That Wants to Prove The Claim. At least AI has a small chance of pointing the issue out if YOU yourself tell it to be critical - and I actually suspect this is part of the reason some people hate AI. They don’t like that it absolutely can be more intellectually rigorous than a person with an emotional investment in whatever they want to be true. Yes, you can have an AI asspat your grandest delusions, but if you actually try to get it to be critical, it will be. You can use a hammer to hit people, or you can use it on a nail as intended (and how many times you hit your own fingers is on you, not the hammer).

    I would draw a line at artwork, videos, music. While I’m not going to crucify actual artists using AI assistance to take some tedium out of a project, I still wouldn’t encourage it. Training on stolen artwork is one thing, and the environmental impact is VASTLY greater than for text alone. Generating one AI image can use as much energy as 1,000 text responses. I would also really like to be able to completely opt out of AI slop on media sites. I fucking hate that Soundcloud allows it.

    And a last point on AI text responses: if you saw the rise of the alt-right and the anti-vaxx stuff, you’re probably familiar with gish galloping and Brandolini’s Law. If not, you really fucking should be. AI can make it so much easier to debunk misinformation. YES, it can make it easier to perpetuate too, but this is where we see the AI arms race. Bad actors can AND WILL use AI to fill any void with their rhetoric. If you value truth and facts and want to prevent misinformation from spreading, you are gimping yourself if you’re not using AI.

    • MagicShel@lemmy.zip · 6 hours ago

      I use Suno on occasion. I enjoy writing poetry, and being able to turn it into a song is something I find fun and inspirational, driving me to write more than I have in decades. I could never, ever write a chord of music.

      I don’t share it. It’s just for personal gratification. If it’s super good maybe I’d share it with some friends in Discord who are super into AI. Thing is, part of a song might be super good, but I’ve never had an entire song turn out the way I want. And I’ve found no one ever thinks a song is as good or interesting as the prompter does.

      AI is like the cheap consumer goods of art and thought. Cheap, but not quality or durable. It works and looks great if gently used, but as soon as it gets any real pressure or scrutiny, it falls apart.

      I think it’s likely, if we continue down that path, to be the artistic equivalent of IKEA vs a master woodworker. You can buy an end table for $30, or you can buy something hand-crafted from teak and mahogany for $3000. A lot of people like IKEA, but if it weren’t around, a nice end table might be $600 and heirloom quality (if not as good as the $3k one). But today that middle market doesn’t exist. Rather, it does, but it’s filled with IKEA-quality shit dressed up to look a bit nicer temporarily. I don’t know, maybe my analogy fell apart.

      I’m just saying that these things are fun and interesting on an individual level, but I agree they shouldn’t be commercial. We should just make it so that there are no enforceable rights granted on anything AI produces. It can be freely copied and distributed. But that doesn’t help real artists make a living. And their work should be appreciated and respected (and result in a lifestyle that affords them the ability to keep making art).

      • Pinetten@pawb.social · 6 hours ago

        I don’t agree with the use, but at least you’re keeping it private. Not gonna crucify you because I understand the appeal. I’d encourage you to find a way to pay for it though, or even just start making a donation to some environmental cause as a way of offsetting.

        • MagicShel@lemmy.zip · 6 hours ago

          That’s a pretty reasonable ask. I do donate to other things I use like Lemmy. I like your suggestion.

  • cally [he/they]@pawb.social · 4 hours ago

    Absolutely not — it’s a computer program, a piece of software, pretending to be human. I’ve always been against that, especially now that it’s less obvious if it’s a real person talking 🙆, or just a computer program someone prompted 🤖

    I value honesty, and, sincerely, I hate that the web is filled to the brim with AI ‘slop’. As a human being who values creativity, I don’t want to see that. Fundamentally it is made to mimic human output — it’s not just annoying, it’s disingenuous.

    • 🌞 Alexander Daychilde 🌞@lemmy.world · 60 minutes ago

      You missed:

      • Starting with “What a great way to think about things!” or similar overly-positive reinforcement
      • Ending with “Do you want me to help you with…” or similar

        • jj4211@lemmy.world · 59 minutes ago

          Yeah, I was thinking the same thing; all the tells in one comment, in such a perfect context, is just too on point…

          But I’m a bit triggered, because I recently spent a bit of time trying to figure out whether someone who replied to me was on something, because their reply was so weird, irrelevant, and vaguely annoying, before I realized it was their LLM-authored out-of-office message trying to be ‘cute’.

        • thinkercharmercoderfarmer@slrpnk.net · 1 hour ago

          Now that you mention it, I see that it’s pretty blatantly slop parody. You got me – with the emojis, the em dashes, and most especially the lists of comma phrases that don’t really add to the text, I was sure that this was LLM spam 😅

          Well played 🎯

  • CosmicTurtle0 [he/him]@lemmy.dbzer0.com · 8 hours ago

    The best use of AI I’ve seen thus far is reading legislative bills. Those monstrosities are so fucking long and filled with earmarks that it’s next to impossible to understand what is in them.

    Having an AI not only read the bill but keep a watch of it as it goes through Congress is probably the best use of AI because it actually helps citizens.

    I am on record saying we need an AI that can track the prices of various things and then predict when the best time to buy something is.

    I want an AI bot that saves me money or gets me a good deal or extracts money from the capital class.
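
    A hypothetical sketch of the core of such a deal-watcher: keep a price history per item and flag when today’s price falls into the bottom decile of what’s been recorded:

    ```python
    # Hypothetical sketch of a "good time to buy?" check: flag the price when
    # it falls into the bottom decile of what you've recorded for that item.
    from statistics import quantiles

    def good_time_to_buy(price_history, current_price):
        """True if current_price is at or below the 10th percentile of history."""
        if len(price_history) < 10:
            return False  # not enough data to judge
        tenth_percentile = quantiles(price_history, n=10)[0]
        return current_price <= tenth_percentile

    history = [59.99, 54.99, 62.00, 57.50, 49.99, 60.00, 58.00, 55.00, 61.00, 52.00]
    print(good_time_to_buy(history, 50.00))  # True: near the historical low
    print(good_time_to_buy(history, 60.00))  # False: close to the usual price
    ```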

    • jj4211@lemmy.world · 2 hours ago

      Except they can screw up in that role.

      There’s a lawsuit because DOGE asked ChatGPT to summarize projects’ DEI-ness, and, for example, it declared a grant for fixing air conditioning to be a DEI initiative.

      • Trilogy3452@lemmy.world · 1 hour ago

        Asking for quotes and explanations would help, i.e. treat the LLM output as a smart index/table of contents. You’d be able to quickly verify claims.
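
        A small sketch of that idea: require a verbatim quote for every claim and refuse to trust any claim whose quote can’t be found in the bill text. The claims below are hand-written stand-ins for model output:

        ```python
        # Sketch of the "smart table of contents" idea: every summarized claim
        # must carry a verbatim quote, and the quote must actually appear in
        # the bill text. The claims list here stands in for LLM output.
        def check_claims(bill_text, claims):
            normalized_bill = " ".join(bill_text.split())
            for claim in claims:
                quote = " ".join(claim["quote"].split())
                claim["verified"] = quote in normalized_bill
            return claims

        bill_text = ("SEC. 12. Funds may be used to repair heating, ventilation, "
                     "and air conditioning systems in public library buildings.")
        claims = [
            {"summary": "Allows HVAC repairs in libraries",
             "quote": "repair heating, ventilation, and air conditioning systems"},
            {"summary": "Creates a diversity office",
             "quote": "establishes an office of diversity"},
        ]
        for c in check_claims(bill_text, claims):
            status = "verifiable" if c["verified"] else "NOT FOUND, check the bill yourself"
            print(c["summary"], "->", status)
        ```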

        • jj4211@lemmy.world · 1 hour ago

          As long as you follow through and actually check the original, instead of assuming the quotes provided are intact. The point is that in the case above, DOGE did no follow-up, and most people who reach for a ‘summary’ assistant aren’t looking to dig deeper.

          Hell, even without AI, lawmakers frequently got caught admitting they didn’t read the laws they signed; they didn’t have time for that. Now, with AI summaries as an excuse…

          • 🌞 Alexander Daychilde 🌞@lemmy.world · 56 minutes ago

            It’s a tool, like everything else. It’s easy to google wrong info. You can get wrong info from an encyclopedia.

            You can even get wrong info from a dictionary: one thing that slightly annoys me is the shift in the spelling of “yeah” such that “yea” has become a common alternate spelling, thanks to autocorrect. “Yea” was a word; it’s archaic these days. If you see someone write “Yay or nay”, that was originally “yea or nay”. “Yea” doesn’t mean the same thing as “yes” or “yeah”, although it is somewhat similar.

            I remember someone quoting dictionary definitions to me to try and “prove” that “yea” meant the exact same as “yeah” or “yes”.

            They were wrong.

            But the point is: The tool is just a tool. AI is a tool.

        • jj4211@lemmy.world · 1 hour ago

          Indeed:

          ChatGPT determined that this was related to DEI, responding, “Yes. Improving HVAC systems enhances preservation conditions for collections, aligning with the goal of providing greater access to diverse audiences. #DEI.”

          • jtzl@lemmy.zip · 17 minutes ago

            Lord. Yet another example of folks finding out the hard way that “AI” is marketing-speak. I get that people want to act like LLMs are effectively the discovery of fire, but could we please not suspend judgment wholesale!?

  • Quazatron@lemmy.world · 9 hours ago

    It’s not going away. The cat is out of the bag.

    As with any tool it has its use cases. It’s not a good fit for everything. You can drive a screw with a hammer but a screwdriver works best.

    We’re experiencing the capitalist euphoria that happens when something new comes along. This needs to get regulated into submission like all the previous bubbles.

    • cRazi_man@europe.pub · 4 hours ago

      Tech bros benefit from saying that AI is the solution to everything… And people are eating this up.

      You’re exactly right that it really should be the right tool for the right job, and people don’t know how to differentiate good vs bad uses of AI.

      I’ve used it for getting over my Linux migration problems. I’ve also used it to help me set up my home server. I’ve used the tech bros’ tools to remove as many tech bro products as I can from my life. I think this is the perfect use of AI: a noncritical problem with good impact and absolutely no consequence when it is completely wrong. I ask AI to interpret massive Docker log files for me and point me in the right direction. Once I know what the problem might be, I can go read human-written solution posts.
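
      A rough sketch of that log-triage workflow; the endpoint, model name, and container name below are placeholders for whatever local or hosted service is actually in use:

      ```python
      # Rough sketch: feed the tail of a container's logs to an OpenAI-compatible
      # chat endpoint and ask one narrow question. Endpoint, model, and container
      # names are placeholders, not a specific product.
      import subprocess
      import requests

      proc = subprocess.run(
          ["docker", "logs", "--tail", "300", "my-container"],  # placeholder container
          capture_output=True, text=True,
      )
      log_text = proc.stdout + proc.stderr

      prompt = ("Here are the last 300 lines of a container log. In two sentences, "
                "what is the most likely cause of the failure?\n\n" + log_text)

      resp = requests.post(
          "http://localhost:8080/v1/chat/completions",  # placeholder local endpoint
          json={"model": "local-model",                 # placeholder model name
                "messages": [{"role": "user", "content": prompt}]},
          timeout=120,
      )
      print(resp.json()["choices"][0]["message"]["content"])
      ```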

      I know people have successfully used AI to write letters to get out of unfair parking tickets, battle shitty landlords, and do the shitty, useless tasks that bosses ask of them. I fully support using AI to push back against overbearing authority. Use their own tool to stick it to the man! We just need to prioritise reducing the climate, energy, and water impact so it doesn’t destroy the planet at the same time. I want ethical AI that doesn’t steal everyone’s content.

  • Denjin@feddit.uk · 10 hours ago

    Medicine.

    Evidence shows that some highly specialised models are better at things like detecting breast cancer in scans than human doctors.

    Properly anonymised, automatic second scans by an AI, catching the markers that human doctors miss and flagging them for another review by a specialist, is an excellent potential use case for this kind of AI.

    Transcription services can save doctors huge amounts of admin time and allow them to focus on the patient if they know there’s a reliable system in place for typing up notes from a consultation. As long as it’s treated as “please review that these notes are accurate” rather than as a gospel recording, the data is destroyed once its job is complete, and the patient has been able to give informed consent.

    The way these things are being used in actual medical contexts right now is frankly terrifying.

    • tomiant@piefed.social · 4 hours ago

      Yeah, the sciences in general, I’d say. There’s a project aiming to translate the tens of thousands of cuneiform clay tablets that sit in storage because there are only a handful of people in the world who can read them. AI is an amazing way to mass-translate them, unlocking vast troves of hitherto completely unknown ancient knowledge.

      The problem is not even the AI, but the scientists themselves, who guard the tablets jealously because they don’t want anyone else to translate “their” tablets that they dug up, even though they couldn’t possibly make a dent in the sheer volume in their collective lifetimes.

      Imagine, so much information encoded, from thousands of years ago that could reveal so much about the origins of our culture and civilization!

    • Hossenfeffer@feddit.uk · 8 hours ago

      I had a colonoscopy last year (such fun!) and there was an ‘AI’ monitoring the camera feed to detect anomalies. If it spotted something it just drew the doctor’s attention to it for his expert, human review. I was ok with that. Effectively an extra pair of eyes that can look everywhere on the screen all at once and which never blink.

      • cynar@lemmy.world · 3 hours ago

        That’s how AI systems should be used. A “heads up, something weird here” system.

        I could also see it being used well like this for patient history analysis. Often a doctor is treating one symptom of something larger; they can’t see the wood for the trees. An LLM could pick out oddities and flag them. The doctor can then filter out the mistakes and hallucinations, but be alerted to rare or unusual conditions that match the patient’s symptoms and history.
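
        A toy illustration of that flag-then-review shape, where nothing leaves the queue without an explicit human decision; the flagged notes are invented stand-ins for model output:

        ```python
        # Toy illustration of a "heads up, something weird here" queue: whatever
        # the model flags waits for an explicit clinician decision. The flag
        # texts are invented stand-ins for real model output.
        from dataclasses import dataclass

        @dataclass
        class Flag:
            note: str                  # what the model thought looked odd
            decision: str = "pending"  # clinician sets "relevant" or "dismissed"

        model_flags = [
            Flag("Low-grade fever recorded across three unrelated visits"),
            Flag("New prescription overlaps with a listed contraindication"),
        ]

        def decide(flag, decision):
            flag.decision = decision   # the human always makes the final call
            return flag

        decide(model_flags[0], "dismissed")  # clinician judges it noise/hallucination
        decide(model_flags[1], "relevant")   # clinician follows up with the patient
        for f in model_flags:
            print(f"{f.decision:9s} | {f.note}")
        ```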

  • kreynen@kbin.melroy.org · 5 hours ago

    @58008@lemmy.world I recently read a developer compare AI to lead pipes or asbestos… something that seemed cost-effective at the time but ended up being a REALLY bad idea. Communities are already realizing that the power and water required for this are not compatible with human life in the same place, and the market is starting to reflect the cost of increasing electricity production.

    Being “off grid” was something only preppers did, but as connection fees increase and battery technology improves, it makes less financial sense to keep residential homes connected to subsidize data center consumption.

    Elon’s workaround for the lack of cheap electricity for his data centers has been methane. While the US is a top methane producer, the next three countries are Russia, Iran, and China. The cost of methane is impacted by global conflict the same way gasoline prices are.

    While the efficiency of data centers will increase, so will awareness of the impact these facilities have on the places where they are built and of the toxic e-waste they generate, driving up their costs.