To be honest, I think we’re losing credibility. I don’t know what else to put in the description.

  • dejected_warp_core@lemmy.world · 4 hours ago

    I have a lot of thoughts on this because this is a complicated topic.

    TL;DR: it’s breakthrough tech, made possible by GPUs left over from the crypto hype, but TechBros and Billionaires are dead set on ruining it for everyone.

    It’s clearly overhyped as a solution in a lot of contexts. I object to the mass scraping of data to train it, the lack of transparency around what data exactly went into it, and the inability to request that one’s art be excluded from any/all models.

    Neural nets as a technology have a lot of legitimate uses for connecting disparate elements in large datasets, finding patterns where people struggle, and more. There is ample room for legitimately curated (vegan? we’re talking consent after all) training data, getting results that matter, and not pissing anyone off. Sadly, this has been obscured by everything else encircling the technology.

    At the same time, AI is flawed in practice as its single greatest strength is also its greatest weakness. “Hallucinations” are really all this thing does. We just call obviously wrong output that because that’s in the eye of the beholder. In the end, these things don’t really think, so they’re not capable of producing right or wrong answers. They just compile stuff out of their dataset by playing the odds on what tokens come next. It’s very fancy autocomplete.
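    The “fancy autocomplete” point can be sketched in a few lines. This is a deliberately tiny stand-in (a word-level bigram counter over a made-up corpus, not a real LLM), but the mechanism is the same shape: no notion of right or wrong, just “which token most often came next in the training data”:

    ```python
    from collections import Counter, defaultdict

    # Toy "autocomplete": count which word follows which in a tiny
    # (hypothetical) corpus, then repeatedly emit the most frequent
    # successor. A real LLM does this over subword tokens with a neural
    # net instead of a lookup table, but it is still next-token odds.
    corpus = "the cat sat on the mat and the cat sat on the hat".split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def complete(word, n=4):
        out = [word]
        for _ in range(n):
            if word not in follows:
                break
            word = follows[word].most_common(1)[0][0]  # most likely next token
            out.append(word)
        return " ".join(out)

    print(complete("the"))  # -> "the cat sat on the"
    ```

    Note there’s no “truth” anywhere in that loop; plausible-looking continuation is the only output it can ever produce.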

    To put the above into focus, it’s possible to use a trained model to implement lossy text compression. You ship a model of a boatload of text, prose, and poetry, ahead of time. Then you can send compressed payloads as a prompt. The receiver uses the prompt to “decompress” your message by running it through the model, and they get a facsimile of what you wrote. It won’t be a 1:1 copy, but the gist will be in there. It works even better if it’s trained on the sender’s written work.
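    A minimal sketch of that scheme, using the same kind of toy bigram model in place of a real LLM (the corpus and messages here are invented for illustration): both sides share a model trained on the sender’s past writing, the sender transmits only a short prompt, and the receiver lets the model continue it. The reconstruction has the right gist but the wrong specifics, which is exactly the lossy behavior described above:

    ```python
    from collections import Counter, defaultdict

    # Shared ahead of time: a model trained on the sender's past writing.
    # (Hypothetical corpus; a real system would ship an actual LLM.)
    shared_corpus = ("we shipped the release on friday and "
                     "we shipped the hotfix on monday").split()

    model = defaultdict(Counter)
    for prev, nxt in zip(shared_corpus, shared_corpus[1:]):
        model[prev][nxt] += 1

    def decompress(prompt, length):
        # Receiver side: extend the prompt with the most likely next word
        # until the message reaches the agreed length.
        words = prompt.split()
        while len(words) < length and words[-1] in model:
            words.append(model[words[-1]].most_common(1)[0][0])
        return " ".join(words)

    original = "we shipped the hotfix on monday"  # 6 words
    payload = "we"                                # 1-word "compressed" form
    print(decompress(payload, 6))
    # -> "we shipped the release on friday": the gist survives
    #    (we shipped something), the details don't. Lossy.
    ```

    The better the model fits the sender’s writing, the shorter the payload can be for the same fidelity, which is why it works best when trained on the sender’s own work.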

    The hype surrounding AI is both a tool for securing investment and a product of the staggeringly huge levels of investment that hype generated. I think it’s all caught up in a self-sustaining hype cycle now that will eventually run out of energy. We may as well be talking about Stanley Cups or limited edition Crocs… the actual product doesn’t even matter at this point.

    The resource impact brought on by record investment is nothing short of tragic. Considering the steep competition in the AI space, I wager we have somewhere between 3-8x as much AI-capable hardware deployed as we could ever possibly use at the current level of demand. While I’m sure everyone is projecting for future use, and “building a market” (see hype above), I think the flaws and limitations in the tech will temper those numbers substantially. As much as I’d love some second-hand AI datacenter tech after this all pops, something tells me that’s not going to be possible.

    Meanwhile, the resource drain on adjacent tech markets has punched down even harder on anyone that might compete, let alone just use their own hardware; I can’t help but feel that’s by design.