(Shot himself in the shin, so it wasn’t a suicide attempt)
Might have been, sounds like he’s a shin-for-brains.
This is a privacy intrusion that should be banned nationally.
And some subreddits have fascist mods who arbitrarily ban anyone who's not alt-right or worse.
Interoperability is a big job, but the extent to which it matters varies widely according to the use case. There are layers of standards atop other standards, some new, some nearing deprecation. There are some extremely large and complex datasets that need a shit-ton of metadata to decipher or even extract. Some more modern dataset standards have that metadata baked into the file, but even then there are corner cases. And the standards for zero-trust security enclaves, discoverability, non-repudiation, attribution, multidimensional queries, notification and alerting, and pub/sub are all relatively new, so we occasionally encounter operational situations that the standards authors didn't anticipate.
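To make the "metadata baked into the file" part concrete, here's a minimal sketch using one self-describing format; NetCDF and the netCDF4 Python library are just an example, and the file and variable names are made up:

# Hypothetical sketch: a self-describing format (NetCDF here) carries its
# own metadata inside the file. File name and variables are invented.
from netCDF4 import Dataset

with Dataset("ocean_temps.nc", "r") as ds:
    # Global attributes: provenance, conventions, units, and so on.
    for name in ds.ncattrs():
        print(f"{name}: {ds.getncattr(name)}")

    # Per-variable metadata travels with the data it describes.
    for var_name, var in ds.variables.items():
        print(var_name, var.dimensions, getattr(var, "units", "<no units>"))

Even with formats like this, you still hit corner cases where the attributes are missing, mislabeled, or follow a convention the reader doesn't know about.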
TripAdvisor has better content. Too many Google reviews give a business 1 star because the review author was too stupid to check working hours, or has some incredibly rare digestive condition that they didn’t bother to communicate to the eatery before ordering. Or they expect their Basque waiter to speak fluent Latvian, or to accommodate a walk-in party of 20.
Isn't Yelp a pretty easily replaceable thing?
Yelp is at this stage a completely worthless thing. All they ever were was an aggregator of semi-literate reviews and a shakedown racket against businesses that pissed off some Karen.
Yeah, just like the thousands or millions of failed IT projects. AI is just a new weapon you can use to shoot yourself in the foot.
is all but guaranteed to be possible
It’s more correct to say it “is not provably impossible.”
Someone, somewhere along the line, almost certainly coded rate(2025) = 2*rate(2024). And someone approved that going into production.
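Something along these lines, presumably. A purely hypothetical sketch of what that hard-coding could look like; the function name, the 2024 base rate, and the doubling rule are all invented for illustration:

# Hypothetical sketch of the kind of hard-coded assumption being described.
def rate(year: int) -> float:
    base_rate_2024 = 0.042          # made-up 2024 rate
    if year == 2025:
        return 2 * base_rate_2024   # rate(2025) = 2 * rate(2024), baked in
    return base_rate_2024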
If they aren’t liable for what their product does, who is?
The users who claim it’s fit for the purpose they are using it for. Now if the manufacturers themselves are making dodgy claims, that should stick to them too.
If a self-driving car kills someone, the programming of the car is at least partially to blame
No, it is not. It is the use to which the system has been put that is the point at which blame can be assigned. That is what should be verified and validated. That’s where some person is signing on the dotted line that the system is fit for use for that particular purpose.
I can write a simplistic algorithm to guide a toy drone autonomously. So let's say I GPL it. If an airplane manufacturer then drops that code into an airliner and fails to test it correctly in scenarios resembling real-life use of that plane, they're the ones who fucked up, not me.
No liability should apply while coding. When that code is deployed for use, there should be liability if it is unfit for its intended use. If your AI falsely denies my insurance claim, your ass should be on the line.
Yeah, all these systems do is worsen the already bad signal/noise ratio in online discourse.
Unless there is a huge disclaimer before every interaction saying “THIS SYSTEM OUTPUTS BOLLOCKS!” then it’s not good enough. And any commercial enterprise that represents any AI-generated customer interaction as factual or correct should be held legally accountable for making that claim.
There are probably already cases where AI is being used for life-and-limb decisions, probably with a do-nothing human rubber stamp in the loop to give plausible deniability. People will be maimed and killed by these decisions.
They are a product of a lack of control over the statistical output.
OK, so describe how you control that output so that hallucinations don’t occur. Does the anti-hallucination training set exceed the size of the original LLM’s training set? How is it validated? If it’s validated by human feedback, then how much of that validation feedback is required, and how do you know that the feedback is not being used to subvert the model rather than to train it?
It’s a problem, but not a bug any more than the result of a car hitting a tree at high speed is a bug.
You’re attempting to redefine “bug.”
Software bugs are faults, flaws, or errors in computer software that result in unexpected or unanticipated outcomes. They may appear in various ways, including undesired behavior, system crashes or freezes, or erroneous and insufficient output.
From a software testing point of view, a correctly coded realization of an erroneous algorithm is a defect (a bug). It fails validation (a test for fitness for use) rather than verification (a test that the code correctly implements the erroneous algorithm).
This kind of issue arises not only with LLMs, but with any software that includes some kind of model within it. The provably correct realization of a crap model is still crap.
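A toy illustration of that verification/validation split; the "spec" and numbers here are made up, not from any real system:

# Hypothetical spec handed to the programmer: convert Celsius to Fahrenheit
# with the rough rule F = C * 2 + 30, written down as if it were exact.
def c_to_f(celsius: float) -> float:
    return celsius * 2 + 30  # faithfully implements the spec as given

# Verification: does the code do what the spec says? Yes.
assert c_to_f(100) == 230  # matches the spec's 100 * 2 + 30

# Validation: is the output fit for use? No. The correct formula is
# F = C * 9/5 + 32, so boiling water is 212 F, not 230 F.
assert abs(c_to_f(100) - 212) > 10  # provably correct code, crap model

The code passes verification against the algorithm it was given, but the algorithm itself is wrong, so the system still fails validation.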
I’m no AI fanboy, but what you just described was the feedback cycle during training.
Semi-randomly
A more correct term is constrained randomness. You’re still looking at probability distribution functions, but they’re more complex than just a throw of the dice.
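For a concrete sense of "constrained randomness", here's a minimal sketch; the candidate tokens and weights are invented, but the point is that the draw is still random while the distribution it comes from is shaped rather than uniform like a die:

import random

# Invented next-token candidates and model-assigned probabilities.
candidates = ["cat", "dog", "carburetor", "the"]
weights    = [0.55, 0.30, 0.01, 0.14]   # heavily skewed, unlike a fair die

# A die throw would be random.choice(candidates): every option equally likely.
# Constrained randomness samples from the skewed distribution instead.
picks = random.choices(candidates, weights=weights, k=10)
print(picks)  # mostly "cat" and "dog"; "carburetor" is rare but possible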
My dad’s '57 Cadillac Fleetwood had that too. Big-ass electric eye on the dashboard. So it’s not exactly bleeding-edge technology.
An ancestor of mine wrote a memoir of growing up in an Old West mining town. He saw one gunfight. In the early morning, a man saw the front door of his house open and another man walk out. Not happy to find that another gentleman’s bacon had been in his grill, he demanded satisfaction. That led to an impromptu duel which the offended husband won. My ancestor was walking to school when it all went down.
That was probably an exceptional situation, since the town in question was notoriously violent and corrupt.