It wasn’t wrong. All mushrooms are edible at least once.
Amanitas won’t kill you. You’d be terribly sick if you didn’t prepare them properly, though.
Do you have a source on them not being able to kill you? Everything I’m finding on them suggests they can, even if it isn’t too common.
I tell people who work under me to scrutinize it like it’s a Google search result chosen for them using the old I’m Feeling Lucky button.
Just yesterday I was having trouble enrolling a new agent in my ELK stack. It wanted me to obliterate a config and replace it with something else. Literally would have broken everything.
It’s like copying and pasting stack overflow into prod.
AI is useful. It is not trustworthy.
Sounds more actively harmful than useful to me.
I think of it like talking to some random know-it-all that saddles up next to you at the bar. Yeah, they may have interesting stories but are you really going to take legal advice from them?
I know nothing about stacking elk, though I’m sure it’s easier if you sedate them first. But yeah, common sense and a healthy dose of skepticism seems like the way to go!
Yeah, you just have to practice a little skepticism.
I don’t know what its actual error rate is, but if we say hypothetically that it gives bad info 5% of the time: you wouldn’t want a calculator or an encyclopedia that was wrong that often, but you would really value an advisor that pointed you toward the right info 95% of the time.
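The arithmetic behind that calculator-vs-advisor comparison can be sketched in a few lines. Note the 5% figure is the commenter’s hypothetical, not a measured error rate:

```python
# Hypothetical error rate from the comment above -- not a measured figure.
ERROR_RATE = 0.05
QUERIES = 1000

# A "calculator" or "encyclopedia" is trusted blindly, so every error
# silently becomes a wrong result you act on.
silently_wrong = int(QUERIES * ERROR_RATE)

# An "advisor" only points you at sources you then verify yourself,
# so errors cost a wasted lookup instead of a wrong answer.
wasted_lookups = int(QUERIES * ERROR_RATE)
good_leads = int(QUERIES * (1 - ERROR_RATE))

print(f"Trusted blindly: {silently_wrong} silently wrong answers per {QUERIES}")
print(f"Verified first:  {good_leads} good leads, {wasted_lookups} dead ends")
```

The whole difference is whether a verification step sits between the answer and the action; the error rate is identical in both columns.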
5% error rate is being very generous, and unlike a human, it won’t ever say “I’m not sure if that’s correct.”
Considering the insane amount of resources AI takes, and the fact that it’s probably ruining the research and writing skills of an entire generation, I’m not so sure it’s a good thing, to say nothing of the implications it has for mass surveillance and deepfakes.

I mean, he asked if it can be eaten not what the effects of eating it are. 😅

Exactly, because a person can eat batteries and rocks if they so choose, which means those things are edible.
Secondly, the Amanita muscaria pictured has been repeatedly consumed throughout history (most notably by Viking raiders who were looking to get their raping and plundering done in “berserk” mode, with only fuzzy memories afterwards of whether all that shit really happened or not).
I think the more precise question would be “is this object ‘comestible’, ‘digestible’, or able to be survived if eaten?”
Which is why it should only be used for art.
I don’t believe the billionaire dream of robot slaves is going to work nearly as soon as they’re promising each other. They want us to buy into the dream without telling us that we’d never be able to afford a personal robot, they aren’t for us. They don’t want us to have them. The poors are slaves, they don’t get slaves. It’s all lies, we’re not part of the future we’re building for them.
should only be used for art
No, churning out uncanny valley slop built on mass IP theft ain’t it, either. Personally I think AI is best used for simulations and statistical models of engineering problems, where it can iteratively find optimized solutions faster and sometimes more accurately than humans. The focus on “generative AI” and LLMs trying to get computers to act like humans is incredibly pointless, IMO. Let computers do what computers are good at, and humans do what humans are good at.
It shouldn’t be used for art, hire an artist or learn to make it yourself.
You have to slice and fry it on low heat (so that the psychedelics survive)… Of course you should check that the gills don’t go all the way to the stem, and make sure the spore print (leave the cap on some black paper overnight) comes out white.
Also, have a few slices, then wait an hour; have a few more slices, then wait another hour.
The lesson is that humans should always be held responsible for important decision making, and should not rely solely on ML models as primary sources. Eating potentially dangerous mushrooms is a decision you should only make if you’re absolutely sure it won’t hurt you. So if, instead of taking the time to learn mycology, attend mushroom foraging group events, and read identification books, you choose to upload a picture to ChatGPT and ask if it’s edible, well, that’s on you.
That’s not one of the published use-cases for the AI you’re parodying.
If you don’t read the manual and follow the instructions, you don’t get to complain that the app misbehaved. Ciao~
While I understand what you’re trying to say, it should be on the owner of the software to ensure that the AI won’t confidently answer questions it isn’t qualified to answer, not on the end user to review the documentation and see if every question they want to ask is one they can trust the AI on.
I disagree for similar reasons.
There’s no good case for “I asked a CHAT BOT if I could eat a poisonous mushroom and it said yes” because you could have asked a mycologist or toxicologist. The user is putting themself at risk. It’s not up to the software to tell them how to not kill themselves.
If the user is too stupid to know how to use AI, it’s not the AI’s fault when something goes wrong.
Read the docs. Learn them. Grow from them. And don’t eat anything you found growing out of a stump.
If someone calls up Bank of America’s customer service and asks if they should eat a mushroom they found in their back yard, and the rep confidently tells them “yes”, do you think the response should be “Well, it’s not the rep’s fault you listened to their advice, you should have known that Bank of America isn’t a good source for mycology information”, or “That rep should have said ‘I don’t know, ask someone qualified’”?
I’d argue that it’s at least 50% on the person who gave the advice.
The customer is pushing the responsibility of protecting their own health onto someone else. It’s not that other person’s responsibility, it’s yours. You don’t get to say “but I asked Timmy the 8th grader and he said yes” or “I asked an AI chatbot and it said yes” and then be free of responsibility. Protecting yourself is always your responsibility. If you get a consequence, it’s because of what YOU did, not because of Timmy or ChatGPT. ciao ~
Also, there are too many cases to cover, so it’s impossible. Sure, I could do it for mushrooms, but then I’d have to do it for a hundred other things.
what do you mean?
That someone who develops an AI can block it from answering some types of questions, but there are too many things a user can ask; it’s unrealistic to be able to cover everything.