When I tried it in the past, I kinda didn't take it seriously because everything was confined to its instance, but now there's full-featured global search and proper federation everywhere? Wow, I thought I heard there were some technical obstacles making it very unlikely, but now it's just there and works great! I asked ChatGPT and it says this feature was added 5 years ago! Really? I'm not sure how I didn't notice this sooner. Was it really there for so long? With flairs showing the original instance a video comes from and everything?
Why do people bring this up every fucking time?
“I used chatgpt”
Because they know it’s not accurate and explicitly mention it so you know where this information comes from.
Then why post it at all?
It makes idiots whine
Because they'd still like to know? It's generally expected that you do some research on your own before asking other people, and inform them of what you've already tried.
Asking ChatGPT isn’t research.
ChatGPT is a moderately useful tertiary source. Quoting Wikipedia isn’t research, but using Wikipedia to find primary sources and reading those is a good faith effort. Likewise, asking ChatGPT in and of itself isn’t research, but it can be a valid research aid if you use it to find relevant primary sources.
At least some editor will usually make sure Wikipedia is correct. There’s nobody ensuring chatGPT is correct.
AI seems to think it’s always right but in reality it is seldom correct.
Sounds like every human it’s been trained on
No, it sounds like a mindless statistics machine because that’s what it is. Even stupid people have reasons for saying and doing things.
Yes, stupid people’s reason is because Trump said so, so it must be true
If those people are inaccurately spouting ‘facts’ from some article they can barely remember, yeah that’s pretty much exactly the same output.
Buddy, it’s nap time. Catch you in a couple hours when you’re feeling better.
A nap does sound good.
Why post anything? Because they wanted to, the same way you posted something that you felt was worth adding. For me it wasn’t adding anything. Nonetheless I answer you. Because I wanted to.
People also say they googled, unfortunately
Not the same thing.
Google allows for the possibility that the user was able to think critically about the sources a search returned.
ChatGPT is your drunk uncle confidently stating a thing he heard third-hand from Janet in accounting, and you taking him at his word.
Google results are like:
Is peertube compatible with the fediverse?
ADVERT
Introduction: A lot of people wonder if peertube works with other peertube instances…
ADVERT
What is peertube? Peertube was set up in 1989 by john Peer…
Pop-up: do you like our publication? Give us your email address.
ADVERT
Why you might want to set up peertube: peertube is a decentralised way…
ADVERT
Please support us! From £30 a month you can help us to write more.
What is the fediverse? The fediverse is a technology…
ADVERT
Articles you may also like:
ADVERT
So can peertube instances talk to each other?
ADVERT
the answer is yes.
ADVERT
In conclusion, peertube is very…
Comments (169)
John Smith wrote at 12:28 on Friday
At this point, an ad blocker is pretty much mandatory for me, just like antivirus software used to be a decade ago (probably more)
PLEASE DISABLE YOUR AD BLOCKER! We use the revenue from annoying you to feed our starving CEO!
Can’t wait for them to be starved. Does installing two adblockers speed up the process?
They are? This is why you use a pop up blocker…
I love that fair and balanced opinion. I hope he is running for an office I can vote for. /s
but at least your drunk uncle won’t boil the oceans in the process too
How dare you, my drunk uncle is completely capable of boiling the oceans! He was even boasting about it at our last family dinner!
No one boils the ocean by using ChatGPT.
One transatlantic flight produces the same amount of CO2 as 600,000 ChatGPT requests; if you use Qwen 2.5, you need to make nearly 2 million requests.
To put this in perspective, the transport alone for Bezos's wedding in Venice equals about 54,000,000 ChatGPT requests. (Rough arithmetic on these figures is sketched below.)
Using an LLM once in a while is negligible.
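For anyone who wants to sanity-check those ratios, here is a quick back-of-envelope sketch in Python. It takes the figures in the comment at face value; the ~1,000 kg of CO2 per transatlantic passenger is my own assumption (a commonly cited ballpark), not something stated above.

```python
# Back-of-envelope check of the ratios quoted above.
# Assumption (not from the comment): ~1,000 kg CO2 per passenger
# for a one-way transatlantic flight.
FLIGHT_CO2_G = 1_000 * 1_000          # grams of CO2 per passenger flight

CHATGPT_REQS_PER_FLIGHT = 600_000     # figure quoted above for ChatGPT
QWEN_REQS_PER_FLIGHT = 2_000_000      # figure quoted above for Qwen 2.5
WEDDING_IN_CHATGPT_REQS = 54_000_000  # figure quoted above for the wedding transport

chatgpt_g_per_req = FLIGHT_CO2_G / CHATGPT_REQS_PER_FLIGHT                # ~1.7 g/request
qwen_g_per_req = FLIGHT_CO2_G / QWEN_REQS_PER_FLIGHT                      # ~0.5 g/request
wedding_tonnes = chatgpt_g_per_req * WEDDING_IN_CHATGPT_REQS / 1_000_000  # ~90 t

print(f"Implied ChatGPT footprint: ~{chatgpt_g_per_req:.1f} g CO2 per request")
print(f"Implied Qwen 2.5 footprint: ~{qwen_g_per_req:.1f} g CO2 per request")
print(f"Implied wedding-transport total: ~{wedding_tonnes:.0f} tonnes CO2")
```

In other words, even with those numbers, a handful of requests is rounding error next to a single flight; the comparison only bites at massive scale.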
You’re giving people using google too much credit.
People before ChatGPT thought critically of things on Google as much as they do ChatGPT today.
People before facebook thought critically of what they saw on the news as much as they do facebook today.
Sure, people didn't think about things too much at any point in time, and sources aren't always perfectly reliable, but some sources are worse than others.
Unfortunately now Google is ChatGPT. It provides its own shitty AI answers, and its search results have been corrupted by an ocean of slop.
I assumed it was being used in the current common sense of doing a web search, like how Kleenex is used for any facial tissue, not literally Google the search engine.
Speaking of literal, Google is putting Gemini results before search results, not using chatGPT.
Also you: “why do people bother to mention when information comes from ChatGPT”
AIs provide you with links so you can use your critical thinking.
Do you click on the links?
If they are links from the search, isn’t that just the same thing as doing a regular search and verifying the results?
What does this extra layer add other than an unreliable middleman who is extremely inefficient?
But don’t you see? It allows the corporations to insert their opinion into the answer and bias you before you click that link. That’s better right?
You are correct. AI can give a completely different answer than its source, and they can just blame it on the AI. This is true, but Google also sways the results it gives depending on the individual. Obama talks about this and how it contributes to the extreme divide between people in the US.
It steals content from creators while being worse for the environment at the same time. Not the same thing, it is worse.
I worked in education, teaching computer science and basic computer usage to nearly every age group. When you realize how bad people are at using search engines, you can see why people think they've accomplished something using AI. It's like giving a child a calculator and saying he can do math now.
Creating search prompts is itself a skill. You wouldn't think so until you try to teach someone logic through search prompts. It is hell, literally my hell. Some people just don't get it, like 0 percent.
Differentiating what is a good source and what is a bad source is an even harder skill. People will believe what they want to believe. Google search adapts to the bias of individuals because it keeps people searching. This is why, even though it isn’t perfect, engines like duckduckgo are important.
How would you phrase this differently?
“It looks like this feature was added 5 years ago.”
If asking for confirmation, just ask for confirmation.
So, your solution is for the user to provide less information and then, only if asked, tell people whether they used ChatGPT?
It just seems like much less back-and-forth is needed if they say up front that they used ChatGPT.
Additionally, if they don't say it and no one asks, people might later go looking for a source; at least this way there is a warning that it might be misinformation.
I know what you're going to say next: they should research the thing themselves, independently of ChatGPT. But honestly, they probably don't care or don't have the time to look up release notes from the past few years.
Why would anyone ask where they got the info if it is accurate?
The point is that it might not be accurate. It's like saying, "a friend told me…"
It lets the reader know that the information being shared was presented as truthful, but wasn’t verified by the commenter themselves.
Apparently the feature was added 5 years ago.
My partner describes her bowel movements to me when she returns from her daily ablutions.
Honest answer? It’s easy and it won’t judge you for asking stupid questions.
Edit - people are replying as if I said I do this. I'm sorry for the confusion. I don't. This is just why I think other people do it. When it comes to the general population, most people don't care, they just want easy.
Search engines and Wikipedia don’t judge you for asking stupid questions either.
You're right, but they take actual thought and effort. People who use ChatGPT don't wanna do that.
Almost all content has been hyper-optimized to rank well on Google, not to provide good answers for humans
No it’ll just hallucinate shit that’ll make you look dumb when you go and state it as fact.
Yep, agree. That’s why I don’t personally use it.
Because people are dumber than chatgpt.
It also proves we don’t have a 50/50 split in intelligence. We need to look at the mean, then we’ll see most people are just plain fucking dumb
Also, lazier. I'm more likely to stick with information from the first 1-3 search results I decide to click, while AI will parse and summarize dozens in a fraction of the time I spend reading just one.
This is the golden age of misinformation and you are bitching about citations?
I think it’s because it causes all of Lemmy to have a collective ragegasm. It’s kind of funny in a trollish way. I support OP in this endeavour.
What do you mean? It's like being angry that people bring up that they googled something.
Googling, at least until fairly recently, meant "I consulted an index of the Internet." It is a means to get to the bit of information.
Asking ChatGPT is like asking a well-behaved parrot in the library and believing every word it says instead of reading the actual book the librarian would point you towards.
Well now it’s as if half of the books in the library are written by the parrot. The librarian doesn’t know the difference, and keeps trying to make you speak with the parrot anyway.
I use it instead of search most of the time nowadays. Why? Because it does proceed to google it for me, parse the search results, read the pages behind those links, summarize everything from there, present it to me in a short condensed form, and also provide the links where it got the info from. This feature has been here for a while.
And it still gets shit wrong.
It's all good, Lemmy users are strongly anti-AI and are genuinely learning right now that ChatGPT, Mistral, Perplexity, etc. can search the web.
Let’s just keep adding more and more layers like a game of telephone!
What do you mean?
Go ask chatGPT
I don't use ChatGPT. I use LM Studio, which runs local LLMs (AI you can run locally on your PC; I have solar and a solar battery, so there are no CO2 emissions from my queries, and I primarily use it for coding questions and practice, translations from Russian/Ukrainian/French, practising French, etc.). Then I use Mistral AI second (French-based), then Perplexity third (American). A rough sketch of querying a local model from Python is below.
I also use Ecosia.org for searches.
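If anyone is curious what the local setup looks like in code, here is a minimal sketch of querying a model loaded in LM Studio through its OpenAI-compatible local server. The port is LM Studio's default and the model name is a placeholder; swap in whatever you actually have loaded.

```python
# Minimal sketch: talk to a model loaded in LM Studio via its
# OpenAI-compatible local server (default address: http://localhost:1234/v1).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",  # the local server does not check the key
)

response = client.chat.completions.create(
    model="mistralai/mistral-small-3.2",  # placeholder; match your loaded model
    messages=[
        {"role": "user", "content": "Translate 'bonne nuit' into Ukrainian."},
    ],
)
print(response.choices[0].message.content)
```

The same snippet works against any other OpenAI-compatible endpoint (Ollama included) by changing `base_url`.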
I asked mistralai/mistral-small-3.2 to elaborate on what you said. Is this what you meant?
The phrase “Let’s just keep adding more and more layers like a game of telephone!” is a metaphorical way of expressing skepticism or concern about the accuracy and reliability of information as it gets passed through multiple layers of interpretation, especially when involving AI systems.
Here’s what it likely means in this context:
Game of Telephone Analogy: In the classic “game of telephone” (or “Chinese whispers”), a message is whispered from one person to another in a line, and by the time it reaches the end, the original message is often distorted or completely changed due to mishearing, misinterpretation, or intentional alteration. The user is suggesting that relying on AI systems to search, summarize, or interpret web content might introduce similar layers of potential inaccuracies or biases.
Layers of Interpretation: The "layers" could refer to the steps involved in using an AI system to access and summarize information: searching, fetching the pages behind the results, summarizing them, and presenting the condensed answer.
Concerns About Accuracy: The user might be implying that each additional “layer” (especially when involving AI) could introduce errors, biases, or misinterpretations, much like how a message gets distorted in the game of telephone.
Hostility Toward AI: Given the context you provided (Lemmy users being “strongly anti-AI”), this comment likely reflects a broader distrust of AI’s ability to accurately and reliably convey information without introducing new problems.
In essence, the user is cautioning against blindly trusting AI systems to handle information retrieval and summarization, suggesting that doing so could lead to a breakdown in accuracy or meaning, similar to how a message degrades in a game of telephone.
We aren't anti-AI. We just ain't lemmings.
I use AI as inspiration. That's all it is. A fancy fucking writing prompt.
You use AI for writing prompts? That’s pretty cool, a lot of people use AI for writing prompts, a lot of writers say it’s great for getting rid of writers block
google: I checked the listing of news sites to find information about a world event directly from professionals who double check their sources
chatGPT: I asked my hairstylist their uninformed opinion on a world event based on overheard conversations
I mean a moron could find the wrong information from google and your hairstylist could get lucky and be right, but odds are one source provides the opportunity for reliable results and the other is random and has a massive shit ton of downsides.
What if your hairstylist is on the Fediverse, avoids mainstream social media, and spends a lot of their spare time reading scientific papers?
Lots of legitimate concerns and issues with AI, but if you’re going to criticize someone saying they used it you should at least understand how it works so your criticism is applicable.
It is useful. ChatGPT performs web searches, then summarizes the results in a way customized to what you asked it. It skips the step where you have to sift through a bunch of results and determine "is this what I was looking for?" and "how does this apply to my specific context?"
Of course it can and does still get things wrong. It’s crazy to market it as a new electronic god. But it’s not random, and it’s right the majority of the time.
Right: it skips the part where human intelligence and critical thinking is applied. Do you not understand how that’s a fucking problem‽
Could you try to understand what I’m saying instead of jumping down my throat?
If I want to turn off a certain type of notification in a program I’m using, I don’t need to sift through three forum threads to learn how to do that. I’m fine taking the AI route and don’t think I’ve lost my humanity.
It might be wrong more often than you think
https://futurism.com/study-ai-search-wrong
Besides the other commenter highlighting the specific nature of the linked study, I will say I’m generally doing technical queries where if the answer is wrong, it’s apparent because the AI suggestion doesn’t work. Think “how do I change this setting” or “what’s wrong with the syntax in this line of code”. If I try the AI’s advice and it doesn’t work, then I ask again or try something else.
I would be more concerned about subjects where I don’t have any domain knowledge whatsoever, and not working on a specific application of knowledge, because then it could be a long while before I realize the response was wrong.
IS wrong
Ftfy
In that study they asked it to reproduce the headline, publisher, and date 1:1. So, for example, if the AI rephrased a headline as something synonymous, it would be counted as at least partially incorrect. Summarization doesn't require accurate citation, so that needs a separate study.
OK, but google (or ask your AI?) about AI accuracy. This isn't the only source saying there's a problem with the answers.
LLMs have been able to search the web for a few years now.
The main one outside of ChatGPT is https://www.perplexity.ai/
You can also look at hosting your own version with local LLMs (a minimal sketch of the idea follows the link below):
https://olares.medium.com/building-a-local-perplexity-alternative-with-perplexica-ollama-and-searxng-71602523e256
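To make that concrete, here is a minimal sketch of the same idea in Python: pull results from a local SearXNG instance and have a model served by Ollama summarize them with the source links attached. The ports are the usual defaults, the model name is a placeholder, and SearXNG's JSON output has to be enabled in its settings; Perplexica wires this together with far more polish.

```python
# Minimal "local answer engine" sketch: SearXNG for search, Ollama for the LLM.
# Assumes SearXNG on :8080 with the JSON format enabled and Ollama on :11434
# with a model already pulled (the model name below is a placeholder).
import requests

def search(query: str, limit: int = 5) -> list[dict]:
    """Fetch the top web results from a local SearXNG instance."""
    resp = requests.get(
        "http://localhost:8080/search",
        params={"q": query, "format": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["results"][:limit]

def answer(query: str) -> str:
    """Have a local model summarize the results, citing the URLs it used."""
    results = search(query)
    context = "\n".join(
        f"- {r['title']} ({r['url']}): {r.get('content', '')}" for r in results
    )
    prompt = (
        "Using only these search results, answer the question and cite the URLs you used.\n"
        f"Question: {query}\n\nResults:\n{context}"
    )
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3.1", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(answer("Is PeerTube compatible with the fediverse?"))
```

Whether you trust the summary is still on you, but the links are right there to check.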
Looking up a list of resources that you then evaluate yourself is very categorically different from getting an “answer” from a bot.