I’ll save you a click. That article asks three basic questions: is it dangerous, how to tell something is AI, and is it bad for the environment.
They get only non-answers. Thanks, BBC.
The answers are more like “yes”, “check the source and intent”, “yes”
The second one is not really a way to check if it’s AI, only if it may be deceiving you, and the third one’s conclusion is not “yes” but “use responsibly”, like it’s in the power of the common person to even choose to use AI and like corporations aren’t the ones pushing it with no regard to impact anyway.
The problem is those three questions are very vague and would need complex answers, and maybe the guy could have given these, but in any case they’re not in the article.
My AI questions:
Can AI fuck off?
Can the bubble pop?
Can we make all AI models free and open source since it’s entirely trained on stolen content?
I came here to state that 90% of answers to any AI questions people may have would be a simple “no”. Seems I lowballed it.
I’d love to say “yes”, but my uneducated guess would be that’s not happening before 2027.