9 Comments
ethX

I educate my students really well on how to use AI, but in almost every piece, I notice things over and over again that are so easy to debunk. It's embarrassing.

My GloB

How about just going back to the books as you've had to do to ensure your article is accurate? It seems unavoidable to me.

Translation from ancient languages, and even from modern ones, introduces a very high level of potential interpretive difference among critics and commentators. Subjectivism runs rife.

Also, the context is mostly irreproducible: much of what 'really' existed or happened at the time is missing, which adds yet another level of complexity.

To prove something philosophically is hard enough using the actual sources, let alone trusting the artificially compounded and compartmentalised intelligence of the machine. Greater gaps are unavoidable and, as with all errors, the magnitude of the error itself is beyond gauging, which necessarily risks multiplying the number of issues exponentially.

Thanks for this! It's refreshing and much needed.

Trying to be objective, I would ask: 'Can we call AI results lies when we're the ones building and trusting them?'

Robin Turner

With regard to hallucinations/bullshit, the current state of AI reminds me of the early days of Wikipedia, when anyone could write anything and there weren't enough knowledgeable people to check everything. That's how I started writing for Wikipedia: I was horrified by one of the articles. Now it's far from perfect, but it's a pretty reliable source of information on most topics.

I hope that AI will evolve so that the correction mechanisms win out more often than not. Part of that is inherent to AI training, but there are also proposals for personal AI assistants (aka "intimate AI" or "guardian AI") that provide a kind of buffer between you and the larger community of AIs. (See Sundae Lab for such a proposal.)

Matt Bianca

I think there is too much stress and anger around AI, especially in the US. Honestly, who cares whether an article or whatever is written by a human or a robot? The important thing is the content.

Robin Turner

But the point here is that the content was false.

Mel Pine

While you are absolutely right, Doug, in general, and of course in the specific instance you cited, I want to speak up for the responsible use of AI. My guidelines:

1) Don't rely on AI unless you have a good basic understanding of the subject and honesty about what you don't know.

2) Try different AI services, decide on one, and be willing to pay for it. (I pay $20 a month for Perplexity Pro, which cites its sources.)

3) Frame your question without a clear bias.

4) Check the sources carefully. If the source of a quotation is a publication for a general audience (like an article on nifty Buddha quotes about love), disregard it.

Although I have occasionally encountered outright laziness and deception, as I might with a human researcher, most of the errors are in putting quotation marks around an otherwise reasonable summation. That may sound like a lot of work, but it's far cheaper than a good researcher and saves time.

meika loofs samorzewski

It's not just plagiarism or fraud or copyright infringement; it's a whole new level of hell. https://whyweshould.substack.com/p/machine-learning-maps-social-learning/

james

https://www.channelnewsasia.com/east-asia/china-schools-ai-courses-4990306

A generation or two of this should remove the populace's capacity for independent thought. Custom propaganda ... the dream of autocrats, the promise of the dogmatists.
