Discussion about this post

ethX:

I educate my students really well on how to use AI, but in almost every piece I notice claims, over and over, that are so easy to debunk. It's embarrassing.

My GloB:

How about just going back to the books, as you've had to do to ensure your article is accurate? That seems unavoidable to me.

Translation from ancient languages, and even from modern ones, introduces a very high degree of interpretative divergence among critics and commentators. Subjectivism runs rife.

Also, context adds yet another layer of complexity: the situations are mostly irreproducible, and much of what 'really' existed or happened at the time is simply missing.

Proving something philosophically is hard enough using the actual sources, let alone trusting the artificially compounded and compartmentalised intelligence of a machine. Greater gaps are unavoidable, and, as with all errors, the magnitude of each error is itself beyond gauging, which in turn can multiply the problems exponentially.

Thanks for this! It's refreshing and much needed.

Trying to be objective, I would ask: 'Can we call AI results lies when we're the ones building and trusting them?'

