6 Comments
Kfix

Good work on the pros and cons of this technology, and a co-sign on the decline of ironing being well overdue...

Ian Milliss

Yes, I encountered this problem early on and it drove me crazy. My trick was to shift the draft to a different LLM and tell it I just want some slight adjustments while keeping most of the existing text intact. This usually, but not always, works. It also means I have learned to download each version until I'm finished, then discard them.

For me it raises the issue that AI, like so many previous technologies, may reduce labour but often just morphs it into a new form. There is a real skill set involved in using AI productively.

Andrew Wilson

But this just confirms the view that AI increases workloads and does nothing to improve productivity.

John Quiggin

It's consistent with there being a net saving in effort. But the jury is still out, which is why I'm experimenting.

James Wimberley

Slightly but not entirely OT comment. Will Lockett has published an attractive Occam's-razor-compliant theory that AI billionaires are addicted to their own product, and it is driving them nuts. https://substack.com/inbox/post/181281318

(He takes Elon Musk as his test case. This lands him with the difficulty that, as he admits, Musk was pretty far gone before AI chatbots came on to the scene in 2023, but it has got worse.)

Lockett adds that: "Chatbots are programmed to please the user and so inherently operate as 'yes men'." Is this correct? It fits social media engagement algorithms, for which the addiction analogy is close enough. But do LLMs flatter users? I can see how they might reinforce Dunning–Kruger self-deception, as everyone can now "do their own research", badly.

Sandy Behrens

On ironing. My one simple solution: don't. Bad for you, bad for your clothes, bad for the environment.

On AI - I have been struggling with this the last few days. I see the attraction on many different levels. However, as an RHD supervisor I get a unique slant on this. All six of my Masters students have used AI to some extent. Typically it results in the same poor-quality documents you hint at here. Many giveaways exist, but I'd say my top 3 are:

1) shopping lists (ChatGPT and the like LOVE to give you bullet points - even when this makes no sense)

2) hallucinations - painfully obvious when in-text citations have absolutely nothing to do with the original work

3) plastic writing - a bit like botox but for writing.

While this has been a source of great pain for me these last few weeks in particular, I have also found a source of hope. AI just isn't that good.