AI in mediation…and elsewhere
- Ed Johnson

- 8 hours ago
- 4 min read

I tell clients I don’t use AI for client information or documents. Why bother? They don’t seem to care, but I do.
So why the worry about AI in mediation? Can’t we use ChatGPT to generate documents and even records of events? If you don’t know, Zoom (like Teams before it) tries to encourage you to record every bloody meeting. And as you also know, if you’ve been reading these blog posts for the years I’ve written them, or have any interest in mediation at all, mediation is confidential, always. Except when it isn’t, by order of the court…but even then most of the time it is, and that’s a hill most mediators will, if not die on, certainly hold for as long as possible.
The point is that using AI even at a simple level to create, let’s say, an open financial summary, firstly, isn’t necessary: you can link information from a spreadsheet to a Word document without an AI bot using pre-existing tools (mail merge, for instance). Secondly, it means using AI, so you’re uploading to do-you-know-where(?), to a site that will manipulate, store and, whether you like it or not, learn from what is uploaded. The genius and danger of AI is that it learns, and before we go down the whole Terminator/WarGames risk route, we have to acknowledge that when we say to our clients “your information is confidential”, if what we mean is confidential “within limits”, we have to ensure clients understand those limits. And that we do too.
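To show what I mean by “pre-existing tools”, here is a minimal sketch of a mail-merge-style approach, assuming the figures sit in a CSV file exported from your spreadsheet and the summary is a plain-text template. The file name and field names are made up for illustration; the point is that everything runs on your own machine, with nothing uploaded anywhere.

```python
import csv
from string import Template

# Hypothetical open financial summary layout; the field names
# (name, assets, liabilities) are assumptions for illustration.
SUMMARY = Template(
    "Open financial summary for $name\n"
    "  Assets:      £$assets\n"
    "  Liabilities: £$liabilities\n"
)

def summaries(csv_path):
    """Merge each spreadsheet row into the template, entirely offline."""
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            yield SUMMARY.substitute(row)
```

Nothing clever, no AI, and crucially no client data leaves your computer.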
So whilst the easy option is to say we don’t use AI and never ask ChatGPT (or others) anything, ever, in the hope that by the time you reach retirement the issue won’t arise or the computers will have taken over so no one cares, the realistic option is to accept it is already happening. I’m posting this on Wix and will potentially circulate it via Buffer to various “socials” (I know I sound ancient), and I suspect (if I looked, I’d know for certain) that AI is being used in that process, so I cannot say my hands are entirely AI clean.
But you’ve also used ChatGPT for some of these posts, haven’t you? Guilty as charged. I’m generating a blog, or extending one from something I previously wrote, by tapping ChatGPT. My justification is that I haven’t and wouldn’t be paying someone else to do that job; I’ve simplified my own work, and, importantly, I’ve not recorded client information on an AI site.
Now there is a whole other area of where AI gets its information from (is it breaching copyright left, far-right and centre?), but I’m not putting anyone out of work by using it.
My concern with the increased, unexpected and unexplained use of AI is, as with your FB data and everything else you publish, that it goes into what is vaguely described as “the cloud”, where it sits forever. User agreements rarely give you control of copyright in what is created or stored on their systems. Look at a parent’s fight to get access to data from Meta, or ask why my artist friends and colleagues bemoan the amount of AI shArt churned out on websites: everything from roleplaying hobby sites to professional legal ones uses previously artist-created work to vomit out some ungodly bastardised imagery that ticks the right boxes but doesn’t engage with people.
So what should I be saying to clients? “As far as I am aware” I’m not using AI, or “if I use AI it’s only by accident…except when it’s not”? As with all things mediation, informed consent is key. But when mediators themselves (and hopefully it is clear from the above that I include myself in this) don’t understand, or even realise, where AI is being used, what hope have we got of explaining it to clients?
Panic ye not. Clients understand that we live in a fast-changing society. They probably have some vague idea that the media, including social media, is all biased and is retaining the details they post and interact with (I’m not going to be the only one who put “beds” into a search engine one hour and by the next was bombarded with cheap mattress offers), but it does open us up to risks.
So I will continue to store only the barest of information, deleting and shredding anything sent to me on a regular six-monthly basis, trusting always that when the web server says it does not store details once deleted, this is accurate.
Terribly good programme about this on Radio 4, by the way, called “Everything Fake”. The last episode is well worth a listen, to the end: do not give up halfway or you will miss the entire point of the episode. I confess I was driving, so I managed not to switch off 15 minutes in with the fear that it was already too late…but, sorry, no spoilers.