Over the years, email has become a part of everyday life, and the rise of AI has made communication easier than ever. ChatGPT, for example, can respond to and compose emails with great accuracy, simplifying daily interactions. As AI becomes more integrated into our communication, the question arises of how much responsibility we should assume when we send an email generated by AI.
I. Email communication is increasingly dependent on AI
AI is changing the way we reply to emails. The impact of LLMs is already obvious: people now ask ChatGPT or Llama2 to draft their replies, sharing potentially confidential information with those services in the process. Together with the impending launch of Microsoft Copilot, natively available in Outlook, this makes it easy to imagine an increasing reliance on AI to compose emails and improve productivity.
II. The trust trap: reduced critical analysis
Users are gaining confidence in AI technologies as they receive fast, and often better, generated answers (or, at least, more options to choose from). This growing confidence may lead to a decline in critical thinking: users may begin to accept AI-generated content at face value, without applying the same scrutiny they would to their own work.
This creates a trust trap: the risk that AI-generated content is relied upon blindly, without adequate verification. In certain situations, such as sensitive legal or commercial matters, failing to analyse AI-generated answers can lead to miscommunications and unintended results.
III. AI and email: the personal responsibility of the user
AI can help draft emails and reply to them, but it is the author who is responsible for what they say. AI is not a substitute for the author’s judgment or ethical concerns.
Let us consider a simple, classic case in which Sophie delegated a task to Paul. Sophie is responsible for reviewing the work before it’s sent, validated, or signed.
Let us now consider that Sophie trusts Paul because of all the quality work he has done over the years. He pays attention to every detail, is confident in his abilities, and displays a calm and polite demeanour.
Let’s get even crazier and consider that Sophie is so trusting of Paul (Paul has never made a serious mistake in the past year) that she only reviews his work with a minimal amount of scrutiny.
Sophie will still be held responsible for Paul’s preparation and work. If a mistake is made, Sophie may find the best management strategy to correct the issue, but it is she who is ultimately accountable.
Anyone who has delegated a task knows that the overarching responsibility still rests with them. Yet the cognitive biases that shape the trust they place in the person they delegate to will clearly apply, more and more, to AI-generated content as well.
IV. Balancing trust and verification in AI-assisted email communication
To achieve a good balance, it is paramount to benefit from AI-generated content while still exercising critical analysis. Users must avoid falling into the trust trap by remaining cautious and inquisitive, rather than relying blindly upon AI answers.
HR teams should be deeply involved with IT deployment teams in training employees who use AI tools to generate emails, repeatedly raising awareness of this creeping and subtle bias, which will only grow stronger as AI output improves and as organisations feed these tools increasingly confidential information.
Once it is safe and secure enough to share confidential corporate input with an AI to produce ad-hoc output, trust in the system will only increase, and critical analysis will mechanically decline, unless talent development leaders keep on their agenda the importance of regularly educating users, with updated materials, on the benefits and pitfalls of AI.
Personal responsibility is crucial to maintaining ethical standards as AI continues to evolve in email communication. AI-generated content can be a great source of inspiration and ideas, but it should be accompanied by a commitment to critical analysis and verification.