African Digital Business Magazine | Thursday, November 21

Building trust amidst suspicion: The impact of AI on business and society

By Wayne Toms, CEO of GhostDraft

Hardly a day passes without someone, somewhere, speculating about artificial intelligence (AI) and whether we, as businesses and society at large, can and should trust the technology. While much of the fearmongering is based on hearsay and outright speculation, there certainly are legitimate concerns. However, businesses and consumers have more than enough reason to believe in, and trust, AI's ability to add significant value to their processes and lives.

Customer communication management (CCM), infused with complementary AI, has shifted the goalposts for what businesses can achieve with personalised, accurate and compliant customer communication at scale.

On the other hand, screaming headlines work people into a state. Rumours have surfaced that Sam Altman’s departure from OpenAI, before his return amidst the furore, was linked to concerns among OpenAI board members that significant breakthroughs in artificial general intelligence (AGI) had been made, and that Altman was rushing these into production without sufficient safeguards for society at large. The stuff of horror movies.

AGI refers to an advanced form of AI that can perform many activities as well as, or better than, humans. Many followers of AI think that AGI is closer than we may believe. Let’s be clear: the real reason behind Altman’s departure is not known, and may be unrelated to AGI. It is also important to highlight that AI experts are divided on whether true AGI is imminent, or even likely.

I would suggest that even if the output of AI approximates that of the human mind, as when Deep Blue stunned the world by beating the reigning world chess champion, there is a critical difference between the approaches used by AI and those used by humans.

Sophisticated AI algorithms such as Large Language Models (LLMs) do not understand the meaning of concepts in the way humans do. These algorithms tokenise words into numbers from the outset, so a word is, to the model, simply an identifier rather than a carrier of meaning. More generally, for AGI to become a reality, there will need to be significant advances in AI's ability to understand the meaning and context of patterns in data, not just detect them.
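To make the point concrete, here is a minimal, purely illustrative sketch of tokenisation. The vocabulary and ID numbers are invented for this example and do not reflect any particular model's tokeniser; the point is simply that the model only ever sees integers, not meanings.

```python
# Illustrative sketch (not any real model's tokeniser): a toy vocabulary that
# maps words to arbitrary integer IDs. The model only ever "sees" the numbers;
# nothing about the ID 1042 encodes what "policy" actually means.
toy_vocab = {"the": 17, "policy": 1042, "covers": 883, "fire": 2911, "damage": 406}

def tokenise(text: str) -> list[int]:
    """Convert a sentence into the integer IDs a language model actually processes."""
    return [toy_vocab.get(word, 0) for word in text.lower().split()]  # 0 = unknown word

print(tokenise("The policy covers fire damage"))  # [17, 1042, 883, 2911, 406]
```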

The question, then, becomes: What does any of this mean for businesses that wish to leverage the power of AI to improve their business processes and customer service? As a point of departure, it is interesting to read the views of Kevin Scott, Microsoft's CTO, on Microsoft's AI tool for its Office suite, as reported in an article in The New Yorker:

“The Office Copilots seem simultaneously impressive and banal. They make mundane tasks easier, but they’re a long way from replacing human workers. They feel like a far cry from what was foretold by Sci-Fi novels. But they also feel like something that people might use every day,” he is quoted as saying.

The article goes on to say that if Scott, Microsoft CEO Satya Nadella and OpenAI CTO Mira Murati get their way, then “AI will continue to steadily seep into our lives, at a pace gradual enough to accommodate the cautions required by short-term pessimism, and only as fast as humans are able to absorb how this technology ought to be used. There remains the possibility that things will get out of hand—and that the incremental creep of AI will prevent us from realising those dangers until it’s too late. But, for now, Scott and Murati feel confident that they can balance advancement and safety.”

This is a sound approach. What about the rest of us? Businesses have a responsibility to serve their shareholders, but also their customers and society more broadly. Treading a thoughtful line in the release of AI functionality to the market requires regular consideration of the benefits and potential risks.

It is crucial for business leaders to support the notion of good corporate citizenship in how they serve customers and develop products. Regulation will play a massive role. EU lawmakers appear to have concluded marathon negotiations on a regulatory framework for AI. The framework will probably maintain a list of AI models deemed to pose a systemic risk, and providers of general-purpose AI will be required to publish summaries of their algorithms and the content used to train them. The EU is leading the global regulatory response to AI and could become the blueprint that other governments follow.

If we cast our gaze back towards CCM, GhostDraft's use of AI is specifically focused on supporting the generation of documents that capture contractual agreements between companies and their customers quickly and accurately. CCM is a crucial part of modern customer communication; gone are the days when document automation alone was sufficient. The smart, calibrated use of AI is a good example of how technology supports a transformative evolution in businesses' ability to create mass communication that is personalised, compliant, well-designed, dynamic and fast.

If a company cannot communicate clearly, quickly and accurately with its customers, it will lose them. GhostDraft uses AI in the design and development of communication templates, and generative AI to analyse sample data that informs the structure and production of future documents. This enhances the readability and completeness of key customer documents and forms, so that customers can feel comfortable about their business interactions.

AI can clearly improve customer service and interaction, and it can make more relevant recommendations to customers, ensuring that they receive products and services tailored to their needs. In addition, AI can deliver cost efficiencies to businesses, which can translate into more accessible and affordable services for customers. These benefits engender trust.

On the other hand, we already know that AI does not understand context the way humans do. It does not understand nuance, hope or fear. If we give AI unfettered control over business activities, it will miss things, and that is where customer trust will be damaged. In fact, the term “AI hallucination” refers to instances where AI tools produce outputs or identify patterns that are non-existent, false or nonsensical.

There are practical ways to address this over and above regulation. Businesses can restrict chatbot scope to simpler questions and route complex or nuanced queries to human staff. Giving users more control is also important: businesses can offer users control over their data and over the extent to which AI is used in their interactions, and allow them to easily opt in to or out of AI-driven features.
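As a rough sketch of the "keep it simple, escalate the rest" idea, the rule below shows how a chatbot might defer to humans. The topic list, confidence threshold and function names are assumptions made for illustration, not a description of any particular vendor's product.

```python
# Hypothetical routing rule: the chatbot only answers simple, low-risk questions
# it is confident about, for users who have opted in to AI-driven features.
SIMPLE_TOPICS = {"opening_hours", "password_reset", "document_status"}
CONFIDENCE_THRESHOLD = 0.85  # below this, do not trust the automated answer

def route_query(topic: str, model_confidence: float, user_opted_in: bool) -> str:
    """Decide whether a chatbot may answer, or whether a human should take over."""
    if not user_opted_in:
        return "human_agent"   # respect the customer's choice to avoid AI
    if topic not in SIMPLE_TOPICS:
        return "human_agent"   # nuanced or high-stakes questions go to people
    if model_confidence < CONFIDENCE_THRESHOLD:
        return "human_agent"   # uncertain answers are escalated, not guessed
    return "chatbot"

print(route_query("document_status", 0.92, user_opted_in=True))  # chatbot
print(route_query("claim_dispute", 0.97, user_opted_in=True))    # human_agent
```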

When AI practitioners build and train models, they would do well to record and selectively audit AI recommendations, and use those audits to refine their models. Responsible businesses, meanwhile, should communicate clearly to customers how AI is being used in their products or services, in a way that is understandable to non-technical users.
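A minimal sketch of what recording recommendations for selective audit could look like is shown below. The field names, log file and sampling rate are illustrative assumptions only.

```python
# Hypothetical audit trail: log every AI recommendation and flag a random
# sample for human review, so models can be checked and refined over time.
import json
import random
import time

AUDIT_SAMPLE_RATE = 0.05  # review roughly 5% of recommendations

def log_recommendation(user_id: str, model_version: str, prompt: str, output: str) -> None:
    """Append an AI recommendation to a log file, flagging some for human audit."""
    record = {
        "timestamp": time.time(),
        "user_id": user_id,
        "model_version": model_version,
        "prompt": prompt,
        "output": output,
        "flagged_for_audit": random.random() < AUDIT_SAMPLE_RATE,
    }
    with open("ai_recommendations.log", "a") as log_file:
        log_file.write(json.dumps(record) + "\n")

log_recommendation("cust-001", "v1.3", "Summarise policy terms", "Your policy covers ...")
```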

About GhostDraft

GhostDraft, launched in Cape Town in 1984, was one of the pioneers of the document automation industry. Today it has evolved into an agile, cloud-based solution that delivers advanced customer communications management (CCM) capabilities.

Its flexible, scalable architecture is deployed through Microsoft Azure for rapid implementation. GhostDraft's intuitive technology is easy to implement, maintain and use, helping businesses in all industries to become self-sufficient and successful. GhostDraft endeavours to help companies build lasting connections with customers, making it easy for teams to create, deliver and manage communications so they can drive engagement, efficiency and compliance. Further information can be found at https://www.ghostdraft.com/southafrica/.