The Institute of Customer Service welcomes the opportunity to respond to the call for views and evidence on the recently published DCMS AI Regulation Policy Paper, 'Establishing a pro-innovation approach to regulating AI'.
The lasting impacts of the COVID-19 pandemic, together with organisations striving to meet customers' consistently high expectations while reducing operational costs and improving business performance, have led to increased use of automation and artificial intelligence (AI) across the economy. From a customer service perspective, the Institute is clear that well-developed technology leads to positive outcomes for customers, a goal which organisations should hold as their constant focus and priority.
However, there are pitfalls and challenges of which organisations must be cognisant, and which they must avoid, in order to implement AI consistently and successfully and deliver service excellence for customers. It is therefore critical that AI is deployed by organisations in a balanced, considered and holistic way, and regulated as such. The risks of deploying AI incorrectly include the exclusion of vulnerable or digitally excluded sections of an organisation's customer base, as well as a drop in customers' trust in the organisation.
Organisations deploying AI in customer-facing environments should always focus on how its use will improve customer experiences and help to deliver service excellence. Moreover, while AI deployment should always lead to improved customer experience, the roll-out of AI across a customer-facing organisation should not replace, but should complement, existing human customer service roles. Deployed successfully, AI should empower customer service advisers and other customer-facing staff, giving them the capacity to step in where customers experience a problem or need human assistance to resolve their query. Indeed, the Institute's Chief Executive, Jo Causon, notes in her foreword to the Institute's most recent research on the growing implementation of technology, automation and AI across customer-facing organisations that:
"One of the most striking examples of employees and technology combining to deliver service is the role of customer service advisers as 'digital coaches', helping customers to use digital channels, stepping in where customers are experiencing a problem and providing reassurance. By working directly with the customer, these employees help customers improve their confidence, knowledge and capabilities."
The regulation of AI should therefore focus on ensuring that AI is deployed by customer-facing organisations in a holistic way, with a moral framework in place to guide organisations deploying AI in their businesses and to assist in the proactive regulation of a form of technology that is always learning and developing. Effective regulation of AI should thus aid the continuing professionalisation of the customer service industry and lead to better customer experiences too.
This consultation response will cover the following areas:
- Organisational accountability
- Voice of the customer
- Expertise in regulators
- Unintended consequences of utilising AI.
Referenced throughout this consultation response is the Institute's most recent breakthrough research, entitled 'A Connected World? Ensuring the right blend of people and technology for customer service'. That research can be accessed here.
- Organisational accountability and the need for an ethical framework
There exists a dilemma in the regulation of AI, given the number of different organisations using AI to different degrees, in different settings and for different customer experiences.
However, one constant across customer-facing deployments of AI is that chatbots and other customer-facing AI cannot replicate the personal understanding, reassurance or resolution of complex issues that a skilled employee can provide.
The Institute's breakthrough technology research notes that effective deployment of artificial intelligence in customer contact requires the following four items:
- Quality of customer journey mapping and decision trees;
- Continuous testing and learning to improve the efficacy of chatbots and inform when they should be used;
- Options to use free text, rather than just pre-selected options which may not fit with the customer's situation;
- Quickly identifying where a customer needs expert help or an experience is not working, and enabling the customer to speak to an employee.
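To make the third and fourth of these items concrete, the sketch below is a minimal, hypothetical illustration (not drawn from the Institute's research, with all names, phrases and thresholds invented) of how a chatbot might accept free text, match it against a small set of scripted intents, and hand the customer to a human adviser when it cannot help confidently:

```python
# Hypothetical sketch: a chatbot that accepts free text, matches it against a
# small set of scripted intents, and escalates to a human adviser when it is
# not confident or the customer signals frustration. Illustrative only.

from difflib import SequenceMatcher

# A minimal stand-in for a decision tree: known intents mapped to responses.
INTENTS = {
    "track my order": "Your order status is available under 'My orders'.",
    "reset my password": "A password reset link has been sent to your email.",
    "update my address": "You can change your address under 'Account details'.",
}

# Phrases that should trigger immediate escalation to a person.
ESCALATION_PHRASES = ("speak to a human", "agent", "complaint", "not working")

CONFIDENCE_THRESHOLD = 0.6  # below this, the bot should not guess


def best_intent(message: str) -> tuple[str, float]:
    """Return the closest known intent and a 0-1 similarity score."""
    scored = [
        (intent, SequenceMatcher(None, message.lower(), intent).ratio())
        for intent in INTENTS
    ]
    return max(scored, key=lambda pair: pair[1])


def respond(message: str) -> str:
    # Item 4: escalate quickly when the customer asks for a person or
    # signals that the experience is not working.
    if any(phrase in message.lower() for phrase in ESCALATION_PHRASES):
        return "Connecting you to a customer service adviser now."

    # Item 3: free text is accepted and matched, rather than forcing the
    # customer through pre-selected options that may not fit their situation.
    intent, score = best_intent(message)
    if score >= CONFIDENCE_THRESHOLD:
        return INTENTS[intent]

    # Low confidence: hand over rather than frustrate the customer.
    return "I'm not sure I can help with that, so let me find you an adviser."


if __name__ == "__main__":
    print(respond("please reset my password"))    # handled by the bot
    print(respond("I want to speak to a human"))  # escalated immediately
    print(respond("my delivery arrived damaged")) # low confidence, escalated
```

The point of the sketch is the hand-off: the continuous testing and learning described above would, in practice, be used to tune when, and for which customers, such a bot is offered at all.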
From a regulatory standpoint, regulation should ensure that the use of AI does not suppress or replace the effectiveness of skilled customer service employees within an organisation, but instead that AI complements those employees.
Returning, however, to the point that different organisations deploy AI across their businesses in different ways and for differing purposes: regulating the use of AI via a 'one size fits all' framework could prove difficult to implement where proactive regulation is concerned. The Institute is keen to work further with DCMS to ensure that customer-facing AI, and AI deployed in workflow and other 'back office' functions of customer-facing organisations, is deployed in a way that supports the continuing professionalisation of customer service employees and focuses on the needs of the customer and on service excellence.
But with AI and machine learning comes a more holistic moral challenge. Given that AI, once deployed, will continue to learn about a business's processes and about the interactions that different demographics or geographies of customers have with that business, there will always be a moral challenge of boundaries that organisations must identify and stay within. For example, organisations will face a moral judgement about what data is genuinely useful to hold about a customer in order to improve their experience and service delivery, and what data will start to concern customers if deployed AI uses it to predict customer habits, or even to predict things that customers would prefer to keep private. Such 'off limits' data could include customers' location tracked without their consent, or personal data associated with gender, ethnicity or religion collected when it is not essential for customer service purposes. Regulation of AI should therefore put in place a moral framework for organisations to adhere to when deploying and utilising AI, and should consider what data should be 'off limits' to organisations that have no essential need for personal data on a customer.
- Voice and priorities of the customer
When reviewing regulation of AI, it is important to understand, and to focus on, regulation in the context of the end user, i.e. the customer. Understanding the priorities of customers when deploying AI to improve customer experience and service delivery should be of paramount importance to organisations.
The Institute's breakthrough research notes:
"Our research with customers suggests that many people are uncomfortable about data or technology applications being used in the context of highly sensitive or personal experiences, even where there are potential customer service benefits."
This links back to the imperative for a moral framework in the regulation of AI that organisations can use as a guide to implement and deploy AI that genuinely improves customers' experiences. It also speaks to the need for organisations to understand their customers first, in order to deploy the right levels of AI, operating in support of positive customer outcomes and complementing human customer service staff. On customer engagement and trust and their link to new technologies, including AI-enabled chatbots, our recent research shows that:
"Organisations need to invest time and effort in communicating the benefits of new technologies and applications, providing reassurance about security and navigating new ways of interacting. In some cases, there may be a need to reset expectations of customers who have learnt that they can get a fast response or be offered a better deal if they contact an organisation through traditional channels. Ultimately, the most effective way of building confidence and trust is by demonstrating the convenience, usability and quality of new applications."
The priorities and needs of the customer should be kept front and centre at all times by organisations, but so should the potential for new technologies to exclude vulnerable customers and those who wish to be, or who may be regarded as, 'digitally disengaged' when it comes to service provision: these people may either not have access to digital technologies or not have readily available assistance (from a friend or family member) to help them use digital platforms or engage properly with AI-enabled chatbots. Our research suggests that at least 15% of customers do not feel confident about using technology. Indeed, our research identified that a significant majority of customers are at risk of digital exclusion because they lack skills, confidence or financial resources, or have a disability or health condition that makes it difficult to deal with organisations using technology or digital channels.
Regulators and government will have a key role to play here in monitoring customer service outcomes for vulnerable people. Regulation should ensure that minimum standards are set and that organisations are encouraged to share best practice. Regulation of AI will need to ensure that digitally confident customers, who are happy to share their data and are able and willing to use AI-enabled chatbots and apps, are not offered differentiated products or deals that are unavailable to other customers. Fairness in customer outcomes should not depend on a customer's ability, or willingness, to use an organisation's AI-enabled features.
Regulation of AI should therefore work to ensure that organisations deploying AI do not, through that deployment, exclude vulnerable customers or those who may be 'digitally disengaged'. Monitoring of customer service outcomes for these groups should be put in place via regulation of AI, with clear minimum standards set out for their access to customer service too.
There also exists the potential for AI to be deployed in ways which may frustrate, annoy or diminish trust levels amongst an organisation's customer base. Our research suggests that interactions with AI-enabled chatbots are more likely than any other customer experience to cause annoyance. Across the full range of channels, from messenger services and apps to in-person meetings, phone queries and automated, AI-enabled chatbots, chatbots are the least preferred option. This is shown in the bar chart results of our research below:
By contrast, email and phone calls were associated with the highest number of positive customer experiences, perhaps reflecting the widespread use of these channels to deal with organisations.
This shows that if organisations are implementing AI-enabled features for customers, they need to consider customers' preferred channels of communication and the potential for customer frustration when interacting with AI-enabled chatbots. Regulation of AI should take this into account, particularly with regard to how often internal testing and learning should be conducted within an organisation to improve the efficacy of such AI and to inform how, when, and potentially with which customers, it should be used.
- Expertise in regulators
When considering who in an organisation should ensure compliance with AI regulation, the Institute is clear that whoever holds that responsibility must have a specific set of skills. Such a person should clearly have a working knowledge of how AI is being deployed within the organisation and how that AI works. In addition, the person responsible for compliance with AI regulation should ensure that their organisation's AI complies with a moral framework and with ethical guidelines, and should have an intricate working knowledge of the organisation's governance, in particular a clear understanding of how all of these items interact with one another where AI is concerned.
Our research noted that 28% of respondents felt that, when large organisations introduce new technologies and applications into customer experiences, they should provide reassurance that regulatory approval is in place and that the technologies are easy for customers to use. Additionally, across a survey of 316 managers and employees in UK-based organisations, more than 30% attached high importance to the role of technology in enabling regulatory compliance and building trust.
- Unintended consequences of utilising AI
Innovative AI, alongside other automated processes in organisations and combined with customer data gathered by consent, has the potential to create genuinely improved customer experiences through personalised customer outcomes and communications. However, risks exist around privacy, security and unintended consequences. Our research notes that "Organisations therefore need to evaluate the ethical and reputational implications of technologies and applications as well as commercial benefits." It also sets out some key actions for organisations evaluating the ethical and reputational implications of technology deployment, such as:
- Ensure that technology applications comply with appropriate regulation and legislation at all times
- Ensure governance and controls are in place to monitor rules, coding and the processes underpinning autonomous learning and decision-making by technology applications
- Assess the risk that deployment of technology could inadvertently disadvantage some customers
- Evaluate the impact of technology deployment on employee well-being and engagement
- Consider setting up an ethics advisory committee to review the ethical and reputational implications and risks of technology deployment.
All of these items should be considered as AI regulation is developed. Regulation of AI should consider how it can proactively mitigate the unintended consequences of AI use, invasions of privacy and security risks. The pace of technological change and the extent of collaboration between technology companies present particular challenges for regulators, and it may not always be realistic to expect regulation to keep pace with every technological change and innovation. We believe, therefore, that the regulatory framework should place a clear onus and requirement on organisations to be responsible for the outcomes and consequences of the technologies they deploy, so that accountability is explicit and unambiguous.