15 January 2018
AI Sustainability – A new form of CSR
Within AI, ethics will play a substantial role. Going forward, an “AI Sustainability Strategy” will become the moral licence to operate. Thus, the boards of all companies aiming for a digital future should address the question: How do we deal with the ethical aspects of our data and algorithms?
January 2018. Tinius Talk written by Anna Felländer.
Data is the new gold. We are entering a new era where every organization needs to use AI on data to create customer value. As consumers and individuals, we expect personalized services and recommendations based on AI. Organizations that fail to live up to the increasing demands will lose their relevance. The potential for value creation and efficiency gains for organizations, society and individuals is enormous. Yet the technology is galloping, and the short-term profits related to AI can sometimes be seductive.
With the opportunities of using algorithms on data come critical ethical questions that must be addressed. Profits need to be weighed against ethical considerations. For example, how much information about a person’s critical life situation is acceptable to use for the purpose of maximizing sales? Where do you draw the line? How can we detect AI recommendations that are based on historical preconceptions? How do we audit AI-based decisions?
By implementing active and holistic ethical principles in AI-based decision processes, unintended negative ethical consequences can be avoided.
Data and Artificial Intelligence[1] (AI) are the largest drivers at this stage of digitalization. The amount of data is increasing at an exponential pace, at the same time as data from different sources can be stored and processed at a lower cost. This makes algorithms learn faster and enables them to make autonomous decisions in a way we have never experienced before. AI is nothing new, but the cross-fertilization of mature technologies such as IoT, cloud and speech recognition is escalating its capabilities.
A hidden trade agreement
AI on large data sets creates enormous value and vast efficiency gains: for example, improved disease diagnoses in the health sector, robots performing high-risk tasks in the industrial sector and logistics management in the trade sector. Furthermore, personalized offers and recommendations build more loyal customer relations. Customers give away their behavioral data from various data sets and receive recommendations for needs they were sometimes not even aware of. These services and recommendations save the consumer time and money and minimize risks in everyday life. There is a hidden trade agreement between the organization and the individual, where consumer data is the new currency.
The majority is still not prepared
Data and AI increase the competitive divide between leaders and laggards. To be relevant in the new digital era, an AI strategy will be a prerequisite. But the majority is still not prepared. According to MIT and BCG, few executives, managers and analysts across industries have a concrete strategy for AI and how to govern it.[2]
The technology is galloping, and AI is different from anything we know. It scales in a different and autonomous way compared to other data-driven digital initiatives. Auditability is easily lost without specific and continuous monitoring.
When algorithms in different silos of an organization are targeted at maximizing profits and productivity, the ethical considerations are sometimes blurred, inconsistent and without oversight.
So let’s do a health check: do you trust all ethical considerations in all AI-based decisions in your organization? Also, in the networked economy, innovations are often built on opening up data and creating partnerships. So, what about the organizations that attempt to work on, or have access to, your clients’ data?
Equally important as capturing the value from data and AI is the need for active and holistic ethical principles regarding the use of AI. The status quo is not an option. The regulatory framework is lagging, and organizations will stand accountable for every ethical decision on their clients’ data as well as their clients’ clients’ data.
As more cases of both unintended and intended negative ethical consequences of AI appear in the media, customers will demand transparency and communication around ethical considerations. Otherwise, I believe they will withdraw their “currency”.
With less access to customer data, the performance of algorithms will decline.
The ethical approach
The ethical approach to AI requires deep knowledge in all organizations. Research on AI and ethics has recently been given more attention; since October, Google’s DeepMind has had a dedicated AI ethics research unit. Below, I present some of the risks and discuss ethical considerations.
1. Misuse of Data and AI
GDPR, the EU’s new data protection regulation, strengthens the consumer’s access to his or her data. It covers the right to explicit consent, the right to be forgotten, the right to data portability and the right to algorithmic transparency. For individuals, the legislation will highlight the value of, and sense of control over, their data, or currency. For organizations, the process of adapting to GDPR will be costly and complex. Even though the consumer gives consent to data sources that separately pose a low risk of privacy intrusion, combined they can result in an intelligence that is difficult for both parties to comprehend. This could feel like an intrusion of privacy for the customer, and thereby mean lost trust and loyalty for the organization. Building AI competence, and training and hiring the right roles for AI across the organization, is crucial. What about assigning a new role in the organization: an AI Ethics Compliance Officer?
2. The bias of the creator
Ethical considerations lie in the bias of the creator of the algorithms. For example, which life should be valued over another in a self-driving car collision? Is it acceptable to send furniture advertisements to someone about to get divorced? Should agencies be contacted if a person drives drunk? How far can an agency go in its collection of personal information for credit risk scoring? What about when media outlets recommend articles to a person with racist views that reinforce his or her orientation? Holistic ethical principles need to be applied to all relevant AI-based decision processes. A chief marketing officer might consider it acceptable to send sugar advertisements to sugar addicts, yet this might not be consistent with the organization’s values. Maximizing AI decisions is about weighing short-term profits against sustainable ethical values.
3. Immature AI
Insufficient training of AI can lead to false predictions and wrong recommendations. Data sets can be too scarce, or algorithms can fail to take in all possible outcomes and scenarios. Data sets are frequently refreshed, which means that algorithms are working in a constantly changing landscape. In some sectors the negative consequences could be vast; in others, such as e-commerce, the consequences are not devastating and the ethical dimensions are far-fetched (e.g. recommending a coat on sale that I have already purchased). But what happens when an insurance company continues to send advertisements for child insurance to a woman who has had a miscarriage? AI needs to be trained in secure environments with deep consequence and scenario analysis, identifying all AI-based decisions with ethical dimensions.
4. Machine bias
Algorithms learn from past data; that is, they learn from historical preconceptions. Even though AI is well trained and applied to large data sets, it can lead to unethical consequences. For example, AI was used by US courts to predict future criminals. It was biased against black prisoners and wrongly flagged defendants. Using AI to recruit a new CEO can bias the preferred attributes towards the traits of successful male leaders, since history shows that the majority of CEOs have been male. Thus, identifying machine bias requires new tools. Even if AI-based decision making and recommendations can produce much less bias than humans, the tolerance for machine-made mistakes has proven to be lower. The demands for non-bias in AI-supported decisions should be high.
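To make the mechanism concrete, here is a minimal sketch in Python (assuming numpy and scikit-learn are available). The promotion records, features and outcome below are entirely invented for illustration; the point is only that a model trained on historically biased decisions reproduces that bias when scoring two equally competent candidates.

# A minimal, hypothetical sketch; all data below is invented purely to
# illustrate how machine bias arises from historical data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Invented historical promotion records: column 0 = competence score,
# column 1 = gender (1 = male, 0 = female). In this fabricated history,
# promotion depended on gender as well as competence.
n = 1000
competence = rng.normal(size=n)
gender = rng.integers(0, 2, size=n)
promoted = ((competence > 0) & (gender == 1)).astype(int)  # biased past decisions

X = np.column_stack([competence, gender])
model = LogisticRegression().fit(X, promoted)

# Two equally competent candidates who differ only in gender:
candidates = np.array([[1.0, 1.0], [1.0, 0.0]])
print(model.predict_proba(candidates)[:, 1])
# The model scores the male candidate far higher, faithfully reproducing the
# bias baked into the historical data rather than any difference in competence.

Auditing for this kind of bias means testing model outputs across groups that, by the organization’s own ethical principles, should be treated the same.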
Accountable leaders can turn risks into opportunities using ethical AI as a competitive edge.
A moral licence to operate
So, how could organizations address these challenges? I believe that an “AI Sustainability Strategy” will be the new form of CSR. It will be a moral licence to operate in this data- and AI-driven era. Since some organizations are using AI too bluntly and others are hesitant to use it at all, there need to be standards and certifications for the sustainable use of AI.
Sustainability in this sense means a long-term commitment to customers to be transparent, accountable and consistent with the organization’s ethical principles regarding data and AI.
An AI Sustainability Strategy consists of several steps:
- Make ethical principles a board topic:
The ethical dimensions of AI-based processes must be identified, then addressed and aligned in the boardroom. Leaders should define a holistic ethical strategy that is monitored, tracked and under continuous development.
- Build capabilities:
AI solutions should be used by empowered people from both the business and technical sides of the organization, but within ethical guidelines.
- Safety must be the priority:
Before embarking on all the seductive gains from AI, build governance systems and check all AI decisions with continuous monitoring.
- Communicate:
Be transparent with an internal and external communication plan.
Profits and productivity as AI targets should be replaced by, or integrated with, economic, social and environmental sustainability targets, as well as gender equality and diversity.
Low tolerance for mistakes
So far, few AI leaders have taken a proactive approach to AI and ethics. Let’s take Amazon as an example. Amazon’s rapidly won near-monopoly position is explained both by its first-mover advantage and by its ability to create personalized services and a superior customer experience. The range of products and services it offers is extremely wide. This stems from Amazon’s early adoption of AI and early access to data from customers’ different life situations. Also, its logistics and operational efficiencies are reducing waste, increasing efficiency and cutting costs. As of today, it is likely that Amazon will enter the Swedish market, as it is already investing in local data inventories. Some experts argue that Amazon’s sharp competitive edge will potentially lead to the shutdown of local e-commerce as well as local physical retail. Others argue that local actors will have to reposition themselves to be part of the Amazon ecosystem by collaborating with the giant.
AI leaders will affect local markets all over the world. But the stakes are getting higher. As businesses depend on consumer trust to access and handle personal behavioral data, we will see a low tolerance for mistakes. As AI leaders enter new segments, potentially the health sector, the stakes will increase even more.
One or more scandals due to misuse of data, inconsistent or unethical AI, machine bias or immature AI will make the loss of customer trust all the more painful.
For all organizations, both private and public, there is an urgency for total transparency and proactive ethical strategies regarding data and AI.
Just as CSR entered the corporate world, an AI Sustainability Strategy will be the licence to gain consumer trust in the future.
[1] Defining artificial intelligence: Self-learning algorithms transforming data into insights. Artificial Intelligence is here defined as machine-based systems that sense the environment, pursue goals, adapt to changes and provide information or take actions.
[2] A study on AI by MIT and BCG available here.
Anna Felländer is a digital economist with many years of experience in analyzing how digitalization is changing society, business and the economy. Felländer is a senior adviser to Boston Consulting Group and a board member of Whispr Group. She is involved in both large companies and start-ups, and has a platform in academia. Felländer has previously worked for Swedbank as chief economist and digital economist, and has many years of experience working in government offices.
What is “Tinius Talks”? Tinius Talks are articles, videos or debates by specialists on the future of journalism, fair tax policies and ethics in algorithms – shared by the Tinius Trust once a month throughout 2018.