AI bias and Data Scientists’ responsibility to ensure fairness

As artificial intelligence creeps out of data labs and into the real world, we find ourselves in an era of AI-driven decision-making. Whether it’s an HR system helping us sort through hundreds of job applications or an algorithm that assesses the likelihood of a criminal becoming a recidivist, these applications are helping shape our future.   

AI-based systems are more accessible than ever before. And with their growing availability across industries, further questions arise about fairness and how it can be ensured in these systems. Understanding how to avoid and detect bias in AI models is a crucial research topic, and one that grows more important as AI expands into new sectors. 

“AI systems are only as good as the data we put into them.”

IBM Research

AI builds upon the data it is fed. While AI can often be relied upon to improve human decision-making, it can also inadvertently accentuate and reinforce human biases. So what is AI bias? AI bias occurs when a model reflects implicit human prejudices around race, gender, ideology, or other characteristics.  

Google’s ‘Machine Learning and Human Bias’ video provides a tangible example of this idea. Picture a shoe. Your idea of a shoe may be very different from another person’s idea of a shoe (you might imagine a sports shoe whereas someone else might imagine a dressy shoe). Now imagine if you teach a computer to recognize a shoe, you might teach it your idea of a shoe, exposing it to your own bias. This is comparable to the danger of a single story.  

“The single story creates stereotypes, and the problem with stereotypes is not that they are untrue, but that they are incomplete. They make one story become the only story.”

Chimamanda Ngozi Adichie

So, what happens when we provide AI applications with data that is embedded with human biases? If our data is biased, our model will replicate those unfair judgements. 

Here we can see three examples of AI replicating human bias and prejudice:  

  • Hiring automation tools: AI is often used to support HR teams by analyzing job applications, and some tools rate candidates by observing patterns in past successful applications. Bias has appeared when these automation tools recommended male candidates over female ones, having learned from the historical lack of female representation. 
     
  • Risk assessment algorithms: courts across America are using algorithms to assess the likelihood of a criminal re-offending. Researchers have pointed out the inaccuracy of some of these systems, finding racial biases where black defendants were often predicted to be at a higher risk of re-offending than others.  
     
  • Online social chatbots: several social media chatbots built to learn language patterns have been removed and discontinued after posting inappropriate comments. These chatbots, built using Natural Language Processing (NLP) and Machine Learning, learned from interactions with trolls and couldn’t filter out indecent language.   

The three scenarios above illustrate AI’s potential to be biased against groups of people. The key underlying factor in these results is biased data. Although inadvertently, these systems did exactly what they were trained to do: they made sense of the data they were given.   

Data reflects social and historical processes and can easily operate to the disadvantage of certain groups. When trained with such data, AI can reproduce, reinforce, and even exacerbate existing biases. As we move into an era of AI-driven decision-making, it is increasingly crucial to understand the biases that exist and take preventive measures to avoid discriminatory patterns. 

Understanding the types of biases, and how to detect them is crucial for ensuring equality. Google identifies three categories of biases:

  • Interaction bias: when systems learn biases from the users driving the interaction. For example, chatbots that are taught to recognize language patterns through continued interactions.  
  • Latent bias: when data contains implicit biases against race, sexuality, gender, etc. For example, risk assessment algorithms which show examples of race discrimination. 
  • Selection bias: when the data used to train the algorithm over-represents one population. For example, where men are over-represented in past job applications and the hiring automation tool learns from this.    
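As a first line of defense against selection bias, it helps to simply compare how groups are represented in the training data against the population the system is meant to serve. Below is a minimal sketch in Python; the field name, reference shares, and tolerance are illustrative assumptions, not a standard:

```python
from collections import Counter

def representation_report(records, field, reference_shares, tolerance=0.10):
    """Compare each group's share in the data against a reference share.

    Flags groups whose share deviates from the reference by more than
    `tolerance` (absolute difference), a rough signal of selection bias.
    """
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    report = {}
    for group, ref_share in reference_shares.items():
        share = counts.get(group, 0) / total
        report[group] = {
            "share": round(share, 3),
            "reference": ref_share,
            "flagged": abs(share - ref_share) > tolerance,
        }
    return report

# Hypothetical historical hiring data, heavily skewed toward one group.
applications = [{"gender": "male"}] * 80 + [{"gender": "female"}] * 20
print(representation_report(applications, "gender",
                            {"male": 0.5, "female": 0.5}))
```

Both groups come back flagged here, since an 80/20 split sits far from the 50/50 reference population.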

So how can we become more aware of these biases in data? In the Machine Learning literature, ‘fairness’ is defined as “a practitioner guaranteeing fairness of a learned classifier, in the sense that it makes positive predictions on different subgroups at certain rates.” Fairness can be defined in many ways, depending on the given problem. And identifying the criteria behind fairness requires social, political, cultural, historical and many other tradeoffs.  

Let’s look at the fairness of assigning groups to certain classifications. For example, is it fair to rate different groups’ loan eligibility differently if they show different rates of payback? Or is it fair to give them loans in proportion to their payback rates? Even in a scenario like this, people might disagree about what is fair or unfair. Understanding fairness is a challenge, and even with a rigorous process in place, it’s impossible to guarantee. For that reason, it is imperative to measure bias and, consequently, fairness.   
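One common way to quantify such a fairness criterion is demographic parity: comparing the rate of positive predictions (loan approvals, say) across groups. The sketch below computes a disparate impact ratio; the 0.8 “four-fifths rule” cutoff is a widely used heuristic rather than a universal standard, and the data is invented for illustration:

```python
def positive_rate(predictions, group_labels, group):
    """Share of positive predictions the model gives to one group."""
    preds = [p for p, g in zip(predictions, group_labels) if g == group]
    return sum(preds) / len(preds)

def disparate_impact(predictions, group_labels, protected, reference):
    """Ratio of positive-prediction rates: protected group vs reference group.

    Values below ~0.8 (the 'four-fifths rule') are often treated as a
    warning sign of disparate impact.
    """
    return (positive_rate(predictions, group_labels, protected) /
            positive_rate(predictions, group_labels, reference))

# Toy loan decisions: 1 = approved, 0 = denied.
preds  = [1, 1, 0, 1, 0, 0, 1, 1, 0, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
ratio = disparate_impact(preds, groups, protected="b", reference="a")
print(round(ratio, 2))  # 0.4 / 0.6, roughly 0.67: below the 0.8 warning threshold
```

Toolkits such as Aequitas and AI Fairness 360 compute this metric (and many others) out of the box; the point of the sketch is only that the core measurement is simple enough to reason about directly.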

Strategies for measuring bias are present across all sectors of society; in cinema, for example, the Bechdel test assesses whether movies contain a gender bias. Similarly, in AI, means of measuring bias have started to arise. Aequitas, AI Fairness 360, Fairness Comparison and Fairness Measures, to name a few, are resources data scientists can leverage to analyze fairness. Aequitas, for example, facilitates auditing for fairness, helping data scientists and policymakers make informed and more equitable decisions. Data scientists can use these resources to evaluate fairness and help make their predictions more transparent.  

The Equity Evaluation Corpus (EEC) is a good example of a resource that allows data scientists to automatically assess fairness in an AI system. This dataset, which contains over 8,000 English sentences, was specifically crafted to tease out biases towards certain races and genders. The dataset was used to automatically assess 219 NLP systems for predicting sentiment and emotion intensity. Interestingly, more than 75% of the systems analyzed were found to predict higher intensity scores for a specific gender or race. 
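The methodology behind the EEC is easy to reproduce in miniature: hold a sentence template fixed, swap only the identity term, and compare the scores a system assigns. The sketch below uses a toy lexicon scorer as a stand-in for a trained sentiment model (a real audit would probe the actual system under test):

```python
# A stand-in lexicon scorer; real audits would probe a trained system.
LEXICON = {"furious": -2, "angry": -1, "happy": 2, "glad": 1}

def sentiment_score(sentence):
    return sum(LEXICON.get(w.strip(".,").lower(), 0) for w in sentence.split())

def bias_gap(template, fillers_a, fillers_b):
    """Average score difference when only the identity term is swapped.

    EEC-style templates hold everything constant except a name or pronoun,
    so any systematic gap is attributable to the identity term.
    """
    avg = lambda xs: sum(xs) / len(xs)
    a = avg([sentiment_score(template.format(person=f)) for f in fillers_a])
    b = avg([sentiment_score(template.format(person=f)) for f in fillers_b])
    return a - b

gap = bias_gap("{person} feels angry about the situation.",
               ["He", "My brother"], ["She", "My sister"])
print(gap)  # 0.0 for this symmetric toy scorer; a trained model may differ
```

A nonzero gap on a large battery of such templates is exactly the signal the EEC study used to flag biased systems.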

As AI adoption increases rapidly across industries, there is growing concern about fairness and how human biases and prejudices are incorporated into these applications. As we’ve shown here, this is a crucial topic that is gaining more and more traction in both the scientific literature and across industries. Understanding the human biases that percolate into our AI systems is vital to ensuring positive change in the coming years.    

If you’re interested in learning more about fairness in AI, here are some other interesting references:

https://fairmlbook.org/ 
http://papers.nips.cc/paper/6374-equality-of-opportunity-in-supervised-learning.pdf
https://papers.nips.cc/paper/6316-satisfying-real-world-goals-with-dataset-constraints.pdf 

How AI can help to understand the customer

Ahead of us is a significant change in the way brands use customer experience (CX).  We are already starting to see the switch from companies competing on price and product to competing on CX. But what exactly do we mean by CX? Gartner defines CX as a customer’s perceptions and feelings caused by the one-off and cumulative effect of interactions with a supplier’s employees, systems, channels or products.   

Previously, the communication flow between customers and companies was either in person, in writing, or via a telephone call to the support line. Now, there are increasingly more ways customers can interact with brands, and when they do, they expect a high-quality experience “on demand.” 81% of marketing leaders were expected to mostly or completely compete on the basis of customer experience by 2019, as revealed in the 2017 Gartner Customer Experience in Marketing Survey.  

There are many tools already giving insight into CX, such as NPS and Customer Success Scores. However, when companies need to make quick decisions, real-time insights are what help decision makers. Technologies such as AI are now gathering these insights by allowing companies to organize and categorize data based on business needs, helping to make sense of all these interactions.  

To understand the customer from a CX perspective, and give some real-world examples, we can filter down a myriad of AI technologies and categorize them into three buckets: 

  • Speech Analytics: understanding, interpreting and analyzing voice conversations. Example: understand sentiment, IVR systems.
  • Image: capturing, processing and analyzing images, photos and video. Example: customer patterns, social media image analysis. 
  • Natural Language Processing: analyzing human expression and emotion. Example: text, chatbot, email analysis.  

The below table shows CX use cases and examples of these AI technologies in action:  

Source: Gartner 2019

Are data scientists the only ones who need to understand these technologies? No, it’s extremely valuable for both marketing and CX teams to gain an understanding of these tools. Every company has unique needs depending on its CX goals and business objectives. Teams need to make a well-informed decision and understand which tools are most useful to their business, which will ultimately lead to more accurate decision-making and a customer-first approach.     

Now, are people rushing to adopt these new AI technologies for CX? Gartner’s 2018 Enterprise AI survey revealed that, among businesses already deploying AI, 26% are implementing it to improve customer experience. Although it may not seem urgent to start implementing these technologies right away, it’s important that businesses are aware of them and start to familiarize themselves with these AI applications.  

A good place to start is mapping out a customer journey and finding the ‘dark spots’. These are the areas that could benefit from deeper real-time insights, such as understanding the mood of a customer when they are talking with a chatbot. Having these insights will allow you to hand over the conversation to a human based on the customer’s emotion.  
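The handover logic itself can be very simple once a sentiment signal is available. A hypothetical sketch, where the thresholds are assumptions to be tuned on real conversations:

```python
def should_hand_over(sentiment_score, failed_turns,
                     score_threshold=-0.5, max_failures=2):
    """Decide whether the chatbot should escalate to a human agent.

    Escalates when the running sentiment of the conversation drops below
    a threshold, or the bot has failed to understand too many turns.
    Thresholds are illustrative; tune them on real conversations.
    """
    return sentiment_score < score_threshold or failed_turns >= max_failures

print(should_hand_over(-0.8, 0))  # frustrated customer: True
print(should_hand_over(0.3, 2))   # repeated misunderstandings: True
print(should_hand_over(0.3, 0))   # smooth conversation: False
```

The value is not in the rule itself but in wiring a live mood signal into it, so the escalation happens before the customer gives up.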

Companies are dealing with an increasing number of interactions happening across multiple channels and devices. With customer expectations at an all-time high, it’s not easy to connect all these touchpoints and deliver an excellent customer experience. AI can help provide rich insights, allowing you to gain faster, real-time understanding and optimize the overall customer journey. 

5 trends in AI for 2019

It’s not easy to project trends in a market evolving as rapidly as AI. However, through analysis of cross-industry data and experience with a diverse client base, we’re willing to make some bets. From automating mundane daily tasks to leveraging computer vision for more accurate medical diagnoses, here are 5 trends in AI we expect to emerge in 2019. 

TREND 1: “EDGY” AI 

Edge AI refers to processing AI algorithms locally instead of relying on cloud services or data centers. 

Smartphones, cars, and wearable devices are examples of devices that need to make faster and more accurate real-time decisions. Autonomous vehicles, for instance, need to make hundreds of decisions per second – brake, accelerate, turn on lights, identify and interpret traffic patterns, signals, and speed limits – all while simultaneously responding to the driver’s voice commands. These decisions must take place in a fraction of a second, and they need to be independent of the connectivity issues that come with cloud computing.  This means that autonomous vehicles need powerful chips to process all this information rapidly and accurately.  

Tech leaders like Nvidia, Qualcomm, Apple, AMD, and ARM are investing in developing and delivering chips that can handle these kinds of workloads. 

In 2019 we’ll see more models being deployed at the edge as well as specialized chips allowing AI models to operate independently from the centralized cloud, or on the “edge” if you prefer.

TREND 2: AI IN HEALTHCARE 

Last year the FDA (U.S. Food and Drug Administration) approved IDx-DR, an AI-enabled software that can independently diagnose diabetic retinopathy before severe complications (such as blindness) emerge.  

The FDA also cleared Dip.io, a product developed by startup Healthy.io, as a class II medical device. This diagnostic tool can monitor urinary tract infections and track pregnancy-related complications by analyzing photos of dipstick urine tests. It’s as simple as uploading a photo; the model takes it from there.   

2019 will be a remarkable year for AI in healthcare. 

TREND 3: PREDICTIVE MAINTENANCE 

Equipment failure is one of the main causes of production downtime, a huge line-item for any asset-intensive business. However, today maintenance teams spend 80% of their time collecting data but only 20% analyzing it.  

Factory and field equipment generate mountains of unleveraged data that could go a long way to solving these issues. Alongside cameras and sensors, ML-driven algorithms can learn to check assets’ “vital signs,” catch small irregularities (a loose screw) before they turn into larger ones (a damaged turbine) and provide productivity predictions, allowing firms to plan accordingly.      
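At its simplest, catching a “small irregularity” amounts to spotting a reading that breaks sharply from an asset’s recent baseline. A rolling z-score sketch in Python, where the window size, threshold, and readings are all illustrative:

```python
from statistics import mean, stdev

def anomalies(readings, window=10, z_threshold=3.0):
    """Flag sensor readings that deviate sharply from the recent baseline.

    Uses a rolling z-score: a reading more than `z_threshold` standard
    deviations from the mean of the previous `window` readings is flagged.
    """
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

# Simulated vibration readings with one sudden spike at index 15.
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0,
             1.02, 0.98, 1.0, 1.05, 0.97, 3.5, 1.0, 1.01]
print(anomalies(vibration))  # [15]
```

Production systems replace the simple baseline with learned models of normal behavior, but the shape of the task (compare now against recent normal, flag the break) stays the same.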

With sensors becoming more affordable, and edge computing gaining momentum, machine learning will become even more heavily incorporated in industrial processes in 2019. 

TREND 4: CONVERSATIONAL AI 

When we say conversational AI, what pops into your head?  If it’s chatbots, you’re not alone. While chatbots are certainly a huge part, the technology is much broader: it is integrated across messaging apps and voice-enabled virtual assistants that go far beyond the scope of chatbots.    

In 2019 we can expect to see even more AI deployed to handle routine customer service interactions. Whether you’re booking a flight, searching for a new restaurant or requesting the arrival date of your next purchase, AI can assist you.   

Research from eMarketer shows that this year 66.6 million Americans are expected to use speech or voice recognition technology. Banking and retail are great examples of industries already using conversational AI initiatives, and as the technology continues to mature in 2019, we expect to see even more use cases in even more industries.

TREND 5: RPA / BACK OFFICE AUTOMATION 

RPA (Robotic Process Automation) covers a variety of back-office tasks that can be automated by bots. It’s not a new concept, nor is it AI. But here are some interesting facts:   

  • According to McKinsey, RPA will have an economic impact of around $6.7 trillion by 2025.   
  • Forrester Research estimates that the RPA market will grow to $2.1 billion by 2021.  

Although RPA is not considered AI (since it’s rule-based and can’t learn anything on its own), the two technologies increasingly work together. Because it automates repetitive and time-consuming tasks, RPA can save employees enormous amounts of time while ensuring processes run smoothly and precisely. AI, in turn, can enhance RPA.  

For instance, take a bank that’s onboarding a new client and needs to adhere to Know Your Customer/Anti-Money Laundering compliance regulations. RPA is great for doing a lot of the manual work. What AI can do is analyze the data the RPA bots pull in a more sophisticated manner, arming a compliance officer with more useful information.  

Whether the need is to automate processes or implement new solutions in this field, RPA has mainly been leveraged by large companies – until now. In 2019 we can expect to see small and medium-size businesses starting to adopt RPA, thanks to its clear benefits and increased popularity.  

100 most promising AI startups

Japanese version available here

We’ve come a long way since forming in 2015. Starting out as a small team, we now have four offices worldwide – Lisbon, Porto, Tokyo, and Seattle – and continue to grow every day.  

Our unique platform has helped many successful companies feed their artificial intelligence applications with training data. Using human intelligence coupled with machine-learning, we deliver project-specific, quality-guaranteed data.    

Today, we’re proud to announce that DefinedCrowd is among CB Insights’ third annual list of 100 AI startups. A research team from CB Insights selected 100 startups based on the following factors: investor profile, market potential, partnerships, competitive landscape, and team strength. 

Source: CB Insights

Companies are categorized by focus area. These focus areas aren’t mutually exclusive and include core sectors such as telecommunications, government, retail, healthcare and enterprise tech sectors such as training data (where we sit), software development, data management, and cybersecurity. 

We are pleased to be among this group of incredible AI startups, selected from an extensive list of 3k+ AI companies, and look forward to seeing these companies grow.  

It’s been a great start to 2019. And we’re very thankful to everyone who has helped get us here.  

The 100 Most Promising AI Startups

English version available here

DefinedCrowd’s AI training-data platform combines human intelligence with machine learning to deliver the training data needed to develop and improve AI applications, optimized for each customer and even for each individual project.

DefinedCrowd has been selected as one of the 100 most promising AI startups announced by the US research firm CB Insights.       

Source: CB Insights

Now in its third year, the list selects the top 100 from a field of more than 3,000 AI startups, evaluated on factors including investor profile, market potential, partnerships, competitive landscape, and company strengths.

The top 100 AI startups are categorized by their focus industry or technology area, and DefinedCrowd was selected in the ‘Training Data’ category within enterprise tech. Our data platform combines human intelligence and machine learning to provide the project-specific, high-quality training data behind many successful companies’ artificial intelligence applications.

5 Use Cases for AI in Utilities

Use Case 1: Energy Theft Prevention

An industry-wide shift to advanced metering infrastructure (AMI) has been a boon for the utilities industry, allowing providers to handle peak-load conditions and meet demand fluctuations in real time. What’s more, corresponding technologies like IoT and digital twins have vastly improved maintenance efficiency while reducing total operational costs. Overall, AMI has been a huge win for the sector as a whole.  

One downside? Anti-fraud technology has failed to keep pace with the rate of infrastructure change, leaving these automated systems highly susceptible to theft. Around the world, the numbers speak for themselves. In Canada, BC Hydro has seen electricity theft increase from 500 GWh in 2006 to 850 GWh today. Developing countries, like Brazil, have to contend with the fact that one-fifth of all generated electricity is stolen. In the US, $6 billion worth of electricity is lost to piracy every year, making energy the third most stolen good in the country (credit card information and cars rank first and second). All told, energy theft accounts for almost $100 billion in losses each year around the globe. That’s a terrible number for energy producers, but an even worse statistic for consumers, who end up bearing the brunt of those costs.  

Artificial intelligence and machine learning technologies can combat energy piracy by enabling utilities providers to leverage the vast data sets produced through AMI upgrades. Pattern-detection models equipped with entity-recognition capabilities can scan individual customer profiles and flag suspicious discrepancies between billing and usage data. In fact, automated theft-prevention pilots are already making real headway here. One such project in Brazil can accurately pick out fraud at a 65% hit rate, which outpaces similar tools on the market. 
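At its core, the discrepancy check can be sketched as a ratio between billed and metered consumption. The cutoff and account fields below are illustrative assumptions; a production system would combine many more signals (seasonality, neighborhood baselines, usage history):

```python
def flag_suspicious_meters(accounts, min_ratio=0.5):
    """Flag accounts whose billed consumption is far below metered usage.

    A billed/metered ratio well under 1.0 can indicate tampering or
    meter bypassing. The 0.5 cutoff is an illustrative assumption.
    """
    return [a["id"] for a in accounts
            if a["metered_kwh"] > 0
            and a["billed_kwh"] / a["metered_kwh"] < min_ratio]

# Invented account data: A-101 is billed for a fraction of its usage.
accounts = [
    {"id": "A-100", "billed_kwh": 950, "metered_kwh": 1000},
    {"id": "A-101", "billed_kwh": 300, "metered_kwh": 1100},
    {"id": "A-102", "billed_kwh": 480, "metered_kwh": 500},
]
print(flag_suspicious_meters(accounts))  # ['A-101']
```

Real pattern-detection models learn these thresholds per customer profile rather than applying one global cutoff, which is what lets them reach hit rates like the 65% pilot cited above.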

Thanks to AI, soon enough, energy thieves will have nowhere left to hide. 


Use Case 2: Sentiment Analysis in Digital Marketing

Building out individual consumer profiles and keeping tabs on long-term behavior doesn’t just allow utilities companies to flag potential bad actors; it also opens up opportunities to identify, reward, and retain legitimate consumers, while informing strategic initiatives to grow a company’s overall customer base.   

Sentiment analysis models can produce both macro-level analysis (sifting through social media postings and online reviews) and micro-level insights (keeping tabs on individual customer interactions with customer-service representatives and virtual agents) that allow utilities providers to better serve targeted marketing campaigns to specific geographical locales and  individual customers.

Who’s at risk of churning? Who may be looking to upgrade their services? These are all questions AI can help answer.       


Click here for an infographic with all 5 use cases.


Use Case 3: Virtual Agents and Chatbots

As digital technologies lower barriers to entry in just about every sector, asset-intensive businesses, like utilities, increasingly face challenges from new competition. With a consumer base spoiled for choice, customer-service capabilities and consumer-engagement initiatives have become all the more critical to long-term success. It’s no surprise that in Gartner’s “2019 CIO and CEO Agenda: A Utilities Perspective,” industry decision makers ranked “Customer Experience” as a top business priority for 2019.

We’ve spoken at length about how chatbots can meet growing customer expectations, saving companies time and money today while setting the foundations for one-to-one communication channels down the line. Utilities providers can draw inspiration from cutting-edge chatbots in other sectors, which do much more than regurgitate store locations and opening hours. Bank of America’s Erica helps customers understand their financial habits through animated spending paths, while Shell’s virtual avatars Emily and Ethan use Natural Language Processing to assist customers in finding specific goods in massive product databases.

Utilities chatbots could help customers track their energy usage, provide useful pointers on increasing efficiency and, through product marketing partnerships with appliance makers, even offer recommendations for energy efficient appliance upgrades while projecting the long-term savings resulting from their installation. 


Use Case 4: Predictive Maintenance

The crown jewel of AI in utilities has long been predictive maintenance. The goal? To create automated systems that continuously monitor critical infrastructure in order to:

A) Notify technicians of minor issues (think a loose screw on a wind turbine) before those issues compound into major costs (think that loose screw leading to permanent damage on the turbine’s rotor blades).

B) Analyze mass data sets from IoT-enabled infrastructure and accurately predict the likelihood that any particular asset will need to be replaced/repaired in the near to mid-term.  

The potential benefits here are enormous. Companies like Duke Energy are already saving tens of millions of dollars through early detection of dips in asset performance, which allows them to nip minor issues in the bud before they get a chance to snowball. As the technology becomes more advanced, predictive maintenance will solve other industry pain points too, such as redundancies in backup infrastructure purchases.



Click here to discover how DefinedCrowd’s data expertise puts EDP on the path to predictive maintenance.



At DefinedCrowd, we’re proud to have worked with visionary companies like EDP, who are leveraging expertly annotated computer vision training data to implement predictive maintenance practices. For more, check out our case study linked above. 

Use Case 5: Document Management

Even with advanced document management systems in place, information workers (lawyers, paralegals, accountants, and compliance officers) still waste 10% of their time chasing down improperly filed documents, or recreating lost files altogether. 

In the utilities sector, document management issues are often compounded by the vast number of subcontractors and independent distributors present in the field, all of whom bring their own unique invoicing and documentation procedures into the fray. It’s not unheard of for invoicing and other record keeping to be delivered in hand-written form at times, particularly in rural areas.

NLP models capable of entity recognition, document segmentation, and digital transcription can order the chaos by digitizing hand-written documents and segmenting contracts and invoices into individual sections and clauses. From day one, these capabilities can exponentially improve contract organization, and give those information workers back their lost time. Down the line, these models can also assist with contract creation and negotiation by executing the legal research grunt work that arms teams with provisions, briefs, and court filings relevant to specific contract disputes.
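Document segmentation can start as simply as splitting on numbered clause headings before any heavier NLP is applied. A minimal sketch, assuming clauses are introduced by lines like “1. Payment Terms”; real pipelines combine layout analysis with trained models:

```python
import re

def segment_contract(text):
    """Split a contract into clauses keyed by their numbered headings.

    Assumes clauses are introduced by lines like '1. Payment Terms',
    a simplification of what real document-segmentation models handle.
    """
    parts = re.split(r"(?m)^(\d+)\.\s+([^\n]+)\n", text)
    clauses = {}
    # re.split yields [preamble, number, title, body, number, title, body, ...]
    for i in range(1, len(parts), 3):
        clauses[f"{parts[i]}. {parts[i + 1].strip()}"] = parts[i + 2].strip()
    return clauses

contract = """1. Payment Terms
Invoices are due within 30 days.
2. Termination
Either party may terminate with 60 days notice.
"""
clauses = segment_contract(contract)
print(list(clauses))  # ['1. Payment Terms', '2. Termination']
```

Once documents are indexed clause by clause like this, entity recognition and search over individual provisions become straightforward downstream steps.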



10 Tips For Building a Successful Chatbot

“Building a bot is easy. Building a bad bot is even easier” – Norm Judah (CTO, Microsoft)

Intro:

Globally, businesses spend $1.3 trillion on 265 billion customer service calls every year. As a result, brands across industries are investing in chatbots as a way to save time (99% improvement in response times) and money, (30% average drop in cost-per-query resolution) while increasing customer satisfaction.

But, that holy trifecta only comes to fruition if the bot gets things right every single time. Without precision training data, models trip up on simple tasks, consumers get frustrated, and the whole thing falls apart. 

While an average company may look at chatbots simply as a means of cutting costs, industry-leaders understand that AI opens the door for entirely new and innovative products. Take banking customers, for example, who identified their top priorities in a study by CGI group as follows:

  • To be rewarded for their business
  • To be treated like a person
  • To be able to check their balance anytime they wish
  • To be provided with wealth-building advice
  • To be shown spending habits and given advice on how to save

Forward-thinking banks know that by investing in a chatbot today, they’re laying the groundwork for a technology that, down the line, will allow them to hit every single one of those customer priorities. They’re investing accordingly and, according to the McKinsey Global Institute, building an insurmountable advantage as a result.

With that in mind, here are my top 10 tips for keeping a chatbot initiative on the road to long-term success:

1. Know The Story:

Intents are the fundamental building blocks of task-oriented chatbots. Think of them as the problems that your agent will need to be able to resolve. In a banking scenario, these could be anything from checking an account’s balance, to wiring money, to checking branch hours. You need to understand your customers’ needs and map them out into well-defined actions (intents). Make flowcharts that delineate every possible flow of a conversation from point A to point B. Understand how the customers’ intents are interlinked, and determine whether there is a logical order between them. If you don’t do this exhaustively, your bot will be thrown off by even the slightest variations.
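Those flowcharts can be captured directly in code as a map from each intent to the follow-ups customers commonly chain onto it. A hypothetical sketch for the banking scenario; the intent names are invented:

```python
# Hypothetical intent map for a banking bot: each intent lists the
# follow-up intents customers commonly chain onto it.
INTENT_FLOW = {
    "check_balance": ["wire_money", "show_spending"],
    "wire_money": ["check_balance"],
    "branch_hours": ["branch_address"],
    "branch_address": ["branch_hours"],
}

def is_expected_followup(previous_intent, next_intent):
    """True if the transition matches a mapped conversation flow."""
    return next_intent in INTENT_FLOW.get(previous_intent, [])

print(is_expected_followup("check_balance", "wire_money"))  # True
print(is_expected_followup("wire_money", "branch_hours"))   # False
```

An explicit map like this also makes gaps visible: any transition your real users take that isn’t in the map is a flow you haven’t designed for yet.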

2. Get Your Entities Straight

If intents define the broad-level context that determines a chatbot’s capabilities, entities are the specific bits of information the bot will need in order to execute those actions. That means when a bot recognizes an intent, like wiring money let’s say, it also needs to know the recipient and monetary amount to be transferred (at the very least). Intents can be as complex as needed, containing both mandatory and optional entities (like source account or currency, in the money wiring scenario).
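One way to keep intents and their entities straight is to encode mandatory and optional entities explicitly in the intent definition. A sketch of the wire-money example; the names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    """An intent with the entities it needs before it can execute."""
    name: str
    mandatory: list
    optional: list = field(default_factory=list)

    def missing(self, collected):
        """Mandatory entities the bot still has to ask for."""
        return [e for e in self.mandatory if e not in collected]

wire_money = Intent(
    name="wire_money",
    mandatory=["recipient", "amount"],
    optional=["source_account", "currency"],
)
print(wire_money.missing({"amount": 500}))  # ['recipient']
```

Making the mandatory/optional split explicit is what lets the bot know exactly which questions it still owes the customer.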

3. Divide To Conquer

Don’t expect intents to come with all their requisite entities in just one turn. People leave things out. Nobody types, “I’m looking to wire $500 from my savings account to Mike Watson.” Things like “Wire $500” are much more common. Consider what further steps your bot will need to take in order to fill in the gaps. Zoom in on those flowcharts from step 1 and, for each intent, map out all the possible entity combinations. Design the conversation flow accordingly.
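In code, “filling in the gaps” becomes a slot-filling loop: check which mandatory entities are still missing and prompt for the first one. A hypothetical sketch, with invented prompt wording:

```python
PROMPTS = {  # hypothetical prompts, one per missing entity
    "recipient": "Who should receive the transfer?",
    "amount": "How much would you like to wire?",
}

def next_prompt(mandatory, collected):
    """Return the question for the first missing mandatory entity,
    or None when the intent is ready to execute."""
    for entity in mandatory:
        if entity not in collected:
            return PROMPTS[entity]
    return None

# Customer said only "Wire $500": amount is known, recipient is not.
print(next_prompt(["recipient", "amount"], {"amount": 500}))
print(next_prompt(["recipient", "amount"], {"amount": 500, "recipient": "Mike"}))
```

The loop runs once per turn, so however the customer chooses to dribble out the details, the conversation converges on a complete, executable intent.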

4. “If I Remember Correctly …”

Your bot needs to remember things! Keep track of recent interactions (intents and entities). People tend to ask follow-up questions, and it’s a nice touch to be able to answer without the redundancy of requesting information they’ve already provided. Imagine that a customer asks for a specific bank branch address. The bot successfully responds to the intent, and then the user asks: “And when does it open?” The best chatbots will answer immediately, understanding that the conversational subject is still that same branch. Keep in mind that the same can be true of intents: A customer may ask “What are the Greenwood branch hours?” followed by “What about Capitol Hill?”
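A minimal version of this memory is a context object that carries entities forward between turns and lets new values overwrite old ones. A sketch of the branch example above:

```python
class ConversationContext:
    """Remembers recent entities so follow-ups can reuse them."""

    def __init__(self):
        self.entities = {}

    def update(self, new_entities):
        # New values overwrite old ones; everything else carries over.
        self.entities.update(new_entities)

    def resolve(self, required):
        """Fill a follow-up question's slots from remembered context."""
        return {k: self.entities[k] for k in required if k in self.entities}

ctx = ConversationContext()
ctx.update({"branch": "Greenwood"})     # "What are the Greenwood branch hours?"
print(ctx.resolve(["branch"]))          # {'branch': 'Greenwood'}
ctx.update({"branch": "Capitol Hill"})  # "What about Capitol Hill?"
print(ctx.resolve(["branch"]))          # {'branch': 'Capitol Hill'}
```

When the customer asks “And when does it open?”, the bot resolves the missing branch entity from context instead of asking again.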

5. Know What To Do When You Don’t Know What To Do

Prepare to not understand everything your customer wants, and know how to respond accordingly. You can simply say, “Sorry, I didn’t get that,” but the best bots (like the best customer service reps) provide more useful responses, such as “I didn’t quite catch that. Do you want me to perform an online search?” Or, “I didn’t quite catch that. Do you mind asking the question a different way? Or shall I connect you to an agent?”
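A simple way to implement this is an escalating fallback: reprompt first, then offer alternatives, then hand off. The wording below is illustrative; match it to your bot’s voice:

```python
def fallback_response(failed_attempts):
    """Escalating fallback: reprompt, then offer alternatives, then hand off."""
    if failed_attempts == 0:
        return "I didn't quite catch that. Could you rephrase?"
    if failed_attempts == 1:
        return ("I'm still not sure I understand. "
                "Do you want me to perform an online search?")
    return "Let me connect you to an agent who can help."

print(fallback_response(0))
print(fallback_response(2))
```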

6. “Let’s Run It From The Top”

Even though you’ll do everything in your power to avoid it, your bot could get lost in complex conversations where customers express a high number of unique intents. That’s why users should always have the option to restart the conversation from scratch. A clean slate beats a long stream of frustrating interactions from which you won’t be able to recover.

7. Control what You Can Control

You can’t control what the customer is going to say, but you sure can control how your bot will respond. Invest in variability. Different greeting and parting phrases are a nice touch, as is addressing customers by name.

8. Quality Is Variability. Variability Is Quality.

People express the same intents and entities in a multitude of different ways. Investing in data collection that gathers comprehensive variants of how people express certain bits of information is one of the most important steps on the road to building successful virtual agents. Only then will your bot understand that “How much did I spend between November 1st and November 30th” is the same as “How much have I spent this month.”

9. Sound like a Local 

People in the Pacific Northwest might refer to their savings accounts as “rainy-day” funds, whereas customers in the deep south may prefer the term “honey-pot.” On the global scale, in the US, people like to say “checking account,” but in the UK, “main” or “current” are the more popular terms. A globalized company looking to serve a broad customer-base needs to understand how different consumer blocs speak at a granular level. That way, their bot can properly interface with every customer. Here, once again, the world’s most clever algorithm won’t save you. It’s all about the data.
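Once those regional variants are collected, a normalization layer can map every phrasing to one canonical entity value. A sketch using the examples above; the vocabulary map is an illustrative assumption:

```python
# Hypothetical regional vocabulary map: every variant normalizes to one
# canonical entity value the rest of the pipeline understands.
ACCOUNT_SYNONYMS = {
    "rainy-day fund": "savings_account",
    "honey-pot": "savings_account",
    "savings account": "savings_account",
    "checking account": "checking_account",
    "current account": "checking_account",
    "main account": "checking_account",
}

def normalize_account(phrase):
    return ACCOUNT_SYNONYMS.get(phrase.lower().strip(), "unknown")

print(normalize_account("Rainy-day fund"))   # savings_account
print(normalize_account("current account"))  # checking_account
```

In practice the map itself comes from exactly the kind of regionally diverse data collection described above, not from anyone’s guesses about how customers talk.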

10. Precision. Precision. Precision. 

To quote Google’s Peter Norvig, “More data beats better algorithms, but better data beats more data.” Collecting a lot of variants and running them through intent classifiers and entity taggers only works if that data is annotated correctly. When a customer says, “check balance,” your bot needs to understand that “check” can serve as both a noun and a verb depending on the context. Otherwise, your customers will be ramming their heads against the wall over something as simple as checking the balance of their savings account. All the data in the world does you no good if it’s improperly annotated.