Artificial Intelligence and the Future of Jobs

The growth of computer programs powered by artificial intelligence has sparked intense debate around regulatory challenges and the technology’s potential effects on employment. Are people’s concerns in these areas warranted?

From the earliest days of civilization, humans have sought to reduce the need for physical work by inventing tools. First came the wheel, around 3000 BC, which allowed food to be transported over greater distances with less manpower. We have always evolved by creating more with less effort. For many, this was negative in the short term: those who were once freight carriers lost their jobs. You might think I’m crazy to open a discussion about technology with this historical moment, but the reference is useful for demystifying the debate before we analyze the data we have.

Moving forward to the start of the industrial revolution, millions of people protested in the streets of England and the United States against the introduction of weaving machinery. On the surface, the “destruction” of jobs seemed quite high. In truth, however, these jobs were not destroyed so much as professionally transformed. Factories, with a drastic boost in production, raised the salaries of those who adapted to the machinery while simultaneously reducing their overhead costs. Both countries’ wealth grew as a result, thanks to increases in families’ disposable income and to the new jobs created to support the burgeoning weaving and spinning industry. Indeed, the number of people employed in weaving jumped from 7,900 to over 320,000 after the invention of the weaving machine.

Now, after a little history refresher, let’s return to the present.

Recently PwC, one of the world’s largest consultancies, published a global study estimating that artificial intelligence will replace 20% of today’s jobs in the UK within the next 20 years. However, the same study estimates that artificial intelligence will create just as many jobs as it replaces. Sectors at high risk include law, finance, insurance, and transportation, along with other white-collar work. Areas like education, science, information, communication and computing are among those that will be most valued in the future.

Nowadays, from the moment we wake up and look at our mobile phones until the moment we lie down and check our Facebook feed one last time, we’re in constant contact with artificial intelligence that gives us the kind of information that allows us to make better decisions. We need to accelerate the transformation of educational systems, adapting them to the new realities of the fourth industrial revolution with a particular focus on programming disciplines. We also need to find ways to support professional training programs that respond to the demands of the labor market.

Ultimately, there is no future in which machines will be able to replace what binds human beings: creativity, intuition and love. At the end of the day, perhaps AI will make us even more human.

On Smarter AI for a Better World

Before founding DefinedCrowd, our CEO Daniela Braga had a long career as a researcher at companies like Voicebox and Microsoft. A pioneer in speech technology, she became one of the earliest advocates for voice-enabled technology as a primary user interface (long before Alexa, Siri, and Cortana proved her right).

Her stance was rooted in a passion for uncovering the crossroads where technology and human experience collide. Convinced that Automated Speech Recognition (ASR) could improve all of our interactions with the world at large, she strove to build the models that would make that vision into a reality.

Quickly, she butted up against technological limits. The lack of high-quality training data so critical to constructing effective models was chief among them. She founded DefinedCrowd in 2015 as a direct result of that frustration, envisioning a company that would leverage cutting-edge technology, dynamic workflows, and innovative crowd-management practices to deliver the exact data sets researchers would need to build high-performance models.

Her passion for AI as a means of enhancing human experience permeates everything we do. We take the goal seriously. As our COO, Walter Benadof, wrote so eloquently just a few months ago, it’s imperative that every practitioner in this field maintains a core set of ethics and values as they continue to develop and mature.

That word, “practitioner,” is no accident. I use it to reinforce a concept we’ve touched on before: a “Hippocratic Oath” for AI, first proposed in Microsoft’s The Future Computed and further elaborated upon by Oren Etzioni at TechCrunch. We’ve been thinking hard about how our past and future work fits with the values stated therein and the values of our company as a whole.

We’re proud that our core competencies in Natural Language Processing, Computer Vision and Automated Speech Recognition are already making workplaces, classrooms, and ultimately the world at large more accessible, safer, and easier to navigate.

On top of that, we’re also thrilled to be in the process of developing several inspiring pilots for use cases ranging from improving healthcare interfaces to detecting preventable natural disasters (think wildfires) before they have a chance to spread out of control.

The AI sector as a whole is just scratching the surface of how the technology we create can improve the human experience. We look forward to continuing our partnerships, and forging new ones, with companies we truly consider beacons of our industry. We can’t wait to work side-by-side to unlock new use cases and technologies that truly can make our world a better place.

That’s why we do what we do, and it has been from the very beginning.

10 Tips For Building a Successful Chatbot

“Building a bot is easy. Building a bad bot is even easier.” - Norm Judah (CTO, Microsoft)

Intro:

Globally, businesses spend $1.3 trillion on 265 billion customer service calls every year. As a result, brands across industries are investing in chatbots as a way to save time (a 99% improvement in response times) and money (a 30% average drop in cost-per-query resolution) while increasing customer satisfaction.

But, that holy trifecta only comes to fruition if the bot gets things right every single time. Without precision training data, models trip up on simple tasks, consumers get frustrated, and the whole thing falls apart. 

While an average company may look at chatbots simply as a means of cutting costs, industry leaders understand that AI opens the door to entirely new and innovative products. Take banking customers, for example, who identified their top priorities in a study by CGI Group as follows:

  • To be rewarded for their business
  • To be treated like a person
  • To be able to check their balance anytime they wish
  • To be provided with wealth-building advice
  • To be shown spending habits and given advice on how to save

Forward-thinking banks know that by investing in a chatbot today, they’re laying the groundwork for a technology that, down the line, will allow them to hit every single one of those customer priorities. They’re investing accordingly, and according to the McKinsey Global Institute, they’re building an insurmountable advantage as a result.

With that in mind, here are my top 10 tips for keeping a chatbot initiative on the road to long-term success:

1. Know The Story:

Intents are the fundamental building blocks of task-oriented chatbots. Think of them as the problems that your agent will need to be able to resolve. In a banking scenario, these could be anything from checking an account’s balance, to wiring money, to checking branch hours. You need to understand your customers’ needs and map them out into well-defined actions (intents). Make flowcharts that delineate every possible flow of a conversation from point A to point B. Understand how the customers’ intents are interlinked, and determine whether there is a logical order between them. If you don’t do this exhaustively, your bot will be thrown by even the slightest variations.
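To make this concrete, here’s a minimal sketch of an intent map for a hypothetical banking bot. The intent names, cue words, and follow-up links are illustrative assumptions, not a production classifier:

```python
# Toy intent map for a hypothetical banking bot (all names illustrative).
# Each intent lists the keyword cues that trigger it and the intents that
# commonly follow it, mirroring the conversation flowcharts described above.

INTENTS = {
    "check_balance": {"cues": ["balance", "how much"], "next": ["wire_money"]},
    "wire_money":    {"cues": ["wire", "transfer", "send"], "next": ["check_balance"]},
    "branch_hours":  {"cues": ["hours", "open", "close"], "next": []},
}

def classify_intent(utterance: str) -> str:
    """Return the first intent whose cue word appears in the utterance."""
    text = utterance.lower()
    for intent, spec in INTENTS.items():
        if any(cue in text for cue in spec["cues"]):
            return intent
    return "unknown"  # see tip 5: know what to do when you don't know

print(classify_intent("Can I wire $500 to Mike?"))  # wire_money
```

A real system would use a trained classifier rather than keyword matching, but the point stands: the intents and their interconnections must be mapped out before any model is built.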

2. Get Your Entities Straight

If intents define the broad-level context that determines a chatbot’s capabilities, entities are the specific bits of information the bot will need in order to execute those actions. That means when a bot recognizes an intent, like wiring money, let’s say, it also needs to know the recipient and monetary amount to be transferred (at the very least). Intents can be as complex as needed, containing both mandatory and optional entities (like source account or currency, in the money-wiring scenario).
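One way to sketch that schema in code, using the money-wiring example (entity names are illustrative assumptions):

```python
# Toy entity schema per intent (names illustrative). Mandatory entities
# must be present before the intent can execute; optional ones refine it.

ENTITY_SCHEMA = {
    "wire_money": {
        "mandatory": ["recipient", "amount"],
        "optional": ["source_account", "currency"],
    },
}

def missing_entities(intent: str, found: dict) -> list:
    """Which mandatory entities are still missing for this intent?"""
    schema = ENTITY_SCHEMA.get(intent, {"mandatory": []})
    return [e for e in schema["mandatory"] if e not in found]

print(missing_entities("wire_money", {"amount": "500"}))  # ['recipient']
```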

3. Divide To Conquer

Don’t expect intents to come with all their requisite entities in just one turn. People leave things out. Nobody types, “I’m looking to wire $500 from my savings account to Mike Watson.” Things like “Wire $500” are much more common. Consider what further steps your bot will need to take in order to fill in the gaps. Zoom in on those flowcharts from step 1 and, for each intent, map out all the possible entity combinations. Design the conversation flow accordingly.
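A minimal slot-filling sketch for the same hypothetical wiring intent; the prompts and slot names are assumptions:

```python
# Slot-filling sketch: for each mandatory entity still missing, the bot asks
# a follow-up question instead of failing. Prompts are illustrative.

PROMPTS = {
    "recipient": "Who should receive the money?",
    "amount": "How much would you like to wire?",
}
MANDATORY = ["recipient", "amount"]

def next_prompt(found: dict):
    """Return the follow-up question for the first missing slot, or None."""
    for slot in MANDATORY:
        if slot not in found:
            return PROMPTS[slot]
    return None  # all slots filled; ready to execute the intent

# "Wire $500" gives us the amount but not the recipient:
print(next_prompt({"amount": "500"}))  # Who should receive the money?
```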

4. “If I Remember Correctly …”

Your bot needs to remember things! Keep track of recent interactions (intents and entities). People tend to ask follow-up questions, and it’s a nice touch to be able to answer without the redundancy of requesting information they’ve already provided. Imagine that a customer asks for a specific bank branch address. The bot successfully responds to the intent, and then the user asks: “And when does it open?” The best chatbots will answer immediately, understanding that the conversational subject is still that same branch. Keep in mind that the same can be true of intents: A customer may ask “What are the Greenwood branch hours?” followed by “What about Capitol Hill?”
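One way to sketch that context carryover; the class and slot names are hypothetical:

```python
# Context-carryover sketch: the bot remembers recently mentioned entities so
# a follow-up like "And when does it open?" inherits the earlier branch.

class Context:
    def __init__(self):
        self.entities = {}

    def update(self, new_entities: dict):
        self.entities.update(new_entities)

    def resolve(self, found: dict, needed: list) -> dict:
        """Fill missing slots from recent context before re-asking the user."""
        merged = dict(found)
        for slot in needed:
            if slot not in merged and slot in self.entities:
                merged[slot] = self.entities[slot]
        self.update(merged)  # newest values become the active context
        return merged

ctx = Context()
ctx.update({"branch": "Greenwood"})
# "And when does it open?" carries no branch entity of its own:
print(ctx.resolve({}, ["branch"]))  # {'branch': 'Greenwood'}
```

Note how a new explicit mention (“What about Capitol Hill?”) would overwrite the remembered branch, matching the behavior described above.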

5. Know What To Do When You Don’t Know What To Do

Prepare to not understand everything your customer wants, and know how to respond accordingly. You can simply say, “Sorry, I didn’t get that,” but the best bots (like the best customer service reps) provide more useful responses, such as “I didn’t quite catch that. Do you want me to perform an online search?” Or, “I didn’t quite catch that. Do you mind asking the question a different way? Or shall I connect you to an agent?”

6. “Let’s Run It From The Top”

Even though you’ll do everything in your power to avoid it, your bot could get lost in complex conversations where customers express a high number of unique intents. That’s why users should always have the option to restart the conversation from scratch. A clean slate beats a long stream of frustrating interactions from which you won’t be able to recover.

7. Control What You Can Control

You can’t control what the customer is going to say, but you sure can control how your bot will respond. Invest in variability. Different greeting and parting phrases are a nice touch, as is addressing customers by name.
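A minimal sketch of response variability; the phrasings are illustrative:

```python
# Variability sketch: rotate among equivalent greetings and address the
# customer by name. The phrase list is illustrative, not a real template set.
import random

GREETINGS = [
    "Hi {name}, how can I help today?",
    "Hello {name}! What can I do for you?",
    "Welcome back, {name}.",
]

def greet(name: str, rng=random) -> str:
    """Pick a random greeting template and fill in the customer's name."""
    return rng.choice(GREETINGS).format(name=name)

print(greet("Ana"))
```

Passing a seeded `random.Random` instance makes the choice reproducible in tests while staying varied in production.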

8. Quality Is Variability. Variability Is Quality.

People express the same intents and entities in a multitude of different ways. Investing in data collection that gathers comprehensive variants for how people express certain bits of information is one of the most important steps on the road to building successful virtual agents. Only then will your bot understand that “How much did I spend between November 1st and November 30th?” is the same as “How much have I spent this month?”

9. Sound like a Local 

People in the Pacific Northwest might refer to their savings accounts as “rainy-day” funds, whereas customers in the Deep South may prefer the term “honey-pot.” On a global scale, people in the US like to say “checking account,” but in the UK, “main” or “current” are the more popular terms. A globalized company looking to serve a broad customer base needs to understand, at a granular level, how different consumer blocs speak. That way, its bot can properly interface with every customer. Here, once again, the world’s cleverest algorithm won’t save you. It’s all about the data.
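A toy normalization table illustrating the idea; the term lists are made up for illustration, not a real locale lexicon:

```python
# Locale-normalization sketch: map regional terms onto one canonical entity
# before intent classification. Term lists are illustrative assumptions.

SYNONYMS = {
    "rainy-day fund": "savings_account",
    "honey-pot": "savings_account",
    "current account": "checking_account",
    "main account": "checking_account",
    "checking account": "checking_account",
}

def normalize_account(utterance: str) -> str:
    """Return the canonical account entity for the first regional term found."""
    text = utterance.lower()
    for term, canonical in SYNONYMS.items():
        if term in text:
            return canonical
    return "unknown_account"

print(normalize_account("How much is in my rainy-day fund?"))  # savings_account
```

In practice these mappings come from the data itself, which is exactly why region-aware data collection matters.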

10. Precision. Precision. Precision. 

To quote Google’s Peter Norvig, “More data beats better algorithms, but better data beats more data.” Collecting a lot of variants and running them through intent classifiers and entity taggers only works if that data is annotated correctly. When a customer says, “check balance,” your bot needs to understand that “check” can serve as both a noun and a verb depending on context. Otherwise, your customers will be ramming their heads against the wall over something as simple as checking the balance of a savings account. All the data in the world does you no good if it’s improperly annotated.
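A toy heuristic showing why part-of-speech context matters when annotating “check”; the rule is deliberately crude and purely illustrative:

```python
# Annotation sketch: "check" is a verb in "check balance" but a noun in
# "deposit a check". A crude positional rule illustrates why annotators
# must label part of speech in context rather than word by word.

def tag_check(tokens: list) -> list:
    """Tag each occurrence of 'check' as VERB or NOUN; everything else OTHER."""
    tags = []
    for i, tok in enumerate(tokens):
        if tok == "check":
            prev = tokens[i - 1] if i > 0 else ""
            # after an article, possessive, or a verb like 'deposit',
            # 'check' reads as a noun; sentence-initially, as a verb
            tags.append("NOUN" if prev in {"a", "the", "my", "deposit"} else "VERB")
        else:
            tags.append("OTHER")
    return tags

print(tag_check(["check", "balance"]))       # ['VERB', 'OTHER']
print(tag_check(["deposit", "a", "check"]))  # ['OTHER', 'OTHER', 'NOUN']
```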

An Interview With Dinheiro Vivo

At last week’s Web Summit, we were lucky enough to sit down with Dinheiro Vivo, a leading financial publication in Portugal. Our conversation touched on everything from the quality-focused approach to training data to AI use-cases across industries. Watch the full interview here (in Portuguese), or check out the English transcription below:

Dinheiro Vivo [DV] – For those who still do not know your work, what does DefinedCrowd do?

Daniela Braga [DB] – We are a data collection and cleaning platform for Machine Learning and for Artificial Intelligence. Our platform combines crowdsourcing with machine learning. A mixture of people and machines working at the same time.

Artificial Intelligence is the artificial imitation of a human brain. To develop our own intelligence, we go to school and read many books. It takes a lifetime for a person to be able to react and make decisions in their daily life. The way machines learn is similar. But with the computational capacity now possible using the cloud and our platform, in just three months we can combine thousands of human brains in the same computational memory.

DV – And then (the data) is used in applications that we use every day?

DB – Namely Apple’s Siri, Google Assistant, and Alexa. Also self-driving cars, and even more industrial applications, like machines doing quality control instead of people, or automatic flight controllers at airports.

DV – And this year was a dazzling year, an investment round, new partners, an office in Tokyo. What have you done to achieve this success?

DB – Our largest market is still the United States, followed by Japan and then Europe. Clients there are more open; they’re investing in Artificial Intelligence.

There are many companies in machine learning, but practically no one is doing data cleaning and treatment like us. It’s like the picks and shovels of the gold rush: we are making the picks and shovels of AI, of modern times.

DV- And coming to Web Summit also makes all the difference, right?

DB – Our growth milestones have basically aligned with those of Web Summit. We have been here since 2016, the first year Portugal hosted Web Summit, when we had just closed our seed round. Last year (2017) we basically met the group of investors who closed our Series A. And this year we are here on the list of the top 10 AI companies in the world.

DV – And finally, when you leave Web Summit, what do you expect to take with you?

DB – This year it’s basically a visibility and recruitment maneuver: we are in an aggressive recruitment phase. We want to demonstrate that this is really the best place to work in Portugal. We’re also looking to continue developing partnerships and to solidify our go-to-market strategy. Next year, I would like 50% of our revenue to come from partnerships.

Wrapping up Web Summit

According to The Atlantic, it’s the place “where the future goes to be born.” The always descriptive New York Times has dubbed it “a grand conclave of the tech industry’s high priests.” After three consecutive trips to Web Summit, we’ll go ahead and call it one of the most thought-provoking events we attend year after year. We’ve already marked our calendars for 2019 and can’t wait to see faces old and new next November in Lisbon.

This year’s highlights included a DefinedCrowd pitch on the Growth Summit Stage and a fascinating panel on the future of AI featuring our founder, Daniela Braga, alongside Sam Liang (AISense), Jean-Francois Gagné (Element AI), and Vasco Calais Pedro (Unbabel), all leaders of scaling startups with their fingers on the pulse of technological innovation and global enterprise. It was also a pleasure to share the stage with António Mexia, CEO of EDP, the largest energy provider in Portugal as well as one of our clients and investors.

“We learn during years and years. We go to school, we read many books. It takes a lifetime for a person to be able to react and make decisions in their daily lives. The way machines learn is similar. But with the computational capacity that is currently possible, with the cloud and with our platform, in 3 months it is possible to combine thousands of human brains in the same computational memory.”

- Daniela Braga, DefinedCrowd Founder and CEO

Missed those speeches and panels? Don’t worry. Daniela spent a lot of time off-stage sharing thoughts (like the one quoted above) on the present and future of AI with outlets such as Expresso, RTP, Journal Economico, Noticias ao Minuto, TVI 24, and Dinheiro Vivo. We’ve linked them all here [in the original Portuguese] for your convenience. Go ahead and have a look!

Of course, the most enjoyable part of events like Web Summit is the conversations that are sparked after a speech, panel, or simply as a result of wandering in front of the right booth. We know how valuable those discussions can be as they develop into formalized relationships with future investors, clients, thought-leaders, and partners.

We’re happy to have started a lot of these dialogues with contacts across industries and media at booth G111. To those we met, it was a pleasure. Let’s keep talking. To those we didn’t (or those who misplaced our business cards), we’re always available at pr@definedcrowd.com. We’re looking forward to hearing from you.

See you all next year!

We’re teaming up with IBM to make high-quality training data more accessible than ever

Our product Integration with Watson Studio means researchers can now access high-quality training data to train, test and build models all on one platform.

Today marks another huge product milestone here at DefinedCrowd: I’m very excited to announce our product integration with IBM’s Watson Studio. A lot of hard work went into my being able to type those lines this morning, all well worth it for us and our end-users. With DefinedCrowd’s data solutions embedded within Watson Studio, customers can now source, structure, and enrich training data directly within Watson Studio.

From a logistical perspective, users will be able to set up DefinedCrowd’s customizable data workflows through a dedicated user interface unique to Watson Studio, the goal being to offer a seamless, one-stop solution for researchers looking to build, train, and test AI models all on the same platform.

In addition to more accessible data, we’re also providing users with quality guarantees that will ensure high-performing results. We’re launching this collaboration with two of our most in-demand workflows, Image Tagging and Text Sentiment Analysis. It’s crucial that these sorts of datasets are sourced and delivered with precision and accuracy, as we’ve detailed in our series on data-labeling. Earning the trust of a tech giant like IBM in handling such critical workflows is a real testament to our work so far.

It’s been a treat to work alongside IBM during this process. A product integration like this doesn’t just increase our exposure externally. Internally, we also get a chance to really examine our core product offerings and focus on how they can be expanded and improved. My team is hard at work on exactly that, and we can’t wait to share all the things we’re building with you. I promise you’ll be hearing from me again very soon.

This integration with IBM fits an emerging pattern of joint initiatives between DefinedCrowd and various tech leaders. Earlier this year, we were chosen as an official Amazon Alexa Skills partner, and we’ll be announcing another big collaboration later this year. Joint efforts like these are an enormous part of our push to improve our product offerings and serve a wider array of clients.

We’re ambitious here. Our goal is, and always has been, to stake our claim as the first-choice service provider for high-quality AI training data. Product integrations like this one with IBM are massive for expanding our product capabilities (always my number one priority) and diversifying our client roster. We’re well on our way on both fronts.

So, if you’re an IBM Watson Studio user, we’ll be right there to help the next time you’re building an image recognition or text sentiment analysis model.

Not a Watson Studio customer? Or, need something other than image tagging and sentiment annotation services? Worry not. Check out our wide array of data solutions or email us at sales@definedcrowd.com.

Why precision data-labeling remains the essential ingredient for successful AI

When we talk about AI, things like driverless cars, lightning-speed medical diagnoses, and smart infrastructure tend to dominate the conversation. That makes sense. On any given day, you might find us talking about those same scenarios around our water coolers too. But, we’d be naive to think that future use-cases like these are simple inevitabilities. For AI to deliver on its potential, we’ll have to peel back the curtain and scrutinize how these models are made.

It’s no secret that the vast majority (90%) of data floating through the digital realm is unstructured. As a result, it’s critical that AI models are properly trained to make sense of that ever-growing pile of text, images, audio and video.

That’s why precisely annotated data is to AI models what high-quality ingredients are to a fine meal. With strong datasets as a base, AI “chefs” can confidently focus on their craft. Without them, they’re trying to make French onion soup with no butter and a bag of rotten onions. Things can only end badly.

While we’ve had thousands of years to perfect the art of cultivating produce and harvesting grains, we’re not so far along when it comes to AI training data. Of course, everybody knows that better ingredients make for better products, but as an industry, we’re still in the early years. Right now, that means we’re constantly discovering all the minute, seemingly inconsequential details that can cause a training dataset to “spoil.”

AI and ML scientists know this all too well. Right now, most of them spend more than half their time retroactively “scrubbing” tainted training data, trying to salvage what they can.

Take text sentiment annotation, for example. The goal is deceptively simple: Does this sentence express positivity, negativity, or neutrality? However, when you consider the domain-specific, ever-in-flux slang that dominates subcultures across social channels, you start to understand all the ways things can go wrong.

To illustrate the point, consider two sentences: “What a screamer!” and “What a howler!” On the surface, they share the same structure and meaning. Agree? Good. But now let’s pretend we’re tweeting about the World Cup final. In soccer lingo, a “screamer” connotes an epic goal, while a “howler” indicates a boneheaded mistake. Those two sentences we agreed were effectively the same now carry completely opposing meanings and, with them, opposing sentiments.
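The screamer/howler flip can be sketched as a domain-conditioned lexicon; this is a toy illustration with made-up polarity tables, not a real sentiment model:

```python
# Domain-conditioned sentiment sketch: the same word flips polarity depending
# on context (general chat vs. soccer). Lexicons are illustrative assumptions.

SENTIMENT = {
    "general": {"screamer": "positive", "howler": "positive"},  # read as enthusiasm
    "soccer":  {"screamer": "positive", "howler": "negative"},  # blunder vs. epic goal
}

def label(sentence: str, domain: str) -> str:
    """Label a sentence using the lexicon for the given domain."""
    text = sentence.lower()
    for word, polarity in SENTIMENT[domain].items():
        if word in text:
            return polarity
    return "neutral"

print(label("What a howler!", "soccer"))   # negative
print(label("What a howler!", "general"))  # positive
```

Real annotation pipelines encode this by giving annotators explicit domain context, which is precisely the kind of edge case precision data-labeling is meant to catch.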

That seemingly small variation would make a world of difference for, say, a Sports Marketing firm deciding when to put a jersey on sale, or unfortunately but more crucially, where a police force might need to deploy extra protective measures after a big match.

[Image: a soccer player executes an overhead kick.] If he makes it, it’s a “screamer.” If he shanks it off his foot, a “howler.” In soccer parlance, the two are worlds apart.

Not only do data researchers need to be cognizant of specific social contexts, they also need to pay close attention to the biases that arise out of common social contexts. In the realm of computer vision, particularly facial recognition technology, we’ve seen how harmful poorly considered datasets can be in perpetuating inequity by excluding people from access to new technologies.

The truth is that while data annotation may not garner the same buzz as the sci-fi future use-cases we all know so well, if we don’t really scrutinize and refine our processes for cultivating precision datasets, we’re going to see a lot of firms trying to serve full tasting menus with empty pantries.

That’s why at DefinedCrowd we’re always pushing for new ways to scrutinize data collection and annotation processes and to anticipate the edge cases that other firms let slide through the cracks. Check out our use-cases to learn more about how we guarantee clients high-quality data at speed and scale. To see what high-quality data can do for you, request a trial or email us at sales@definedcrowd.com.