by William Walsh | Jul 11, 2023 | 2023
As previously published on 6/28/23 in MarTech
By Scott Gillum
Estimated read time: 5 Minutes
What if they are wrong?
When responding to questions about AI replacing humans in certain roles, most 'experts' claim that AI will replace some jobs but will prove far more valuable as a tool for augmenting human intelligence and ability.
Amid all the hype associated with this latest technology wave, an important trend is unfolding across industries that could significantly change the impact of AI: the retirement of the knowledge worker.
We need look no further than the last wave of intelligent technology, the "Internet of Things" (IoT), to see the impact.
The term "Internet of Things" was coined in 1999 by computer scientist Kevin Ashton. While working at Procter & Gamble, Ashton proposed putting radio-frequency identification (RFID) chips on products to track them through the supply chain.
"Machines talking to machines" began rolling out in the early-to-mid 2010s, making its way into manufacturing, precision agriculture, and complex information networks, and reaching consumers in a new wave of wearables.
Now, with about a decade of experience of how IoT has affected certain industries and markets, we can draw some interesting insights about the future of AI.
In 2010, Cisco launched its "Tomorrow Starts Here" IoT campaign, at a time when communication networks were transitioning from hardware "stacks" to software-defined networking (SDN).
The change meant that carriers no longer needed to "rip and replace" hardware in order to expand their bandwidth; they only needed to upgrade the software. This transition began the era of machines monitoring their own performance and communicating with each other, with the promise of one day producing self-healing networks.
Over this same period, the network engineers who ushered in the transition from analog to digital began retiring. These experienced knowledge workers are being replaced by technicians who understand the monitoring tools, but not necessarily how the network works.
Over the last dozen years, networks have grown in complexity to include cellular, and the number of connections has grown exponentially. To help manage this complexity, numerous monitoring tools have been developed and implemented.
The people reading those alerts see the obvious, but they have a difficult time interpreting the issue or deciding what to prioritize. The reason: the tool knows there is an issue, but it is not yet smart enough to know how to fix it, or whether it will take care of itself. Technicians end up chasing "ghost tickets," alerts that have resolved themselves, resulting in lost productivity.
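To make the "ghost ticket" problem concrete, here is a minimal sketch, in Python, of the kind of triage logic an operations team might wrap around an alert stream: hold each new alert for a short observation window, then open a ticket only if the fault persists. The fields and the fault lookup are invented for illustration; this is not any particular vendor's API.

```python
import time

# Hypothetical in-memory view of current faults; in a real system this
# would come from the monitoring platform (an assumption, not a real API).
CURRENT_FAULTS = {"router-7": {"packet_loss_above_threshold"}}

def still_failing(source: str, condition: str) -> bool:
    """Re-check whether the alerting condition is still present."""
    return condition in CURRENT_FAULTS.get(source, set())

def triage(source: str, condition: str, window_s: float = 0.1) -> bool:
    """Hold a new alert for one observation window, then ticket only if
    the fault persists; transient 'ghost' alerts are dropped."""
    time.sleep(window_s)  # let transient conditions settle
    return still_failing(source, condition)

print(triage("router-7", "packet_loss_above_threshold"))  # True: persistent fault
print(triage("switch-2", "link_flap"))                    # False: a ghost ticket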
The same thing is repeating itself in marketing today. As one CMO told me: "I can find people who know the technologies all day long, but what I can't find is someone who thinks strategically. Ask a marketing manager to set up the tools and run a campaign and they have no problem, but ask them to write a compelling value proposition or offer for the campaign, and they will struggle."
It’s easy to get sucked into the tools. AI generators are really intriguing and can do some amazing things. But based on what we have seen, the tools are not smart enough to fully deliver on their promise…yet.
Here's the warning from IoT: as the tools become more knowledgeable, the workforce operating them is becoming less so, leaving a knowledge gap. As that knowledge is transferred from worker to machine, we need to ask ourselves what we'll be left with. Will there be enough experience and expertise in our workers to know whether what comes out of the machine is accurate, fictitious, or even dangerous?
In a recent WSJ article, Melissa Beebe, an oncology nurse, commented on how she relies on her observation skills to make life-or-death decisions. When an alert said her patient in the oncology unit of UC Davis Medical Center had sepsis, she was sure the AI tool monitoring the patient was wrong.
“I’ve been working with cancer patients for 15 years so I know a septic patient when I see one,” she said. “I knew this patient wasn’t septic.”
The alert correlated an elevated white blood cell count with septic infection. It didn't take into account that this particular patient had leukemia, which can cause similar blood counts. The algorithm, which was based on artificial intelligence, triggers the alert when it detects patterns that match those of previous patients with sepsis.
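To see how such an alert can misfire, here is a minimal sketch of a toy threshold rule of the kind the article describes. The thresholds and fields are invented for illustration; this is not the actual UC Davis model.

```python
# Illustrative only: a toy rule showing how a pattern-matching alert can
# misfire on a confounder it was never taught about. Thresholds and
# fields are invented; this is not the model described in the article.

def sepsis_alert(wbc_k_per_ul: float, heart_rate: int) -> bool:
    """Fire when vitals match patterns seen in prior sepsis patients."""
    return wbc_k_per_ul > 12.0 and heart_rate > 90

# A leukemia patient can present the same elevated white cell count, so
# the rule fires even though the cause is not infection.
print(sepsis_alert(wbc_k_per_ul=45.0, heart_rate=95))  # True: false alarm

def sepsis_alert_with_context(wbc_k_per_ul: float, heart_rate: int,
                              has_leukemia: bool) -> bool:
    """The same rule, aware of one confounder a nurse would know about."""
    if has_leukemia:
        return False  # elevated counts expected; defer to clinical judgment
    return sepsis_alert(wbc_k_per_ul, heart_rate)

print(sepsis_alert_with_context(45.0, 95, has_leukemia=True))  # False
```

The second version isn't a fix; the point is that the confounder had to come from a human who knew the patient.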
Unfortunately, hospital rules require nurses to follow protocols when a patient is flagged for sepsis. Beebe could override the AI model with a doctor's approval, but she faces disciplinary action if she's wrong. It's easy to see the danger of removing human intelligence in this case; it also illustrates the risk of over-relying on artificial intelligence.
AI will free us from low-value tasks, and that is a good thing, but we need to redistribute that time to developing our people and our teams. The greatest benefit from these game-changing technologies in the business-to-business environment will be realized when we combine equal amounts of human and machine intelligence.
by William Walsh | Feb 21, 2023 | 2023
By Scott Gillum
Estimated read time: 5 Minutes
Imagine adding one of the world's greatest artists to your creative team. Or how about saving time and money on creative brainstorming by starting with a first draft to inspire the team? How about creating dozens of creative concepts in the time it currently takes to develop a handful?
Intrigued? This is the promise of AI creative engines for B2B marketing. And with that possibility, could it also help make B2B marketing as sexy and exciting as B2C advertising?
The big story of this year will be the widespread adoption of AI creative tools. ChatGPT is just the beginning. Expect to see agencies widely embrace AI for all types of creative work, not just content. OpenAI's CEO, Sam Altman, sees the "greatest application of AI for creative use," not in replacing blue-collar jobs as many had predicted.
After experimenting with AI tools for the last couple of months, I can say I totally agree with Sam's statement. In fact, the image used for this blog was created using Jasper's AI image generator.
I selected Salvador Dali as my inspiration and acrylic paint as the style, set the context of creating an exciting and bold prediction for the future, and out popped the image you see. There were almost endless combinations to choose from, spanning style, mood, and everything in between.
Rather than replacing people, these tools have the potential to be incredibly useful toolsets that serve, in a sense, as an "inspiration engine" for creatives. The potential increase in productivity is huge!
In fact, think of the possibility of having a famous artist inspire the creative team. Instead of replacing people, you're gaining access to an incredible talent pool that would otherwise be impossible to reach. Talk about shaking up a B2B world that tends to lean on technical language and images of products.
The creative process is nothing if not iterative. Imagine how fast the team can play around with concepts by using AI content generators to create first, second, and third drafts that are then edited and approved by a human as final copy.
Consider the time saved by having a designer build the exact image they want, rather than scanning Getty Images for hours on end. Oftentimes, just getting started creates a delay. Working off an AI generated first draft could accelerate the process – at least, that’s the hope.
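Here is a minimal sketch, in Python, of the human-in-the-loop drafting workflow described above. The generate_draft and human_review functions are hypothetical stand-ins for an AI content tool and an editor's pass, not any specific product's API.

```python
# A sketch of human-in-the-loop drafting. generate_draft() and
# human_review() are hypothetical stand-ins for an AI content tool and
# an editor's review, not any specific product's API.

def generate_draft(brief: str, feedback: str = "") -> str:
    """Placeholder for a call to an AI content generator."""
    revision = f" [revised per: {feedback}]" if feedback else ""
    return f"[AI draft for: {brief}]{revision}"

def human_review(draft: str) -> tuple[bool, str]:
    """Placeholder for an editor's pass: approve, or send back notes."""
    approved = "revised per" in draft  # pretend round two is good enough
    return approved, "tighten the value proposition"

def drafting_loop(brief: str, max_rounds: int = 3) -> str:
    draft = generate_draft(brief)
    for _ in range(max_rounds):
        approved, feedback = human_review(draft)
        if approved:
            break  # a human, not the machine, signs off on final copy
        draft = generate_draft(brief, feedback)
    return draft

print(drafting_loop("Launch email for a B2B analytics platform"))
```

The design point is the loop itself: the machine proposes, a human disposes, and nothing ships without the approval step.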
Now the warning. It is very easy to write long form and short form copy using AI tools, especially for digital ads, blog posts and emails. For B2B marketers, this could mean that there is an easily accessible cornucopia of content to blast out to prospects. Please use these tools judiciously.
The assumption that more content equals more engagement is incorrect. Relevant content is closer to the mark, but it still requires insight and strategic thinking, which you will not find in AI tools. The new generation of AI generators is very powerful, particularly with the coming release of GPT-4 and the advancements made in language modeling.
The tools still take time to learn; it is not as simple as input and output. Numerous variables need to be refined or manipulated by humans in order to achieve a quality output, and understanding how to best utilize their power takes practice. The expression "crap in, crap out" still holds true.
Agencies and companies that carefully experiment with these new tools now will be better off in the long run as the technology continues to advance. And here's another warning, based on what we have learned from young marketers using technologies: the tools are not the goal.
It's not enough to know how to use the tool; you still need to think creatively. AI generators can be great tools to aid the creative process; don't let them become the process.
They won't become a threat to replace humans if you understand how to use them properly. And no, I didn't use an AI content engine to write this blog…or did I?
by Sonita Reese | Sep 25, 2019 | 2019, Observations
by Glen Drummond
Estimated read time: 6 minutes
Part Two in a two-part series
Recently, I published an article with a provocative observation. While much attention has been devoted to the need for organizations to adopt Artificial Intelligence as a core capability, we should consider an even-more-pressing need for “artificial empathy.”
If you did not read part one, I'll retrace some footsteps here. The corporation is a creature of human invention. But the creature has grown so enormously in its size, capabilities, and power that we the people now encounter a diminishing sense of agency for ourselves and an increasing sense of agency for corporations to shape our future on issues including privacy, equality, safety, the environment, and the behavior of the public institutions that once governed these things. Not to mention the stuff of everyday experience: stupid IVRs, impenetrable clamshell packaging, and infuriating password implementations, just to name a few.
The ramifications of this observation extend beyond marketing strategy. But still, people who think deeply about the relationship between people and brands will play a role in how this narrative unfolds.
And here’s why: In our fast-thinking minds, we perceive the brands that stand for corporations as if they were other people.
Now, people, except for sociopaths, are naturally empathetic. Moreover, we expect them to be so. When we sense a sociopath, the hair on our necks stands up, and adrenaline shocks our bloodstream.
As social creatures, we are born pre-wired with miraculously-adapted endocrine and neurological systems that reinforce our empathy in a positive feedback system known as friends and family, community and kin. But corporations are not born with anything of the sort.
Do you see the problem?
At least in our hearts, we have an expectation for brands to behave in a way that they are poorly equipped to fulfill. Expectations disappointed are brands diminished.
Organizational scale amplifies this problem. (We all know what "faceless corporation" means.) So does the doctrine of maximizing shareholder profits. Are there signs that both society and corporate leaders are beginning to discern that the corporation has gained such power that the power needs to be matched with greater empathy? The recent "statement of social purpose" by 181 corporate leaders suggests this might be so.
The question is how? Some people who read my first post may have been under the impression that I had a plan for how “artificial empathy” could be created. Rest assured this was far from the case. I’m sympathetic to the aspirations of the customer experience movement, but I’m skeptical those aspirations are advanced by continuing to ask socially clueless questions that amount to: “How do you like me now?”
Still, having once stumbled upon the problem of artificial empathy, it's tempting to speculate. So, with apologies for pairing a ten-dollar question with nickel-and-dime answers, here are some preliminary thoughts.
Biomimicry
If you're familiar with the literature on biomimicry, you will know that many industrial inventions begin with the observation of patterns in nature. Could we re-conceive the information systems used by corporations through this lens?
In that case, the challenge of “artificial empathy” would cause us to think about a system involving a sensory apparatus, a cortex that integrates the signals from the senses, real-time feedback, amplifier mechanisms and so on.
It does not take long to see that analogues for each of these things already exist within the information systems of corporations – but what’s lacking is an architecture marshalled by the imperative of empathy.
For humans as social creatures – empathy is essential for survival. Embracing the biomimicry idea in an IT architecture geared to artificial empathy would mean that the selfish subjectivity of the corporation would need to be subjugated to human experience and dignity. Do we have engineers this creative and leaders this courageous?
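As a thought experiment only, here is a minimal sketch of what the skeleton of such a sense-integrate-respond loop might look like in code. All names and numbers are invented for illustration; this is an analogy, not a product design.

```python
# A thought-experiment skeleton of an "artificial empathy" loop modeled
# on a biological sense-integrate-respond cycle. Every name and number
# here is invented for illustration.

from statistics import mean

def senses() -> list[float]:
    """Sensory apparatus: distress signals normalized to [0, 1]
    (e.g., support-call sentiment, review scores, churn risk)."""
    return [0.8, 0.6, 0.9]  # stubbed readings for the sketch

def cortex(signals: list[float]) -> float:
    """Integrate the senses into a single distress reading."""
    return mean(signals)

def respond(distress: float) -> str:
    """Feedback and amplification: act in proportion to human distress,
    not in proportion to a revenue metric."""
    if distress > 0.7:
        return "escalate to a human with authority to fix the problem"
    return "keep listening"

print(respond(cortex(senses())))  # these readings warrant escalation
```

Note what the sketch makes explicit: the response is driven by the distress signal rather than by a business metric, which is exactly the subjugation of corporate subjectivity that the biomimicry framing calls for.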
Philosophy
There is a branch of philosophy, epistemology, that deals with the question of how we know what we know. Historically, for corporations, and indeed any large organization, operating at scale has required an internal representation of customers and prospects that is shared across the organization. Sometimes this internal representation goes out of date. Sometimes it is simply wrong-headed from the start. Invariably, it is reductive.
Done well, the disciplines of customer segmentation and personas offer steps in a journey away from the most reductive internal representations of the corporation's publics. But too often in practice, people mistake the map for the territory. In a product-centric worldview with no imperative for empathy, mistaking the customer map for the territory is standard operating procedure, "best practice" even. In a corporation seeking to attain the capacity of artificial empathy, these old habits must die.
While corporations have raced to hire data scientists and put them to work analyzing customer behavior and customer responses to various stimuli, they have not been as quick or adept at hiring and training people in the discipline of keeping the map separate from the territory while the study of people is underway.
The pairing of these disciplines feels important going forward. Data scientists are in demand now. Data scientists with a flair for philosophy will be the rarest and most valuable of all.
Artificial Intelligence
Setting aside the semantic arguments about the existence of AI, we can now access algorithmic tools that explore data sets to find multiple features of interest about people, and that discover patterns of difference, similarity, and prediction more subtle than those derived from averages, demographic covariates, single-touch attributions, and the other mainstays of traditional customer analytics.
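As one concrete example of "more subtle than averages," here is a minimal sketch using scikit-learn's k-means clustering. The customer features are invented for illustration; the point is that the average describes a customer who doesn't exist, while the clusters recover two very different groups.

```python
# A sketch of moving beyond averages: k-means surfaces distinct customer
# groups that a single mean would blur together. Features are invented
# for illustration.

import numpy as np
from sklearn.cluster import KMeans

# Columns: [purchases per year, support tickets per year]
customers = np.array([
    [24, 1], [30, 0], [26, 2],   # frequent, low-friction buyers
    [2, 9],  [3, 12], [1, 8],    # rare buyers with heavy support needs
])

print("Average customer:", customers.mean(axis=0))  # describes no one

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(customers)
for group in np.unique(labels):
    print(f"Segment {group}:", customers[labels == group].mean(axis=0))
```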
Indeed, if we are going to operate with less reductive representations for people, and if we are going to simulate the biological mechanisms of empathy within a corporation, artificial intelligence may be the disruptive game-changing technology that finally enables meaningful progress against a problem that has been building for some time.
Final Thoughts
None of these answers is, by itself, a prescription for artificial empathy. The confluence of all three may point in a worthy direction. Still, some journeys are worth taking, even when the destination is distant and the route uncertain.
This might be one.
by Sonita Reese | Jun 28, 2019 | 2019, Observations
by Glen Drummond
Estimated reading time: 7 minutes
Part One in a two-part series
Empathy. It's such a defining human quality, you could say it's in our bones. For sure, it's in our brains: neuroscience reveals that we have "mirror neurons" that cause other people's emotional experiences to become our own. That concept would be astonishing if it were not so familiar. Empathy runs in our veins, too; the hormone oxytocin makes us closer to those we're close with.
Beyond this, there are the mental gadgets that history has draped on our biology. For instance, our fine-tuned sense of justice, fairness, and balance. These qualities also incline us to prosocial behavior, such as helping a stranger on the street, supporting a local non-profit, separating our recycling…
So if empathy comes naturally, why call for “Artificial Empathy?” (Presuming, of course, that such a thing could even be possible?) The answer begins with an observation about a trend in scale. Human nature developed over a long period in which there were rewards for co-operation within groups and competition between groups. But compared to today, the groups were small. It’s not clear that biologically-rooted empathy equips us adequately for the scale-change.
It’s not merely that there are more of us, although the human population has tripled since 1945. It’s that the nature of connectivity between us is transformed. As members of media-fueled electorates, our mood-swings are damaging institutions that took centuries to build. As members of a global economy, our collective emissions are generating planet-scale impacts on the environment.
There are broad conversations underway about these forms of our connectivity, less so about our participation in corporations. Arguably, no prior form of connectivity rivals the modern corporation's capacity to pursue its objectives with such speed, scale, and precision.
And big corporations are getting bigger. The World Bank reported in 2016 that among the 100 largest revenue-collecting entities in the world, 69 are corporations; 31 are nation-states. A decade ago, the US Supreme Court awarded corporations a human right: freedom of speech. The Danish government has appointed an Ambassador to liaise between the midsized nation and giant tech corporations.
If you have spent your career inside corporations, you know there are instances where scale acts as a liability as much as a strength. The world knows that something went wrong at Volkswagen, at Facebook, at United, at Boeing. And while the particulars are different, the circumstances rhyme. A group of people sincerely felt it was their job to do something that the public would come to hate and the owners would come to regret. What corporation is free from this risk?
So why does business need “Artificial Empathy?” It’s partly because natural empathy is poorly matched to the scale of the modern corporation. And it’s partly because the consumer and the public are not going to let corporations off the hook for un-empathetic behavior.
Here’s the basis for my confidence in that second observation. People imagine brands as if they were other people. The marketing practice of managing brands using a system of archetypal characters speaks to this fact. So does the blow-back that follows when corporations act in notably inhuman ways. There’s even neuro-imaging research that shows we look at logos and faces in surprisingly similar ways.
So here, in a nutshell, is why brands need artificial empathy:
- Because we imagine brands as if they were other people, and
- Because we expect other people to be inherently empathetic, so
- We also expect brands to be inherently empathetic, and
- Brands have no natural capacity to fulfill this expectation.
This fabric of observations explains a lot. Corporations, pursuing their interests without paying attention to this prevalent expectation, violate customer trust. And sometimes, public trust too.
Only on the rare occasion does this violation happen in the dramatic ways cited in the cases of Volkswagen’s emissions masking or Cambridge Analytica’s democracy hacks.
Far more common are violations so banal they barely register. Robotic voice response systems that remind you: “please continue to hold, your call is important to us.” Departure lounges that add acoustic assault to the list of insults suffered by air passengers. Manipulative marketing and sales tactics like the email that arrived this morning in my inbox, by no coincidence, at 9:18 AM with the subject header, “9:00 AM Meeting.”
Viewed through the lens of empathy (and the lack thereof), the distinction between the dramatic and the undramatic instances becomes only a distinction of degree, not kind. And that observation is potentially helpful, because it offers some guidance on what needs to be done.
Now, you might say, “Ah, you’re talking about customer experience,” and yes, in a way that’s true. But insofar as the term “customer experience” stands for a department, a performance measure or one in a set of parallel business disciplines, a “customer experience” capability will only act on symptoms while failing to address the root cause. (Sociopaths are known, after all, for their ability to charm.)
Or, you might say, "Ah, so you're talking about corporate governance." And yes, again, in a way that's true. But how much real capacity do the people charged with such weighty responsibilities have to intervene in the minor daily violations of the customer's expectation of empathy? It's been observed for some time that "the road to hell is paved with good intentions."
Since empathy violations appear to take place despite the ubiquity of "customer experience" and "corporate governance" functions, the empathy gap (the delta between customer expectations of empathy and the level of empathy corporations are presently organized to muster) is a real business problem.
It seems like a problem that would be worth taking risks to explore, based on the value of the potential outcome if it could be solved.
To summarize, let’s retrace our steps.
- Corporations are large, powerful, engines of collective influence and action.
- They are growing increasingly large, powerful, and influential in the lives of people.
- People expect them to act empathetically, but corporations, unlike people, have no inherent capacity to fulfill that expectation.
- So, we should expect the empathy gap will grow with the power and reach of corporations, until such time as either corporations design a technology of empathy – “artificial empathy” if you will – or face a more concerted backlash directed at individual brands (“United breaks guitars”), at industry sectors (say, “big tech,”) and at corporations in general.
Despite all the technical progress, investment and hype devoted to it, there remains a debate over whether “artificial intelligence” (AI) actually exists. The concept of “artificial empathy,” if it were to enter the public discussion, would be subject to a similar philosophical challenge.
So why talk about it at all?
Because corporations have plenty of resources for tackling challenges once they can be identified. This one is staring us in the face.
Since the processes we call "artificial intelligence" will inevitably shape more of the experiences that corporations project and that customers and the public absorb, is there any question that the need for artificial empathy will grow with each passing day?
The conjunction of "artificial" and "empathy" is a provocative framing of a problem that exists. It matters greatly to a corporation's stakeholders and deserves far more rigorous thinking and effort than has been devoted to it thus far. Rather than a zero-sum game, "artificial empathy" will be a project that aligns the interests of shareholders, employees, customers, and the public. Rather than a departmental problem, it will require a systems-level response.
I’ll leave for a subsequent article the questions of how “artificial empathy” might work and what resources it might draw upon. For now, suffice it to say if corporations need empathy and don’t have it as a natural quality, then the commercial incentive is there to synthesize it.
The ingenuity and organized effort that have made predictive science (machine learning, deep learning, expert systems, big data, or, more generally, "artificial intelligence") such an important component of corporate strategy today provide at least a framing metaphor for this initiative, and maybe some important tools too.
But intelligence (natural or artificial) is no substitute for empathy. No matter what strides we make in AI, brands need to make progress now on Artificial Empathy. And if AI begins to make strides on its own, there’s a good chance brands will need to pick up the pace.
by scott.gillum | Dec 11, 2017 | 2017, Marketing
We are in a "Digital Revolution," as futurist Ray Kurzweil stated in a recent interview. With machine learning, artificial intelligence (AI), and cognitive computing enabling everything from Apple's new iPhone X to autonomous vehicles, it's hard for technology companies not to talk about the technology. And considering the herculean effort required to construct and configure these tools, it's hard to fault them for doing so.
Unlike the latest wave of technology innovators, such as Uber and Netflix, which disrupted industries and business models, this "Fourth Industrial Revolution" brings with it a healthy dose of personal disruption. From robots to artificial intelligence, it has the potential to change everything from how we work to how we live our daily lives. And with that come some very real concerns about the future and our privacy.
Futurists like Stephen Hawking and Marc Andreessen have helped give the media fuel for the fire. In an interview with the BBC, Hawking warned that the development of full AI "could spell the end of the human race." Andreessen has been quoted as saying that in the future there will be two types of jobs: people who tell computers what to do, and people who are told by computers what to do.
In fact, the technologies receiving the most investment are those focused on making machines more "human." According to Venture Scanner, deep learning, natural language processing, and image recognition make up the top three funding categories within AI. Kurzweil believes we are only 11 years away from passing the "Turing Test," the measure of whether humans can detect the difference between a human and a machine.
Unfortunately, what may get lost in the noise is the great potential of this new generation of technologies. Autonomous vehicles are predicted to save 30,000 lives a year from traffic accidents. Robots are being programmed to give the disabled more independence. Advancements in the diagnosis and treatment of certain types of cancer are already being seen, and some believe that AI could lead to the end of cancer within our lifetime.
Why isn't the focus on the benefits of these new technologies rather than on the concerns? Theodore Levitt, a professor at Harvard Business School beginning in the 1960s, may have the answer. Levitt was a thought leader in sales and marketing, but he may be best known for the phrase "People don't want to buy a quarter-inch drill; they want a quarter-inch hole." The abridged version, "Sell the hole, not the drill," has been uttered by sales managers for decades, and it's particularly relevant for the latest wave of new technologies.
We’re in the early stages of this “revolution” so much of the talk is about the “drill.” Explaining the process of building the “drill” is necessary for audiences like investors or partners. It’s also aimed at potential users/customers in hopes they will be able to define the holes to be drilled. The tricky part for marketers is that there are parts of the drill that have the real potential to threaten or scare audiences.
This is the tightrope technology marketers are going to have to walk for the foreseeable future. In order to develop the apps (the “holes”) marketers need to find and convert early adopters. The messaging that appeals to that audience may put others on high alert. It’s a classic “Crossing the Chasm” challenge as described by Geoffrey Moore.
Early adopters, as described by Moore, are comfortable with risk. Unfortunately, when things go wrong, as with Google DeepMind's experience with the UK's National Health Service, where its initial work on mobile apps was found to have violated the UK's patient privacy laws, the "Chasm" between the early adopters and the early majority grows.
Here's the lesson for marketers: one of the four characteristics of visionaries that alienate pragmatists (the Early Majority) is the overall disruptiveness of the technology. To succeed in building a bridge over the "Chasm," you may need to tone down your "disruptive" messages. Build a roadmap that gently walks them over the bridge step by step, giving them reassurance along the way.
We also know from CEB/Gartner that buyers make purchase decisions based on the personal value they perceive. To market "human-like" technologies to humans, you have to understand their fears, concerns, and behaviors. Just because your technology can do something as well as or better than a human…doesn't mean you need to actually "say it."