Welcome to the Visionary’s Guide to the Digital Future. If the best way to predict the future is to invent it, then let’s get you ready to do just that. This podcast is created for the visionaries of today who are charged with creating the digital experiences of tomorrow.
I’m your host, Paul Lima, Managing Partner at the Lima Consulting Group. From Wall Street to the Pentagon and the Fortune 500, I’ve been a part of some of the largest digital transformations ever done. We promise three things here: a strategic perspective, content geared for decision makers, and actionable insights into the real problems that digital visionaries can apply immediately. Those who listen to the Visionary’s Guide to the Digital Future have an unfair advantage as they invent the future: they keep their finger on the digital pulse, invest in their digital fitness, and gain a long-term perspective mixed with practical ways to apply what they’ve learned within minutes.
Let us know if we’re helping you accelerate your business objectives. Subscribe to our show, message me directly on social media, or email me at [email protected]
This week on the Digital Pulse, we explore why ChatGPT is turning the future of digital experiences upside down.
So let’s get the vital stats out of the way quickly. The building block of ChatGPT is GPT-3, which is the third generation of the generative pre-trained transformer. We’ll tell you what that means throughout the episode today. On the 14th of March, GPT-4 came out. GPT-3 is only one generative model of many. It’s a foundational, AI-based system that other models can leverage. Other models include DALL-E, Stable Diffusion, Midjourney and AlphaCode. And these systems power a lot of the technology that your staff and service providers and creatives and writers and, well, anyone who has responsibility to think and create, could be and should be using in the future.
Remember when you heard the curious word ‘Google’ for the first time? Well, this is that moment for AI if you’re not familiar with these companies already. ChatGPT is an artificial intelligence program, a large language model, or LLM, to be specific, developed by a company called OpenAI. In 2015, five people who each have hundred-pound brains founded this non-profit. They were Elon Musk; Sam Altman, former President of Y Combinator; Greg Brockman, former CTO of Stripe; Ilya Sutskever, a former researcher at Google Brain; and Wojciech Zaremba, another researcher from Google Brain and Facebook’s AI research team. While OpenAI has other programs, it introduced the first version of GPT in 2018. OpenAI released ChatGPT on the 30th of November 2022, and then upgraded it on the 15th of December.
Within five days, the tool had a million active users, and recent estimates forecast over 100 million active users by the end of January. That’s pretty extraordinary. That’s the fastest to 100 million yet. It says as much about the socioeconomic factors and the state of global infrastructure as it does about the product itself. So yeah, adoption was faster than Instagram, Twitter, Facebook, Snapchat and Amazon Alexa. TikTok took roughly nine months to amass 100 million users, so five days is indeed pretty impressive. ChatGPT is based on GPT-3, the third model of OpenAI’s natural language processing project. It’s a neural network. Its pre-training ingested the Internet as it existed around 2021, at which point the model stopped training. So don’t expect current events to pop up in the output. GPT-3.5 finished training in early 2022.
So let’s define the G, the P, and the T. G is for generative, P is for pre-trained, and T is for transformer. So the G: generative AI (GenAI) is the part of artificial intelligence that can generate all kinds of data, including audio, code, images, text, simulations, 3D objects, videos and so forth. It takes inspiration from existing data, but also generates new and unexpected outputs, breaking new ground in the world of product design, art, poetry, programming, architecture, legal documents, and other creative endeavors – things that we typically associate with those who create intellectual property. So imagine if Chopin were playing music…what it’s able to do is listen to that music and spit back something that sounds like it was written by, say, one of his students. Ray Kurzweil demonstrated something like that on national TV back in 1965, and he actually used that very analogy some time ago.
So what does pre-trained mean? It’s the astonishing part of the work that involves the coaching and feeding of a neural network. So let’s say that we show the neural network, say, 100,000 pictures of a cat, and it places little dots on the joints and various fixed points, such as the eyes, and creates a model of a cat. Feed it the 100,001st picture and then ask it, is this a cat? And it spits back a probability that it matches the cat’s dynamics and measurements. If you store your pictures with one of the cloud providers, say, like Amazon Photos, you probably already know you can search your albums for objects. Here’s a search I did on my photos for guitars. Sometimes you’ll notice it got confused with a violin.
T is for transformer. No, not that. Nothing to do with Bumblebee or Optimus Prime. A transformer is an algorithm that evaluates the context of natural language and does things like conjugate verbs and determine the next best action, like what word should come next, or even what syllable should come next. With its pre-training, it knows if, probabilistically, there should be a comma after a dependent clause – so it does grammar. It’s a thesaurus optimizer. It evaluates like-for-like phrases and selects the best word choice, and it leverages those learnings over 175 billion parameters, making it the largest language model ever trained at the time. And with GPT-4, the inputs increased exponentially; it’s rumored to use over 100 trillion parameters, though OpenAI hasn’t confirmed that figure.
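To make that “next best word” idea concrete, here’s a toy sketch of the selection step: score each candidate next word, turn the scores into probabilities with a softmax, and pick the most likely one. The candidate words and scores below are made up for illustration; in a real transformer those scores come out of its billions of learned parameters.

```python
import math

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for the word following "The cat sat on the ..."
candidates = ["mat", "moon", "equation"]
scores = [3.2, 1.1, -0.5]  # made-up logits, for illustration only

probs = softmax(scores)
best = candidates[probs.index(max(probs))]
print(dict(zip(candidates, [round(p, 3) for p in probs])), "->", best)
```

The whole trick of the “T” is in how those scores are computed from context; the final choice is just this probability lookup.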
You might be benefiting from a transformer algorithm if you use the editor feature in Microsoft Word or Outlook or in Gmail – that little feature that auto-suggests phrases. So that might be why Microsoft invested $10 billion in OpenAI and gained a reported 49% stake in the company, which in January was valued at around $29 billion. It’s Microsoft’s Azure platform that powers OpenAI. The reason for this exponential growth? Well, GPT has unlimited, universally appealing use cases.
The bot can write an email response to a friend or colleague, debug and write code, write poetry, dialogue with you about life’s problems, and even pass a Wharton MBA operations management exam. ChatGPT took the following exams and passed them: on the SAT college entry exam, it scored in the 93rd percentile; on the bar exam, it scored in the 90th percentile; and it even passed the United States Medical Licensing Examination, you know, that test that doctors need to pass to become licensed. And it does all of that in 12 languages and offers the ability to translate text among those languages. It conducts sentiment analysis and can do something called part-of-speech tagging to dissect each word and phrase into its parts. Remember, like those dependent clauses you learned about in high school?
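Part-of-speech tagging just means labeling each word with its grammatical role. Real taggers are trained statistical models that use context; this toy dictionary lookup is only meant to illustrate the shape of the output.

```python
# Toy part-of-speech tagger: a dictionary lookup, purely for illustration.
# Real taggers use trained models and surrounding context, not a word list.
TAGS = {
    "the": "DET", "cat": "NOUN", "sat": "VERB",
    "on": "PREP", "mat": "NOUN",
}

def tag(sentence):
    """Return (word, tag) pairs; words not in the toy lexicon get 'UNK'."""
    return [(w, TAGS.get(w, "UNK")) for w in sentence.lower().split()]

print(tag("The cat sat on the mat"))
# -> [('the', 'DET'), ('cat', 'NOUN'), ('sat', 'VERB'),
#     ('on', 'PREP'), ('the', 'DET'), ('mat', 'NOUN')]
```

Once every word carries a label like this, downstream steps – finding dependent clauses, conjugating verbs, sentiment analysis – have structure to work with.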
So what do visionaries and senior leaders need to know about it? Well, first of all, it’s not new, but it flips the model from indexing to inquiry. Human cognition will shift from focusing on information to focusing on inquiry. Our digital future is intertwined with machines. Some significant portion of the totality of human knowledge is now accessible to us as, in essence, a single database. And rather than search a card catalog or a list of links to find an answer, the cognitive part of the model essentially does that for us and provides back the answer.
As I often like to do, let’s ground ourselves in a little bit of history. How did the world organize access to large volumes of information before the Internet? Many of these capabilities are extensions of what we’ve been doing since the Great Library of Alexandria, around 285 BC – it is credited with the first card catalog. Do you remember the card catalog and the Dewey Decimal System at your local library or your school library? There used to be this big set of drawers in the architectural center of the library. In the drawers were index cards organized using a set of principles created by Melvil Dewey, who came up with what is today called the Dewey Decimal System. And on those cards were numbers related to the last name of the author, the title, and the topics of the book. Learners could enter the library and search these catalogs to look for topics such as cats, or for books listed by people with the last name of Poe. And most importantly, each card represented only one book and gave the location of the book using a set of coordinates of where that book was in the library. It’s important to also note that each book may be in the card catalog several times over, such as in a section for the author’s name, its title, and a simplistic topical categorization. Determine how you want to search, say by topic, open the catalog, search the cards, walk to the book, open the book, read it, and assess if you have an answer.
Then along came search engines. Today, when we need to find an answer to a question, we go to Google and get a predetermined, cached list of links similar to the card catalog. From there we start flipping through the virtual cards, the links, and we navigate through the pages served to us, just as we used to do when walking around the library and finding the book. And finally, we spend a great deal of effort reading each page and assessing if we found the answer to our question. So first you determine how you want to search, say, by topic. Then you type a keyword into a search engine, click on the links, open the pages, read them, and assess if you have an answer.
So why does ChatGPT turn the model upside down? Let me ask a question: have you ever had one of those really infuriating tests where they give you a multiple-choice question with four correct answers, and the instructions say, “choose the best answer”? Well, the best answer…come on…you’re testing my ability to make a professional judgment instead of asking me to recall random facts. Well, that decision-making process is called the next best action algorithm. Rather than providing many answers, ChatGPT uses algorithms to provide you a single and comprehensive answer. Up until now, our search engines have always worked like digital card catalogs. Instead of returning a bunch of cards, they spit back a list of predetermined links. So SERPs, or search engine result pages, give us links: lots of websites, lots of pages. There’s a lot of potential in the list, but only some of the links are relevant, so it’s up to you to sift through them. Search engines give us lots of answers, but we’ve never had a tool that provided the answer. With the advancement of tools like GPT-3, GPT-3.5 and GPT-4, we’ve gone from providing access to many possible answers to a tool that goes beyond recall: it analyzes, it synthesizes the totality of the body of knowledge, and succinctly provides a single answer – a tool that provides the answer. You determine how you want to search, say, by topic, you type in the nature of the question with as much detail as you can, and then you assess the answer.
The next level of maturity for search will relegate the concept of presenting users links and make that feel like searching through a card catalog at a library – old, archaic, outdated. Instead, AI tools like ChatGPT will feel like you’ve just interviewed one of the world’s most renowned experts on any given topic. This is the future of search, and there’s a reason that major search engines, notably Microsoft and Google, are investing. In the future, our children will look back on our current search engines that display links in the same way that we look back on the old library card catalog.
What are the characteristics of the future of search? Your ability to ask a question will be more important than ever before. That’s always been the case. Let’s face it, the Socratic method rests on that insight, and Socrates was using it around 450 BC. The ability to ask the right questions will become more important than ever. The better the question, the more the neural network can span the totality of the dataverse, identify many pieces of the puzzle, and begin to stitch together a coherent response, in context, having provided a thesis with supporting facts, and ultimately having synthesized a unique response. The superpower that educators should be teaching, that innovators should be aspiring to, that leaders should be inspiring, is to encourage us to ask better questions. Questions that involve context and that can lead to exploration of solutions and new ways to find new answers. For example: using the Socratic method, steer a student to get help with math without giving them the answer; or, using a style of writing similar to, say, insert your favorite author, answer this question. Computers can be programmed to do a few things well, so what’s happening now is that many of these disciplines are being brought together in new ways, and that’s changing the way that we engage with the world around us.
Consider how the merging of hardware and algorithms was put to the test in one of the world’s most complicated games – Go. You’ve probably heard of Lee Sedol, the Go champion who lost to DeepMind’s algorithm called AlphaGo. But this was in 2016. He said that AlphaGo broke lots of conventional wisdom by rapidly extending to the outer layers of the game in ways that expert players considered to be frantic and reckless, violations of a set of norms that, if used, would signal to more experienced players that you’re probably a rookie. But when AlphaGo did those same moves, it accelerated the late phases of the game, and it was surgical and devastating. It was brutal because AlphaGo preserved the economy of force in the game and could close down opponents and cut down options. In other words, human players couldn’t process the number of scenarios far enough in advance to avoid stepping into a trap. The degrees of foresight were astonishing and led the world’s top Go champion Lee Sedol to declare: “The algorithms will always be ahead of us”.
I really don’t like to think of these tools as adversarial. They’re just tools, and they’re converging, being brought together. Natural language processing, or NLP, is merging with large language models, LLMs, and Next Best Offer and Next Best Action algorithms and a host of other innovations. It’s one thing to experience exponential growth in a given model, such as 175 billion parameters in ChatGPT. It’s another thing entirely to combine innovations across a dozen of these areas and stitch together an experience that begins to approximate sentience, cognition, and even consciousness. You may have heard about the Google developer who was concerned about exactly that, about an AI developing consciousness.
So consider how many algorithms are embedded in each one of these innovations. Murat Durmus, a researcher, just published an interesting book in January: A Primer to the 42 Most Commonly Used Machine Learning Algorithms. Within it, he covers these models. Notice that GPT-3 is only one of the 42. Remember that innovation doesn’t mean the same thing as invention. To invent is to add something new to the body of knowledge – it’s to discover, to create. To innovate is to synthesize knowledge, rearrange it and apply it. It’s essentially the difference between creating and applying. A similar analogy could be made of knowledge and wisdom – to know and to do. Essentially, data helps us learn. It’s our activation of that knowledge that delivers our desired outcomes.
So where will machine learning capabilities go in the future? What will be the next stage of maturity? Well, I believe that the next phase of maturity will happen when machine intelligence begins to develop answers and research questions that have yet to be asked by us. Essentially, once it can learn to ask questions. In other words, it will go to the frontier of human knowledge and step forward into uncharted territory to provide us with answers to questions we haven’t asked yet. Not only will it do as it does today, that is, provide answers, but it will use Next Best Offer and Next Best Action algorithms and point its power towards asking the most important questions, questions that we may not have thought to ask, or even new ways of asking them.
How many inventions were discovered by just changing a part or a single word of the question? Or how many inventions were created because of an accident? Take, for example, just changing one word. “How can I mine diamonds?” That might have been the question we asked before, which assumed that we’re going to dig into the Earth and pull them up. Let’s change one word. “How can I manufacture diamonds?” Maybe I can bring in nanotechnology and biological processing and put them together – and that’s possible. Providing context and drawing on similarities and nuances can change the way we frame a problem. It can provide us insights. And as we concentrate or, a word I want you to put in your vocabulary, concentre, our body of knowledge, perhaps ancient chemistry and herbal medicine can help us cure diseases today, say malaria. What other knowledge, ancient or new, can help us address questions where we’ve never made any connections before? Incan medicine is a great example – it’s astonishing. The Incas were among the first to use quinine extracted from tree bark as a treatment for malaria. In fact, many of the ingredients in the Incas’ herbal remedies are found in our modern pills today.
So how will a mathematical approach flip the brute-force method? Instead of experimenting with thousands of answers and trying each one, we can use computational methods and solve for the answer. When I was in grad school, I had an opportunity to visit one of the largest drug discovery factories in the world. It was a massive, sprawling drug discovery operation with warehouse after warehouse of hundreds of thousands of test tubes, where tiny variations of chemical compounds were dropped into liquid containing the disease, and researchers were using brute-force methods to evaluate the impact of each test tube. Why not solve for the single answer using math? We can do that. It’s called computational biology, and we’re beginning to do that in just about every industry.
Pick an industry, and it’s likely you can throw the word ‘computational’ in front of it and find a professional association and a set of conferences that have been underway for at least a decade. Don’t believe me? Let’s pick the graphic design industry. Did you know that there’s an area called computational creativity? You can attend the International Conference on Computational Creativity in June of this year in Waterloo, Canada; the inaugural conference was held in 2010. The conference is run by the Association for Computational Creativity, which publishes the Journal of Computational Creativity and even has a task force of committees addressing the future of the industry and establishing governance of standards within it. And in case you want to attend one of their events or chapters, they host conferences across the globe in a city near you. And we could do this exercise for computational finance, computational agriculture, computational physics, and even computational politics. One of tremendous interest to me personally is computational ethics. Computational ethics uses math to balance machine ethics with human ethics. We’ve come a long way from Isaac Asimov’s Three Laws of Robotics, and I hope that we can reflect the principles I’ve worked so hard to promote around diversity, equity and inclusion in my personal life in the ways that neural networks currently treat their outputs, so that they better reflect all of us. It turns out that if the data that went into these neural networks only featured, say, one race, or didn’t have representation, there is no equality or justice. It turns out that our social constructs aren’t going to be magically fixed by AI. As I speak to student groups, one of the messages I always feel compelled to share is that if the whole tech thing and this data thing isn’t your bag, we still need ethicists.
Our advances in tech are undoubtedly outpacing our humanity.
We have a theme on this podcast, and in my career, that innovation should be used for good. And while there are forces that, well, don’t use innovation for good, we’re not naive. It’s our hope that we can help the change agents of today leverage innovation for good. Now, we’re not trying to boil the ocean, but if we can deploy a grassroots-type strategy to help other visionaries and change agents advance innovation in directions that are ethical, that are relevant, we hope that we can collectively use this power to enrich our customers’ lives and make living a little better. That means we don’t bury our heads in the sand when new innovations come around; we look to see how we can embrace them. And that brings me back to the maddening division that human beings have always displayed since we invented fire. As the Stoics always said, when confronted with uncertainty and invention and new information or news, it could be good or it could be bad.
Let’s consider how educators are responding to ChatGPT. Some educators have the attitude that machine learning is here to stay and that we should begin incorporating it into lesson plans. I have a colleague who’s now a professor of leadership at the United States Military Academy. I suppose you could take West Point at face value when it claims to be America’s preeminent leadership development institution. So why would a professor whose primary focus is on teaching leadership alter a syllabus using what he learned from ChatGPT? Haven’t they been teaching leadership since 1802? Professors in its department of behavioral sciences and leadership may be among the world’s top experts on the topic. So when one of them admitted that, in assessing his own course syllabus, he improved his lesson plan with things he learned from ChatGPT, it was a great example of how teachers are leading by example. That sounds like leadership in my book.
And that brings us to Ethan Mollick, associate professor at the Wharton School, who doesn’t just encourage students to use it. He requires it, and he even teaches them how to do citations to provide proper academic attribution.
But this podcast is about helping visionaries navigate the digital future. So here’s my message to you: This isn’t the first time that a new innovation has been able to provide knowledge workers with access to information. From the Library of Alexandria to GPT-4, if you’re a student of espionage, you know that having access to information that is difficult to obtain can be highly advantageous. So who should be able to decide who gets to use generative models? Why should a school board or a single professor or a single executive be granted the power to say that we can use it or that we can’t? Why should business leaders be any different? It’s our opinion that this question is irrelevant. It’s a tidal wave over a beach chair. The tool is here to stay and everybody’s going to be using it soon. The better question, just as in the craft of espionage, is: how can receivers of this information determine its source, and can they trust what it says? That’s where astute learners can apply the principles of countermeasures. We’ll have more to say on how to develop countermeasures and on content governance. But suffice it to say that accelerating your content velocity – that means accelerating your ability to publish content and do personalization at scale – is going to require new tools and new skills to help you protect your brand and protect your customer experience.
We need, as business leaders, to accept that the creative processes are changing. Focus more energy on the accuracy of the answers and use tools to help you do so. You may consider creating a content assurance team and investing in this capability to check the content you’re producing: verify that it’s free from plagiarism, that it’s relevant, and that it’s based on accurate inputs to the model. There are lots of tools to help you do this now, so encourage your staff to leverage them. And I’m going to share a quick one here. It’s a countermeasure tool that you may be able to use as you begin to assess how your organization can leverage ChatGPT. And this brings us to a concept called generative adversarial networks, or GANs. Sounds pretty spooky and spy-like, right? Think of a GAN as a number line, say, from -1 to 1. Two models are simultaneously trained by opposing processes: a generator and a discriminator. The generator model, the kind we’ve been talking so much about, is trained to be an artist and create, say, an image or text. The adversarial model, or countermeasure, is designed as a discriminator. It’s trained to be the art critic as opposed to the artist and to tell the real images from the phonies. During training, the generator progressively becomes better at creating images that look real, and the discriminator becomes better at telling a real one from a false one. The process reaches equilibrium when the discriminator can no longer distinguish a real image from a fake one. In other words, a perfect image of a cat might be +1 – the artist got it right – and the art critic gives the image a perfect score of -1. So equilibrium is reached when the sum of the artist and the art critic equals 0.
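Here’s a minimal sketch of the critic half of that idea, with made-up one-dimensional “images”: a tiny logistic-regression discriminator learns to separate real samples from obvious fakes, but when the fakes are indistinguishable from the real thing, the best it can do is output 0.5 for everything, which is the equilibrium described above. (A full GAN would also train the generator against the critic; this sketch trains only the critic, and all the numbers are invented for illustration.)

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_critic(real, fake, steps=1000, lr=0.1):
    """Fit a 1-D logistic 'art critic' D(x) = sigmoid(w*x + b) by gradient
    ascent on the GAN discriminator objective: log D(real) + log(1 - D(fake))."""
    w = b = 0.0
    for _ in range(steps):
        gw = sum((1 - sigmoid(w * x + b)) * x for x in real) \
           - sum(sigmoid(w * x + b) * x for x in fake)
        gb = sum(1 - sigmoid(w * x + b) for x in real) \
           - sum(sigmoid(w * x + b) for x in fake)
        w += lr * gw
        b += lr * gb
    return lambda x: sigmoid(w * x + b)

real = [4.5, 5.0, 5.5]          # "real" samples cluster near 5
bad_fakes = [-0.5, 0.0, 0.5]    # a naive generator's output, near 0

critic = train_critic(real, bad_fakes)
# The critic confidently separates real from obviously fake...
assert all(critic(x) > 0.8 for x in real)
assert all(critic(x) < 0.2 for x in bad_fakes)

# ...but if the generator's fakes exactly match the real data, the opposing
# gradients cancel and the critic is stuck at 0.5: equilibrium.
critic = train_critic(real, list(real))
assert all(abs(critic(x) - 0.5) < 1e-9 for x in real)
```

The design point: the countermeasure tools we discuss next are essentially standalone discriminators of this kind, pointed at generated text instead of toy numbers.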
Just a few weeks after the original ChatGPT bot was released, countermeasures became available, developed by possibly the most unlikely party – an undergrad. He’s a Princeton computer science student. His name is Edward Tian, and he released GPTZero over the Christmas break in 2022. GPTZero is an open source AI plagiarism checker, also known as a classifier or discriminator. And I love how he got all Plato on us and put the tagline on the home page of the site, “Humans deserve the truth”. It doesn’t say that now, but it did. You input a minimum of 250 characters, and after assigning the selection a perplexity and a burstiness score, the countermeasure AI returns a verdict as to the likelihood of AI involvement in the authoring of the input text.
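GPTZero’s exact scoring isn’t public, but the two ideas are easy to sketch. Perplexity measures how surprised a language model is by a text (machine writing tends to be low-surprise), and burstiness measures how much that surprise varies from sentence to sentence (human writing tends to vary more). The per-word probabilities below are made up, standing in for what a real language model would assign:

```python
import math
from statistics import pstdev

def perplexity(word_probs):
    """Perplexity = exp of the average negative log-probability."""
    return math.exp(-sum(math.log(p) for p in word_probs) / len(word_probs))

# Made-up probabilities a language model might assign to each word,
# one list per sentence.
ai_sentences = [[0.9, 0.8, 0.9], [0.85, 0.9, 0.8]]      # very predictable
human_sentences = [[0.9, 0.2, 0.7], [0.1, 0.8, 0.05]]   # surprising, uneven

ai_ppl = [perplexity(s) for s in ai_sentences]
human_ppl = [perplexity(s) for s in human_sentences]

# Lower perplexity and lower burstiness (variation) hint at AI authorship.
print("AI   : mean ppl %.2f, burstiness %.2f" % (sum(ai_ppl) / 2, pstdev(ai_ppl)))
print("Human: mean ppl %.2f, burstiness %.2f" % (sum(human_ppl) / 2, pstdev(human_ppl)))
```

In this toy run the “human” text scores several times more perplexing and far burstier than the “AI” text, which is exactly the signal a detector thresholds on.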
OpenAI released its own classifier at the end of January with a few more caveats than Tian’s side project. The firm readily admitted that the classifier isn’t always accurate. During testing, OpenAI found that only 26% of AI-written text is labeled by the classifier as likely written by AI. And 9% of the time, the tool thinks that human-written text was actually written by AI – that’s a false positive, meaning that something was convicted when it shouldn’t have been. How unfair. OpenAI also includes a note that AI-generated text can be edited easily to evade classifiers. This means that educators and business executives are not safe from clever students or employees or ad agencies or other content creators, just like governments aren’t safe even when they deploy counterespionage and countermeasures. All that to say that change agents will need to embrace the change and adopt countermeasures. These measures will require new skills and new technologies that you may not have purchased before. We’ve got the experience in this area and we can help you navigate the landscape.
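Those two numbers matter more together than apart. Here’s a quick back-of-the-envelope calculation; the 1,000-essay classroom and the 100 AI-written essays are assumptions made up for illustration, while the 26% and 9% rates are the published figures quoted above:

```python
# Hypothetical classroom: 1,000 essays, of which 100 are AI-written.
total, ai_written = 1000, 100
human_written = total - ai_written

true_positive_rate = 0.26   # 26% of AI text correctly flagged
false_positive_rate = 0.09  # 9% of human text wrongly flagged

flagged_ai = ai_written * true_positive_rate         # 26 essays
flagged_human = human_written * false_positive_rate  # 81 essays

precision = flagged_ai / (flagged_ai + flagged_human)
print(f"Flagged essays: {flagged_ai + flagged_human:.0f}, "
      f"of which only {precision:.0%} were actually AI-written")
```

Under these assumptions, most flagged essays would be false accusations, which is exactly why a 9% false-positive rate is so punishing and why OpenAI hedged its release.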
Some educational systems, like the New York City and Seattle public school systems, have entirely banned the use of ChatGPT. The site’s been blocked on all district devices and networks. School systems and business leaders alike should seek to prepare the young people of today, and your employees, to become the next generation of leaders and innovators. We shouldn’t be modeling the behavior of fearing innovation. To make a better digital future, we shouldn’t arrest change. It’s the only thing that’s brought us progress. We can’t ignore it. Let’s not fear these capabilities. Leaders and agents of change, visionaries, we have an opportunity to adapt. What will future generations say of those who withheld knowledge, if in the future that knowledge becomes pervasive? Who is advising your board on how to adapt? This is just another dimension of a solid digital transformation strategy. This is happening, folks, so let’s make sure that we’re not on the wrong side of history on this one. Use it – your employees, your vendors, your students at home are going to be using it regardless. So incorporate it – build informed guardrails. And if you’re fearful, do what we’ve always done – develop countermeasures. Embrace it. It’s here to stay. And that’s my beat on the Digital Pulse.
Welcome to the Quick Hit. Let’s invest in our digital fitness and get busy with some of my hits from the speaking circuit. The Quick Hit is designed to ground you in an insight that you can immediately apply in creating the digital future.
This season is all about data, and in today’s Quick Hit, we’ll explore the Data Maturity Model. The original models for data have been around since the 60s. But if you stick around, we’re going to look beyond what the models can do today. And I’ll point out some of the things that we can expect them to do in the future.
Digital maturity is a leading indicator of financial success and customer satisfaction for organizations across every industry. So where is your organization’s digital maturity? By the end of this segment, you should be able to self-identify. So let’s get started.
The five phases of digital maturity represent the progression for how organizations collect data, analyze it, activate it, automate it, and finally, how they optimize it.
So last episode, we learned that oil powered the Industrial Revolution and that data powers the Digital Revolution. While oil powered machines, data powers decisions, that is, the decision sciences, and there are about 1,000 different decision sciences. As you advance through the continuum of data maturity, the power and potential that your data can deliver increases exponentially. So let’s dive into the model, and here’s a word of caution: while some of these words may sound a little academic, stick with me, because I’m going to walk you through what they mean. And once you internalize what these names mean, you’ll see that, hey, they’re actually pretty good names. So what are the stages? Well, here we go: descriptive, diagnostic, predictive, prescriptive and cognitive. At least those are the names that I use in the model here at Lima Consulting Group.
So let’s dive into the first phase, which is all about descriptions in context. The descriptive phase is the base of your digital foundation. What is happening here is that your organization is just getting its arms around the data, the raw, unprocessed, unrefined data. We spent a lot of time in the last episode talking about the value chain of oil, where we took a raw commodity, refined it, and enabled machines to convert that commodity into motion, into derivative products and innovation. With oil, the first stage was about extracting deposits of naturally occurring carbon from the Earth. With data, this stage is about extracting and recording observations of the world around us, and that’s actually what data is – the recorded observations of the world around us. Well, that’s how I define it, at least. Raw data is a glimpse into the past at a single point in time. It’s collected in a silo. It could be a temperature reading, it could be firmographics, which is the demographics of a company, or the psychographic spectrums about a single person. It could be observations from within a single department or Excel file or database or line of code. This first phase is simple. We’re just collecting it and we’re storing it, and that’s it. And so is everyone else in the enterprise, and therein lies the problem: silos. We’re all collecting it, and we’re collecting some of the interactions, some of the phone calls, some of the social media engagements and posts, some of the analytics for mobile engagement, some of the clicks of our ads, and some of the sales that are made online. Descriptive analytics tells us what happened, and in many cases, these tools are collecting sampled data, meaning enough of the data to be able to project the total, but not all the interactions. And that’s how Google Analytics generally works. Members of your ad agency and maybe your finance team may be satisfied to remain here. But monthly reports, vanity metrics, and high-level insights are prominent at an organization in this descriptive phase. Over time, in this first stage, the organization begins to drown under the weight of unusable, unstructured, and siloed data. It’s too much, it’s too messy, it’s everywhere. And because, in many cases, the data is sampled, it’s not even accurate. And if data is going to be an accurate reflection of your processes, meaning your value drivers and cost drivers, well, here’s the point: it has to be trustworthy. If you’re a senior executive, you know that in situations where data may be untrustworthy, there’s risk in making a decision, so it may be better to trust your gut rather than the data. In this moment around the table, in the meeting, the eyeballs all look to you, and your rank will beat data. That’s how it’s been since the tribal council sat around the fire in the Serengeti. These data points from customer interactions are scattered across the organization and don’t provide a holistic look at your customers’ journey with your brand. To break through to the next phase of digital maturity, you’ll need to focus on verifying, validating and uniting your data. And for those who make that investment, decision makers can begin to, well, start trusting their data.
When leaders don’t trust the raw data they have, they hesitate to finance initiatives that make use of it. They don’t see data as a tool in decision making. And that’s why so many visionaries can’t get their digital transformations properly funded. Whenever leaders feel like they can’t trust their data, what do they do? Well, they trust their gut. In this stage of digital maturity, that’s why rank beats data. So how do you know you’re exiting this stage? When your organization centralizes and concentrates access to data. When the organization funds its first data warehouse, or maybe that data visualization project you’re thinking about, and it’s powered by accurate data, then you’re beginning to exit the descriptive phase. In other words, initiatives that span silos and that begin to provide multi-channel visibility over those silos will provide the insights that will generate return. Once the returns are proven out and those stories make their way back to the decision makers, it’s a bit easier to fund transformative initiatives.
With the power of past customer actions recorded and at your fingertips, your organization graduates to the diagnostic stage. The key concept here is that you begin identifying causality in the aggregate, meaning at a segment level. I like to say that the causality is attributable on a one-to-many basis, that is, one cause may have impacted users to do something, but you don’t know any linkage between which ad caused which users to act. So one cause will link to many actions. Here you’re able to cross channels and databases and analyze segments of customers across entire customer journeys. The power of organized data can provide insights that can lead your team to analyze customer behaviors and their preferences, and then think of your team’s response, say, the next best offer or the next best action. You’re going to be hearing more about NBO and NBA algorithms throughout this podcast, so put those terms in your hip pocket. We can start talking about NBO and NBA algorithms in the next phase, because in this phase, your teams are coming up with manually developed ideas for specific campaigns for large segments. Maybe you’re also beginning to experiment with rules engines to consider personalization at a segment level or a category affinity level.
So, for example, say someone clicks on men’s clothes, or they happen to go into the digital camera section of an e-commerce website. They might be a man if they go into the men’s section, or they might be a photographer if they go into the digital camera section. If we can make super quick guesses about demographics or preferences, we can instantly create segments and begin invoking personalization and rules-based tools. These are first-generation approaches, but if you’re in this phase of data, they’re an appropriate tool. Your data isn’t ready to adopt programmatic or multivariate tools if you’re in this stage. So with this insight, we begin to assemble actionable intelligence. You’re able to determine causality. If you push on the left side of the organization and something happens on the right, you can directly affect and diagnose general customer behavior based on observable actions. The problem is you can often only attribute causality on a one-to-many basis. For example, a new ad campaign results in an uptick in sales. At a summary level, you increased this week’s digital ad budget by, let’s say, X, and you saw an increase in sales of Y. But are they causal? What degree of attribution can you source to the ad? You see, you still don’t know which ad specifically motivated customers to make the purchase, or if customers were motivated by the campaign at all. You know there was an uptick in sales, but at the end of the day, you don’t know if the ads had a specific impact on the end-to-end journey of any one actor throughout the experience. So which ad resulted in order X? You can’t say.
You have to tie together at least three categories of IT systems in order to determine that: the pre-click systems, say, Google AdWords; the post-click systems that you own, say, on your website, like Adobe Analytics or Google Analytics; and then the post-conversion systems, so your call center or your CRM and ERP systems. Count them up, and that’s five systems just to get to that degree of insight across the entire customer journey.
So in this second stage, data can be accessed in real time. And while it’s true that more people across the organization can access it, there’s still a lot of in-fighting and a variety of opinions and perspectives because everybody’s coming at the data from the silo in which they live. Salespeople are attributing their proposals and sales skills to the uptick. The ad agency and the marketers are similarly doing the same with their amazing creative. And data is still in summary form and not linked organization wide, and everyone is either claiming victory for the uptick or casting shade for the downtick. Conversations are stuck making sense of what happened in the past. In this stage, a well-organized data structure may trump rank, but answering “why?” is going to elude you. Everyone’s opinion tends to be about their contribution to the journey rather than the observable journey across the enterprise that the customer just traveled.
So how do you know you’re exiting the stage? You start stitching user profiles across the silos. That means that your ads, your social media posts, your clicks, your phone calls, your chats, your emails, the proposals, the purchase history, the loyalty data, they can all be attributed to a unique individual. And here we’re talking about getting down to the PII level, the personally identifiable information level. When the customer journey throughout the organization is centrally mapped and readily accessible, you’re exiting the diagnostic stage. Typically, we see organizations investing in customer data lakes and customer data platforms in order to either make these assessments in the weeks after a campaign is done or, in the case of a real-time CDP, to put the totality of that information on the edge of the cloud so that machines can query it in real time and use it for future NBO and NBA algorithms. And we’ll come back to this in a minute.
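To make profile stitching a little more concrete, here is a minimal Python sketch of the idea: merge interaction records from siloed systems into one profile per person whenever they share an identifier. All of the field names and values are hypothetical, and real identity resolution uses many more identifiers and much fuzzier matching than this.

```python
def stitch_profiles(records):
    """Union records that share an email or phone into one profile each."""
    profiles = []  # each profile: {"ids": set of identifiers, "events": list}
    for rec in records:
        ids = {rec.get("email"), rec.get("phone")} - {None}
        # Any existing profiles sharing an identifier get merged together.
        matched = [p for p in profiles if p["ids"] & ids]
        merged = {
            "ids": ids.union(*(p["ids"] for p in matched)),
            "events": [e for p in matched for e in p["events"]] + [rec["event"]],
        }
        profiles = [p for p in profiles if p not in matched] + [merged]
    return profiles

# Three siloed records, one human being (hypothetical data):
records = [
    {"email": "jane@example.com", "phone": None, "event": "ad_click"},
    {"email": None, "phone": "+1-305-555-0101", "event": "support_call"},
    {"email": "jane@example.com", "phone": "+1-305-555-0101", "event": "purchase"},
]
unified = stitch_profiles(records)
```

The third record carries both identifiers, so it pulls the ad click and the support call into a single profile covering the whole journey.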
And if you’re at that stage, you’re moving into the third level of maturity. That’s called the predictive stage. It’s called that because we were able to discover, and here’s the key word, an ‘equation’. While the variables may in some cases be set to zero in those equations, and the values of the variables may change, the construct of the equation is now known. We can also close the marketing and sales loop by improving our digital analytics and drawing a one-to-one line of sight between marketing campaigns and the end of the sales funnel.
A lot of times on this podcast we use the words ‘data’ and ‘information’ interchangeably. And I really should be more careful, because data is the recorded observations of ourselves and of the world around us, while information has been refined. It’s been processed, like the refining process we talked about with oil. Remember, that refining innovation was the insight behind Rockefeller naming his company Standard Oil; he had to convince the public that his product was consistent and safe. That kind of trust needs to be built around your company’s data too. Data has to be processed in order to give it context. And with that context, it’s no longer a raw commodity. It becomes something more valuable. It becomes information. So, as crude oil is refined, it becomes fuel. And when you give context to data, it becomes information.
So let’s talk through a ‘for instance’: let’s say that we run an ad in Google AdWords, and within the ad we place some nomenclature about the naming convention of the ad, say, a UTM code. It’s Ad #4 in Campaign #2. When a user clicks on the ad, we can capture a unique session ID in Adobe Analytics using something called the Marketing Cloud ID. From there, we can use another tool to place individualized phone numbers on the website, so when they call, we can join, or stitch, a cookie ID or a Marketing Cloud ID to a specific instance of a phone call. And that’s how we cross from digital channels to a call center. From there, the call center agent speaks with the caller, and they may eventually hand off the user to a salesperson; at least, that would be typical in a B2B example. And that quote might be stored in, say, a CRM system. And eventually the quote becomes an order, and then cash. So the lead-to-quote and order-to-cash processes in your ERP systems are going to be in play. True, the cookie is dead and there are changes to how these technologies work, but through the deployment of zero-party and first-party tactics, sophisticated organizations can really do this: they can really see the entire journey.
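As a sketch of the first link in that chain, here is how a UTM-tagged URL can be parsed with Python’s standard library, and how the resulting click record might be joined to a phone call through a shared visitor ID. The URL, the IDs, and the field names are all hypothetical; a real stack would lean on Adobe’s or Google’s own tooling.

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical landing-page URL: "Ad #4 in Campaign #2", encoded as UTM codes.
landing_url = (
    "https://www.example.com/offers?"
    "utm_source=google&utm_medium=cpc&utm_campaign=campaign-2&utm_content=ad-4"
)

def parse_utm(url):
    """Pull the utm_* naming convention out of a tagged URL."""
    query = parse_qs(urlparse(url).query)
    return {k: v[0] for k, v in query.items() if k.startswith("utm_")}

utm = parse_utm(landing_url)

# A later join stitches the click to a call via a shared visitor ID
# (standing in here for a cookie ID or Marketing Cloud ID):
click = {"visitor_id": "abc123", **utm}
call_log = {"visitor_id": "abc123", "outcome": "quote_requested"}
journey = {**click, **call_log}
```

The `journey` record now ties a specific ad and campaign to a specific phone call outcome, which is exactly the cross-channel line of sight described above.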
If we’re really going to get serious about tying all of these interactions together, then we should be able to attribute every touchpoint to a customer purchase, a lifetime value, and a cost to acquire each customer. These are the kinds of things that excite a board. It’s the ability to stitch together a single user across all systems, whether in an anonymized way, a pseudonymous way, or as an identifiable user, and definitively know each user’s individual behavior across the entire journey. This kind of data doesn’t just help us evaluate the past; it can help us see what’s to come. With real access to quality information, we can begin to predict future behavior to a degree of statistical certainty.
And this brings me to the point about census versus sampled data. Sampled data is used in tools like Google Analytics, which record statistical subsets of what users are doing in order to save on collection and storage. That way, they don’t have to record and track and store every interaction. Tools like Adobe Analytics collect census data, meaning every single action of every single user is recorded. To effectively leave this stage of data maturity, you will probably need to use census collection tools. There are some exceptions to this, but they’re rare. And to advance to the fifth stage of data maturity, there really are no exceptions: you’re going to need census data.
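Here is a toy illustration of the difference, using a made-up event log: the census count is exact, while a 10% sample only estimates the total once you scale it back up.

```python
import random

# Census collection: every event is recorded.
random.seed(7)
census_events = [{"user": i % 500, "action": "click"} for i in range(10_000)]

# Sampled collection: keep roughly 10% of events, then scale the count up.
SAMPLE_RATE = 0.10
sample = [e for e in census_events if random.random() < SAMPLE_RATE]

census_count = len(census_events)                    # exact
estimated_count = round(len(sample) / SAMPLE_RATE)   # close, but not exact
```

The estimate lands near 10,000 but rarely on it, and for small segments the sampling error gets proportionally much worse, which is the practical argument for census data in the later stages.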
By creating and actioning on specific customer segments and lookalike models, we can run experiments in the marketing lab to directly track how marketing assets can affect a specific group of customers. We can attribute every engagement to an outcome, every marketing action to a sale or other call to action, and ROI can be calculated with any given start and endpoint in the funnel. So maybe you just want to evaluate the effectiveness of, say, your graphic designers or your writers, or maybe you have two ad agencies and you want to compare the impact and effectiveness of each one. That is how you do it.
So in this stage, the things you as a leader should be trained to look for and to celebrate in your teams are the equations. Great digital leaders stop meetings when these anecdotes are being shared, and they train their staff to push themselves to find the equation. Get the equation out of your team. So, for example, this keyword yielded an x% conversion rate on this particular product for these particular segments of users, maybe in these particular dayparts. Right? Hey, that’s an equation. And data powers equations. That’s what we’re after. So as a leader, look for them, and push the team to document them and celebrate them.
How do you know you’re exiting the predictive stage? When you begin collecting equations. That area of analytics is called advanced analytics, and that’s the math part. And with that, data science methods can begin to use automation in either a rules-based way or in a programmatic way. What’s the difference? When you set up rules, a human being needs to figure out the equation, and a human being also needs to determine the next best action and program those actions into something like a personalization engine. So let’s talk financial services. If the credit score is lower than 590, then promote the Credit Builder product that the bank may be wanting to push. If the score is higher, then promote the Credit Defender offering. And then have your teams build the creative assets and upload those offerings into the personalization engine. That’s how rules-based tools work. Your team would determine the score, they would input that number, and they would determine all the creative assets. So tools like HubSpot, Maximizer, Monetate, Adobe Target, and Optimizely would all work here, and those tools span many categories of software, from marketing automation to ad buying and e-commerce engines. It’s just a feature in these types of platforms.
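That credit score example boils down to a couple of lines of rules-based code. The 590 threshold and the product names come straight from the example above; the asset filenames are hypothetical, and in a real rules engine your team would enter all of this through a UI rather than code.

```python
def next_best_offer(credit_score):
    """Rules-based personalization: a human chose the threshold and assets."""
    if credit_score < 590:
        return {"offer": "Credit Builder", "asset": "credit_builder_banner.png"}
    return {"offer": "Credit Defender", "asset": "credit_defender_banner.png"}
```

The point of the sketch is what is missing: nothing here learns. A human found the equation, picked 590, and built both banners, which is exactly the ceiling of rules-based tools.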
Rules-based capabilities manifest in these platforms with features like A/B testing, recommendation engines and email campaign sequences. But in the rules-based world, both the equation and the assets are discovered and manually entered by your staff. Over time, applying ideas like the equation Markowitz won a Nobel Prize for, modern portfolio theory, or MPT, we learned how to advance and let the machines determine the equation and eventually even recommend the creative assets. This is what we refer to as programmatic. In programmatic, you let the machine help you determine the rules for how to do the automation. That’s what programmatic means. The next best action for a given customer profile or customer segment can also be generated, and it can all be done in milliseconds, 30 milliseconds, to be precise. If you’re buying programmatic solutions, ask about that, because if it’s not done in 30 milliseconds, it can negatively affect the user experience. Pages need to render in, at most, 3 seconds, and that 30 milliseconds is only about one percent of that budget; the rest is consumed by the return route from your servers back to the end customer’s device and by rendering the page. So in that 30 milliseconds, that’s what has to happen up in the cloud. That’s why you can’t use tools like a Microsoft SQL database to do this.
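Taking the numbers in this section at face value, the latency budget works out like this: a 3-second page budget with roughly 30 milliseconds carved out for the decision engine, which is about one percent of the total.

```python
# The latency budget described above (figures from this episode).
PAGE_BUDGET_MS = 3_000      # page should render within 3 seconds
DECISION_BUDGET_MS = 30     # time allotted to the decision engine in the cloud

decision_share = DECISION_BUDGET_MS / PAGE_BUDGET_MS   # ~1% of the budget
remaining_ms = PAGE_BUDGET_MS - DECISION_BUDGET_MS     # round trip + rendering
```

Seen that way, the decision engine gets one percent of the budget, which is why general-purpose relational databases are too slow for this job.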
In the predictive stage, we found equations for customer success. And what can we do with equations? In the fourth stage, the prescriptive phase, we automate them. We automate the equations. We can prescribe math the way a doctor prescribes a medicine. Once the diagnosis conforms to statistically significant thresholds, we ought to know what to do. It’s here where we leverage the power of math. It comes masked in lots of fancy buzzwords. But it’s just math. All of it. It’s logic. It’s systems engineering and math.
I’ll give you a little sampling of the tasty goodies you’re going to be able to use once you break into this degree of maturity: personalization engines, programmatic ad buying, marketing automation, propensity modeling, lookalike segments, lead scoring, A/B testing, multivariate testing, and robotic process automation. No other phase has more innovation underway than the prescriptive phase. It’s where the adults aspire to vacation. But most organizations just aren’t ready. They haven’t put in the work. They skipped leg day. Most organizations need to spend more time and energy and investment to methodically advance to this stage before onboarding tools that use advanced analytics.
In this stage, it’s the machines that can automate the next best action for a given segment. But the sales reps of these vendors promise the power of math and fail to tell buyers that their engine needs JP-8. Actually, that’s what my Abrams tank used to run on, JP-8 jet fuel, back when I was an Army officer. As a side note, the most powerful weapon on that tank wasn’t the 120-millimeter smoothbore cannon; it was the engine. And it’s the same thing with our data. It’s not all these features that are in these tools. It’s the data that goes into the tool; that’s the secret sauce of all these programmatic tools.
So let’s make a compare and contrast. In the earlier stages, we were automating experiences on a one-to-many basis; in this stage, we’re optimizing experiences for ever more granular segments. For example, for the ‘Bluegrass Baby Boomers’ customer segment, maybe we’ll send them the economy 4th of July vacation package, and for our ‘Coastal CrossFit Career Woman,’ we’ll send her the luxury 24-hour mountain getaway. So that’s relevant, right? But the problem is that once you reach this stage, your content creation needs are going to explode. They’re going to accelerate the demands on your team to provide creative assets. And we measure the army of content creators in your organization and your content partners by their content velocity. And it’s going to break. The wheels are going to come off the bus and the axles are going to break. You’ve got to create a high volume of compelling offers to fill the prescriptions for a growing number of detailed customer segments.
You know that you’re exiting the prescriptive phase when over 50% of your touchpoints are being influenced by, or are participating in, programmatic tools or prescriptive initiatives. You need to make the entire customer data set, end to end, accessible to the algorithms. That doesn’t mean a customer data lake built in tools like SQL. What it means is that the customer data platform is on the edge of the cloud, and that the algorithms can hit it within 30 milliseconds or less and get to three things: number one, the right equation; number two, the right values for each variable within the equation; and number three, the right creative asset.
By now, we know our customers so personally that we can interact with them in ways that feel very relevant, very personal. We can begin to treat each person, well, uniquely. This is where cognitive computing and personalization at scale can become a reality. In the cognitive phase, every digital interaction can be uniquely designed and prepared for presentation by computing power. Using a discipline called computational creativity, machines can leverage the power of generative models to create art, to create offers, to assemble a picture, allowing brands to create content at scale. Imagine if we could spend a few hours researching everything we knew about a registered user on our website, and then have the entire force of our creative teams and our marketing teams and our advertising partners at the table and at the ready to create a unique experience. That’s what we’re able to do here, in 30 milliseconds.
Using a user’s previous interactions over their entire relationship with our company, we can predict the next best action on a one-to-one basis. For example, a sports retailer could use algorithms to recommend the garment that the consumer is most likely to purchase next, and it knows that because the garment comes in a color that pairs nicely with her previous purchases. Using some very neat technology, it’s also possible to choose the actual physical model, the person, and that model’s pose, and then AI can do this magic trick where it puts the garment on the model and makes it look like the photo was actually taken with a live model wearing the garment. In reality, that photo was never taken. It can be done through the power of computational creativity. And then you can place the model in a photo where she’s playing the user’s favorite sport, in weather that mirrors exactly what’s happening outside the user’s window, in an outdoor environment that matches the user’s region.
This phase takes marketing back to its roots, to relationship building. Creating that volume of content would be impossible without the power of on-demand content creation. The creative assets are assembled at the moment of consumption by sophisticated software. We have tools that can place clothing on models. Every garment on the site is already photographed, and if you have photographs of your models, you could let the computational engines render every combination of clothing on every model, in every position. But then consider the various poses, the various sports, maybe the props in the picture, and the environments where the model could be placed. Consider the various offers: is it 2-for-1? Is that better than buy online, pick up in store? How should we consider matching the previous purchases made by that user with the clothes that we place on the model that we select for her? The combinatorial mathematics needed to pull this off moves into the trillions of options pretty quickly on most major retailer websites.
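To see how fast those combinations explode, just multiply the options at each decision point. The counts below are entirely hypothetical, but with even modest numbers per dimension a large catalog crosses a trillion possible variants.

```python
from math import prod

# Hypothetical option counts for each creative decision point.
options = {
    "garments": 10_000,
    "models": 40,
    "poses": 12,
    "sports": 20,
    "props": 15,
    "environments": 30,
    "offers": 4,
    "weather_conditions": 10,
}
total_variants = prod(options.values())  # product over all decision points
```

With these made-up counts the product comes to 1.728 trillion variants, which is why the assets have to be assembled at the moment of consumption rather than pre-rendered.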
But Paul, this can all sound like something futuristic, right? Well, a lot of these tools are very accessible and can work very rapidly if you have advanced through the maturity model without trying to skip a stage. Many of the capabilities I’ve described are already embedded in the digital marketing clouds you may have already purchased. The point is to balance your ability to get smart using data with your ability to get return. And that’s the activation.
So data is knowledge. Get to know your customer. Let them know that you know them. Anticipate their needs. Give them an experience that enriches their life, and you’ll be successful and praised in your net promoter score. Just treat them like you would a friend and make their life better.
You know, I bought a used German car when I was a second lieutenant. I loved it, and I loved that, even after owning it for a few years, I was occasionally finding little new features and marveling at the brilliance and the genius of the engineering. Maybe that says a little more about my not reading the manual, or about my mechanical prowess, than it does about this German car manufacturer’s engineering mastery. Years later, in my master’s program, I had a chance to visit a factory in Germany that manufactured this brand of cars. And at the conclusion of the tour, well, I learned something. While I may have thought that my original car was a modern wonder, what I really learned after visiting the factory was that this manufacturer was able to produce that marvelous product over 1,600 times a day. That isn’t just a modern marvel; that is the power of machines. And it’s a miracle. Oil powered the Industrial Revolution. But it’s data that powers the Digital Revolution.
One of the most underappreciated skills in digital transformation is the ability to empathize with your customer. Ask any great product company, and they’ll tell you the secret to great engineering is empathy. To deliver one enriching experience is indeed a modern marvel, but to do that at scale is transformative. It’s miraculous. And it’s within your power as a digital transformation leader.
Congratulations, you’ve leveled up your digital fitness with this episode’s Quick Hit. There’s a lot more we can learn now that you’ve mastered the Data Maturity Model. And that’s exactly our topic for the Decision Maker’s Advantage.
Welcome back to the Decision Maker’s Advantage, our segment of the show where we reveal the unlocks that will augment your professional judgment and help you accelerate your corporate business objectives and personal career trajectory.
In the Quick Hit, we talked about the five phases of digital maturity. The digital maturity of your organization is important when it comes to efficiently executing internal initiatives, but the real miracle happens when conversations and actions shift to advancing the customer experience at scale. How does a customer think, feel and act when interfacing with an organization at each phase of the digital maturity lifecycle, in their customer journey? What kinds of conversations can an organization have with the customer at scale? How do those engagements change as your organization invests in its digital fitness through the five phases of data readiness? Well, today, let’s go on a quick journey through the evolution of digital maturity.
So today what I want to do is give you a real-life example of a specific company and their own maturation of digital maturity. And that way we can make some of these academic terms a little bit more real. So over the years, people have told me that some of these models might be a little bit hard for them to understand and to put into context. And so what I tried to do is provide some actual events that are characterizations of what I would call a composite character of actual customers that I’ve served over the years. So these events aren’t made up. This is a composite sketch of scenes that I’ve encountered rolled into a storyline that might make these models relatable for you.
So let’s talk about a fictitious company called TechX. TechX is a large-enterprise, high-tech firm based out of New Jersey. Their solutions include both software and physical products, and both can be implemented across a wide variety of organizations and industries. And in many industries, they have derivative products based off of their flagship line. TechX has positive earnings and a management team who are experts in technology, but only in the technology they manufacture. And TechX prides itself on keeping a pulse on its customers’ needs and the market at large to understand new trends. They also believe they’re pretty digitally advanced, and how couldn’t they be? They’re a high-tech company that employs a lot of smart engineers, a marketing team who are super knowledgeable about their products, and sales veterans who’ve been in the industry for over 20 years.
So next, I want to introduce you to one of TechX’s potential customers. We’ll call her Jane. Jane’s company is headquartered in Seattle, but she happens to live and work in Miami. She manages a regional office outside of Miami, Florida, for a company that is a perfect fit for TechX’s offerings, and her office is on the hunt for a solution like the one TechX sells. She became acquainted with the company at a regional trade show. She talked to a very informative member of the TechX team, and even though the conference was in Miami, where Jane lives, the TechX rep who greeted her lives in Boston. You see, the marketing department was short-staffed and found a few sales reps on the East Coast to assist at the booth. The sales rep scanned Jane’s badge and gave Jane a glitzy brochure, some product samples, and information to help Jane estimate the costs so that she could start budgeting for the solution. The case studies in the packet were short, but they were for Jane’s industry, and they demonstrated that TechX had experience helping other customers like her with her pain. So Jane agreed, while at the trade show, to receive a call to schedule a demo.
So now let’s go back to TechX. Let’s take a walk up to the marketing department. They have a small event marketing team that manages events, and they use this cool app that helps orchestrate events and even has this tidy kiosk with those iPads in the booth. But the trade show company used a different app and provided the scanners to scan the badges of participants who stopped by the booth. So the event marketing team got back from the trade show and asked all the sales reps how the trade show went and if there were any leads. They also wrote the event organizers and asked for the list of names of the people who stopped by the booth, which they sent. Only problem was it went to the only email address on file, which was TechX’s accounts payable department. The sales team at the event organizer only happened to record, in their CRM system, the email for who to send the invoice to. So the event marketing staff at TechX was frantically asking the event organizer for the file, and after three attempts, they finally got it, but two weeks after the trade show, and what they sent was a pipe-delimited CSV file. The event marketing team had never seen a pipe-delimited CSV file, and they didn’t know how to open it. The furthest that file got in Jane’s customer journey was to a folder located eight levels deep in the file hierarchy of TechX’s SharePoint enterprise content management system.
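For what it’s worth, a pipe-delimited CSV is trivial to open once you know to specify the delimiter. Here is a sketch using Python’s standard csv module, with made-up file contents standing in for the attendee list.

```python
import csv
import io

# Hypothetical contents of the pipe-delimited attendee file.
raw = (
    "name|company|email\n"
    "Jane|Example Corp|jane@example.com\n"
)

# csv handles any single-character delimiter; just say which one.
rows = list(csv.DictReader(io.StringIO(raw), delimiter="|"))
```

One parameter, `delimiter="|"`, is all that separated Jane’s record from a folder eight levels deep in SharePoint.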
The glorified share drive just isn’t unlocking the power of data for TechX. But for the VP of marketing, who prioritized brand awareness, the fact that the event was held was the desired outcome. And the VP of sales, well, her focus is on the late-stage pipeline and close ratio of each rep. And then we come to the sales rep from Boston. She doesn’t view Jane’s business card as data. She put it in her computer bag, and there it went to die. If only she had entered it into the CRM, things could have been different. And the event marketing team doesn’t track KPIs related to the ROI from the event. They are incentivized on activity, on the number of events they hold, not the closed deals attributed to them. So there’s no concentration of effort around the people at the accounts they’re serving.
And this story, or a similar comedy of errors, repeats at far too many of their events. And consider that they do nine major trade shows a year and 24 local dinners with their sales teams every quarter. Jane didn’t get an email thanking her for filling out the form at the show and prompting her to find time on the calendar for a call with the local Miami rep. Should TechX interact with Jane in the future, it’s likely that they’ll never make the connection as to when they first met her. And they did contact her. See, a recent college grad was hired into a new sales development representative role and got Jane’s contact information out of an industry database tool like SeamlessAI or ZoomInfo.
He sent her an email about the next trade show coming up in Seattle, and attached to the email was a link explaining who TechX is. But ZoomInfo had her company listed in the wrong industry, and he also assumed that Jane lived near her company’s corporate headquarters. So the new SDR sent the wrong industry positioning to Jane and promoted an event in a faraway location. TechX had one chance to make a first impression, and they blew it. See, Jane only sees one TechX, but TechX sees many instances of Jane. Yeah, I know it’s B2B marketing and it’s different from B2C marketing, but we need to get past that and remember that companies don’t make money, people do. So we really should be embracing the concept of person-to-person marketing. Now, I don’t really care if we’re talking about business to government or business to consumer, or even direct to consumer or any of the other permutations. At the end of the day, human beings, imperfectly rational actors that we are, are interacting with other human beings. We have faulty memories, we have imperfect information, and we respond to relationships, to relevant information, and to the people and brands who have our back.
Relationships start with showing others that they can trust us, and we do that by demonstrating that we know you and that we understand your pain and that we can add value. It’s a pretty simple equation, actually, and experienced digital leaders build their multi-year plans using those three points in their digital transformation roadmaps.
On an organizational level, the disconnected views of Jane are signs that TechX doesn’t understand the value of its data. Sales has one view, and marketing has another. TechX can’t determine the return on its trade show investment by tracking each deal that was either attributed to or influenced by its trade show sponsorship and participation. Instead, the purely anecdotal reports of the marketing department, the ad agency promoting the event, and the salespeople reign supreme in the meetings where the trade show is discussed. And the nature of the questions revolves around what happened. How many impressions did we get? How many people stopped by the booth? How many meetings were set up? And sometimes when marketing assesses the number of leads they got and sales assesses the number of leads they got, it’s really an interesting moment. And generally, in my experience, those estimates can be off by an order of magnitude. I hate to say it, but both departments, marketing and sales, have bias. Bias isn’t just being wrong. An unbiased estimate might land over or under the true value on any given guess, but a biased estimate is consistently wrong to the same side. And that’s what’s happening a lot of the time with marketing and sales departments. Marketing thinks that the average deal size and average close rates are toward the upper end, near whatever the largest deal was that closed last year, and that the close ratio for the leads matches late-stage pipeline. Conversely, sales may think the leads aren’t very good, so they assume the average deal size is toward the lower end and the close rate is abysmally low because the leads are low quality. So you’re left with some finger pointing. Marketing says our sales team doesn’t know how to close, and sales says the leads aren’t very good.
No one believed it was possible to tie the leads from the show to the specific deals that sales would eventually close. The sales process is too complicated since it’s B2B. That was the old story. So the metric for success was the activity, not the results. And the sales team thinks it would be great to go back to the conference again.
Luckily, a few months after the show, a budding young analyst on a small customer analytics team at TechX puts together a persuasive presentation outlining the benefits of connecting CRM to their event marketing. A few of the rookie salespeople were recruited to run a pilot. You see, at TechX, marketing is responsible for event marketing and the leads that come out of the show. That’s one silo. And sales account executives are responsible for the customer relationships and the updates to CRM for their accounts. That’s a separate silo.
So everyone manages their data in these different silos. Event leads were managed in marketing automation tools like Marketo, and the account executives who manage their accounts in CRM were using tools like Salesforce or HubSpot or SugarCRM. And of course, there are more than a handful of experienced sales reps who view their relationships as personal assets rather than shared assets of the company, so they resist using CRM at all. I’ve heard more than once: “Spreadsheets don’t work as well as me going to my customers’ children’s weddings. I own that relationship, and neither software nor a marketing desk jockey can replace what I do in the field.” Right… However, that young analyst’s PowerPoint presentation worked its way up the chain of command at TechX, and after a one-on-one between the chief growth officer and the chief marketing officer, they gained consensus to end the sticky notes and local spreadsheets and fund the integration of the event marketing, marketing automation, and CRM platforms.
Digital transformation had to be led by the C-level, and it needed to be inter-departmental and multidisciplinary. They decided to start breaking down the silos, getting a central view of their customers, and making everyone smart at the same time on all of the customer’s interactions with the brand. After a few months, and with more than a few grumbles, the data began to be put in motion. You remember that spreadsheet from the trade show that Jane attended? It stopped being a spreadsheet, and the event management software pushed each customer record, in real time, to CRM the second a badge got scanned on the trade floor. The marketing automation system was listening for any new entries in CRM and, using some rules about the industry that Jane worked in, sent her an immediate email thanking her for stopping by and providing the contact information for the dedicated sales representative assigned to her in Miami. And better yet, that sales rep was the guy who said that CRM was going to take away his control and screw things up for him. But you know what? The email signature was personalized, and to Jane, it appeared he sent it just to her. I suppose it’s safe to say that automation gave him more control.
A year went by, and Jane returned to the trade show. She stopped by the booth again, and the sales rep scanned her badge. The API from the event management software had been connected to their marketing automation system, and within a minute, Jane received an email. It was the first of many pre-planned emails that the marketing team had built for people in Jane’s industry who have Jane’s role. The first email’s subject was “Thanks for stopping by. I am your account rep.” Over the next 30 days, Jane received additional emails, which included two industry-specific case studies, a how-to-buy guide for TechX, and an ROI calculator that the company had recently launched. Jane clicked on the ROI calculator and revealed her annual budget, the size and scope of what she wanted to accomplish, and even selected between the annual and monthly payment options based on the purchasing rules of her organization. She basically shared all of her pain. She also clicked on the “Share the results of this ROI calculator” button with three of her colleagues within her firm who hold her job on each continent. That’s when the sales rep realized there was an enterprise-wide opportunity to sell globally to Jane’s company, and he researched it. But at that moment it was Jane who called him, and the timing couldn’t have been better, because she was an informed buyer and the sales rep had already done his homework. Jane scheduled a demo with TechX, the salesperson asked if other regions would be interested, and Jane invited her peers from the other three global divisions to the demo.
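For the engineers listening, the badge-scan-to-first-email flow can be sketched in a few lines. This is a minimal illustration under assumed names, not any vendor’s actual API: `Contact`, `on_badge_scan`, and the drip sequence content are all made up for the example.

```python
# Sketch: event software pushes a scanned badge to CRM in real time,
# and the marketing automation layer fires the first drip email.
# All names and data here are hypothetical.
from dataclasses import dataclass

@dataclass
class Contact:
    email: str
    industry: str
    region: str

# Pre-planned drip sequences keyed by industry, as the marketing team built them
DRIP_SEQUENCES = {
    "logistics": [
        "Thanks for stopping by. I am your account rep.",
        "Case study: logistics",
        "How to buy from TechX",
        "Try our ROI calculator",
    ],
}

crm = {}      # stand-in for the CRM system, keyed by email
outbox = []   # (recipient, subject) pairs "sent" by marketing automation

def on_badge_scan(contact):
    """Event-management webhook: push the record to CRM the second it's scanned."""
    crm[contact.email] = contact
    # Marketing automation listens for new CRM entries and immediately
    # sends the first email of the industry-specific sequence.
    sequence = DRIP_SEQUENCES.get(contact.industry, [])
    if sequence:
        outbox.append((contact.email, sequence[0]))

on_badge_scan(Contact("jane@example.com", "logistics", "Miami"))
```

The point of the sketch is the event-driven shape: the scan is the trigger, CRM is the system of record, and the email follows automatically rather than waiting on a spreadsheet upload.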
At TechX, the European team and the Latin American team ran on different CRM platforms. But since the rollout of the customer data lake, all of the data went to a central repository, and the differences were then pushed back out to each system. So everyone had a view of all the contacts and company records, no matter which CRM system they used. So the sales rep was able to connect the contact records of all of Jane’s peers to the same company. After the demo, the sales rep put together a quote for a global deal with Jane’s company. And since CRM and the marketing automation tools are integrated, marketing was able to attribute the source of the deal to the trade show, and consequently demonstrate the direct contribution that marketing was making to sales. A new KPI: marketing’s contribution to sales.
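The data-lake pattern underneath that story is simple to sketch: pull records from each regional CRM, match them on a normalized key, and merge the fields so every system can see the full picture. This is an assumed, toy implementation, not how any particular CRM vendor does it.

```python
# Sketch: merge contact exports from multiple regional CRMs into one
# central repository, keyed by a normalized email address.
def normalize(email):
    """Lowercase and trim so 'Jane@Example.com ' matches 'jane@example.com'."""
    return email.strip().lower()

def merge_crms(*crm_exports):
    lake = {}
    for export in crm_exports:           # one export per regional CRM
        for record in export:
            key = normalize(record["email"])
            merged = lake.setdefault(key, {})
            # Keep the first value seen for each field; later systems
            # only fill in fields that are still missing.
            for field_name, value in record.items():
                merged.setdefault(field_name, value)
    return lake

# Hypothetical exports from two regional systems
europe = [{"email": "Jane@Example.com", "company": "Acme Global"}]
latam  = [{"email": "jane@example.com", "phone": "+55 11 5555-0100"}]
lake = merge_crms(europe, latam)
```

A real deployment would also sync the merged record back out to each CRM and handle field-level conflicts, but the key idea is the shared key and the central repository.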
With a full 360-degree view of each company and the associated contact records for each individual at that company, the team looked into establishing lead scoring. Because the sales staff at TechX is relatively small, the marketing team wanted to make sure that the leads they pushed to the sales team were highly qualified. So they ran a workshop and defined the difference between a marketing qualified lead, a sales accepted lead, and a sales qualified lead. The lead score was a weighted score across three dimensions: 1) the firmographics of the prospect, 2) the demographics of the contact at the company who was filling out the form, and 3) the behavioral score of how the contact has engaged with the brand. That way, a smaller company with a highly engaged person who has authority to purchase might still be a good fit for the sales team. And of course, the sales folks always want to know about larger companies too, even if the engagement may not be as high or the original contact may not have had authority to purchase. Every digital interaction with those contacts registers and accumulates points, and only the leads that hit the sales qualified lead threshold are sent forward to the sales rep. And, of course, leads that become proposals and revenue can be tracked to determine whether the SQLs, or sales qualified leads, close at a faster rate than the average lead, and to demonstrate the revenue attributed to the marketing department.
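That three-dimensional weighted score can be written out concretely. The weights and the SQL threshold below are made-up numbers for illustration, not TechX’s actual model; every team tunes these in its own workshop.

```python
# Sketch of a weighted lead score over the three dimensions described:
# firmographics, demographics, and behavior. Weights and threshold
# are illustrative assumptions.
WEIGHTS = {"firmographic": 0.3, "demographic": 0.3, "behavioral": 0.4}
SQL_THRESHOLD = 70  # only leads at or above this go forward to a sales rep

def lead_score(firmographic, demographic, behavioral):
    """Each dimension is scored 0-100; the total is the weighted sum."""
    return (WEIGHTS["firmographic"] * firmographic
            + WEIGHTS["demographic"] * demographic
            + WEIGHTS["behavioral"] * behavioral)

def is_sales_qualified(score):
    return score >= SQL_THRESHOLD

# A smaller company (modest firmographic score) with a highly engaged
# contact who has authority to purchase can still clear the bar:
score = lead_score(firmographic=50, demographic=90, behavioral=95)
```

The useful property is exactly the one described in the episode: a heavy behavioral weight lets an engaged buyer at a small firm outscore a cold contact at a big one.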
The marketing team also wanted to help address churn. They used the power of data science to divide all users and companies into five groups, like a histogram, from ‘superusers’, who access the software frequently, all the way down to users who rarely log in. With that knowledge, TechX identified the accounts most likely to discontinue the service. Customers who fell into the ‘at-risk’ category were put into an email campaign to help them capture more value from TechX’s products and services and gain access to specialized training that was created specifically for this purpose.
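The five-group split is a quintile segmentation on usage. Here is a minimal sketch, assuming login counts per account are the engagement signal; the account names and labels are invented for the example.

```python
# Sketch: rank accounts by login frequency and split them into five
# equal groups, from 'at-risk' (rarely logs in) up to 'superuser'.
LABELS = ["at-risk", "low", "medium", "high", "superuser"]

def quintiles(logins_by_account):
    """Return a segment label for each account, by rank of login count."""
    ranked = sorted(logins_by_account, key=logins_by_account.get)
    n = len(ranked)
    segments = {}
    for i, account in enumerate(ranked):
        # Map rank position to one of five buckets
        segments[account] = LABELS[min(i * 5 // n, 4)]
    return segments

# Hypothetical monthly login counts
logins = {"acme": 2, "globex": 45, "initech": 9, "umbrella": 31, "hooli": 17}
segments = quintiles(logins)
```

In practice you would score on more than raw logins, but the output is the same shape: the bottom quintile feeds the at-risk email campaign automatically.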
Finally, with its digital foundation, data readiness, and high stage of maturity, TechX was ready to reach the cognitive stage and take advantage of content at scale. One of the deliverables of the campaign was a new infographic tool that took the top facts about each account and created custom infographics based on the value that each customer company is realizing from the TechX solution. The graphics included the types of KPIs that align to the reason the customer bought the software in the first place, contained different images for each customer relative to their industry, and carried a look and feel that represented that customer’s industry.
There’s no way the graphic design team at TechX could have possibly made such personalized, industry-specific, and company-specific infographics for thousands of customers. Now that algorithm is always on, and any time a company crosses the thresholds into the last quintile, it is automatically entered into the sequence to receive the infographic. And the accounts that were entered into the churn prevention campaign were retained at a 35% higher rate than before.
Lastly, they enabled the chatbot that steers users to the FAQs to answer questions and reduce the strain on the call center. The chatbot was extremely effective at deflecting routine calls, the types of things an automated system can handle. So when customers do call, the call center and the salespeople can better understand how to help, and the nature of those calls is higher value. Of course, all of this relies on users being willing to access TechX’s digital properties, and when they do, they need to log in.
Everything about this story is possible today and is compliant with the regulations for data protection and privacy. It relies on TechX providing digital experiences that customers are willing to log in to its website to access. Data maturity isn’t just dashboards and analytics; it powers winning customer experiences. Its impact reaches well beyond the marketing department. It touches sales, the call center, operations, customer success, IT, legal, product development, research and development, and even finance and accounting. Data maturity earns sales teams better commissions. It helps product teams build a better product, and it equips executive teams to lead their organizations to take market share by understanding who the customer is and then enhancing the customer’s experience.
Earning customer loyalty is really, well, it hasn’t changed much since the dawn of time. It’s about relationships. It’s about building trust by showing that we know you, we understand your pain, and we can provide something valuable, that we’re going to serve you and help you. Companies that invest in their digital experiences earn the right to be trusted, and customers love that. They trust brands and people who know their name, who know their context, and who provide relevant value. And when you can do that, you can build advocates. The way an organization deals with its data changes the foundation of its customer experience. Data powers decisions. In today’s Decision Maker’s Advantage, we showed you why.
What else did I miss in our digital maturity story? Where do you agree or disagree? I’d love to engage with you about your perspective. Stay tuned to The Visionary’s Guide to the Digital Future, where we explore all of that and more. In the meantime, share the graphic below and join the conversation to weigh in on what senior executives need to know about unlocking the value of their data. Please share your feedback with us on social media and follow us on YouTube, Twitter, LinkedIn, Facebook, and wherever you get your podcasts. I’m Paul Lima, host of The Visionary’s Guide to the Digital Future.
I’ll see you in the digital future.