The Visionary’s Gen AI starter kit


I’m your host, Paul Lima, managing partner at the Lima Consulting Group. From Wall Street to the Pentagon and Fortune 500s alike, I’ve been a part of some of the largest digital transformations ever done. We promise three things here: a strategic perspective, content geared for decision makers, and actionable insights that digital visionaries can apply immediately to real problems.

Welcome to The Visionary’s Guide to the Digital Future. If the best way to predict the future is to invent it, then let’s get you ready to do just that. This podcast is created for the visionaries of today, who are charged with creating the digital experiences of tomorrow.

Those who listen to The Visionary’s Guide to the Digital Future have an unfair advantage as they invent the future with their finger on the digital pulse, having invested in their digital fitness and having gained a long-term perspective mixed with practical ways to apply what they’ve learned within minutes. Let us know if we’re helping you accelerate your business objectives: subscribe to our show, message me directly on social media, or email me at [email protected]

Today, we have an opportunity to speak with Lorien Pratt, a PhD based in Denver, Colorado. I’m delighted to bring Lorien on the show to speak to the digital marketers, the decision makers, the change agents, and visionaries about what’s happening in the world of ChatGPT, artificial intelligence, and machine learning. With so much buzz in this space, it can be difficult to find the folks who are really responsible and knowledgeable for a lot of the innovations that we’re creating today.

The folks who really, literally, are rewriting the rules and the playbooks. And so we’ve brought in one of those authors who has literally done just that. Matter of fact, part of what she has developed is actually embedded in ChatGPT.

Lorien is the chief scientist and cofounder of Quantellia, a company that provides world-class applied AI, machine learning, and a new category of artificial intelligence called decision intelligence.

And she’s been responsible for lots of services and software solutions that are embedded in many of the technology platforms that you’re aspiring to use, or are already using today, with ChatGPT.

Lorien’s the author of the first book ever written on decision intelligence. Matter of fact, she literally coined the name. The title is “Link: How Decision Intelligence Connects Data, Actions, and Outcomes for a Better World”. This July, Lorien is releasing her second book, “The Decision Intelligence Handbook”.

Lorien has over 40 years of experience developing machine learning models, and some of her work is embedded in major applications that you’re probably using today, like ChatGPT. She’s been featured in two TED talks and is a frequent speaker on national radio, television, keynotes, and webinars.

I’m delighted to have Lorien on the show. So welcome, Lorien.

Lorien: “Hey, Paul. Honored to be here”.

Paul: Why don’t I just start off by asking you a little bit about your 40 years of experience delivering machine learning models. I think it’s important for the audience to recognize that it’s one thing to bring in an academic; it’s another to bring in someone who’s really been on the road, in the trenches, delivering artificial intelligence and these really high-powered machine learning models in industry.

So tell me a little bit about your experience.

Lorien: “Well, thank you, Paul. It’s been a really wonderful journey. I was originally an academic, where the incentive is to publish papers, to get grants, and to build a technology out in a new direction.

“But where my heart always lay was the transfer problem: how do we take that wonderful toolkit that we have, whether AI or machine learning or digital twins, and how do we actually use it to achieve business outcomes?

“Or to solve some of the big problems that we face, like climate and poverty. And so my journey has been one of starting in academics, where the idea is just to extend the existing research. Then I swung around and was a market analyst for a number of years, and I realized that the research wasn’t really making it in any sort of a big way into the hands of decision makers. There were a few use cases in marketing and advertising for machine learning, but there was so much need for evidence-based or more structured decision making on the part of leadership worldwide, and it just wasn’t happening. And so that’s where decision intelligence came from. And the rest is history”.

Paul: I’ve had a chance to watch some of your speeches, and one of the things I really love about the way you tell stories is that you’ve found a way to incorporate the story line with your mom and the story line with Bowie, your dog.

And really, what I’m trying to get at is this: there are PhDs in computer science who’ll walk you through the facts, and then there are storytellers.

And I think this is one of the superpowers that you bring, and I hope to be able to hear some of those stories today. Tell us a little bit about your work. You’ve applied these techniques to a large variety of applications, and there was one in particular that caught my interest, which was the Human Genome Project. Can you talk to us a little bit about your experience on that one?

Lorien: “Sure. Thanks to the Human Genome Project, I was funded through graduate school, and that was a really exciting time, because we were taking the DNA base pair sequences, you know, the A-C-T-Gs, and we were trying to learn new things about them. In particular, there’s some DNA you might have heard of, kind of junk DNA, that doesn’t have any purpose.

“And so the question is: which of the DNA actually creates blue eyes or, you know, ears of a particular size? And which of the DNA is kind of older and doesn’t really activate? It turns out you can use machine learning to figure that out. And it’s really representative of a lot of machine learning problems, in that you have this giant data set.

“Right? In this case, it was just A-C-T-G-G-G-A-C-T going on for about a billion letters. Right? And then you have some people who’ve done some very hard work and say: here’s the place where the junk DNA starts, this ACTG, and here’s the place where the junk DNA ends, the TTGGA.

“Right? But what is that pattern? People had been trying to write computer software to figure that out: well, maybe it’s when there’s two Ts and five Gs, etcetera. Right?

“And it turns out machine learning was better at that task than people writing software. For those of your audience who don’t know what machine learning is, the really simple definition is this: instead of building software by writing code, like in JavaScript with if-thens, etcetera, you write software by giving a computer examples of the input and output. And that’s what we did on the Human Genome Project, as well as on many other projects. We said: here’s where those junk DNA sequences are that we know about.

“Can you, computer, figure out what it is about that pattern that starts and stops those junk DNA sequences? And so that has been a template for hundreds of projects I’ve done over the years, where we don’t know how to write software for something, but we do have examples of the input and output. You give those to a machine learning system, and it figures out how to go from input to output. And it’s incredibly powerful, and has been for 40 years”.
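For readers who want to see what “giving a computer examples of the input and output” looks like in code, here’s a minimal sketch in Python. To be clear, this is a toy illustration, not Lorien’s actual genomics work: the sequences, labels, and the simple nearest-neighbor approach are all made up for the example. The point is that no rule like “two Ts and five Gs” is ever written; the program learns only from labeled examples.

```python
from collections import Counter

def features(seq):
    """Turn a DNA window into simple letter-frequency features."""
    counts = Counter(seq)
    n = len(seq)
    return [counts.get(base, 0) / n for base in "ACGT"]

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(train_examples, seq):
    """1-nearest-neighbor: label a new window by its closest training example."""
    feats = features(seq)
    _, label = min(
        ((distance(features(s), feats), lbl) for s, lbl in train_examples),
        key=lambda pair: pair[0],
    )
    return label

# Labeled input/output examples: windows a human annotated as coding vs. junk.
# (Entirely invented data; real annotations come from painstaking lab work.)
train = [
    ("ACGTACGTACGT", "coding"),   # balanced letter mix (toy assumption)
    ("ACGGACTGACGT", "coding"),
    ("TTTTGGGGTTTT", "junk"),     # skewed letter mix (toy assumption)
    ("GGGGTTTTGGGG", "junk"),
]

print(predict(train, "ACGTACGGACTT"))  # closest to the "coding" examples
print(predict(train, "TTTGGGTTTGGG"))  # closest to the "junk" examples
```

Real systems use far richer features and models, but the workflow is the same one Lorien describes: examples in, learned pattern out, no hand-written rules.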

Paul: Well, that relates to a comment I made in our last episode: if you just put the word computational in front of any academic discipline, there are applications for it. Computational politics, computational ethics. It’s not just computational physics or computational finance. I had a class in grad school called computational finance in about 2001.

But when you start applying this power of computational creativity, computational ethics, computational politics, computational insert-name-of-academic-discipline, there is usually an opportunity to use large data sets and machine learning. From what I gather, your experience has been more about working in many different industries rather than, say, exclusively in one. Right? So what would you add to that comment about putting the word computational in front of just about any academic discipline?

Lorien: “I think you’re absolutely right. And I think we see a lot of organizations on what we might call a digital transformation journey, of which machine learning is a step along the way. At the beginning, you know, nothing’s digitized. And then they decide they’re gonna have some data that they’re gonna capture and track.

“And then a few years later, after they’ve been tracking some data, they say: hey, maybe that historical data has some value in it if we build a machine learning model using it. I think the most common example is having historical data of when customers churned. This is probably the most widespread machine learning example, and churn means we lost a customer from a subscription plan or from a telecom. And hey, we have this historical data that shows all these characteristics of a customer: how often they call the call center, and their other behavioral characteristics.

“Can we use that to build a machine learning model that’ll predict churn?

“Right? And so there’s that pattern where, partway along the digital transformation journey, organizations realize they have this big data set, and that dataset represents these historical inputs and outputs, and then they feed it to a system that automatically builds software for them. Churn is just one example. Customer lifetime value is another one.

“What’s the likelihood of an intervention working? If I pick up the phone and call them, will they not leave me? Or should I send them a direct mail campaign? These are all examples of situations where you can use this historical data.

“To automatically build software that helps make a prediction. And in many domains these days, those predictions are better than humans could do. We’re seeing machine learning reach levels of accuracy that humans aren’t capable of, because it detects very subtle signals. Like in the churn example, we discovered in one case that, you know, our most likely customers to churn are the ones in this ZIP code, because they’ve received a direct mail campaign, but only if they’re women, heads of household, less than thirty years old, with at least one child. Right?

“And, you know, it’s impossible to come up with that very complex pattern just by eyeballing a million rows of data. Right? But machine learning can find that pattern, and it’s very, very common”.
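To make that “pattern you can’t eyeball” concrete, here’s a small sketch of the idea in Python. Again, this is a toy, not any real telecom’s system: the customer rows and field names are invented, and real churn models use statistical learners rather than the brute-force segment search shown here. But it illustrates the principle Lorien describes: software, not a human, searches the combinations of attributes for the segment with the highest churn rate.

```python
from itertools import combinations

# Toy customer rows (hypothetical fields, not a real telecom schema).
customers = [
    {"zip": "80202", "mailed": True,  "head_of_household": True,  "churned": True},
    {"zip": "80202", "mailed": True,  "head_of_household": True,  "churned": True},
    {"zip": "80202", "mailed": False, "head_of_household": True,  "churned": False},
    {"zip": "10001", "mailed": True,  "head_of_household": False, "churned": False},
    {"zip": "10001", "mailed": False, "head_of_household": False, "churned": False},
    {"zip": "80202", "mailed": True,  "head_of_household": False, "churned": False},
]

def churn_rate(rows):
    return sum(r["churned"] for r in rows) / len(rows) if rows else 0.0

def best_segment(rows, attrs, max_size=3):
    """Brute-force the attribute-value combination with the highest churn rate."""
    best = (churn_rate(rows), ())  # baseline: the whole customer base
    for k in range(1, max_size + 1):
        for combo in combinations(attrs, k):
            # Candidate values for each attribute come from the data itself.
            values = {a: {r[a] for r in rows} for a in combo}
            def segments(i, chosen):
                if i == len(combo):
                    yield dict(chosen)
                    return
                for v in values[combo[i]]:
                    yield from segments(i + 1, chosen + [(combo[i], v)])
            for seg in segments(0, []):
                matching = [r for r in rows if all(r[a] == v for a, v in seg.items())]
                if len(matching) >= 2:  # ignore tiny, unreliable segments
                    rate = churn_rate(matching)
                    if rate > best[0]:
                        best = (rate, tuple(sorted(seg.items())))
    return best

rate, segment = best_segment(customers, ["zip", "mailed", "head_of_household"])
print(rate, segment)
```

On this invented data, the search surfaces a multi-attribute segment (mailed heads of household) with a far higher churn rate than the baseline, exactly the kind of compound pattern that would be invisible when scanning rows by hand.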

Paul: You know, that example reminds me of the story around beer and diapers.

I remember learning it from Kirk Borne, who I love a great deal. He’s such a fantastic guy.

And Kirk has this story. He talks about the relationship between men who go into a convenience store, usually after work, and they stop in and buy diapers. The most highly correlated product they buy with the diapers is beer. And if you just look at the data, you can’t really make that correlation.

But what is the hidden factor that is creating this correlation between beer and diapers? And, you know, anybody who has a toddler at home knows it’s a crying baby. What was the other external factor that’s not in the data? If you’ve been there, raise your hand. Right?

And so in the example you gave about churn, it might be that a competitor offered that particular segment, you know, a very sweet deal, maybe get a free cell phone for your kiddo, right? Because you said it was women who have children. And you won’t know that; it’s not in your data set. These little subtleties are factors that are impacting outcomes in ways the data itself doesn’t give you the chance to reveal alone.

It just gives you the output.

Lorien: “Well, that’s a really good point. And I really like that you brought that up, Paul, because it illustrates what I think is one of the misdirections that machine learning is in right now. It’s been so successful in certain use cases, creating software automatically from these datasets, that it doesn’t, as a field, really engage with these causal relationships, like the one you just described, that aren’t in the data.

“And what I have found is that when I’m helping organizations to make decisions that are very impactful in complex environments, most of the pieces of that decision are not in any dataset, but they are in the heads of humans. You know, humans have a causal, mechanistic understanding of how the world works. We understand that crying babies might make you want to go buy beer. Right?

“But there’s no data set for that. And, you know, what’s interesting about machine learning is that data was so successful that we’ve got this tunnel vision: if it’s not in the data, then it must not be relevant to our decision. That’s why I invented the decision intelligence discipline in 2009, for exactly the reason you just said: machine learning was only useful if the information was in the data, and we were ignoring these understandings of how the world works that are in humans’ heads. So decision intelligence is about eliciting that, ideally from a diverse group of people.

“You know, what is the structure of the decision you’re making? And we can talk about that more in a minute. And then, after you know the structure of a decision, there might be some machine learning models that fit into that decision.

“But it really turns everything on its head. We don’t start with the data. We start with somebody in a particular role who’s trying to achieve some business outcome, and then ask: what are the actions that they can take? And let’s make sure we get their understanding of the diapers and the beer and how everything fits together. Then we can go look for some datasets that might inform that”.

Paul: Well, this podcast is predominantly for those responsible for customer experiences. And so last episode, I talked about the maturity model for data. It starts with just collecting it, so that’s descriptive. Then it’s diagnostic, where we’re getting some causality, maybe at a very general level. Then it’s predictive.

Where we’re starting to look at and find equations. Then it’s prescriptive, where those equations can be automated, solved, and optimized. And then finally, the wrinkle that we put on it is cognitive, meaning that the machines can actually help us go backwards and collect more of the data, do more of the diagnosis and causality, actually calculate the equations, and then, on their own, begin to solve them using cognitive or computational capability. So what I wanted to go back to is that at each one of these stages, there are symptoms that happen in the organization.

In the very beginning, it’s: hey, we don’t have the data. So in descriptive, we’re just capturing the data. In diagnostic, the catchphrase that I always like to use is that title trumps data. And that means that the HiPPO, the highest-paid person’s opinion in the room, is always making the decision.

And those senior executives, well, they’ve been doing it an awfully long time. You might hear: oh, I don’t trust the data, or the data isn’t really accommodating all the nuances that I’ve captured in my career as, you know, a first-line supervisor or what have you. But what I hear, and this is where I wanna get to, what I hear in decision intelligence is that you’ve crafted a discipline that accommodates both gut and data. Right? So how would you say that you’re able to incorporate the human dimension, along with what comes along with the data, in decision intelligence?

Lorien: “I’ll tell the dog story now in order to answer that question. So my dog, I train him. Right? And he lives in this world of what we call antecedent, behavior, consequence. So the antecedent is: he’s in the kitchen, and I say sit.

“And he does the behavior. That’s the B part of it. He sits. And the consequence is he gets a cookie. Right? That’s the C part of it. It turns out that HiPPOs and other humans also live in a world of antecedent, behavior, consequence.

“And in decision intelligence, we use slightly different language that’s a little more business-oriented, which is context, actions, and outcomes.

“And when we start to draw pictures of leaders’ decisions within this framework, and take what’s in their heads right now, that leader who’s imagining, well, if I do this, it’ll lead to this, and it’ll lead to this, and he’s sort of thinking it through in his head, if we take that out of his head and we draw a picture of it, and then we invite others to collaborate around that picture, we get much, much more intelligent about decision making. And then we can start to bring in some tech, to use a computer instead of trying to imagine: if I, you know, invest in this NPS campaign, or I add this new product feature, or I market to this new demographic, all the decisions that a product manager or a marketing lead might be making, instead of just having that happen invisibly between their ears.

“Let’s get that into a diagram and invite others to collaborate, and one of those collaboration partners is also AI. So that’s how we bridge from their reality, right, which is, you know, how humans think and how my dog thinks, down to the AI, as we ask them about their antecedents, behaviors, and consequences”.

Paul: You’ve got a really good knack for being able to dumb things down. So, in a sentence, what is decision intelligence?

Why did the world need a new, I’m gonna say, discipline within artificial intelligence, this discipline of DI? What is it?

Lorien: “It is about taking this giant tech stack and fitting it into your business outcomes and your business actions”.

Paul: A great way to put it. As we were in the green room earlier, we talked a little bit about a causal diagram.

And if we were to actually think about the outcome of all those models and all those moments that senior executives can use to make a decision, there are all sorts of opportunities to use machine learning and, I’m gonna say, things like econometric models and all sorts of other capabilities. But the way I was kind of thinking about it is that it might be a model of models.

Right? One that would really architect a decision at the highest level. And you might say: well, we already do a lot of this in the marketing world using a discipline called design thinking.

So how does decision intelligence fit? What would be the outcome of this model of models? And how does it fit with design thinking as you’re beginning to attack a marketing problem or a customer experience problem? How should we be thinking about on-ramping decision intelligence into that effort and work stream?

Lorien: “So if we think of design thinking as starting with the end user in mind and really understanding their reality and their context, then decision intelligence is design thinking: it’s a design thinking methodology and technology that allows you to work better with data and AI. I think we could think of DI as a subset of design thinking.

“We use it when we have those particular goals in mind, when we want to be more evidence-based or more data-driven, specifically for new use cases. And let’s be real clear: machine learning for advertising is solved. Right?

“There are a number of use cases that are completely done. But suppose you’ve got a new use case that no vendor out there can solve, one that you think could be informed by better data or by AI. The whole point is that there’s an ROI on the use of the AI, so you have some business outcome that you want to achieve. That’s where drawing one of these causal decision diagrams fits in. There was a time before Gantt charts.

“I don’t know if anybody’s old enough to remember. But, you know, NASA used Gantt charts because the complexities of the Apollo missions were just too big for people to keep all of the assignments, which astronaut was gonna be on which mission, in their heads. But before Gantt charts, people were doing this very informally. There was not some standardized way of saying: we’re gonna do this, and that’s a box; we’re gonna do that, and that’s a box; and there’s a start-to-finish dependency between those two.

“And so Gantt charts were adopted for that reason. And a causal decision diagram really serves the same purpose for decisions in complex environments. We are the equivalent of NASA before Gantt charts: we’re trying to keep all these complexities of choices in our heads, and long ago, in many organizations, we reached a complexity ceiling. And that’s why we’re arguing; that’s why there’s so much tension.

“We’re pretending we can keep all these pieces of the decision in our head. I mean, imagine if we were gonna build a skyscraper without a blueprint. It’s the same sort of thing: whenever humanity comes into a new complex discipline, we use some kind of a visual metaphor. We use design. Right?

“And design is often a diagram of a thing that has some level of fidelity to the real thing. And we’ve never designed decisions before. Well, now we can. And so a CDD is like a blueprint for a decision, or like a Gantt chart for a decision in a complex environment”.

Paul: So this causal decision diagram, this is something that for a lot of folks may be new. And I think, you know, well, we have customer journey maps, and those are generally outcomes that we might produce in a customer-oriented design thinking workshop. We also have Business Process Model and Notation 2.0. Here at Lima Consulting Group, we generally use a tool called Signavio to document those. Many of you may use a tool like Visio, or even PowerPoint, to draw the maps with the swim lanes and the little diamonds that represent a decision. How is that different than a CDD, a causal decision diagram?

Lorien: “Super good question. And if your audience has gone off to get a coffee, this is the time to take that coffee and drink it, because this is a) a little gnarly to understand, and b) one of the most important things to understand in the world right now, because without this understanding, we won’t solve complex decision making. And I’m gonna do it with an analogy. Think of the difference between the decision to charge a price for a product.

“And the process that you go through to implement that price in your computer systems.

“Both of those are important things to do. Right? Like, say we charge ten dollars. Well, what’s the thought process we go through as we decide whether to charge 10 dollars or 5 dollars for a product?

“Or, another thing is, you know, adding a new feature to a product. Right? Well, the thought process we go through is: if we charge that, and then we market to a particular audience, we’ll have a certain number of people who buy it, and that’ll translate into a certain revenue.

“There’s a chain of events that we imagine will happen as an automatic, and that’s the key word, automatic, consequence of that choice to charge 10 dollars. It is outside of our control.

“In a business process diagram, the boxes mean things that we do. So, like, we open up the website, we go to the page where the price is listed, we call up our sales representatives and say the price has changed. These are activities. Right? The boxes are just activities that happen one after the other. In a causal decision diagram, the boxes are not activities. People are always tempted to think that they are. They are consequences of your action. Right?

“And so it’s that chain of events that’s set in motion by the decision to charge a price. So if I’m a government official setting a policy for, you know, some subsidization for an agricultural company, I’m not gonna map the activities of that company. Instead, I’m setting a policy, and how do I decide what policy I should set? I’m imagining the consequences it would have on the economy, or the consequences it would have on a particular sector, that are outside of my control.

“Right? So here’s the key thing in a causal decision diagram, and you can see this one on the screen: on the left-hand side, you have things that you can control. And then, as it flows from left to right, you get to the things that are the consequences of those things you control, the consequences of those actions. Just like my dog Bowie. Right?

“He takes an action, and then, outside of his control, he’s, you know, ultimately gonna get a cookie. Now, in business, it’s a longer causal chain than just action and response, but it’s the same kind of thing”.

Paul: Well, I love that, right? Because, to go back to your ABCs, we’ve got these inputs or antecedents, this contextual environment that we’re dealing with.

It’s highly dynamic. Perhaps that’s what’s on the left column there. And then we’ve got these behaviors, and these are actually the nuances of what’s going on in making the decisions. And then, you know, we’ve got these outcomes with consequences.

So are these boxes, particularly in the purple and the green areas, all consequences? Or do some of them actually represent the moment at which the decision is being made?

Lorien: “Great question. The yellow ones are actions. On the left-hand side, the pink ones in the lower left are the externals, which is the formal word for the context: the things you can’t change, but about which you can make assumptions. You know, you can assume that your competitor won’t ever charge more than 12 dollars.

“Right? I can’t control that, but I can make an assumption about my external environment. And then the green stuff on the right are the outcomes. And everything in the middle is kind of our map of the complex world, and sometimes there are feedback loops in it, so it gets really complicated.

“And so you really need a diagram to understand it. It’s the map of how those actions, within the context of the externals, lead through a chain of events, ultimately, to the outcomes”.
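The left-to-right structure Lorien describes can be sketched in code, which may help the programmers in the audience. This is a hypothetical toy, not a real CDD tool or Quantellia software: the node names, the pricing example, and every number are invented for illustration. Actions and externals sit on the left, intermediate consequences in the middle, and outcomes on the right, with each dependent value computed from what feeds into it.

```python
# A toy causal decision diagram, expressed as code: actions and externals
# on the left, outcomes on the right, with intermediate consequences in
# between. All names and numbers are hypothetical.

def evaluate_cdd(actions, externals):
    """Walk the chain of events from actions/externals through to outcomes."""
    # Intermediates: consequences outside our direct control.
    buyers = actions["marketing_spend"] * externals["response_rate"]
    revenue = buyers * actions["price"]
    # Outcome on the far right of the diagram.
    profit = revenue - actions["marketing_spend"]
    return {"buyers": buyers, "revenue": revenue, "profit": profit}

# Things we control (actions) and things we can only assume (externals).
actions = {"price": 10.0, "marketing_spend": 1000.0}
assumptions = {"response_rate": 0.5}  # assumed buyers gained per marketing dollar

outcomes = evaluate_cdd(actions, assumptions)
print(outcomes["profit"])  # 1000 * 0.5 * 10.0 - 1000 = 4000.0
```

The value of writing it down, whether as a diagram or as code like this, is the same point made above: the chain of consequences is made explicit and collaborators can challenge any link or assumption, instead of the whole thing living invisibly between someone’s ears.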

Paul: So say I’m an organization thinking about using machine learning, econometric models, artificial intelligence, gen AI. What I loved about these CDDs, as I was going through your book, is that this represents the model of models. Right? So say we aspire, down over here, to, for example, increase our content velocity.

That’ll help us with our personalization.

And inside that, we’re gonna use some machine learning, and maybe conjoint analysis, or maybe some Bayesian techniques, to be able to do that personalization.

But before that and after that, there are other models and other datasets being used. So how does a CDD help us get that picture of all of the models, and the points at which technology and machine learning and math can help us?

Lorien: “So first, I think the most important thing to know about a CDD is that it comes out of human brains, because, as we talked about at the start, we usually can’t get this kind of diagram out of a dataset.

“So how does this help us understand how to use those models? DI says: leave the data out of the room at first. Leave the AI out of the room, because you don’t wanna be, like, looking for the solution under the data or AI lamppost.

“Right? You wanna be looking for the solution through the lens of your decision maker and the business outcomes that they care about. What we do is go through a process where we draw this collaboratively, and then we iterate on it until we like it. And then, once we have it with all the lines on it, we look at each line, and at some number of the externals.

“And we say: is there a dataset that can inform this? Is there a research study that can inform this? Is there an econometric model? And so in this one, this is a decision made by a facilities manager.

“How many people should he let into the room? And one of his choices is: he might market to everybody to wear a mask. Well, maybe we have a machine learning model or a research study that says how many signs you need to have up in your facility in order to get people to wear masks, and what the relationship is between your investment in marketing mask wearing and actual people wearing masks.

“And that is mediated by the context, because if he’s in one city, he might get a lot of compliance; in another city or country, he might get less. Right? So he’s making a decision within the context of a particular geographic region, which then leads to mask compliance.

“And that’s where there’s an opportunity for an econometric model, or a research study, or a machine learning model, or a number of other things. So we build these things first, without the data in the room, and then we go through them one piece at a time and say: well, where might the tech inform these links?”.

Paul: Just like great design thinking, which begins with the customer. The first part of design thinking is about really putting yourself in the shoes of your customer and demonstrating empathy.

Right? And I think, in this case, if I’m a senior executive who is beginning to aspire to leverage computational creativity, generative AI, and so forth, where does this causal diagram begin in my journey? Should I be doing this early on, or is this something that’s done, you know, at the end, which is so typical?

Right? Is this something that’s done during the design thinking workshop, at the end, or before? What would you tell the senior executive?

Lorien: “Well, I would start by saying the least empathetic thing you can do is to walk into that room and show him your data. And yet, I have seen that happen the vast majority of times when there’s an engagement where a senior executive wants to use more data, wants to use more evidence.

“Well, what does he do? He invites the data and machine learning people into the room, and they start by showing him their data. And it’s just completely wrong. Don’t do it. Okay?

“So you start by leaving the data out of the room, because the moment you start talking about data, you will melt their brains. Assuming they can understand what you’re saying, you’ll use up almost all of their cognitive capacity in understanding what the heck you’re talking about. Okay? But chances are, they’ll tune out. Right?

“Because you’re not speaking their language. If you’re a data person or a technologist or a consultant, you have to meet people where they’re at. And what I learned in the year of interviews I did when starting all of this, and in the intervening 15 years, is that senior executives are incented for their outcomes. So you talk to them about what they’re incented for, what their strategic goals are. You meet them where they’re at. And they think every day in terms of the actions they’re gonna take.

“And so this is like a universal language you can use to be empathetic with people. And I’ve used it in, you know, dozens of different problem domains. And just like my dog speaks this language, it’s so universal that animals do it. Right?

“It is also universal from one problem domain to the next. I’m working on a sweet potato agriculture project right now, and I don’t really know anything about growing sweet potatoes. But I can be an effective consultant, because I sit down with the sweet potato growers and I say, well, what goals are you trying to achieve?

“This year, two years from now, etcetera? And they’re like, oh, you care about my goals. Thank you. And it’s a wonderful moment, because I’m not just asking them; I’m drawing them a map, so they don’t have to keep it in their heads anymore.

“And so there’s a moment of relief in the room, because of all the effort it took for people to keep something very complex in their heads: where are all my goals, my short-term goals? They had no way of mapping it. And then I say, well, what are the actions? And it’s just this wonderful thing you go through.

“And only after you’ve drawn the map, and you’ve agreed to the map, is data allowed into the room, because otherwise your people won’t be able to think straight, whether you’re empathetic or not. It has to do with cognitive load. You’re just overwhelming them with too much.”

Paul: Wow. I mean, years of experience of doing this in the field.

It really demonstrates some lessons learned and pitfalls, which was my next question, so you kind of preempted it in a really great way. So if I’m a senior executive and I’m contemplating a new digital experience, a new customer experience, I’m thinking about loyalty. I’m thinking about how to better leverage my data.

I’m thinking about accelerating my content velocity and producing more content to support an ambition of doing more personalization.

What’s my on-ramp? What’s the first thing we need to do if we want to leverage the math,

and leverage the capabilities of all these tools, you know, ChatGPT and so forth? What’s the first thing we ought to be doing?

Lorien: “Well, the first thing you ought to be doing, you should be doing whether or not you’re using any tech: sit down with your team and be crystal clear about the measurable outcomes that everybody’s going to be held accountable for. Right?

“Make sure that you’re aligned that it’s EBITDA net of CapEx, measured after twelve months, and that our goal is for that to grow by two percent, without hurting our EBITDA net of CapEx after twenty-four months, which needs to grow by five percent. It needs to be so crisp and measurable that you’d be willing to make a bet on it, and you’d know who won the bet. Right?

“And simply have that discipline, and then revisit that outcome, because the world drifts and goals drift; keep reconnecting with your team about the outcomes you’re all trying to achieve. People make, like, ten thousand decisions a day, and I promise only a tiny fraction of them will ever be informed by tech or by decision intelligence. So simply having crisp and clear alignment on where you’re aiming is so foundational, and, in my world, so rarely done. Everybody sort of assumes that we all know where we’re heading. I go into these teams and I say, we’re going to talk about the outcomes. One of my pitfalls, one of my lessons learned, is that I used to assume people had already aligned around this.

“And in, you know, fifteen years, they’re never aligned around it. They might have been aligned around it three months ago, but it’s drifted, so there’s just this rechecking-in and realignment about what we’re trying to achieve. And, you know, just write those outcomes down. Right?

“I draw them in little boxes, but you can do it however you like. And then align around the actions. What are your authorities? What are the things that you have choices about?

“Can we choose a price for this product, and what’s the constraint on it? It can be between two and twelve dollars. You know, we’ve got some requirements from management. Map those.

“Just write those down. Make sure you’re all in agreement about what your actions are. And then get some agreement about how those actions connect to the outcomes.

“This is where it gets a little more gnarly, where you’re drawing this cause-and-effect chain. You don’t need to read my book to do that. You just need to listen to the last sixty seconds of this podcast.

“The books go into it in excruciating detail, right? And there are best practices; I teach a course. But you can get an awfully long way from just the last sixty seconds.”
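The steps Lorien just walked through, naming measurable outcomes, enumerating the actions you control with their constraints, and then agreeing on how actions connect to outcomes, can be sketched as a tiny data structure. This is a hypothetical Python illustration, not any real decision intelligence tool; every class and field name here is invented.

```python
from dataclasses import dataclass, field

@dataclass
class Outcome:
    """A measurable goal you'd be willing to bet on."""
    name: str
    target: str  # e.g. "+2% after 12 months"

@dataclass
class Action:
    """A lever you actually control, with its constraint."""
    name: str
    constraint: str  # e.g. "between $2 and $12"

@dataclass
class CausalDecisionDiagram:
    """A toy CDD: outcomes, actions, and the links between them."""
    outcomes: list = field(default_factory=list)
    actions: list = field(default_factory=list)
    links: list = field(default_factory=list)  # (action name, outcome name) pairs

    def link(self, action: Action, outcome: Outcome) -> None:
        self.links.append((action.name, outcome.name))

# Build the toy CDD from the conversation's example
cdd = CausalDecisionDiagram()
ebitda = Outcome("EBITDA net of CapEx", "+2% after 12 months")
price = Action("Product price", "between $2 and $12")
cdd.outcomes.append(ebitda)
cdd.actions.append(price)
cdd.link(price, ebitda)
print(cdd.links)  # [('Product price', 'EBITDA net of CapEx')]
```

The point of the exercise is exactly what the map on the whiteboard gives the team: the outcomes, actions, and links are written down where everyone can see and agree on them, instead of living in people’s heads.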

Paul: Yeah. So it sounds like, well, one of the tools that we like to use is a balanced scorecard, and that helps us figure out the why, right?

I talk about that a little bit in episode one of the podcast. Then maybe the next thing, what I heard you describe, is this causal diagram, and that might really figure out the what.

What are the impacts, the decisions, the consequences, the activities? Then we schedule the design thinking workshop once we’ve got the causal diagram down and we understand what it is that we want to do, right? And then we can figure out the how in the design thinking workshop. The outputs of a design thinking workshop would be a service design blueprint; it could be a customer journey map.

It might even, later on, be a business process modeling notation (BPMN) process diagram. But it sounds like the causal diagram ought to be an input going into the design thinking workshop, if I understand correctly.

Lorien: “I think it goes both ways, because decisions happen within certain business processes.

“So you could start with the business process model, if you’ve already got one, and say: for each step of that business process, are there some decisions inside it? Right? Or you could start with a causal diagram, be at the next level up, and say, okay, these are the decisions we’re making, and then there are probably some process steps we need to take in order to implement each decision. So I’ve seen it go both ways, actually.

“They’re quite complementary. Right? Because, again, business process modeling is about the steps we’re going to take, and decision modeling is about why we’re going to take each step, because it’ll achieve certain things.”

Paul: Yeah, sounds like it can vary depending on the inputs you already have and the process maturity in your organization.

The reality, having been doing this nineteen years, is that more often than not in the marketing and customer experience areas, if there are process maps, they’re old. They’re from two generations of the website and the digital experience back. They’re not up to date. And I think that’s hard. It’s not like a factory retooling, where there are process maps for how that ABB machine is going to make the car coming off the assembly line. Marketing and customer experience are so dynamic that, more often than not, I don’t see the process maps having already been crafted in this discipline.

Lorien: “And I think if the goal is to work more closely with data and AI as a collaborator, we need to have the discipline to be precise about these decisions and to keep them up to date. That’s just what data and AI ask of us: that we are a little bit more buttoned up.”

Paul: So, Lorien, everyone’s trying to figure out how to leverage gen AI and ChatGPT, whether that’s GPT-3, 3.5, or 4. What are you doing with ChatGPT? And how is it influencing the art of decision intelligence?

Lorien: “Great question. I think decision intelligence is the killer app for ChatGPT. Let me tell you why. By killer app, what we mean is that decision intelligence alone justifies its existence, for its value to the human race. Okay? So what we’re doing is, remember the CDDs we had before, with the actions and the outcomes and the externals?

“We’re asking ChatGPT to give us ideas for actions and externals and outcomes. And I actually back-tested ChatGPT on a project I did a couple of years ago, where we built the whole CDD with a team of twenty people. Then, just in the last few weeks, I said, hey, ChatGPT, are there some actions we didn’t think of? And it came up with a whole bunch that these twenty people had never thought of. And then it said, hey, have you thought of these unintended consequences?

“Because I’ve been modifying ChatGPT, not just to do, like, normal chat; I’ve been teaching it to do CDD elicitation chat. So it knows about actions and outcomes and consequences; it knows about CDDs, right? And so it came up with a bunch of unintended consequences that this team had not thought of. In summary, as we talked about earlier, the CDD is not in the data. We’re getting it out of human brains.

“Well, you can only get it out of the human brains that are in the room, but ChatGPT gets it out of all the human brains. Right? Because it’s going out and spidering the whole world to say, for this particular decision, are there some actions that might not have occurred to you, or some unintended consequences or outcomes that you might not have thought of? And, oh, by the way, it also gives you ideas for the machine learning models that fit in the middle.

“And so it’s this really awesome copilot companion for the construction of CDDs. And I’ve got a couple of contracts right now where I’m ChatGPT-ifying these specific problem domains so that it automatically creates CDDs as part of your dialogue. It’s super, super cool.”
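Lorien doesn’t share her actual prompts, but the elicitation loop she describes, feeding the CDD’s current actions and outcomes to the model and asking for missed actions and unintended consequences, amounts to prompt assembly. The function name and wording below are invented for illustration; a real system would send the resulting string to a chat model API.

```python
def cdd_elicitation_prompt(decision: str, actions: list, outcomes: list) -> str:
    """Assemble a hypothetical prompt asking a chat model to extend a CDD."""
    return (
        f"We are building a causal decision diagram for: {decision}.\n"
        f"Actions considered so far: {', '.join(actions)}.\n"
        f"Outcomes considered so far: {', '.join(outcomes)}.\n"
        "1. Suggest actions we have not thought of.\n"
        "2. Suggest unintended consequences of these actions."
    )

# Invented example inputs, echoing the pricing discussion earlier in the episode
prompt = cdd_elicitation_prompt(
    "pricing a new product",
    ["lower the price", "bundle with support"],
    ["EBITDA growth", "customer churn"],
)
print(prompt)
```

The model’s reply then becomes candidate boxes and arrows for the diagram, which the human team accepts or rejects, which is why Lorien calls it a copilot rather than a replacement for the people in the room.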

Paul: So if I hear what you’re saying, you’re leveraging ChatGPT to actually build out the causal decision diagram. Now, I remember in another conversation we had: did you build an API?

I mean, were you actually adding some capability to ChatGPT through an API that you were building?

Lorien: “No, I’m building on top of ChatGPT. So my thing is calling ChatGPT, but other people can’t just use ChatGPT and get to my stuff.

“I’m layering on top of it, not layering behind it, if that makes any sense.”

Paul: Yeah. And you were saying you were training it as well. So, you know, tell me a little bit about how you’re using the datasets and the intelligence that’s in the organization and bolting that on.

Lorien: “This is going to get slightly gnarly. Okay. Let’s say you have five hundred pages of knowledge and information about an organization, you know, all your PDFs. Right? You can’t get all of that into ChatGPT.

“There’s something called fine-tuning, but it doesn’t really work for that. So what you do is use another AI technique called semantic search. You take what you say to ChatGPT, and you use that to narrow the five hundred pages down to, like, the three pages that are the most relevant.

“So you do this pre-processing step. You type something to the specialized Lorien Pratt chat thing; you use that to search the documents, so you get not five hundred pages, but three pages. Then you send those to ChatGPT.

“And in the background, you say: ChatGPT, here’s some specialized knowledge that nobody else has, and that you certainly don’t have, because it comes from inside my company. Right? Please use this specialized knowledge along the way to answer the question that I just asked. Right?

“So this is called embedding plus completion for those of you who are techie.

“And I just got it working, like, actually, two days ago, and it’s just amazing.

“It’s the best of both worlds. Right? It’s the general-purpose knowledge. And the current version of ChatGPT, you know, finished training in, like, 2021, so it doesn’t even know anything recent.

“It certainly doesn’t know anything behind your firewall. Right? So it’s the best combination of that general-purpose knowledge with your specialized knowledge, which you can then inject into it. It’s been fun writing that code. I’ve had a chance to roll up my sleeves as a coder again lately.”
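The “embedding plus completion” pattern Lorien describes, today usually called retrieval-augmented generation, can be sketched end to end. As a stand-in for real embeddings, this toy scores pages by word overlap with the query; a production system would compare embedding vectors and then send the assembled prompt to the model, which is omitted here. All function names and sample pages are invented.

```python
import re

def score(query: str, page: str) -> int:
    """Toy relevance score: count the words shared by query and page.
    A real system would compare embedding vectors (e.g. cosine similarity)."""
    def words(text):
        return set(re.findall(r"[a-z0-9]+", text.lower()))
    return len(words(query) & words(page))

def retrieve(query: str, pages: list, k: int = 3) -> list:
    """Narrow hundreds of pages down to the k most relevant ones."""
    return sorted(pages, key=lambda page: score(query, page), reverse=True)[:k]

def build_prompt(query: str, pages: list) -> str:
    """'Embedding plus completion': prepend the retrieved context to the
    question, then (in a real system) send the whole thing to the model."""
    context = "\n---\n".join(retrieve(query, pages))
    return (
        "Here is some specialized knowledge from inside my company:\n"
        f"{context}\n"
        f"Please use it to answer this question: {query}"
    )

# Invented stand-ins for the "five hundred pages" behind the firewall
pages = [
    "Our refund policy allows returns within 30 days.",
    "The cafeteria serves lunch from noon to two.",
    "Refund requests go through the billing portal.",
    "Quarterly planning happens every January.",
]
prompt = build_prompt("How do customers request a refund?", pages)
```

Only the top three pages survive into the prompt, which is the whole trick: the model gets your private, relevant context without you ever fitting all five hundred pages into its input window.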

Paul: Yeah. Back to your roots. Right?

And I think that’s really important for folks to understand. The power of ChatGPT is in each of those letters: generative, pre-trained, transformer. Each of those is an algorithm.

And what I hear you saying is that you’ve basically expanded the P, the pre-training, to include not only what’s in GPT-3, 3.5, or 4, or any of the other models coming out from Amazon and Google and so forth. Through semantic search and semantic capabilities, you’re giving it its own unique industry-specific, or maybe company-specific, information, ingesting that into the model and then leveraging the capability of ChatGPT.

Lorien: “Yeah. And I know some people are worried about the privacy implications of that. But if you run ChatGPT inside Azure, they have a way, or they claim to have a way, that you’re not exposing that information to the world at large. And so there are versions of ChatGPT that are quite protective of your company data.”

Paul: Yep. And for folks who aren’t familiar, in the Digital Pulse segment of the last episode we really talked about Microsoft’s involvement with OpenAI and the fact that they bought, I believe, 45 percent. So the connective tissue between Azure and GPT-3.5 or 4 is really tight, and I think it’s not a surprise that you mentioned the connection there with Microsoft Azure.

Lorien: “I got that API working two days ago.

“How cool is that? It’s really cool. Yeah.”

Paul: Yeah. And I think that just speaks to the fact that we’re having you on the show: you really are a visionary and an agent of change in how we can leverage these tools to solve modern problems.

So, Lorien, you have some books, and you’ve got some training mechanisms, and I wanted to give you a chance to talk to us a little bit about how we can engage with you in order to better take advantage of decision intelligence.

Lorien: “My course is: gettingstartedwithdi.com

“It’s the world’s longest URL. Getting started with DI, for decision intelligence.

“The most recent book is the DI Handbook, which is just step by step how to do this, including some exercises.

“It’s at dihandbook.com.

“And it drops at O’Reilly in about a month and a half, in the middle of July.

“And then my previous book, which you can get right now, is called Link.

“It’s at linkthebook.com.

“And that’s more about the high level.

“It was written so that my mother could read it. She’s 86. Right? So it’s meant to be very accessible. It makes a great graduation present, because it gives this grand overview of where tech has gone, and how tech is now getting integrated into decisions.”

Paul: Lorien, this has been a lot of fun. I have a little exercise I want to do with you. I’m going to give you five words, and you’re going to say the first thing that comes to mind as we wrap up.

Alright. So the first one, just to get our creative juices going, is:

Design thinking: Necessary

ChatGPT: Da bomb. It’s changing everything.

Decision Intelligence: 21st century.

Bowie, your dog: My dog, happy.

And Lorien Pratt…

Oh, that’s a good one. Lorien Pratt: Underappreciated

Paul: You know, Lorien, you’ve been at this an awfully long time. And really, I think anyone who is an innovator and an agent of change, someone who’s really ushered in a new playbook for the twenty-first century, may feel that way; I think many innovators do. So I appreciate the work that you’ve done: the two books, the TED Talk, the webinars, and your spending time with us on The Visionary’s Guide to the Digital Future. This has been Paul Lima with my guest, Lorien Pratt. Thank you so much, Lorien.

Lorien: “Thank you, and thank you to your listeners for hanging out till the end. I appreciate it.”
