Can AI write a strategy?

If company strategies risk sounding the same when written by people, what happens when they get written by AI? In this post I examine an AI-generated strategy statement for what it reveals about the abilities of AI and about creating strategies.

Three years ago, I asked if large companies all had the same strategy. Perhaps their strategies all sounded the same because managers picked up the same ideas from MBAs and consultants, or because they hired the same copywriters. Last month, a new source of non-differentiating strategy appeared – strategy written by AI.

GPT-3

The AI in question is GPT-3 from OpenAI, which has been getting a lot of attention lately. Here’s a quick introduction: GPT-3 is a language prediction model that autocompletes text from the input you give it, much like the suggestions you see as you type into Google search. It can complete many different kinds of text, which gives it a wider range of application than other models.
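To give a feel for what that looks like in practice, here is a minimal sketch of calling GPT-3 through OpenAI’s Python library as it existed in 2020. The prompt and settings are illustrative, not the ones used in the example discussed below:

```python
import openai

openai.api_key = "sk-..."  # your OpenAI API key

# Ask the model to continue a piece of text. GPT-3 returns the words it
# considers most likely to follow the prompt.
response = openai.Completion.create(
    engine="davinci",  # the largest GPT-3 model
    prompt="An important aspect of their go to market strategy is",
    max_tokens=100,    # how much text to generate
    temperature=0.7,   # higher values give more varied completions
)

print(response.choices[0].text)
```

Whatever you send it, the model does the same thing: it continues the text. There is no separate question-answering or strategy-writing mode.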

Its power comes from its sheer size. It has been trained on a huge amount of text from the internet, and it has 175 billion parameters in which it stores the patterns in that text. Its response to an input is the text that is statistically most likely to come after it. So the more examples it has seen, the better it can match the input.
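To see the principle at work, here is a toy sketch of “predict the statistically most likely next word”. GPT-3 uses a neural network rather than simple word counts, and the corpus here is invented, but the underlying idea of completing text with its most frequent continuation is the same:

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows each word in a training
# corpus, then complete a prompt by repeatedly appending the most frequent
# successor of the last word.
corpus = ("our strategy is to scale fast . our strategy is to win the market . "
          "our plan is to scale globally .").split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def complete(prompt, length=5):
    words = prompt.split()
    for _ in range(length):
        candidates = successors[words[-1]]
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])  # most likely next word
    return " ".join(words)

print(complete("our strategy"))  # -> "our strategy is to scale fast ."
```

The completion sounds fluent because it is stitched together from fragments that really occurred in the training text, which is also why it can only ever echo that text.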

Strategy written by AI

Last month, by way of a test, a VC investor asked GPT-3 to write the second half of his investment memo. It produced several paragraphs of text under the heading of “strategy”. What’s interesting about this text is that it sounds very plausible but isn’t quite right.

Here is some of the input that GPT-3 received:

Sword Health is a full stack digital physical therapy provider. They are the first group in the world to create remote autonomous digitally guided therapy […]

An important aspect of their go to market strategy is to offer the solution to payers as an acute care and preventative solution, which the cost structure of current in-person PT centers cannot support. They then use this as a wedge to expand into displacing PT centers.

And here is the beginning of the AI-generated strategy:

First, we would acquire or acquirer an ambulatory primary care center that has PT, sports medicine, and physical therapy expertise. Then build a Sword center within that facility, allowing Sword to scale without having to scale its own brick and mortar centers.

The first thing to note is that, apart from the grammatical errors, this is a coherent paragraph that covers relevant topics and presents an argument in a logical fashion. It’s impressive. The content also sounds right at first glance.

But… let’s look at what it says from a strategic point of view. For a digital service start-up intending to scale rapidly, acquiring brick-and-mortar centres is a slow and expensive way to scale, and would leave them with physical assets that they lack the experience to manage. It is not a logical move within a go-to-market strategy that targets payers, let alone the first step to take in their strategy.

So the way in which the output is wrong is quite subtle, and it takes careful checking to see it. A quick glance isn’t enough.

What AI does

What’s interesting about this example is that it illustrates how AI and predictive analytics work. We’re familiar with AI being used to answer closed questions like “People who bought this item also bought…”, where the desired answer is to select one item out of a fixed set. GPT-3 seems to answer the more ambitious open question “Please complete this text in any way you like”, but it is in fact answering a closed question: “Which words are statistically most likely to follow the words in the input?”
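In other words, each step of generation is one closed choice over a fixed vocabulary. The words and probabilities below are invented purely to illustrate the shape of that choice:

```python
# One generation step, reduced to its essentials: the model assigns a
# probability to every word in a fixed vocabulary and selects from that
# closed set. (Vocabulary and probabilities are made up for illustration.)
vocabulary = ["acquire", "build", "partner", "scale", "divest"]
probabilities = {"acquire": 0.41, "build": 0.28, "partner": 0.17,
                 "scale": 0.09, "divest": 0.05}

next_word = max(vocabulary, key=lambda word: probabilities[word])
print(next_word)  # "acquire" is selected as the most likely continuation
```

Repeating that selection thousands of times produces text that looks open-ended, but no single step ever leaves the fixed set of options.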

There is a limit to what kind of answer the AI can provide given its design. GPT-3 doesn’t “know” anything except the frequency with which patterns repeat themselves in text that it has seen before. It handles text indiscriminately and treats all input text as equally valuable. It does not generalise the text into knowledge rules, nor can it apply logic or reasoning, or check its answers for inconsistencies. As the author of the investment memo says, GPT-3 doesn’t understand whether its output is true or not.

So this example nicely shows that prediction errors from AI models can be quite subtle rather than glaringly obvious, especially as the models become more sophisticated. How much those errors matter, and how to mitigate them, will depend on the application using the AI. Keep it in mind when using AI in your business.

Critical thinking

I wrote in my earlier posts that a good strategy would do something different from its competitors. Since the AI pulls together elements from existing strategy texts, it cannot create something original. I can imagine using an AI as a “copywriting tool” that generates pieces of text that you critically examine for ideas (do we want brick-and-mortar centres, and if so, what benefit would we get from having them?), or as a warning of what to avoid (hmm, if that’s what often comes up as a strategy, what happens when everyone does it?).

In the end, if you want your strategy to give you a competitive advantage, you still need to do the hard work of critical thinking.
