August 7, 2023
10 mins

A CTO on what marketers should know about generative AI

From large language models to data security, a CTO answers marketers’ questions about generative AI.
MarTech

Table of contents

1. Quick AI history lesson

2. Predictive AI

3. Generative AI 

4. Large Language Models, or LLMs

5. Data security and generative AI

6. Enterprise vs. consumer software 

7. Building your own generative AI environment 

8. The biggest misconception marketers have about AI 

I’m a content marketer. 

My whole job is to create content that informs, empowers, and entertains marketers when it comes to technology. 

And as a content marketer who markets AI-powered marketing software to marketers (say that five times fast), I cannot overstate how often I think and talk about generative AI. My LinkedIn feed is filled with takes on how generative AI will help or hurt marketers; I recently helped create a website for an internal generative AI product; and many of the team meetings I am in are filled with updates on how the technology is being built into our products. 

Yet despite all the above, I still have questions about generative AI. 

Luckily, it turns out I’m not alone. 

Stagwell Marketing Cloud’s CTO Mansoor Basha has been building a private generative AI portal to support all AI projects and needs across Stagwell’s network of more than 70 marketing agencies. Throughout the process of creating, scaling, and evangelizing this new tool, he’s often been asked many of the same questions. 

So I decided to sit down with Basha and put pen to paper. Where did large language models come from? What is the difference between enterprise and personal AI tools? What are the key facts marketers don’t understand about generative AI? 

To start, I wanted to get a better understanding of the different types of AI that are relevant to marketers’ work.  

What types of AI should marketers get familiar with?  

Long before TikTok algorithms could suck you into a timeless scroll fest, or even before the internet existed, AI was being developed. 

…and I mean long before. 

In 1956, researchers came together at the Dartmouth Summer Research Project on Artificial Intelligence. While the conference failed to align researchers on the path forward for AI, Herbert Simon and Allen Newell presented the Logic Theorist, which is largely considered to be the first AI program. 

“[We] invented a computer program capable of thinking non-numerically, and thereby solved the venerable mind-body problem, explaining how a system composed of matter can have the properties of mind,” Simon later said.  

Flash forward to today, and Basha highlighted two types of AI for marketers to focus on: predictive AI and generative AI. 

First up is predictive AI. 

Predictive AI 

To explain predictive AI to me, Basha started by referencing a financial model.  

If you have five years of financial modeling data in a dashboard, you could use an AI model to predict what the next day, quarter, or year might have in store. AI can use previous data to recognize patterns and trends, then apply them to a forward-looking view. 

Take that same idea and ratchet it up a couple of notches, and you’re using the same logic to create a self-driving car.

“Hey, there is a person in front of you: Should I stop, or should I ram through them?” Basha prompts.

Predictive AI models can predict the next best number, word, or move. 
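
To make that concrete, here’s a minimal sketch in Python of the financial-model example: it fits a simple trend line to five years of made-up revenue figures and predicts the next year. The numbers (and the forecast) are invented purely for illustration.

```python
# A toy predictive model: learn the trend in past data, project it forward.
# The revenue figures are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

# Five years of (made-up) annual revenue, in millions of dollars
years = np.array([[2019], [2020], [2021], [2022], [2023]])
revenue = np.array([4.1, 4.6, 5.4, 6.1, 7.0])

model = LinearRegression()
model.fit(years, revenue)  # learn the pattern in the historical data

forecast = model.predict(np.array([[2024]]))
print(f"Predicted 2024 revenue: ${forecast[0]:.1f}M")
```

Real predictive models are far more sophisticated than a straight line, but the core move is the same: learn the pattern in historical data, then project it forward.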

If you think about this in the context of marketing, predictive AI is used to help determine the success of your campaigns. “Will this research report be valuable, will this Twitter trend continue, will an influencer actually be able to influence people or not,” are the kinds of questions Basha asks to illustrate the ways predictive AI appears in a marketing context.  

Large language models (LLMs) are machine learning models that are trained with huge volumes of data to process and understand language.

And while LLMs have helped build predictive AI tools for the last decade or so, there has been a recent shift in their availability, scale, and processing power. 

This is where generative AI comes into play. 

Generative AI 

Generative AI builds from the logic of predictive AI but turns the computer from a predictor to a generator. 

Using the constantly expanding LLMs that also power predictive AI, generative AI can produce images, text, video, and audio using prompts that come from a human. 

Despite the recent fervor for generative AI, it isn’t actually new technology. The reason generative AI has become widely accessible has to do with the amount and cost of both data and systems. 

“The cost of the systems and the chips have become cheaper. The amount of data which you can read and learn has become cheaper. You could use generative AI ten years ago…it just took a lot of processing,” Basha explained. 

Like all AI, generative AI is only as good as its data inputs—and the data inputs have gotten really good. 

If you’ve ever wondered what the GPT in ChatGPT stands for, it’s an acronym for “generative pre-trained transformer.” That’s the name of the LLM family developed by OpenAI (the human brains behind ChatGPT), and since 2018, they’ve released five versions: GPT-1, GPT-2, GPT-3, GPT-3.5, and GPT-4. 

Today’s ChatGPT bot is powered by GPT-4, which has been trained on a massive amount of data, although OpenAI hasn’t disclosed exactly how much the model ingested. As LLMs continue to improve, generative AI will as well. 

What are large language models, and how do they work?

LLMs are the foundation of generative AI technology. 

And the road to the LLMs we know today has been long. Here’s a quick timeline to help chart the decades-long journey to GPT-4. 

A timeline of the development of LLMs from Databricks’ “A Compact Guide to Large Language Models.”

LLMs are models that have been built to process and understand language. They analyze huge amounts of data (inputs could be things like websites, books, and other text examples) and build a model based on the language sets they are trained on. 

This training process can be extremely expensive (tech analysts estimate that training an LLM like GPT-3 could cost up to $4 million, and some larger models may run into the high single-digit millions) and sometimes includes human intervention through RLHF, or “reinforcement learning from human feedback.” When the model starts learning, it may produce little more than gibberish. But over time and with continuous learning, it becomes more and more human-like. RLHF gives humans the ability to provide feedback on the model’s outputs and keep fine-tuning it. 

The result is a model that has taken in massive amounts of information. The final model will be a complex neural network, full of billions of connections between words and their contexts—this is how it can predict and generate text as if it were a human. 
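
To see that predict-the-next-word idea at its simplest, here’s a toy sketch in Python: it counts which words follow which in a tiny sample text, then generates a short phrase one predicted word at a time. Real LLMs replace these counts with a neural network holding billions of connections, but the generate-one-word-at-a-time loop is the same idea.

```python
# A toy next-word predictor: count word pairs, then generate by sampling.
# Real LLMs learn these relationships with a massive neural network,
# but generation still happens one predicted word at a time.
import random
from collections import defaultdict

sample_text = (
    "marketers use data to plan campaigns and "
    "marketers use models to predict campaigns"
)

# Build a table of which words follow which in the sample text
followers = defaultdict(list)
words = sample_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

# Generate: start with a word, repeatedly predict the next one
word = "marketers"
output = [word]
for _ in range(6):
    if word not in followers:
        break
    word = random.choice(followers[word])
    output.append(word)

print(" ".join(output))
```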

Open-source LLMs are usually trained on publicly available content, but businesses may build on top of them to make the models more specific to their organization’s needs.  

Basha is doing this at Stagwell: For example, he’s using generative AI technology to access all the private data The Harris Poll has collected since 1963. HarrisGPT ingests years of polling data using GPT-4 and lets users ask questions about that data; it generates unique answers and provides citations for further exploration. Imagine being able to find all the data on Americans’ evolving attitudes around hot-button political topics simply by prompting a bot in natural human language. Previously, that would have meant significant time in the archives searching, summarizing, and collating to chart those kinds of trends.
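
Stagwell hasn’t published how HarrisGPT works under the hood, but a common pattern for this kind of tool is retrieval-augmented generation: find the archive passages most relevant to a question, then hand them to the model as context so its answer can cite real sources. Here’s a hypothetical sketch in Python; the archive entries, the search_archive helper, and the keyword scoring are all invented for illustration, and a real system would use a vector database and an enterprise LLM API.

```python
# A hypothetical sketch of retrieval-augmented generation (RAG):
# find relevant archive passages, then pass them to an LLM as context.
# The archive entries and scoring are invented for illustration.

archive = [
    {"year": 1976, "text": "poll on public trust in national institutions"},
    {"year": 1998, "text": "poll on attitudes toward internet privacy"},
    {"year": 2021, "text": "poll on remote work and privacy expectations"},
]

def search_archive(question, k=2):
    """Rank passages by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        archive,
        key=lambda doc: len(q_words & set(doc["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

question = "How have attitudes toward privacy changed over time?"
context = "\n".join(
    f"[{doc['year']}] {doc['text']}" for doc in search_archive(question)
)

# The retrieved passages become context for the model, and their years
# double as citations in the generated answer.
prompt = f"Answer using only these poll excerpts:\n{context}\n\nQ: {question}"
print(prompt)  # in a real system, this prompt would go to GPT-4
```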

And LLMs aren’t just used for chatbots. They’re also used across businesses for activities like code generation, sentiment analysis, language translation, virtual assistants, and more. 

What do marketers need to know about protecting their data when leveraging generative AI? 

Chances are if you’re reading this, you’ve been through some kind of IT security training before. 

Maybe you had to watch a series of cheesy videos that explained why you shouldn’t use your work devices for personal use. Or maybe your IT team randomly sends you fake phishing emails to see if you can spot a security risk when you see it. 

Even though this training can seem tedious or obvious at times, there’s a reason it exists: to protect your company’s data and security.  

With the rise of consumer AI products like ChatGPT, IT and security teams everywhere have started to sound the alarms.

“Responsible use of generative AI products wasn’t covered in our security training!” they panic. Not without reason—putting proprietary information into a public tool is a security risk. And if people don’t know this, they may be compromising their business’s data. 

Companies like Verizon, Apple, and Accenture have put bans or restrictions on the use of tools like ChatGPT on their company servers. Samsung reportedly banned ChatGPT after sensitive source code was leaked by a staff member. 

But Basha is insistent that this doesn’t mean generative AI shouldn’t be used in a work context. There are absolutely ways to derive value from generative AI without the security risk. The first thing to know is the difference between enterprise and consumer software.  

Enterprise vs. consumer software

“The main way to address businesses’ concerns with AI security is to make a clear distinction between enterprise and consumer AI products,” Basha told me. “Businesses need to use enterprise AI tools.”

What does that mean? 

Enterprise software is software that is used by organizations, not individuals. Think about your company’s tech stack: You probably log in to an HRIS (Human Resource Information System) like ADP or Namely to access your pay stubs and update your mailing address. You may use a CRM (Customer Relationship Management) tool like HubSpot or Salesforce to keep track of your customers and their activities across marketing channels. 

For your own personal use, you aren’t buying ADP or Salesforce—these are powerful, expensive tools built with businesses in mind. They have robust security standards, are technically complex to implement, and require routine maintenance and upkeep. There are often full teams dedicated to the implementation, management, and optimization of tools like Salesforce. 

Most companies have robust standards (security and otherwise) when it comes to onboarding new tools: Software must tick boxes across security, compliance, automation, cost optimization, resilience, and more. 

Consumer software, on the other hand, is built for ease of use and smaller-scale tasks…or just for fun. I might use a consumer version of Google Docs to create a resume or hop on Instagram to see what my friends are up to.

A key difference between enterprise and consumer software (and the one most relevant to this topic) is the difference in security standards. For an enterprise solution to be successful and trusted by brands big and small, it needs to put security first.

Building your own generative AI environment

“What we should advise brands and agencies to do is to look at using enterprise-grade protected large language models, which Azure, Google, OpenAI, Anthropic, all of them provide, and set up an internal infrastructure to access those enterprise-level capabilities,” says Basha. This way businesses know that their data is protected inside their own enterprise cloud environments, and they can act quickly in the case of bad actors. In private instances, your data is not being shared publicly, and it’s not being used to train publicly accessible models. This is key for businesses. 

“You might have to spend a little more money to set this up, but then you can create and manage your environment and make sure that all your private data is in the same place.”
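
As a rough illustration of what that internal setup can look like, here’s a minimal sketch of querying a private Azure OpenAI deployment over its REST API. The resource name, deployment name, and environment variable are placeholders, and details vary by provider; the point is that prompts and data go to your organization’s own protected instance rather than a public consumer app.

```python
# A minimal sketch of querying a private, enterprise-grade LLM deployment.
# The resource name, deployment name, and env variable are placeholders;
# the request goes to your organization's own Azure OpenAI instance,
# not a public consumer app.
import os
import requests

RESOURCE = "my-company"          # placeholder Azure OpenAI resource name
DEPLOYMENT = "gpt-4-internal"    # placeholder model deployment name
API_KEY = os.environ["AZURE_OPENAI_KEY"]  # never hard-code keys

url = (
    f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
    f"{DEPLOYMENT}/chat/completions?api-version=2023-05-15"
)

response = requests.post(
    url,
    headers={"api-key": API_KEY, "Content-Type": "application/json"},
    json={"messages": [
        {"role": "user", "content": "Summarize our Q2 campaign results."}
    ]},
    timeout=30,
)
print(response.json()["choices"][0]["message"]["content"])
```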

AI isn’t going anywhere

“The first thing we need to understand is that AI is not going to go away. Whatever we are doing with generative AI and large language models is going to continue moving forward,” Basha said.

While some companies have restricted use, Basha emphasizes that restriction without internal development isn’t the right approach. We need to accept that this technology will transform the way we work and get on board as soon as possible to stay ahead of the curve. 

What is the biggest misconception marketers have about generative AI?

“I think people are still learning about what it is,” Basha told me. 

The generative AI space is rapidly advancing and extremely nuanced. 

Think about a cell phone. There are key cell service providers, a number of phone manufacturers and models, and numerous operating systems. Two people may both have an iPhone 13 on a Verizon plan, but one could be running iOS 15 and the other running iOS 16. This results in key differences between their experiences (most importantly, an expanded or contracted emoji library). 

This is a great way to think about the current generative AI landscape. 

Players like OpenAI, Anthropic, Microsoft, and Google (the cell service providers in this comparison) are rushing to create new products (ChatGPT, Bard) and constantly updating the LLMs behind them (GPT-3, GPT-3.5, GPT-4).

“In the early days, you didn’t know all of that stuff,” said Basha. “You just said, ‘I got my phone from Verizon.’ Which phone? What operating system? What apps? People didn’t know the nuances about that.”

The same thing applies to the marketing world right now. Even though there are a variety of tools, versions, and providers, we have ChatGPT blinders on from the overwhelming amount of coverage of the tool. It’s almost like the Band-Aid or Kleenex of AI: the name has become a catch-all for any generative AI tool marketers use.  

Basha says this is limiting our progress: “How do we apply the right tool so we get the right benefits?”

Sarah Dotson

Sarah Dotson is the Editorial Content Manager for Stagwell Marketing Cloud.
