With this post, we’re kicking off a Fuel Cycle blog series on artificial intelligence and its impact on insights practices. In the coming weeks, various Fuel Cycle team members will post on a variety of topics, ranging from practical applications of AI to serious challenges that currently exist with respect to insights. We view this as a running dialogue with the market and look forward to sharing our findings openly.

We’ll attempt to keep prognostication to a minimum. There are plenty of articles, conference talks, and breezy LinkedIn posts that purport to know how the future of AI will play out. While speculation can be fun, we will focus on the practical impacts of AI.

Make no mistake, we are confident that AI will fundamentally change all forms of insights practices, from qualitative interviewing to report writing to the development of new research practices. In fact, we’re betting it will, and are making investments in AI solutions that help our clients by increasing speed, driving cost efficiency, and improving the actionability of research.

Why should researchers care about AI?

AI development is progressing at a blistering pace, and will quickly extend from general commercial applications to impact research-specific workflows.

In the summer of 2022, image generation tools like OpenAI’s DALL-E were mere toys that produced weird imagery. Today, image generation solutions like DALL-E and Midjourney produce exceptional imagery (see comparison below). Note that both images were generated using the same whimsical prompt, “Crocodile skating at the beach, photorealistic.” The first image was generated in July 2022, and the second was generated in June 2023.

Not only has the quality of AI generation improved, but the user experience of many solutions is advancing as well. ChatGPT and Google’s Bard can not only generate text but also search the internet and summarize current information. SlackGPT and Microsoft Copilot live inside chat applications and summarize conversations. AI applications now run on smartphones, giving users access to a vast corpus of human knowledge.

AI solutions are not only improving rapidly but are also easily accessible, requiring little effort for developers to add to software, which means AI will soon be ubiquitous in nearly all software. Indeed, a June 2023 survey from Productboard found that 90% of VC-backed software companies intend to embed generative AI solutions in their products, and the majority intend to do so in 2023.

But what about researchers?

The Impact of AI on Research

The omnipresence of AI solutions will undoubtedly extend to research and insights as well. Already, academic projections about the impact of generative AI on market research are astounding. For instance:

• A March 2023 paper authored by researchers at OpenAI and the University of Pennsylvania found that survey researcher is one of the jobs most exposed to AI-based automation. Multiple models indicated that a significant percentage of current human labor in research processes could be automated.

• In another 2023 paper, researchers from Microsoft and Harvard Business School found that GPT models prompted to simulate consumers in a set of conjoint experiments exhibited behavior consistent with that of real consumers. In theory, synthetic respondents could simulate real-world behavior more cost-efficiently, increasing the volume of experiments that can be run.

Based on our own testing, we expect to see substantial advancements in data processing and basic reporting across both qualitative and quantitative data sets. In addition, we expect that advanced research methodologies will become more accessible, enabling non-practitioners and experts alike to implement a broader set of insights solutions.

Questioning AI Outputs

Despite amazing advances in AI, many questions have been raised about the accuracy and fidelity of research conducted using AI. Rightly so! Generative AI models are prone to hallucinations, generating inaccurate or nonsensical answers when asked a direct question. In addition, AI models tend to excel at creative tasks but are less capable at logic tasks. A recent blog post from Google’s Bard team states, “[LLMs are] extremely capable on language and creative tasks, but weaker in areas like reasoning and math. In order to help solve more complex problems with advanced reasoning and logic capabilities, relying solely on LLM output isn’t enough.”

Advanced reasoning, logic, and math are critical capabilities for most forms of market research, so questioning the outputs generated by AI models is warranted. But hasn’t it always been common to question the fidelity of data in market research?

What Comes Next?

In 2019, we published a positioning statement on the use of machine learning that concludes, in part:

“We expect researchers to increasingly adopt machine learning for unstructured data because, like online sampling, machine learning allows researchers to conduct research faster, more efficiently, with certainty that is ‘good enough’ for many research applications.

Without reservation, Fuel Cycle believes that the research question at hand should dictate the research methodology and never the other way around.”

Our position on generative AI today is consistent with that statement. Automation driven by AI will increase speed, drive efficiency, and improve the actionability of insights. But it is not a panacea for all research challenges, and it shouldn’t be applied indiscriminately to every research question.

We are, however, excited about the potential of AI solutions to allow every business decision to be validated with research. We look forward to sharing our learnings openly in the coming months and invite you to follow along on our journey by subscribing to this blog here.