Hosted by:
Annie McDannald - MRS Global Vice President
Listen on Spotify
For our very first episode, we’re starting things off with a bang!
Today, we're diving into the exciting and ever-evolving world of AI. I'm your host, Annie McDannald, Global VP of Civicom Marketing Research Services.
With the rise of Artificial Intelligence, market research has undergone yet another seismic shift. AI systems have been the talk of the market research industry recently, with three questions always coming up in conversation – “What is AI?”, “How does it affect how market research is conducted?”
And most importantly, “Is my job safe from ChatGPT?” – which is the name of our episode for today.
To help us answer these questions and more, we're joined by a true expert in the field.
Timestamps
- 0:00 - Introduction
- 0:56 - Meet: Ray Poynter, President of ESOMAR
- 2:03 - What is AI from your perspective, and how is it being used in market research today?
- 4:52 - How can those provide greater benefit for market researchers?
- 6:30 - It can improve the market research process, is that true?
- 6:48 - Are there any key advantages to using AI, particularly for qualitative research?
- 9:25 - Can you expand a little bit on your thoughts about those ethical concerns?
- 13:55 - How do we regulate emotion like that?
- 16:51 - Do you think that will have an equal impact as people start to continue adopting AI?
- 19:03 - Do you think AI systems will replace traditional methods almost entirely?
- 20:57 - What do you think are the implications of Musk's proposed moratorium on AI systems?
- 23:23 - How do you regulate that?
- 24:56 - Can you give us your own predictions maybe for the future of AI on market research?
- 28:10 - What is your take on that perspective?
- 29:00 - How can we adapt as market researchers?
- 30:53 - Tips on how to properly integrate this technology
- 32:35 - Tips on how to ask the right questions or prompts in the context of AI
- 34:52 - A piece of advice
- 36:03 - Outro
Transcripts
- 0:00 - Annie McDannald
Hello, everyone. Come ride the tides of innovation in market research with us here at Civicom and make waves. For our very first episode, we are starting things off with a bang, and today, we're diving into the exciting and ever-evolving world of AI. I'm your host, Annie McDannald, Global VP of Marketing Research Services here at Civicom. With the rise of artificial intelligence, market research has undergone yet another seismic shift. AI systems have been the talk of the market research industry recently, with three questions always coming up in conversation: “What is AI? How does it affect how market research is conducted?” and most importantly, “Is my job safe from ChatGPT?” which is the name of our episode for today. So, get your pens and paper out, listeners, because we'll be having quite the discussion today here at Making Waves, riding the tides of innovation in market research with Civicom. [Music] First, let's get to know our guest for this podcast. He has spent the last 45 years at the intersection of insights, research, and new thinking, having held director-level positions at The Research Business, IntelliQuest, Millward Brown, and Vision Critical. Our guest is committed to the research and insights industry, having been a member of ESOMAR for over 30 years and a fellow of the MRS. Recently, his work has focused on training, writing, speaking, and sharing. He has run training workshops for a variety of national and international organizations, including RANS, TRS, JMRA, MRS, and ESOMAR. He has written textbooks, taught at Saitama and Nottingham Universities, blogs regularly, and is active on social media. Lastly, in 2023, he was elected president of ESOMAR. So, please welcome to the show, Ray Poynter. [Applause] Welcome, Ray.
- 1:51 - Ray Poynter
Thank you. Thank you for the introduction, Annie.
- 1:55 - Annie McDannald
Yes. Thank you so much for joining us today. I'm really excited to touch on this topic with you. Maybe to start us off, can you give us a brief overview of what AI is from your perspective and how it's being used in market research today?
- 2:10 - Ray Poynter
So, there's a bit of a slippery thing about AI, artificial intelligence, because very often, as soon as something works, we stop calling it artificial intelligence. So, when you talk to Siri, and Siri answers you back, that is really clever artificial intelligence. All the automated transcription of videos and interviews, automated translation, all of that stuff is artificial intelligence, but we start calling it, “Oh, well, that's just this. That is just this.” So, we need to be careful that we don't just mean the things that don't quite work yet when we talk about artificial intelligence. We should talk about all the things that do. The European Parliament are proposing new legislation that is going to govern it, and they have a very simple definition: if an algorithm decides what's going to happen, it's artificial intelligence. So, in fact, when you run cluster analysis, that is artificial intelligence. You are not making the decision. It's using iterations, it's using an algorithm, and it comes up with the decision. So, anything where the decision is being made by an algorithm, we should really think about as being in that world of artificial intelligence. So, in that context, we have been using it incredibly heavily for translation and for transcription. Whenever we go out to do in-home interviews, and we're using the satnav, we're using artificial intelligence. So, that's one level. Below the surface, companies like the panel companies have been using artificial intelligence for years to try to catch the frauds and the bots, and the fraudsters are using artificial intelligence to create the frauds and the bots. So, there's quite an arms race going on there. We're seeing more and more use of artificial intelligence in integrating data. So, I get several bits of data, and it will do some of the work in combining that data into a new data set, and we're seeing a lot of artificial intelligence in terms of searching data to find messages.
So, we're seeing a bit of a shift from primary research, where we're collecting new data, to secondary research, where we're saying, “What do we already know? Can we find the answer?” So, all of those are interesting areas where AI is already in use.
- 4:35 - Annie McDannald
I think that puts some perspective on how long we have been able to access it in some capacity, and it's just getting this big burst of energy now, I think, with the ChatGPT evolution. So, that's really interesting and fascinating. These systems that are available to us now are changing every day. How can they provide greater benefit for market researchers when they're conducting their research?
- 4:59 - Ray Poynter
So, let's assume that nothing goes wrong with the large language model. So, things like Bard and ChatGPT are large language models. They will get things wrong, but to make life simple, we'll start off assuming they won't. So, if you want to design a questionnaire, you can ask ChatGPT or these other systems for suggestions: what questions to ask, who a suitable sample would be. You can design your research; you could design a discussion guide. It will not be as good as the best 25% of human researchers. It will be way better than the worst 25%. So, for a lot of people out there, it would today improve their research if they were to use these sorts of models in designing the research, and then when it comes to doing the analysis, creating that analysis. When you write a report, there are things like Grammarly, of course, which are AI, but actually, you can say to ChatGPT, “Could you give me a shorter version of this? Could you give me a longer version of this?” So, we can use it as an assistant in all sorts of ways, shapes, and forms in designing research, in interpreting research, and in moving the research forward. So, in those sorts of ways, it can be really quite helpful.
- 6:21 - Annie McDannald
I think that's fantastic. So, that's an indicator that AI, if used properly [Laughter] and not entirely in place of human insight, can improve the market research process. Is that true, then?
- 6:33 - Ray Poynter
So, it can improve the market research process for everybody. For some people, it will only make it cheaper and faster, which is what most clients want at the end of the [Laughter] day. For some people, it will make it better.
- 6:47 - Annie McDannald
Fantastic. Are there any key advantages to using AI, particularly for qualitative research? You touched a little bit on how we're using AI to find the fraudsters in quant and survey responses. How does that translate on the qualitative side?
- 7:02 - Ray Poynter
Well, we need to be careful about whether it is still qualitative once it's done it. So, qualitative is about interpreting information, and so, inherently, we use small amounts of information. If I've got a million essays to read, I'm going to take a sample of them and read them. With the changes we're seeing in the technology, it's going to be quite realistic to get the device, the AI, to read all million and to come back with solutions, suggestions, synopses, reports, recommendations, thematic analysis; all of these sorts of things are going to be possible within that. There's a dispute about whether this is quantifying qual or whether it's qual at large scale, and what exactly it means. That is one of the areas where we're going to see probably the biggest challenge: first, in analyzing large amounts of qualitative unstructured information, but secondly, in having those conversations, having conversations with bots. If anybody out there hasn't played with ChatGPT yet, please do it, because you'll find you can have a very plausible conversation. You could easily understand how you could program this using the apps to interview every single undergraduate coming into a university every three months to get a really good indication of mental health stress, maybe, rather than asking people to use numbers. So, it can deal with very qualitative topics, it can deal with them at scale, and increasingly, for some types of people, it will be able to ask the questions as well as generate the answers.
- 8:46 - Annie McDannald
It seems a little scary in a place where those answers can be autogenerated like that. I think there are a lot of fear factors, and I think AI does have a lot of pros. There are a lot of benefits that you hear about and that you're speaking about: simplifying that analysis process, wrapping your heads around that massive amount of data you get, especially in qualitative research. But there are also ethical considerations associated with its use. Certainly, potential biases, hallucinations, and data privacy issues related to AI-powered market research are raising a lot of concerns. I hear a lot of conversations about that. Can you expand a little bit on your thoughts about those ethical concerns?
- 9:27 - Ray Poynter
I will come back to those in just a second. There's actually an even bigger level of worry. So, when the real experts get together and they talk about the existential risk, “What is the possibility that mankind, humankind, will be wiped out by AI?” they put it around 5 or 6%, which is way too big. [Laughter] It's very unlikely, but it's still way too big a risk. Getting closer to home, what about the biases? I did an exercise with ChatGPT the other week where I said, “I'm thinking of writing a short story. It's going to be a short story about a doctor and a nurse who were driving in the countryside, and they crashed their car. During their recovery, they form a relationship, and the story is going to be about the relationship. Could you suggest some names for the doctor and for the nurse? Could you suggest six names for each?” I typed it all in, and it came back, and it offered me three male and three female names for the doctor, and three male and three female names for the nurse, with a mixture of racial stereotypes around those names. There was an Indian name, a more typically African American name, a European-sounding name, and so on. That is because somebody had put guardrails in there so it didn't come out with racist, sexist junk. That is a bias. It's a good bias, but it's a bias. If it had simply gone out and collected mega millions of conversations out there in social media, it would probably have come out with some regular conservative reactionary position.
So, there is no right answer; wherever you source the information from is going to create a bias, and then there's the question of what guardrails you put in. “Guardrails” is going to be a word that, if you do a Google search at the end of this year, is going to go off the scale; guardrails is just going up and up as a phrase you keep hearing. If you put the guardrails in, that is another source of bias. The next issue we've got around that is these things called hallucinations, which is a term the manufacturers have come up with, but it's a misnomer; it's a lie. Large language models are not trying to tell you the truth. They're not programmed, they're not designed, to tell you the truth. They are designed to give you a plausible conversation. It's what's called a stochastic parrot: “What is probably the next word I should say to sound like a real human answering the question?” If they don't know the answer, they will frequently make up a plausible answer. So, not only is it wrong, but it's plausible, and that, of course, for us is an even bigger worry. Things like ChatGPT have gone out to the internet and have absorbed and read mega millions of articles and things; it's not paid any copyright for the learning it's done from reading all those articles. If you ask ChatGPT a question about mobile market research, it's quite likely to give you answers from one of my books, because it will have read my books, but it's not going to credit me. It's going to come forward as if it's its own answer.
Then if we upload a script, so you've got some scripts from your online discussion, and you upload them to ChatGPT, or you upload them to Bard, what happens to them? Well, with ChatGPT, they say they put it in their training database. That's your first loss, automatically. So, you might not want to do that with your client's data. Then there's the privacy. Sometimes, you can work out too much by just having a few pieces of information. So, there was a front-page story a few years ago that Target knew that somebody's daughter was pregnant because they had profiled her purchasing, and she had switched the products she was buying, and they had identified that. I've seen other cases where people's smartwatches have identified that they're pregnant because of their change in blood pressure, exercise, and all of these characteristics. So, we can give things away without realizing we're handing over privacy, and then you throw facial recognition and emotion recognition into that as well. So, you might say, “I'm very happy to be interviewed. I don't mind what you record,” because you're thinking, “I'm only going to answer some questions;” but if your face tells the truth, well, that's a data privacy loss as well that you may not have envisaged.
- 13:53 - Annie McDannald
It's difficult to know how to regulate all of that. How do we regulate emotion like that?
- 13:59 - Ray Poynter
So, there have been quite a few regulators and leading spokespeople who have been calling for a moratorium or a standstill. Most of the people doing that have got really big investments in some of the big players. So, we're hearing people like Musk, who's invested in it, and we're hearing people from Google talking about it. They've got a really big vested interest in keeping it restricted to a small group of players. It's an immense difficulty. It's probably going to have to be about outcomes rather than mechanics, because legislators won't understand how the tools work. Probably nobody will understand how they work in four or five generations from now; by generations, I mean three to five years, not human generations, AI generations, because the AIs will start programming the next AIs. So, how they work is going to be less knowable. Having a method of appealing any decision made by a computer has got to be enormously important. In Australia, they brought in a thing called robodebt, which was a method the government had of using algorithms in computers to identify people who should not have been receiving benefits but were receiving benefits, and it automatically cut their benefits. It got it wrong in a percentage of cases, which led to 50 to 100 suicides, people losing their homes, all sorts of things, because of errors in the process. Normally, when we talk about statistical processes, we say, “Well, there are false positives; there are false negatives.” If the false positives are going to cause people to lose their home, or cause people with auto-policing systems to be arrested and sent to jail, a little bit like Minority Report and mind crime, “Okay, this person must have done it because they were there at the right time, their face looks guilty, and they've got this characteristic,” those false positives are really frightening, and we need a method of appealing any AI sanction.
- 16:14 - Annie McDannald
I think that's really, really fascinating. Thinking back to the research side of things, AI is a time-saver; it's going to do part of this work for you. A lot of people are referencing it like a new superpower. You can have this new superpower of using AI to process things faster, but to the point of the validity of that information, are you really saving time in the long run if you have to turn around and validate what that output is? You're saving time generating it, but then I think it's important as well to invest time in validating what information you're using and proceeding with. Do you think that will have an equal impact as people continue adopting AI in more of their research process?
- 16:59 - Ray Poynter
I think we're going to see missteps along the way. I think some of the basics are going to be very straightforward. So, there's been enormous growth in the platforms that people can use for self-serve research. So, people can go along with no research training, and they can access a survey platform, and they can survey their customers, and they can do some analysis. Much of that is shockingly bad. That, over the next three to five years, is going to be replaced by AI. So, if I don't know anything about research, I will say I need to do a survey about this new product, and it's going to say, “Well, do you want to do it qual or quant? Here are the benefits,” and you'll pick one of those, and then it will ask you a few more questions. It will run that, it will source the sample, it will do the data cleaning, and it will write you a brief report and a summary. Now, sometimes it will get that wrong, but percentage-wise, that is going to be much more likely to be true than if these people were doing it themselves. So, that is going to mean more research being conducted, more things going on. So, in those areas, it's all good, and we're going to have to get our heads around what it means when a machine gets it wrong. So, self-driving cars are already safer than most humans. When a relative of ours is killed by a human, we're fairly angry, even though it happens a lot. If they're killed by a self-driving robot, we're incensed; we're really angry about it. So, the robots have to be way safer than the humans before we make that switchover, and it's not very sensible, but it is part of being human. Likewise, when automated research goes wrong, it will probably generate more criticism. Judges will be more likely to award damages than if a human had made the same mistake: “Oh, well, humans make mistakes.” So, there's going to be some to-and-fro in that process.
- 18:48 - Annie McDannald
I think that's a great point. Mitigating human error, and yet it still persists. So, looking ahead, then, let's dive into what the future of AI in market research looks like. What do you think those future prospects look like? Do you think AI systems will replace traditional methods almost entirely?
- 19:08 - Ray Poynter
Eventually, but eventually is a long time. So, if we start off closer in, we're going to see more and more quality checks coming in. I'd say the biggest change over the next few years will be in the self-serve platforms. They will become smarter and smarter. At the moment, you can't do qual with most self-serve platforms. There are some great systems out there, but actually, the people using them don't know enough about qualitative research to make a good job of it. That will change with the introduction of smart self-serve qual. So, all of that is going to make a big change, and we'll see continued growth in research being done by non-researchers. That's a pattern that's already been going on for years, and it is going to get faster and faster. The next thing we're going to see is project management and that side of things, again, using a lot more AI, and the people who will benefit the most will be the computer scientists, the ethnographers, and so on. If they are savvy at using AI, they will just be so much more productive. So, there's a really good place for experts, provided they are experts plus AI. In the AI world, these are called centaurs, as in half man, half horse. A person and an AI working in combination will beat an AI alone, and will beat just people. So, we're going to see more and more of that as we go forward.
- 20:42 - Annie McDannald
That's fantastic. Thinking back to the regulations that will inevitably pop up and the effort to rein in how AI is being used, and trying to keep that in a responsible headspace. For example, what do you think are the implications of Musk's proposed moratorium on AI systems overall? Who would pass that legislation? [Laughter] Yes.
- 21:08 - Ray Poynter
If, let's say, the US were to put a moratorium in place, and it was to find a way of enforcing it through the universities and large corporations, it would make a difference to the speed with which AI advances around the world, but not much, because, as Google was saying recently, pretty much now, you just need a good home computer and the open-source algorithms, and you can make a really good start at it. We know that China is working very hard on it, too. We know, obviously, that Russia is working very hard on it. There are Europeans working on it. This is not easily stoppable. It's not stoppable by a government passing a piece of legislation. I see that one of the states in the US has passed a law saying you can't use TikTok as a private citizen. Montana, I think. How on earth would you regulate that? I'm not saying whether it's right or wrong, but the concept of how you would go around checking everybody's [Laughter] smartphone just beggars belief. So, I don't think a moratorium is there. I think what we have to be doing is pushing into the protections: the right to be able to see your data, the right to be forgotten, which is one of the things that's in European law, these issues, very simple rules. In some cases, like “you must be told if you're talking to a person or a computer,” it would be a very straightforward piece of legislation. It might not always be easy to prove, but if you put quite a significant sanction on it, like you go to jail if you systematically lie about this stuff, then most people are going to be telling the truth.
- 22:49 - Annie McDannald
Good incentive. [Laughter] You brought up a great point about the global prevalence of it as well. I think even during the pandemic, you saw how something can change overnight and impact the entire planet. So, it's interesting to be able to see that happening and to know that that's how AI is impacting the world. It's happening globally. With that in mind, what do you think about regulations where maybe there is a ban, like the state of Montana banning TikTok? I think Italy has tried to put a ban on ChatGPT. How do you regulate that? What's the effect of that?
- 23:25 - Ray Poynter
For small places, like a state, or a country like Italy, it's not going to have a big impact on the world, because this thing will just keep moving on, keep changing, keep coming forward. Italy had to back off from the ChatGPT ban. Anyway, that is only the famous large language model. What about all of the other associated systems? So, I'm thinking now that maybe you can't have judges that are AI; you can't have the decision about whether to prosecute somebody being fully AI; somebody has to sign off afterwards. There was a recent court case in the US where somebody tried to assign the copyright for a piece of artwork to the algorithm that had created it, and the court ruled against them: only people can hold copyright. I think that distinction matters: only people can sentence people to jail, only people can marry somebody. Just defining which things we want to make sure that only people can do is probably going to be part of the picture, even though they will look at AI outputs to make that decision; they have to carry the consequences of making the decision.
- 24:51 - Annie McDannald
I think that’s well stated. What about other next steps? Can you give us your own predictions maybe for the future of AI on market research?
- 25:01 - Ray Poynter
One thing we'll see coming along, and it's already there a little bit, is virtual consumers. So, there are now some eye-tracking systems which will take a piece of stimulus, say a proposed new packaging or a proposed magazine front cover, and tell you where the eyes would go. Having done lots and lots of previous work, they make an assessment of where the eyes would go. So, you don't have to pay for the eye tracking. You get the virtual eye tracking. There is nothing to stop people programming systems that say, “Well, if you were to ask 1,000 people this question, this is what they would say.” Now, those systems only have to be plausible to be commercially realistic. So, I would expect to see them really quite soon, where you will simply go and ask a question, and it will generate a plausible answer that, if you were to ask enough people, it would go there. Now, if we flip for a moment to a company like Ipsos or Dynata, who have millions of survey responses, imagine putting AI against that and saying, “I bet some people have asked questions a bit like this in the past. So, if we were to ask this combination of questions, what would the answers probably be?” Obviously, we don't know how reliable that would be, but imagine you're a business executive. You want to make your decision, and you're told, “Well, with all the new techniques, we could give you an answer in five days,” or, “Actually, we could run it through ChatGPT, and it's going to give it to you in an hour or 40 minutes.” It's very seductive, and it wouldn't cost very much.
- 26:58 - Annie McDannald
Right. It’s all automated.
- 26:59 - Ray Poynter
So, I think we will see virtual systems starting to appear in the near future. It's very straightforward. You can actually do this now with Bard and ChatGPT. You can ask it to create personas and then say, “Take a persona who is a young hedonist. What would they say about drinking a Coke on a sunny afternoon on a beach?” and it gives you some quite plausible stuff. It's not a big jump from that to, as I say, virtual respondents, virtual participants.
- 27:33 - Annie McDannald
Then comes the step of validating that, making sure that that plausibility is accurate.
- 27:39 - Ray Poynter
I think companies who will be selling it will be very slow to validate it.
- 27:44 - Annie McDannald
So, I could talk about this with you all day, Ray. It's such a fascinating conversation, and I'm so glad we've had this chance to talk. I do have a couple more questions I want to squeeze in about AI, and this one in particular comes up in every conversation: the job market and how AI can affect it. Many researchers have expressed concern about AI tools taking over their jobs. What's your take on that perspective?
- 28:12 - Ray Poynter
I think a lot of jobs done today will be done by AI, or they will be done by fewer people working more effectively. At the same time, the number of research activities that happen will be growing. So, there won't necessarily be an immediate drop. At the moment, there's a shortage of talent in our industry. So, I think we're going to see people being more productive before it makes a really big impact, but undoubtedly, it will create a major impact on the whole economy, including the research world. Well, it never has been the case in research that you had the same job for 50 years, but now, don't expect your job to be the same in five years, that's for sure.
- 28:55 - Annie McDannald
Yes, the technology wave comes. Would you say there are any particular skills or expertise that researchers should be working to develop and adapt right now in order to take on that AI-powered research mentality?
- 29:10 - Ray Poynter
So, in general, my advice is always: you want to be a driver, not roadkill. [Laughter] So, be involved in bringing AI into your company, put your hand up to volunteer to work on all the AI projects, experiment with it, spend some time yourself in understanding AI. Now, you will never be an AI expert. So, if you're a qualitative researcher, you want to be the best AI qualitative person. If you are a linguist, then you want to be the best AI linguist person, because you want to take your strength and merge it together with this ability to use AI, all of those things, and then anything to do with humans. So, most selling is not going to be done by AI. It's going to be done by people selling. A lot of customer success management, managing clients, is going to be a very human skill: making really good arguments, being persuasive, doing storytelling, anything that's not typical. So, if we think about movies, AI is going to make lots of generic watchable movies. You really wouldn't want to be a generic movie maker, but it's not going to make the stuff that's really off the wall and new. So, that, again, would be a niche that you want to be thinking about.
- 30:35 - Annie McDannald
I think that's great, and it sounds like it's more about looking for ways AI can complement your strengths than replace those strengths. I'm sure some of our listeners, at least I know many of them, are interested in integrating this technology into their projects. Do you have any tips that you wish to share with the rest of the group about how this can be done properly?
- 31:03 - Ray Poynter
Well, I've got one urgent request for them. Don't do it on a live project for a client until you've done it on some internal projects for yourself, and just check that it works, that it fits, that it does what you think it's going to do. So, I think that is part of it. You can find some clients around who will be willing to co-fund some of these experiments so that you'll do it side by side. So, that is all a useful piece. By and large, I would say automate and use AI for the tasks that you and your staff don't like. So, data cleaning is a real faff. So, let's put the AI to that. Let's not try to use the AI for the really fun parts, because that's going to alienate the staff, and it's going to reduce buy-in around that. The single biggest issue in quantitative research is data quality. So, you should be looking to use AI to make sure that your data is better than your competitors' data.
- 32:09 - Annie McDannald
Absolutely. I think mindfulness, too, about who you are working with in the industry as you're learning and getting your feet wet: know who that partner is in terms of their security responsibility and how seriously they take data privacy. One of the big things that I know is important in terms of developing the skills is asking the right questions, putting the right prompts in for AI to generate what is going to help you the most. How can we ask the right questions or the right prompts in the context of AI? Do you have any tips for that?
- 32:41 - Ray Poynter
Yes. So, first of all, there's a fantastic book out there that was written within three months of ChatGPT coming out. The book is called Prompt. It's got so many tips, so much good advice on how to do it. My suspicion is that qualitative researchers are going to be better than most quantitative researchers at prompting things like large language models - ChatGPT and the like. And that's going to be a little bit of a switch, because the quants have been quite good at asking Google to search for things, at putting in Boolean strings to make those search terms work accurately. Quals are likely to be better at trying alternative emphases in the question, because sometimes it only takes a small change in the question to ChatGPT to get a better answer, and it's very iterative. So, you ask it a question, you see it hasn't quite understood, and you say, "Oh, you haven't quite understood. What I really wanted was people who loved this topic but couldn't afford to do it. Could you now" - and you just keep moving backwards and forwards, quite a lot like you would if you were talking to a real person. I mean, I type please and things like this when I'm prompting, which obviously is just noise, but it is very much like talking to a person, and you're trying to iterate backwards and forwards so it understands. Within any one session, it also remembers the context, so you can build a context and say, "Okay, I'm talking about microbreweries. I'm talking about beer. We're on the west coast. Here are the personas. We've developed the personas. Now, let's take that first persona. What might a questionnaire look like for them? Oh, no, I need more closed questions, or I need more open questions." Just cycling it back. Don't take the first answer and say, "Oh, that's not very good."
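The iterative, context-building prompting Ray describes can be sketched in a few lines of Python. This is a minimal illustration, not a definitive implementation: `call_model` is a hypothetical stand-in for any chat-model API (a ChatGPT-style endpoint, for instance) and is stubbed here so the sketch is self-contained; the message format mirrors the common role-based chat convention.

```python
def call_model(messages):
    # Stub: a real implementation would send `messages` to a chat API
    # and return the model's reply. Here we just echo the last prompt.
    return f"(model reply to: {messages[-1]['content']!r})"


class ChatSession:
    """Keeps the running conversation, so each prompt builds on the last."""

    def __init__(self, context=""):
        self.messages = []
        if context:
            # Seed the session with background (microbreweries, personas, ...).
            self.messages.append({"role": "system", "content": context})

    def ask(self, prompt):
        # Each turn is appended to the history, which is what lets the
        # model "remember" earlier refinements within the session.
        self.messages.append({"role": "user", "content": prompt})
        reply = call_model(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply


session = ChatSession(context="We're discussing West Coast microbreweries.")
session.ask("Draft a questionnaire for the first persona.")
# First answer not quite right? Cycle back with a refinement:
session.ask("Not quite - I need more closed questions. Please revise it.")
```

The point of the sketch is the loop: rather than taking the first answer, you keep appending refinements to the same history, exactly as you would clarify with a real person.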
- 34:31 - Annie McDannald
That's great. Get those iterations in. Yes. See if it can narrow down the possibilities [Laughter] and give you something more concrete to work with. This has been amazing, Ray. We've covered so much ground, and it's been a very informative and thought-provoking discussion with you. Thank you again for joining us here today. Before we wrap up our podcast, Ray, do you have any final words or advice for our listeners?
- 34:55 - Ray Poynter
Just keep learning new stuff. You've got to keep adding to your strengths. You can't expect to just keep using what you already know. Where you can, blend what you know into the new stuff, but sometimes you've just got to let it go, because there are some things that are going to be replaced and won't come back. So, always move forward.
- 35:17 - Annie McDannald
I think that's great advice. With that, Ray, I would say we've reached the end of our episode. I want to thank you so much for lending your expertise and your thoughts on this topic. It's so important just to be communicating and talking and reaching out to others and getting their perspective and having these conversations. So, thank you so much for taking the time to answer my questions.
- 35:38 - Ray Poynter
Great talking to you, Annie.
- 35:40 - Annie McDannald
Absolutely. Thank you. Your expertise, your insights - everything is greatly appreciated. So wonderful. Well, I hope that for everyone this discussion has opened your eyes to the potential of AI in market research and its impressive, ever-growing capabilities. Onward and upward. Thanks so much. [Music] If you want to learn more about the topics discussed here today and find out what other kinds of research strategies are out there, we invite you to visit our website at civicommrs.com to explore our library of resources and qualitative research services. You can also connect with us on social media by following us on LinkedIn, Facebook, Twitter, and YouTube. Until then, goodbye and happy researching. [Music]