March 30, 2021
Nikki Lavoie talked to Roddy Knowles, VP of Research at Feedback Loop, an agile research platform for fast and easy consumer feedback that allows researchers to go from questions to data in days.
Roddy oversees Feedback Loop's research and operations teams and has an extensive professional background in both qual and quant methodologies. He also serves as President-Elect of the Insights Association Southeast Chapter and as North American Representative for ESOMAR.
In this interview, he and Nikki talk about the role of research associations, how agile, qual and Feedback Loop come together, and give us an outlook on the top priorities of the UX and MR industries.
Nikki Lavoie: Hi Roddy! Before we jump into the good stuff, I’d love for you to give the readers a little bit of an intro into your background and experience. What led you to Feedback Loop, and to the research industry in general?
Roddy Knowles: Well, that’s a big question. For the full story, you’ll have to wait for my unlikely-to-ever-be-written autobiography. The short story is that I unintentionally made my way into the research industry as an extension of my academic training in the social sciences. One of the things I took with me from academia is always keeping front of mind the human participating in research with you, the researcher. Valuing their time and effort and making their research experience a positive one is so important. It’s incredibly frustrating to see so many in the industry still fail to care about participants. What I’ve also been continually frustrated with is the friction it takes to get research done, which ultimately diminishes the potential impact that research and data can have, because it’s so often too difficult to execute and just too darn slow. So, when I found a startup, Feedback Loop (then Alpha), that was focused on agile research, which is research that people can easily participate in, reduces friction immensely, and puts data in the hands of those who need it quickly, I was in, no questions!
NL: You’re also involved with a lot of associations, like ESOMAR and Insights Association. Why is it important for you to work with groups like these?
RK: For me it’s really two things, not necessarily in order. One, I think that organizations like ESOMAR and IA are essential in making it possible to conduct research (we so often take this for granted), setting standards and best practices, providing forums for sharing and collaboration, and fostering innovation. And I want to do whatever I can to help push the whole industry forward and keep us all growing. Two, I just really like teaching and helping people. It’s an itch that I still have to scratch after having left academia. Participating in industry organizations allows me to share what I’ve learned and help others grow. And it also provides a way for me to keep learning. If I’m not learning something new I get bored pretty quickly.
NL: So tell us a little bit about Feedback Loop, specifically: what’s your sweet spot?
RK: Here’s the part of the story where you ask the researcher to give the sales pitch... so, buckle up! Bad jokes aside, we’re an agile research platform for rapid consumer feedback. What does that really mean, you ask? We’re a tech company, thus, everything is built around our platform, although we have a services layer on top of the platform. I like to think we strike a good balance between what humans and machines do best, respectively. Where I think we excel is in combining automation and human expertise to reduce the friction in conducting research. How? By cutting out a lot of the steps (and I won’t go through them all here) involved in getting research out the door and into the field, starting with questionnaire design and sampling. As an agile research platform, we have guardrails in place to ensure that tests are focused. Practically, that means they’re 10 questions or fewer and are laser-focused around a key decision to be made. The limited scope of what our platform tackles with a piece of research allows us to move fast and put data into our clients’ hands (well, technically into our dashboard), usually more quickly than it would have taken them to iterate on questionnaire design. My use of “limited scope” is intentional. We’re not trying to be the platform for all things research. Rather, we’re trying to be the platform that enables quick, easy, and reliable data collection. So we play a lot in areas like early stage discovery, idea and concept testing, feature prioritization, and message testing. If you’re working on something like a product or a campaign where you’re iterating and working in an agile (or agile-ish) manner, where having data to inform and de-risk multiple small decisions is key, then we’re likely a good fit for you. We’re one tool in the research toolkit or research tech stack. However, if you’re looking to run a brand tracker, conduct a multi-country market sizing study, or do IDIs, I’ll likely send you elsewhere.
NL: As you know, the majority of my network is what I would call “qual-leaning.” I’d love to hear your take on how tools like Feedback Loop and qualitative research can work together.
RK: Hey, secret/not secret… I’m “qual-leaning” too! My research roots are deepest in qual and I could never pull them up even if I tried. Anyway, this is an interesting question - and one that comes up with our clients. There are ways that an agile research platform like ours plays quite nicely with qual. I mentioned earlier that if you came to me wanting to do IDIs for whatever reason, I’d send you elsewhere. But if you were my client I’d also share with you how we can help on both ends of IDIs or whatever in-depth qual work you’re doing. Our clients often use our platform to refine ideas, test certain words or phrases, and tighten up messaging concepts before they go spend the time and money on qual. It’s a pretty low lift and pretty darn cost effective to run a few tests to help optimize a more significant research investment. Additionally, on the back end of qual, our clients will leverage our platform to explore ideas that come out of qual. One example I’ve encountered a few times is that there will be certain ideas expressed by participants that may be interesting, but not necessarily core to the findings or recommendations, yet shouldn’t be just cast aside. So, our clients have run agile tests to explore some of these ideas, even just partially baked ones, quickly with a broader audience in order to prioritize which, if any, are worthy of exploring further. The ultimate takeaway here is that an agile research platform should, in my opinion, be seen as a complementary tool/approach/resource to more rigorous and time-consuming approaches.
NL: I’m also interested in hearing more on your thoughts about where the industry needs to improve. We’ve talked about everything from consumer trust in our work to pushing out methods and research tools that don’t appropriately capture data. What would you say are the top priorities we need to pay attention to, within the research industry?
RK: Hold on, I’m going to go look for a bigger soapbox. Nikki, you know me well enough to know I have my fair share of complaints, most of which are meant to be productive ones. Here, I’ll just pick one so as not to ramble more - and I’ll try to be brief.
Perhaps at the top of the list for me is breaking down the stupid barriers that exist between different types of research and the people who specialize in certain approaches / methodologies. UX and MRX are often in total silos. Data scientists working with third-party data often have no interaction with those conducting primary research. While it has gotten better, we still see the great divide between qual and quant, and I could go on. There is, and always will be, a place for specialization. However, if we don’t actually get together and have conversations, get in the same room, literally (someday, soon), and go to conferences and events and talk to each other, we’re really missing out. And, as an industry, we are not moving fast enough to build these bridges.