Monday, December 28, 2009

Really, another algorithm? Is this the new fad in research?



We are all abuzz with algorithms to identify everything from the texting soccer mom* to the pizza zealot*…


One thing I have noticed recently is that not every algorithm gives you the person you think you should get.


Does this mean the clients have not tested their algorithms? Not necessarily. But I wonder: have they tested across multiple markets, with the same results?

Let’s use the texting soccer mom* life stage as an example. The definition/story that goes along with this segment is:

  • a woman
  • has 2 or more children age 7-14
  • texts, on average, 120 messages a month or more, and
  • her children are in at least 2 extracurricular activities each.


First, imagine we are conducting the study in Dallas, Chicago, and Los Angeles... Now, imagine you have pulled your database query and are calling only mothers with 2 or more children age 7-14.

You call each woman and ask the screener questions:

  1. What are the ages and genders of your children?
  2. What extracurricular activities do these children participate in?
  3. Do you currently own and use a cellular phone with texting capabilities?
  4. In an average week, how many text messages would you say you personally send?
  5. In an average week, how many text messages would you say you personally receive?


From their screener answers, each woman qualifies as what we would expect a “texting soccer mom” to be (a quick sketch of this qualification check follows the list):


  • has 2 or more children age 7-14
  • texts, on average, 120 messages a month or more, and
  • her children are in at least 2 extracurricular activities each.
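
Nothing fancy is needed to turn those screener answers into the definition above; the only wrinkle is that the screener asks about texting per week while the definition is per month. Here is a minimal sketch of that qualification check, where the field names, the weeks-per-month conversion, and the choice to count only sent messages are my own assumptions rather than anything from an actual client screener.

```python
# A minimal sketch of the screener-to-definition check. The field names, the
# weeks-per-month conversion, and counting only sent messages are assumptions
# made for illustration, not the client's actual rules.
WEEKS_PER_MONTH = 4.33

def is_texting_soccer_mom(children, texts_sent_per_week):
    """children: list of (age, activity_count) tuples, one tuple per child."""
    eligible = [(age, acts) for age, acts in children if 7 <= age <= 14]
    return (
        len(eligible) >= 2
        and all(acts >= 2 for _, acts in eligible)
        and texts_sent_per_week * WEEKS_PER_MONTH >= 120
    )

# Two kids (ages 9 and 12), each in at least two activities, ~30 texts sent per week.
print(is_texting_soccer_mom([(9, 2), (12, 3)], 30))  # True (roughly 130 texts a month)
```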


So now we ask the algorithm questions:

  • 20 questions in all.
  • Each question is answered with a number from 1 to 5.
  • All answers are plugged into the Excel sheet (a rough sketch of how this kind of scoring works follows below).
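
To make the rest of this concrete, here is a rough, made-up sketch of the kind of scoring that sits behind that Excel sheet: each 1-5 answer is multiplied by a per-segment weight, the weighted answers are summed, and the highest-scoring segment wins. The segment names are borrowed from this post, but the weights, the five-question subset, and the numbers themselves are pure invention, not any client's actual algorithm.

```python
# A minimal sketch of a weighted segmentation score. Real algorithms use
# 20 questions and client-defined weights; the 5 questions and the weights
# below are invented purely to show the mechanism.
SEGMENT_WEIGHTS = {
    "Texting Soccer Mom":    [1.0, 0.8, 0.0, 0.0, 0.5],
    "Collaborative Planner": [0.0, 0.9, 1.0, 0.3, 0.5],
    "Gamer":                 [0.1, 0.0, 0.5, 1.0, 0.2],
}

def classify(answers):
    """Return (winning_segment, all_scores) for a list of 1-5 ratings."""
    scores = {
        segment: sum(w * a for w, a in zip(weights, answers))
        for segment, weights in SEGMENT_WEIGHTS.items()
    }
    return max(scores, key=scores.get), scores

# With these made-up weights, a single answer moving one point is enough to
# change the winning segment, which is exactly the fragility described later in this post.
print(classify([4, 3, 3, 1, 4]))  # Texting Soccer Mom (8.4 vs 8.0 vs 3.7)
print(classify([4, 3, 4, 1, 4]))  # question 3 nudged from 3 to 4: Collaborative Planner (9.0)
```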


Let’s first look at Samantha in Chicago.

We call Samantha at 11:00 am. She is home writing out her grocery list and has time to talk to us.

  • We plug the algorithm numbers into the Excel sheet and… uh oh. The algorithm declares she’s not a texting soccer mom; she is a Collaborative Planner* (a 20-29-year-old female with no kids who uses social networking on her cell phone to plan her weekends).


Now let’s move to Susan in Dallas.

We call Susan at 4 pm. She is on her cell phone, driving home after dropping one child off at karate and one at ballet, but will talk to us.

  • She is a texting soccer mom on both the screener and the algorithm!


But not so fast. We decide to call Karen in LA at 9:30 am.

Karen is sitting in the doctor’s office, waiting for an appointment, but figures, why not talk until they call her in…

  • The algorithm calls her a Gamer* (a male, age 18-29, who uses his cell phone for listening to music and playing games).



Does this mean the algorithm is flawed? Not necessarily. Time of day, city, attitude, and whatever activity someone is in the middle of all influence how they answer a question. Also, respondents are now being asked to stay on the phone with a recruiter for upwards of 20-30 minutes and are overwhelmed by the questions.


Have you ever tried to take an algorithm supplied by a client yourself? I have, and I try to take each one at least 3 times.


Here are some of the common times I try to hit:

  • Sitting at my desk right when it comes in
  • Sitting in traffic on the way to an appointment (my staff loves trying to get me to answer questions as I am swearing at the other driver who just cut me off)
  • Right in the middle of preparing dinner.
  • During my lunch break as I am “surfing the web”
  • At night with the family just watching TV


And guess what... 65% of the time I get a different segment. I have not changed who I am. However, I may have changed a rating based on “the place my head is at,” either 1 number up or down on 1 or 2 questions, and… a brand new segment!


Does this mean we should throw the baby out with the bathwater? No, I think we need to use a hybrid approach to double-check:


  • Respondents are asked to choose the story that they believe best describes them.
  • We ask an abbreviated algorithm of no more than 5-10 questions (only the ones that pertain to their “story”), to gauge the “predictability” of their being in that segment.
  • As a homework assignment, have the respondents take the entire algorithm (as a Word document or online).

The reasoning….


When the algorithms are written, each segment is given a “story.” Why not allow the respondents to hear a watered-down version of each “story” and choose which one best describes them? This would give everyone an early look at which segment respondents believe best describes them, and then at what the outcome of the algorithm is. When all of this is combined with ownership, demographics, and a few pertinent questions, respondents will more closely match what clients expect in their groups, and recruiting facilities will be able to provide the correct respondents for their clients.
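
Continuing the made-up example from earlier, here is a rough sketch of how this hybrid check might be wired up: the respondent picks a story, we score only the abbreviated questions that pertain to it, and anyone who falls short of a cut-off is held and sent the homework. The question subset, the weights, and the 70% cut-off are placeholders of mine, not any client's real screener logic.

```python
# A rough sketch of the hybrid check: score only the questions that pertain
# to the respondent's self-chosen story, and hold anyone who falls short.
# The weights (reused from the earlier sketch), the question subset, and
# the 70% cut-off are all invented for illustration.
SEGMENT_WEIGHTS = {"Texting Soccer Mom": [1.0, 0.8, 0.0, 0.0, 0.5]}
STORY_QUESTIONS = {"Texting Soccer Mom": [0, 1, 4]}  # abbreviated question subset

def hybrid_screen(chosen_story, answers):
    """Return 'recruit', or 'hold and send homework' when the score falls short."""
    idx = STORY_QUESTIONS[chosen_story]
    weights = SEGMENT_WEIGHTS[chosen_story]
    score = sum(weights[i] * answers[i] for i in idx)
    best_possible = sum(weights[i] * 5 for i in idx)
    # Hypothetical cut-off: require 70% of the best possible score on the
    # chosen story's questions before recruiting straight off the phone.
    return "recruit" if score >= 0.7 * best_possible else "hold and send homework"

print(hybrid_screen("Texting Soccer Mom", [4, 3, 3, 1, 4]))  # recruit
print(hybrid_screen("Texting Soccer Mom", [2, 3, 3, 1, 3]))  # hold and send homework
```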


We recently tried this for one client, with interesting results.


Respondents whose chosen story and algorithm result did not match at the initial screen were sent a homework assignment. The assignment was to complete the same algorithm questionnaire we had gone through with them over the phone, as a take-home questionnaire (not an Excel sheet, just a Word document). They were asked to find a quiet time, spend about 10-15 minutes answering the questions, and bring them back. Guess what happened? 75% of them came out with the segment matching the “story” they chose.


Can we extrapolate anything from this? Unfortunately not.

  • We do not know if reading the questions, for themselves, and seeing the scale on paper resulted in a more accurate algorithm.
  • We do not know if they answered differently because they were in a calm, quiet place.
  • We don’t even know whether, because they had “chosen a story,” they then chose the answers they felt fit that story more closely.


What I can tell you is that the respondents in the group were just what the client expected.


So here are my two cents: try the hybrid approach on your next project. Ask a short algorithm and then let respondents choose the “story” that best defines them. Have the field service screen and hold anyone whose algorithm and story do not match. Really take a look at the holds: not just the segments, but the answer to each question in the algorithm. You might be surprised. There may be some great respondents who were simply having a bad day and answered one or two of your algorithm questions just one or two numbers off, which threw them out of contention.


If you understand how your algorithm works and the weights given to each answer for each segment, this will be an easy fix. You will be able to identify where someone who self-selects the “story” falls short on the algorithm.
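
For what it’s worth, here is a small sketch of that kind of hold review, again using the invented weights from the earlier examples: for a respondent whose algorithm result disagrees with the story they chose, it shows, question by question, how far the winning segment pulled ahead of the chosen one, so a one-point near-miss stands out. Everything in it is illustrative, not any client’s actual weighting.

```python
# A rough sketch of reviewing a "hold": compare, question by question, how a
# respondent's answers scored for the story they chose versus the segment the
# algorithm assigned. Weights and answers are the same invented values used
# in the earlier sketches.
SEGMENT_WEIGHTS = {
    "Texting Soccer Mom":    [1.0, 0.8, 0.0, 0.0, 0.5],
    "Collaborative Planner": [0.0, 0.9, 1.0, 0.3, 0.5],
}

def review_hold(chosen_story, algorithm_result, answers):
    """Print the per-question score gap between the chosen and assigned segments."""
    chosen = SEGMENT_WEIGHTS[chosen_story]
    winner = SEGMENT_WEIGHTS[algorithm_result]
    total_gap = 0.0
    for q, (wc, ww, a) in enumerate(zip(chosen, winner, answers), start=1):
        gap = ww * a - wc * a  # how much this answer favors the algorithm's pick
        total_gap += gap
        print(f"Q{q}: answer={a}  gap toward {algorithm_result}: {gap:+.1f}")
    print(f"Total gap: {total_gap:+.1f} (a small total often means one answer, one point off)")

review_hold("Texting Soccer Mom", "Collaborative Planner", [4, 3, 4, 1, 4])
```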

We also need to shorten the screener/algorithm. Not only does the fatigue lead to respondents no longer paying attention to the questions, it has also led to lower cooperation rates. Taking respondents through long algorithms and then having them “not qualify” for the segment you need has resulted in more respondents terminating phone calls and asking to be removed from databases.


I would love to hear your thoughts on how algorithms are taking over, what you would do to lessen respondent fatigue, and how to address the decreased incidence of respondents qualifying for a segment they “really should” qualify for.



* All segments and segment “stories” are fictitious.
