Pass-along receivership best practices

Tips to improve the results of your online study.

Drawing the Sample

Drawing the sample is extremely important and presents the greatest opportunity for problems to occur. 

One of the market research firms that participated in developing these best practices pointed out that “almost all lists provided by clients are much worse than the clients think they are.” This firm found a large number of errors in client lists across different industries and purposes. These errors included:

  • Names of deceased employees or companies that could not be traced.
  • Omission of names that should have been included.
  • Incorrect or misspelled names, addresses, telephone numbers or email addresses.

Since AAM audits the nth selection of paid subscribers or individuals who have directly requested a copy, this problem is mitigated. A Stanford University paper reports that online studies based on probability samples were as accurate as Random Digit Dialing (RDD) phone samples. However, poststratification weighting improved the predictability of the data when compared to other high-quality studies.
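The nth selection described above is a form of systematic sampling. A minimal sketch in Python, assuming the subscriber file is an ordered list of records (the function name and inputs are illustrative, not AAM's specification):

```python
import random

def nth_selection(subscribers, sample_size):
    """Systematic (nth-name) sample: take every k-th record after a random start."""
    k = len(subscribers) // sample_size   # sampling interval
    start = random.randrange(k)           # random start within the first interval
    return subscribers[start::k][:sample_size]
```

The random starting point gives every subscriber an equal chance of inclusion, which is what makes an nth selection a probability sample.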

Sample Recruitment

Recruitment using print or online advertising is not recommended because it takes more time and is potentially less representative. It can be used under some circumstances and must be approved by AAM. 

Sweepstakes are not recommended because they generate poorer response rates than other incentives. They can be used as long as response rates are calculated as shown in the pass-along receivership guidelines.

Questionnaire Development

Interviews should typically be no longer than 15 minutes. If the questionnaire is considered engaging (based on pretesting), the interview could take as long as 25 minutes. Separate interviews should be designed for execution on a tablet or computer screen. Executing 15-minute surveys on smartphones will produce very low response rates. 

The questionnaire should be programmed and checked by research staff using a computer and a tablet. There are two possible approaches:

  • Implement the lowest common denominator. Use the version that works best across various platforms.
  • Use different designs for different modes. To execute this, the research company will have to either test the versions of the questionnaire on a small batch of respondents or conduct in-person (or via Skype) qualitative interviews to hear first-hand about the respondent’s engagement and/or satisfaction with the interview.

Executing the Study

By definition, an online study is a questionnaire self-completed by a respondent who was randomly selected from a subscriber file.

Completion should be defined as the act of responding to critical questions necessary for the evaluation of the data. As a general rule of thumb, 85 to 90 percent of the questions should be completed.

As long as the rest of the survey is completed, respondents may omit responses to household income and revenue questions without disqualifying the survey for completeness.
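The completion rule above can be expressed as a simple check. This is a sketch under assumed field names ("household_income" and "company_revenue" are hypothetical labels for the exempt items), not AAM's actual edit specification:

```python
# Items respondents may skip without failing the completeness test (assumed names)
OPTIONAL_ITEMS = {"household_income", "company_revenue"}

def is_complete(responses, all_questions, threshold=0.85):
    """True if at least `threshold` of the required questions were answered."""
    required = [q for q in all_questions if q not in OPTIONAL_ITEMS]
    answered = sum(1 for q in required if responses.get(q) not in (None, ""))
    return answered / len(required) >= threshold
```

The threshold defaults to the low end of the 85 to 90 percent rule of thumb; a stricter study could pass 0.90 instead.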

Key questions should be reviewed when the first 100 and 200 responses have been received.

Response Rates

Many different studies have reported declines in response rates for telephone, mail and personal interviewing from predesignated samples such as subscribers. Online studies have traditionally had response rates lower than telephone and mail.
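The computation behind a reported response rate is simple. The sketch below uses one common definition, completed surveys over deliverable invitations; the exact formula in the AAM guidelines may differ:

```python
def response_rate(completes, invited, undeliverable=0):
    """Completed surveys divided by deliverable invitations.
    One common definition; the AAM guidelines may specify a different base."""
    return completes / (invited - undeliverable)
```

For example, `response_rate(150, 5000, undeliverable=200)` returns 0.03125, i.e. about 3.1 percent.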

AAM does not require a minimum response rate for online surveys. Instead, the data should be anchored to a third-party measurement so that users of the data have confidence that even a low-response-rate internet survey provides comparable results. It is important to link the low-response data to a third-party study with a higher response rate or to known data from a government source (e.g., the U.S. Census or Department of Commerce surveys).

Tabulating the Study

Tabulations for online studies are easier, faster and more accurate than those for other survey modes. AAM recommends developing and testing programs for tabulating the study before fielding. Data processing should be double-checked when the study is closed.

The weights should be based on U.S. government statistics or previously conducted high-quality studies. The weights are essential because not all segments of a population will respond equally. For example, a business journal may get higher response rates from businesses of some sizes than others. To reflect the population correctly, the data needs to be weighted by the number of businesses in each employee-count or revenue band.
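The segment weighting described above is poststratification: each segment's weight is its population share divided by its sample share. A minimal sketch, with illustrative segment labels:

```python
def poststrat_weights(population_counts, sample_counts):
    """Weight per segment = population share / sample share, so the
    weighted sample matches known population proportions."""
    pop_total = sum(population_counts.values())
    samp_total = sum(sample_counts.values())
    return {seg: (population_counts[seg] / pop_total)
                 / (sample_counts[seg] / samp_total)
            for seg in population_counts}
```

For example, if small firms are 80 percent of the population but only 50 percent of respondents, they receive a weight of 1.6 and large firms a weight of 0.4, restoring the population balance.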

Unweighted data can be used if it reflects previous high-quality direct mail or RDD studies.

Validating the Study

Validation compares an online study to high-quality probability studies (MRI, Scarborough, U.S. Census, Department of Commerce, etc.), other survey data, or previous high-quality direct mail or telephone studies of the same population. For example, a publication’s reader profile of the amount spent on various products and services can be compared to MRI respondents’ answers in the same behavioral categories.

All research executives contacted agreed that questions from high-quality government surveys could be included in online studies for validation purposes. For example, a restaurant trade publication conducting an online study might capture the number of employees or the types of restaurants in the same way the U.S. Census or Commerce departments gather that information.

Face validation can also be performed by having AAM staff or a third-party research expert review the findings. Visit our pass-along study guidelines for an overview of this service.