Glossary of Terms
Response Rate 1 (RR1): the number of complete interviews divided by the number of interviews (complete and partial), the number of non-interviews (refusals, break-offs, non-contacts, and others), and the number of cases of unknown eligibility.
Response Rate 2 (RR2): the number of complete and partial interviews divided by the number of interviews (complete and partial), the number of non-interviews (refusals, break-offs, non-contacts, and others), and the number of cases of unknown eligibility.
Response Rate 3 (RR3): the number of complete interviews divided by the number of interviews (complete and partial), the number of non-interviews (refusals, break-offs, non-contacts, and others), and an estimate of the proportion of cases of unknown eligibility that are actually eligible.
Response Rate 4 (RR4): the number of complete and partial interviews divided by the number of interviews (complete and partial), the number of non-interviews (refusals, break-offs, non-contacts, and others), and an estimate of the proportion of cases of unknown eligibility that are actually eligible.
Response Rate 5 (RR5): a special case of RR3 in which either the proportion of eligible cases among the cases of unknown eligibility is assumed to be zero or there are no cases of unknown eligibility.
Response Rate 6 (RR6): a special case of RR4 in which either the proportion of eligible cases among the cases of unknown eligibility is assumed to be zero or there are no cases of unknown eligibility.
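The six AAPOR response rates can be computed directly from case disposition counts. A minimal sketch, using invented disposition counts and an assumed eligibility estimate (none of these numbers come from a real study):

```python
# Hypothetical disposition counts (illustrative only):
I, P = 700, 100          # complete and partial interviews
R, NC, O = 150, 80, 20   # refusals/break-offs, non-contacts, other non-interviews
U = 50                   # cases of unknown eligibility
e = 0.8                  # assumed share of unknown cases that are actually eligible

def rr1(): return I / (I + P + R + NC + O + U)
def rr2(): return (I + P) / (I + P + R + NC + O + U)
def rr3(): return I / (I + P + R + NC + O + e * U)
def rr4(): return (I + P) / (I + P + R + NC + O + e * U)
def rr5(): return I / (I + P + R + NC + O)        # RR3 with e assumed to be 0
def rr6(): return (I + P) / (I + P + R + NC + O)  # RR4 with e assumed to be 0
```

Because partials are counted as respondents in RR2/RR4/RR6 and the denominator shrinks as `e` falls, RR1 is the most conservative rate and RR6 the most generous.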
|A short description of the types of substantive questions in the questionnaire. Demographics and survey-organization defined variables, like date of interview, are not described.|
|The Center’s online catalog of studies.|
|A presentation of survey results with many crosstab tables that show how different types of respondents answered each survey question.|
|The percent of respondents who start the survey but do not finish it.|
|This occurs when an interviewer attempts to reach a potential survey respondent by phone after failing to reach them previously. Most telephone surveys will set a maximum number of callbacks and continue attempting to reach the respondent until that number has been reached.|
|A mode of survey data collection in which an in-person interviewer uses a computer to administer the survey and record responses.|
|A mode of survey data collection in which a telephone interviewer uses a computer to administer the survey and record responses.|
|A technique of adjusting weights on respondents in a survey in which the weight applied to each subgroup in the dataset (e.g. men over the age of 55) is calculated based on the relevant distribution of the target population.|
|Redirecting an online respondent to another survey at the end of a completed survey.|
|A reference intended to uniquely identify the study/dataset used in research. Multiple citation formats are available on the Cite Study tab for each study in the archive, and citations are also available for individual questions.|
|Method by which data were collected, such as telephone, in-person, online, etc.|
A continuous variable is a variable that can take on an infinite number of values. For survey purposes, continuous variables are usually measured on an interval or ratio scale (e.g., time, speed, or weight, since these can be divided into infinitely many smaller parts).
A table which shows the influence of an independent variable (located in the column) on a dependent variable (located in the row) (e.g., a table showing how income is related to the likelihood of voting for a certain candidate).
|A mode of survey data collection in which the respondent completes the survey using computer technology with little or no assistance from an administrator. CSAQ applications include online surveys.|
A set of codes or categories used by survey researchers to document the ultimate outcome of contact attempts on individual cases in a survey sample.
|The DOI, or Digital Object Identifier, is a permanent identifier for the study.|
|A survey conducted with voters immediately after they exit a polling station, asking them how they voted. Exit polls are intended to allow for better understanding of voting behavior of different groups in the electorate and drivers of vote choice.|
|When applicable, the name of the organization that commissioned the survey. If the same organization funded, designed, and fielded a poll, no sponsor is listed.|
|The version of a questionnaire used in a survey. While online or telephone surveys often utilize split samples to test different wordings or to include more questions than would be manageable in a single questionnaire, printed surveys require more than one form of the questionnaire to be printed and distributed if the questions are going to vary. Multiple forms of questionnaires were often used in early polling. Information about the forms in Roper polls can be found in the survey documentation.|
|Replacing missing values for a respondent in a dataset with corresponding values from a similar respondent. For instance, if a respondent failed to answer a specific question, their non-response would be replaced with the response given by another respondent who was similar to them with regard to other characteristics.|
|A means of sampling or selecting people to participate in a survey that involves recruiting respondents at a particular public place or event. An intercept survey is usually administered by an in-person interviewer.|
|The date range during which data was collected from respondents.|
|The mode of interviewing: mail, telephone, in-person, online, etc.|
|The most comprehensive and up-to-date source for national public opinion data in the United States. The iPOLL Databank is a study catalog holding datasets and a full-text question-level retrieval system, designed so that users can locate, examine and, ultimately, capture questions asked on national surveys on a variety of topics.|
|A technology that allows a computer to interact with humans through the use of voice and keypad input. IVR technology can replace a human interviewer in telephone surveys. IVR surveys are sometimes called "robocalls."|
|A means of estimating whether a survey respondent is likely to vote in an election. Survey firms typically use questions about a respondent's past voting behavior and their intention to vote in future elections in order to identify respondents that are most likely to vote in an election, to increase the accuracy of electoral predictions based on surveys. Likely voter screens range widely in complexity, from single questions to multi-question indexes. Survey organizations often keep the elements of their likely voter model proprietary.|
Margin of sampling error reported by the survey organization. The MOE describes the maximum expected difference between a true population parameter and the survey's sample estimate of that parameter, expressed as a percentage-point range.
|A statistic that captures the amount of random sampling error in a survey's results. Random sampling error is the difference between the values of the sample and the values of the population from which the sample is drawn. It is usually unknown but can be estimated.|
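For a simple random sample, the margin of sampling error for a proportion can be estimated with the standard formula. A minimal sketch, assuming a 95% confidence level (z = 1.96) and ignoring design effects from weighting or clustering:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of sampling error, in percentage points, for a proportion p
    estimated from a simple random sample of size n (z = 1.96 for 95%)."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) for a typical national poll of 1,000 respondents:
moe = margin_of_error(0.5, 1000)
```

The worst-case margin for n = 1000 comes out to roughly plus or minus 3 percentage points, which is why that figure is so commonly reported for national polls.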
|The study of methods and research practices used in a field of study. In survey research, methodology refers to the study of different means of collecting and analyzing survey data.|
|The method or mode of interviewing: mail, telephone, in-person, online, etc.|
A technique for selecting members of a given household for a survey. Interviewers ask to speak to the member of the household with the most recent birthday. This selection method is considered quasi-random.
|The number of observed cases in a sample. In polling, N refers to the number of respondents.|
|A technique for selecting members of a given household for a survey. Interviewers ask to speak to the member of the household whose birthday is next.|
|A type of sampling where samples are drawn utilizing non-random methods. In nonprobability sampling, all members of the universe do not have a known, non-zero probability of being sampled. Inferences about the target population should not be made based on such surveys unless methods, such as propensity-score weighting, have been put in place to adjust the results to better represent the total population, including those with a zero probability of being sampled. Non-probability sampling was standard in the United States before 1950, when U.S. polling organizations began to move to probability-based methods. In recent years, online polling utilizing non-probability sampling methods has become common. See also: quota sampling, river sampling.|
|An instance in which a survey respondent selected to participate in a survey cannot be reached. For instance, if a respondent is selected for an in-person survey based on their address, but is not at home when the interviewer visits the address, this would be considered a noncontact.|
|An instance in which a survey respondent fails to complete a survey or answer a survey question. Systematic or non-random nonresponses can result in nonresponse bias.|
|A type of bias resulting from respondents failing to complete a survey or answer a specific survey question for systematic reasons. For instance, if younger respondents are less likely to complete a survey on political ideology than older ones, survey results may be biased in a particular direction.|
|One survey that collects responses on a wide range of questions for multiple individuals or organizations. Pooling questions to field an omnibus survey may lower costs.|
|A sample of respondents who have agreed to complete surveys online. Survey firms use online panels to quickly obtain responses from large groups of respondents. Online panels can utilize probability-based or non-probability-based sampling procedures. Probability-based online panels recruit panelists using traditional probability-based methods, like RDD telephone surveys, then provide internet access to those who require it. Non-probability-based online panels recruit their respondents through a variety of methods, including online ads.|
|Searches organizations associated with surveys, including both external sponsors and survey field organizations.|
|A means of selecting respondents that generates a sample in which some groups are over-represented compared to their share of the target population. This method allows for analysis of groups that would otherwise make up such a small percent of the sample that analysis would be difficult. For instance, oversampling Asian Americans in a survey of US citizens may allow for a more accurate analysis of this group. Oversamples are generally weighted down to their share of the population when results are aggregated to report overall results.|
|A technique of adjusting weights on respondents in a survey so the adjusted weights add up to the known population sizes within each group, making the sample more closely resemble the target population. Poststratification is used when the grouping of like units is not possible during sampling. It is used in order to reduce bias and improve the precision of estimates. For instance, if respondents are selected for a survey and their gender is not known in advance, the gender distribution in the sample could be different from the gender distribution in the target population. Poststratification would be used in this case to adjust the weights of each gender to match the target population.|
|The lead researcher on a grant-funded study, often referred to as the PI.|
|A type of sampling which ensures that each member of the sampling frame has an equal, known chance of being selected. This kind of sampling allows researchers to make statistical inferences about the population at large. (see Non-probability Sampling)|
|A technique used to adjust weights on different observations in an analysis based on their likelihood of receiving the treatment the analyst is interested in, given all other characteristics. Propensity score weighting is used in order to reduce selection bias and is commonly used by online polls to weight based on the likelihood that a respondent has online access.|
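The core adjustment is inverse-propensity weighting: respondents who were very likely to end up in the sample are weighted down, and those who were unlikely are weighted up. A minimal sketch with invented propensity scores (in practice these would come from a model fit against a reference sample):

```python
# Hypothetical respondents with assumed propensities of having online
# access (the values are illustrative, not estimated from real data).
respondents = [
    {"id": 1, "p_online": 0.90},  # very likely online -> weighted down
    {"id": 2, "p_online": 0.50},
    {"id": 3, "p_online": 0.25},  # unlikely online -> weighted up
]

# Inverse-propensity weight: 1 / estimated probability of inclusion.
for r in respondents:
    r["weight"] = 1.0 / r["p_online"]
```

The respondent with a 0.25 propensity stands in for four people like them in the target population, offsetting the under-coverage of hard-to-reach groups.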
|A survey that is primarily intended to manipulate or influence respondents' opinion rather than to generate data for analysis. Push polls are especially used in political campaigning in order to influence potential voters under the guise of conducting a legitimate survey. Roper Center's acquisition policy excludes push polls.|
The actual wording of the question. If additional information, such as text from a preceding question, is required for the question to be independently understandable, the added text will appear in parentheses. Similarly, the stem of multipart questions will be repeated and displayed in parentheses. For example: "(For each one, please tell me if you think it is a very serious problem, somewhat serious, not too serious, or not a problem at all.)...The large amount of American debt that is held by China." In iPOLL the question text is preceded by a number such as R18, Q08, or R02. This unique designation is assigned by the Roper Center and does not necessarily reflect the order in which the question appeared in the original study. Whenever possible the original survey instrument is used as the source document by Center staff. In many cases, though, the order of questions on the survey may have been altered in some way for publication in a final report or news release. Researchers requiring information on the original question order should contact the Roper Center.
|A means of sampling or selecting people to participate in surveys based on specific characteristics (such as age or gender). The objective is often to generate a sample that closely reflects the target population with regard to these characteristics, in order to reduce bias. For instance, if the target population consists of 50% men and 50% women, one might use quota sampling to recruit an equal number of men and women. In contrast to stratified sampling, quota sampling is a type of non-probability sampling.|
|A type of poststratification procedure that adjusts the sample weights in a survey in order to make the sample match the target population more closely with regard to a number of different groups or post-strata (such as gender, race, age, etc). Raking adjusts the sample weights through repeated calculation of weights so they add up to the known population totals for the post-stratified classifications when only the marginal population totals are known (e.g. if the gender and age distribution of the population is known, but not the gender distribution for each age group).|
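The repeated calculation described above is iterative proportional fitting: cycle through each dimension, rescaling weights so that dimension's weighted margin matches its target, and repeat until the weights stabilize. A minimal sketch with an invented five-person sample and assumed marginal targets:

```python
# Toy sample and assumed marginal targets (illustrative values only).
respondents = [
    {"gender": "M", "age": "young"}, {"gender": "M", "age": "old"},
    {"gender": "F", "age": "young"}, {"gender": "F", "age": "young"},
    {"gender": "F", "age": "old"},
]
targets = {"gender": {"M": 0.49, "F": 0.51},
           "age":    {"young": 0.40, "old": 0.60}}
weights = [1.0] * len(respondents)

for _ in range(50):  # iterate until the weights converge
    for dim, target in targets.items():
        # Current weighted totals for each category of this dimension.
        totals = {}
        for r, w in zip(respondents, weights):
            totals[r[dim]] = totals.get(r[dim], 0.0) + w
        total = sum(totals.values())
        # Rescale so this dimension's weighted margin matches its target.
        for i, r in enumerate(respondents):
            weights[i] *= (target[r[dim]] * total) / totals[r[dim]]
```

After convergence, both the weighted gender margin and the weighted age margin match their targets, even though no joint gender-by-age targets were ever supplied.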
A technique used to obtain a representative sample by using a device that randomly generates telephone numbers in order to contact eligible participants.
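A common list-assisted variant keeps a known-valid area code and exchange and randomizes only the last digits, so the generated numbers fall within working telephone banks. A minimal sketch; the default area code and exchange are placeholders, not drawn from a real sampling frame:

```python
import random

def rdd_number(area_code="607", exchange="255"):
    """Generate one random-digit-dialing phone number by randomizing
    the last four digits within a fixed (placeholder) area code and
    exchange."""
    return f"({area_code}) {exchange}-{random.randint(0, 9999):04d}"

number = rdd_number()
```

Because the suffix is drawn uniformly, unlisted numbers have the same chance of selection as listed ones, which is the main advantage of RDD over directory-based sampling.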
|Survey respondents who either have to be contacted multiple times in order to be reached, or are contacted again after the initial survey has been conducted (e.g. to validate completed interviews or to measure behavioral changes over time).|
|An instance in which a survey respondent selected to participate in a survey declines to do so or declines to answer one or more questions in the survey. This is a type of nonresponse that can bias survey results.|
|A means of sampling or selecting people to participate in election polls using a database of registered voters for a given geographical area. With RBS, the sampling frame from which the sample is drawn consists of the registered voters for the area.|
|Interviews conducted with respondents who participated in a previous survey. Reinterviews are used in order to track changes in respondents' opinions over time or before and after an event, such as a presidential debate.|
|The process of choosing the final respondent for a survey; for example, choosing the respondents for a particular survey from an online panel or choosing members of a household to participate in a survey once the household has already been selected (e.g. through random digit dialing).|
|Proportion of contacted respondents who completed the survey. The American Association for Public Opinion Research (AAPOR) provides definitions for six measures of response rates (AAPOR response rate definitions).|
The response categories and percentages of the sample answering each way. Generally, the percentages shown are weighted if the data were weighted to better reflect the population. Any special question-related information clarifying such things as multiple responses, partial responses, and the like will appear after the responses. These notes relate only to specific questions as opposed to the entire study and are referred to as question-level notes.
|A means of sampling or selecting people to participate in surveys in which potential survey respondents are recruited through online ads or pop-ups on online platforms. It is a type of non-probability sampling in which the respondent's identity is unverifiable and the respondent cannot be recontacted. A respondent would typically click on an ad or offer, answer a number of pre-screening questions, and then be routed to a survey.|
|Online survey routers screen respondents and direct them to open surveys for which they are qualified.|
A description of the population from which the survey respondents were drawn.
This is the total unweighted count of all completed interviews.
|The method by which participants in a poll were selected.|
The document from which information was gathered. In iPOLL, the source document usually refers to the topline document released by a polling organization which was used as the source for questions and topline results.
|Respondents answering at a rate too fast to allow for adequate comprehension of questions, particularly for paper or online questionnaires.|
|A type of survey research design in which a sample is randomly split into different groups and assigned different treatments (e.g. questions or prompts) in order to determine the effect of the treatment on survey responses. A sample may also be split into groups and asked different questions in order to maximize the number of questions that can be asked in the survey.|
|A respondent providing identical answers across a range of questions (on a printed survey, literally marking off responses in a "straight line" through the instrument).|
A method of sampling where groups that might not otherwise be equally represented are first divided proportionately into categories (“strata”); then, a sample is randomly selected from each of these categories (e.g., in a study of hospitals, you might separate them by size into small, medium, and large strata, then draw a sample from each category so that all sizes are represented).
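The hospital example can be sketched directly: group the frame by stratum, then draw a simple random sample within each. The hospital counts below are invented for illustration:

```python
import random

# Invented sampling frame: hospitals tagged by size stratum.
hospitals = (
    [("small", i) for i in range(50)]
    + [("medium", i) for i in range(30)]
    + [("large", i) for i in range(20)]
)

def stratified_sample(units, per_stratum=5):
    """Draw a simple random sample of fixed size from each stratum."""
    strata = {}
    for size, uid in units:
        strata.setdefault(size, []).append((size, uid))
    sample = []
    for members in strata.values():
        sample.extend(random.sample(members, per_stratum))
    return sample

sample = stratified_sample(hospitals)
```

Taking an equal number from each stratum guarantees that large hospitals, only 20% of the frame here, make up a full third of the sample; their results would then be weighted back down for population-level estimates.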
This note field on questions in the iPOLL database pertains to the entire release, report, or study from which the question was taken.
A subset of the population under study. In cases where responses are not based on the entire sample, question results in iPOLL will show the Subpopulation field with a description of the portion of the sample whose responses are being reported (e.g. women, or those who favor a given policy).
The organization that conducted the fieldwork for a survey.
A method of sampling where every “nth” unit is selected from the sampling frame (e.g., you have a directory of 100,000 names and want a sample of 1,000 names. Divide 100,000 by 1,000 to get a sampling interval of 100. Randomly select a starting point between 1 and 100, say 42, then take every 100th name thereafter (42, 142, 242, 342, 442, ...) to complete your sample).
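The directory example works out to a few lines of code. A minimal sketch using a list of integers as a stand-in for the 100,000-name directory:

```python
import random

def systematic_sample(frame, n):
    """Select every k-th unit (k = len(frame) // n) after a random start."""
    k = len(frame) // n                   # sampling interval, 100 here
    start = random.randrange(k)           # random start in the first interval
    return [frame[start + i * k] for i in range(n)]

directory = list(range(100_000))          # stand-in for 100,000 names
sample = systematic_sample(directory, 1_000)
```

Only the starting point is random; every subsequent selection is determined by the interval, which is why systematic sampling can behave badly if the frame has a periodic ordering that lines up with the interval.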
|A technique of adjusting weights on respondents in a survey based on the estimated probability that the respondent would be found at home at the time the interview was completed. This type of weighting is used in order to reduce bias that would result from under-representing respondents who are difficult to reach at home. This method was used by Gallup to weight face-to-face, probability-based surveys from the 1960s to 1980s and is still used for weighting in-person interviews in some areas.|
|The subject classification(s) that best describe the question. The scheme for this categorization was developed by the Roper Center and contains over 100 subject categories.|
The topline is the result of how the aggregated sample answered a specific question. (See also, What is a topline?)
|A series of surveys repeated over time in order to measure changes in survey responses in a target population. Tracking polls are often used over the course of electoral campaigns to measure changes in support for political candidates.|
Using more than one method to find meaning in a problem (e.g., to interpret the president's approval rating, you could look at poll results, results of focus groups, and news stories about current events).
|All entities that qualify for inclusion in the study or survey, from which the sample of respondents is drawn. The target population could consist of all adult American citizens, for instance, or all Fortune 500 companies.|
A common theoretical population for US “national” polls. Typically it means the age 18+, non-institutionalized (e.g. no prisons, nursing homes, or military bases) population in the 48 contiguous states, since Alaska and Hawaii are often omitted for practical reasons.
Also known as sample balancing, weighting is a technique used to reflect differences in the number of population units that each case in a dataset represents. Typically, for surveys designed to be representative of the population of the U.S., units are adjusted to reflect the U.S. Census on several demographic measures, including age, education, and sex. While polling organizations may have different methods for their weighting procedures, weighting generally involves the multiplication of survey observations by one or more factors in order to increase or decrease the emphasis that will be given to the observations when analyzing the data. See also: propensity score weighting, raking, times-at-home weighting.
|Data source for benchmarks used to weight the sample.|
|The process of choosing members of a household to participate in a survey once the household has already been selected (e.g. through random digit dialing). See also Respondent selection. Examples of methods of within household selection include most recent birthday and youngest man/oldest woman designs.|
|A systematic, non-random technique for selecting members of a given household for a survey. The member of the household who should be selected, e.g. the youngest man, the youngest woman, the oldest man or the oldest woman, can be randomly assigned across the sample, or interviewers can always start with the youngest man, then move to the oldest woman.|