The Elon University Poll is conducted using a stratified random sample of households with telephones and wireless (cell) telephone numbers in the population of interest; in most cases this means citizens of North Carolina. At times we also survey citizens in other South Atlantic states (e.g., Florida, Georgia, South Carolina, and Virginia). Samples of telephone numbers for our surveys are purchased from Survey Sampling International, LLC.
Selection of Households
To equalize the probability of telephone selection, sample telephone numbers are stratified by subpopulation (e.g., a zip code, a county, or a state), which yields a sample drawn from telephone exchanges in proportion to each exchange's share of telephone households in the population of interest. Estimates of telephone households in the population of interest are generally obtained from several databases. Samples of household telephone numbers are distributed across all eligible blocks of numbers in proportion to the density of listed households within the specified subpopulation stratum. Once the projected (or preferred) sample size is determined, a sampling interval is calculated by summing the number of listed residential numbers in each eligible block within the population of interest and dividing that sum by the number of sampling points assigned to the population. From a random start between zero and the sampling interval, blocks are then systematically selected in proportion to the density of listed households in working blocks. A block (also known as a bank) is a set of contiguous numbers identified by the first two digits of the last four digits of a telephone number; a working block contains three or more working telephone numbers. Exchanges are assigned to a population on the basis of all eligible blocks in proportion to the density of working telephone households. Once each population's proportion of telephone households is determined, a sampling interval based on that proportion is calculated, and specific exchanges and numbers are randomly selected.
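The interval-based systematic selection described above can be sketched in code. This is a minimal, hypothetical illustration (block identifiers and listed-household counts are invented, and the vendor's actual procedure is more involved):

```python
import random

def systematic_block_selection(blocks, n_points, seed=None):
    """Systematically select blocks in proportion to listed-household density.

    blocks   -- list of (block_id, listed_household_count) pairs
    n_points -- number of sampling points assigned to the population
    """
    rng = random.Random(seed)
    total_listed = sum(count for _, count in blocks)
    interval = total_listed / n_points            # the sampling interval
    start = rng.uniform(0, interval)              # random start in [0, interval)
    targets = [start + i * interval for i in range(n_points)]

    selected, cumulative, t = [], 0, 0
    for block_id, count in blocks:
        cumulative += count                       # walk the cumulative density
        while t < n_points and targets[t] < cumulative:
            selected.append(block_id)             # this target lands in the block
            t += 1
    return selected
```

Because each target lands in a block in proportion to that block's share of listed households, denser working blocks contribute proportionally more numbers to the final sample.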
The wireless component of the study sample starts with determining which area code-exchange combinations in North Carolina are included in the wireless or shared Telcordia types. As with household telephone numbers, selecting wireless numbers is a multi-step process in which blocks of numbers are determined for each area code-exchange combination in those Telcordia types. From a random start within the first sampling interval, a systematic nth selection of each block of numbers is performed, and a two-digit random number between 00 and 99 is appended to each selected nth block stem. The intent is to provide a stratification that yields a sample that is representative both geographically and across large and small carriers. From these numbers, a random sample is generated.
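The suffix-appending step can be sketched as follows. The block stem shown in the usage example is invented for illustration (a stem being the first eight digits: area code, exchange, and the first two digits of the last four):

```python
import random

def append_random_suffixes(stems, per_stem, seed=None):
    """Append a random two-digit number (00-99) to each selected block stem,
    producing full ten-digit candidate wireless numbers."""
    rng = random.Random(seed)
    numbers = []
    for stem in stems:
        for _ in range(per_stem):
            suffix = f"{rng.randrange(100):02d}"  # two digits, zero-padded
            numbers.append(stem + suffix)
    return numbers
```

For example, `append_random_suffixes(["33655512"], 3)` yields three ten-digit numbers sharing the hypothetical stem 336-555-12.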
Because exchanges and numbers are randomly selected, unlisted as well as listed numbers are included in the sample. Thus, the sample of telephone numbers generated for the population of interest constitutes a random sample of telephone households and wireless numbers of the population.
Procedures Used for Conducting the Poll
The Elon University Poll typically conducts surveys on a Monday through Thursday schedule, but periodically we will conduct surveys on a Sunday through Wednesday/Thursday schedule. Calls are made from 5:00 p.m. to 9:00 p.m. during the week and from 1:00 p.m. to 6:00 p.m. on weekend days. The specific times and dates are delineated for each survey conducted.
The Elon University Poll uses CATI (Computer Assisted Telephone Interviewing) software to administer surveys. Several attempts (at least four) are made to reach each working telephone number in the sample. Only individuals 18 years or older are interviewed; individuals reached at business or work numbers are not interviewed. For each number reached, one adult is generally selected based on whether s/he is the oldest or youngest adult at home at that time. Interviews, which are conducted by paid, live interviewers, are completed with adults from the target population as specified. A survey is considered complete if a respondent progresses through the survey and answers 80 percent of the survey questions. Most surveys yield approximately 500 to 600 interviews with adults from the target population (e.g., North Carolinians for a survey of North Carolina residents). For a sample size of 500, there is a 95 percent probability that our survey results are within plus or minus 4.5 percentage points (the margin of sampling error) of the actual population distribution for any given question; for a sample size of 600, the margin of sampling error is 4.1 percentage points. For subsamples (subgroups selected from the overall sample), the margin of error is higher and depends on the size of the subsample. When we use a subsample, we identify the results as coming from a subsample and provide the total number of respondents and the margin of error for that subsample. Because our surveys are based on probability sampling, a variety of factors prevent the results from being perfect, complete depictions of the population; the foremost is the margin of sampling error noted above. As with all probability samples, there are theoretical and practical difficulties in estimating population characteristics (or parameters).
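The margin-of-sampling-error figures quoted above follow the standard formula for a 95 percent confidence level under the conservative assumption of a 50/50 population split; the published figures of 4.5 and 4.1 round the textbook values (about 4.4 and 4.0) up slightly:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of sampling error, in percentage points, for a simple
    random sample of size n, using the conservative proportion p = 0.5."""
    return 100 * z * math.sqrt(p * (1 - p) / n)
```

Here `margin_of_error(500)` is about 4.4 points and `margin_of_error(600)` about 4.0; for a subsample of, say, 200 respondents, the margin rises to roughly 6.9 points, which is why subsample results are always reported with their own margin of error.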
Efforts are made to reduce sampling error, as well as other types of error associated with survey research. Such error effects are present in all surveys derived from probability samples; examples of these threats, while not all-inclusive, include non-response, question order effects, and question wording effects.
Reporting Information and Weighting
The Elon University Poll supports transparency in survey research and is a supporter of the American Association for Public Opinion Research (AAPOR) Transparency Initiative, a program promoting openness about survey research methods and operations among survey research professionals and the industry. All information about the Elon University Poll that is released to the public conforms to the reporting conventions recommended by AAPOR and the National Council on Public Polls (NCPP). These conventions are available from NCPP and AAPOR (http://tinyurl.com/24wtt6c). The raw datasets from the Elon University Poll are owned and maintained by Elon University.
Results from the Elon University Poll are typically weighted by all or at least one of the following demographic characteristics: race, gender, and age. Decisions to weight survey results are based on the representation of demographic characteristics in the sample relative to the distribution of those characteristics in the population (i.e., we use post-stratification adjustments). Weighting survey results is a common statistical procedure that adjusts underrepresented elements in the sample to conform to population parameters for those elements. Information about the weighting of survey samples for each poll is provided on the Elon University Poll website (under 'demographic variables').
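A single-variable post-stratification adjustment of this kind can be sketched as follows. The age categories and population shares in the usage example are hypothetical, and weighting on several variables at once (e.g., raking) is more involved:

```python
def poststratification_weights(sample_counts, population_shares):
    """Compute cell weights: population share divided by sample share.

    sample_counts     -- {cell: number of respondents in that cell}
    population_shares -- {cell: proportion of the population in that cell}
    """
    n = sum(sample_counts.values())
    return {cell: population_shares[cell] / (count / n)
            for cell, count in sample_counts.items()}
```

An underrepresented cell receives a weight above 1 and an overrepresented cell a weight below 1, so the weighted sample conforms to the population distribution while the weighted total still equals the number of respondents.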
Question Construction and Question Order
In releasing survey results, the Elon University Poll provides the questions as worded and the order in which these questions are administered (to respondents). In an effort to provide neutral, non-biased questions, we attempt to observe conventional question wording and question order protocols in all of our polls. Though not exhaustive by any means, examples of such protocols/practices include: avoiding the use of jargon and ambiguous terms; avoiding any priming or leading questions by removing any ‘loaded’ language; wording questions succinctly and specifically; asking only one question (avoiding ‘double-barreled’ questions that ask about multiple topics in one question); ensuring reasonable response options that conform to topic/question. To reduce the threat of question construction and instrument issues that can possibly affect results, several techniques are employed in the administration of our survey instrument.
Response Options Included in Question and Rotated – In reviewing questions asked during the survey, note the bracketed information that appears in the question text. Information contained within brackets ( [ ] ) denotes the response options as provided in the question. These response options are rotated randomly, ensuring that they are not presented to respondents in a fixed order. Random rotation protects against order effects (i.e., primacy or recency effects): in a fixed order of candidate names, for example, the person mentioned first or last may be selected more often than others simply because of the position of the name. Alternating the order of options therefore prevents response option order from influencing a person's answer, which would otherwise bias results for that question. Response options for questions about demographic characteristics (background characteristics, e.g., age, education, income), however, are generally offered in a set order.
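The random rotation of bracketed response options can be sketched as below. This shows a full random shuffle per respondent; if the poll's "rotation" instead means cyclically rotating a fixed order, the shuffle call would be replaced accordingly:

```python
import random

def rotate_options(options, rng=None):
    """Return the response options in a fresh random order for one
    respondent, guarding against primacy and recency effects."""
    rng = rng or random.Random()
    rotated = list(options)   # copy so the master list keeps its order
    rng.shuffle(rotated)
    return rotated
```

Each call produces an independently ordered list, so across many respondents no option systematically occupies the first or last position.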
Probing for Intensity of Opinion – Some questions in our surveys use a probe technique to determine a respondent's intensity of opinion. Probe techniques used in our questionnaires consist mainly of asking a respondent whether his or her response is more intense than initially provided with the simple dichotomized option. For example, in asking a question that elicits a satisfaction/dissatisfaction response, the respondent, upon indicating whether s/he is satisfied or dissatisfied, is asked a follow-up question that probes for intensity: "would you say you are very 'satisfied'/'dissatisfied'?" This technique aids respondents by easing the interpretation, recall, and judgment required to answer a question. A probe technique is employed in some questions as opposed to specifying the full range of choices in the question itself. Though specifying the full range of options is a commonly accepted and, for certain questions, necessary practice in survey research, we sometimes prefer that the respondent determine whether his or her perspective is stronger or more intense, for which the probe technique is used.
Open-Ended Questions – The open-ended question is another technique used for acquiring information from respondents. The open-ended question is a question for which no response options are provided, i.e., it is entirely up to the respondent to provide the response information.
Anticipated/Volunteered Response Options – Respondents often volunteer response options that were not offered to them. As we typically present only the response options included in the question, some respondents choose to ignore these explicit options and offer another response. Where such options can be anticipated, they are noted by a lower-case 'v' in parentheses, (v). Not all volunteered options can be anticipated or accommodated, but the more common ones are noted.
The “Don’t Know” & “Refused” Response Options – Question construction also involves the response option choices made available to the respondent. We typically do not tell respondents that 'don't know' is an option for most questions; we do, however, record this response should it be offered. If a respondent indicates that s/he has no opinion because s/he does not know how to respond, the interviewer codes the response as 'don't know' and proceeds to the next question. For questions on topics that are sensitive or less salient, we often offer an option that permits respondents to comfortably acknowledge a lack of interest or attention, or little or no knowledge or awareness of a topic. As explained previously, this option is provided as part of the question presented to the respondent.
The Elon University Poll
The Elon University Poll is directed by Dr. Jason Husser. The poll is conducted under the auspices of the Center for Public Opinion Polling, which is a constituent part of the Institute of Politics and Public Affairs. These academic units are part of Elon College, the College of Arts and Sciences at Elon University, which is led by Dean Gabie Smith. The Elon University administration, led by Dr. Connie Book, president of the university, fully supports the Elon University Poll as part of its service commitment to state, regional, and national constituents.
Being fully funded by Elon University permits the Elon University Poll to operate as a neutral, non-biased resource of information for citizens and political and business leaders; as a result of this generous support, the Elon University Poll does not engage in any contract work. Elon University students are employed to administer the survey as part of the University's commitment to civic engagement and experiential learning, where "students learn through doing."