🎲 Data, Inference, and Decisions Unit 3 Review

3.4 Surveys, questionnaires, and measurement scales

Written by the Fiveable Content Team • Last updated September 2025

Surveys are powerful tools for gathering data, but their effectiveness hinges on careful design. From cross-sectional snapshots to longitudinal tracking, surveys come in various forms. Choosing the right type and crafting clear questions are crucial for collecting reliable information.

Measurement scales and question structure play key roles in survey quality. Nominal, ordinal, interval, and ratio scales each serve different purposes. Ensuring reliability and validity through rigorous testing helps researchers create surveys that accurately capture the intended data.

Survey Types and Questionnaires

Cross-sectional vs. Longitudinal Surveys

  • Cross-sectional surveys collect data at a single point in time, providing a snapshot of the population
  • Longitudinal surveys gather information from the same sample over an extended period, tracking changes over time
  • Panel surveys repeatedly sample the same group of respondents over time, allowing for analysis of individual-level changes
  • Cohort surveys focus on a specific group sharing common characteristics (e.g., birth year), tracking them over time to observe trends

Structured vs. Unstructured Questionnaires

  • Structured questionnaires use closed-ended questions with predetermined response options, facilitating quantitative analysis
  • Unstructured questionnaires employ open-ended questions, allowing for free-form responses and qualitative insights
  • Omnibus surveys combine multiple unrelated topics in a single questionnaire, maximizing data collection efficiency
  • Semi-structured questionnaires blend closed and open-ended questions, balancing quantitative and qualitative data

Survey Administration Methods

  • Computer-assisted personal interviewing (CAPI) involves interviewers using electronic devices to record responses, improving data accuracy
  • Web-based surveys are self-administered online questionnaires offering convenience and cost-effectiveness
  • Telephone surveys, conducted via voice calls, allow for real-time clarification and probing
  • Mail surveys, distributed and returned via postal service, give respondents time for thoughtful responses

Effective Survey Questions

Question Wording and Structure

  • Use clear, concise, and unambiguous language, ensuring consistent interpretation across respondents
  • Avoid leading questions that may bias respondents' answers (e.g., "Do you agree that recycling is important?")
  • Provide mutually exclusive and exhaustive response options for closed-ended questions, covering all possibilities
  • Balance positively and negatively worded items, reducing acquiescence bias and maintaining engagement
  • Use skip logic or branching to guide respondents through relevant questions based on previous answers
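Skip logic can be sketched in a few lines of Python; the question IDs, wording, and routing rules below are invented for illustration:

```python
# Hypothetical survey definition: each question maps the given answer to the
# next question ID (None ends the survey). IDs and wording are made up.
SURVEY = {
    "q1": {"text": "Do you own a car?",
           "next": lambda a: "q2" if a == "yes" else "q3"},
    "q2": {"text": "About how many miles do you drive per week?",
           "next": lambda a: "q3"},
    "q3": {"text": "How often do you use public transit?",
           "next": lambda a: None},
}

def route(answers, start="q1"):
    """Return the ordered list of question IDs a respondent actually sees."""
    asked, qid = [], start
    while qid is not None:
        asked.append(qid)
        qid = SURVEY[qid]["next"](answers[qid])
    return asked

# A "no" to q1 skips the driving question entirely
print(route({"q1": "no", "q3": "daily"}))   # ['q1', 'q3']
```

Branching this way keeps respondents from seeing irrelevant items, which shortens the survey and reduces break-offs.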

Question Order and Placement

  • Consider question order, placing sensitive or complex items strategically to minimize their impact on subsequent responses
  • Start with easy, non-threatening questions to build rapport and increase response rates
  • Group related questions together, improving flow and reducing cognitive burden
  • Place demographic questions at the end, preventing potential bias in earlier responses
  • Use transition statements between sections, helping respondents understand the survey structure

Pilot Testing and Refinement

  • Conduct cognitive interviews with respondents to identify issues with question comprehension or interpretation
  • Pilot test questions with a small sample to uncover potential problems before full-scale implementation
  • Analyze pilot data for response patterns, missing data, and unexpected results, informing questionnaire refinement
  • Seek feedback from subject matter experts to ensure questions adequately cover the research objectives
  • Iteratively revise and retest questions based on pilot results, improving overall survey quality

Measurement Scales for Data

Nominal and Ordinal Scales

  • Nominal scales categorize data into mutually exclusive groups without inherent order (e.g., gender, ethnicity)
  • Ordinal scales rank data in a specific order but provide no information about the magnitude of differences between ranks (e.g., Likert scales, education levels)
  • Nominal data analysis is limited to frequency counts, the mode, and chi-square tests
  • Ordinal data additionally supports the median and non-parametric tests (Mann-Whitney U, Kruskal-Wallis)
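The nominal and ordinal analyses above can be illustrated with Python's standard library. The category labels and Likert responses are invented, and the chi-square goodness-of-fit statistic is computed by hand against equal expected frequencies:

```python
from collections import Counter
from statistics import median, mode

# Nominal data (invented categories): frequencies, mode, and a hand-computed
# chi-square goodness-of-fit statistic with equal expected counts.
nominal = ["red", "blue", "red", "green", "red", "blue"]
counts = Counter(nominal)             # {'red': 3, 'blue': 2, 'green': 1}
modal = mode(nominal)                 # most frequent category
expected = len(nominal) / len(counts)
chi2 = sum((obs - expected) ** 2 / expected for obs in counts.values())

# Ordinal data (e.g., Likert responses coded 1-5): the median is meaningful,
# but means and differences between adjacent codes are not.
ordinal = [1, 2, 2, 3, 4, 5, 3]
med = median(ordinal)
```

For the hypothesis tests themselves (Mann-Whitney U, Kruskal-Wallis), a library such as SciPy would normally be used rather than hand-rolled code.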

Interval and Ratio Scales

  • Interval scales have equal distances between adjacent values but lack a true zero point (e.g., temperature in Celsius)
  • Ratio scales possess all properties of interval scales plus a true zero point, enabling proportional comparisons (e.g., height, weight, income)
  • Interval data supports the mean, standard deviation, and parametric tests (t-tests, ANOVA)
  • Ratio data additionally allows the geometric mean, coefficient of variation, and advanced statistical modeling
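The interval and ratio statistics above map directly onto Python's `statistics` module; the temperature and weight values are invented:

```python
from statistics import mean, stdev, geometric_mean

# Interval data: differences are meaningful, ratios are not (no true zero).
celsius = [18.0, 21.5, 19.0, 22.5]          # invented temperatures
avg, sd = mean(celsius), stdev(celsius)

# Ratio data: a true zero makes ratios, the geometric mean, and the
# coefficient of variation meaningful.
weights_kg = [60.0, 75.0, 90.0]             # invented weights
gm = geometric_mean(weights_kg)
cv = stdev(weights_kg) / mean(weights_kg)   # coefficient of variation
```

Note that `gm` and `cv` would be meaningless for the Celsius data: 20 °C is not "twice as hot" as 10 °C, because the zero point is arbitrary.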

Scale Selection Considerations

  • Research objectives guide scale choice, ensuring alignment with study goals and hypotheses
  • The nature of the variable being measured (categorical vs. continuous) influences scale selection
  • Intended statistical analyses constrain scale choice, as different tests require specific measurement levels
  • Respondent characteristics, such as literacy levels and cognitive abilities, affect which scales are practical
  • Practical constraints (time, resources) may limit the complexity of scales used in a survey

Reliability and Validity of Scales

Reliability Assessment Methods

  • Test-retest reliability measures consistency of scores over time by administering the same test twice
  • Parallel forms reliability compares scores from two equivalent versions of a scale
  • Internal consistency reliability (Cronbach's alpha) assesses how well items within a scale measure the same construct
  • Inter-rater reliability evaluates consistency of measurements across different raters or observers
  • Split-half reliability divides the items into two sets and compares the correlation between the halves
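Cronbach's alpha has a simple closed form, α = k/(k−1) · (1 − Σ var_item / var_total). A minimal pure-Python sketch, using made-up scores for a hypothetical 3-item scale:

```python
from statistics import variance

def cronbach_alpha(items):
    """items: one list of scores per item, respondents in the same order."""
    k = len(items)
    item_var_sum = sum(variance(it) for it in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - item_var_sum / variance(totals))

# Invented data: 3 items rated by 5 respondents
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
alpha = cronbach_alpha(items)   # roughly 0.89 for these made-up scores
```

Values of alpha near 1 indicate the items covary strongly, i.e., they appear to measure the same construct; a common rule of thumb treats 0.7 or above as acceptable.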

Validity Evaluation Techniques

  • Content validity ensures the scale adequately covers all aspects of the construct being measured
  • Construct validity assesses how well a scale measures the intended theoretical construct
  • Criterion-related validity examines the relationship between scale scores and external criteria
  • Convergent validity evaluates correlation between measures of related constructs
  • Discriminant validity assesses the extent to which a scale differs from measures of unrelated constructs
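Convergent and discriminant validity are often checked with simple correlations between scale scores. A sketch with a hand-rolled sample Pearson correlation; the scale names and scores are invented:

```python
from statistics import mean, stdev

def pearson(x, y):
    """Sample Pearson correlation between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Invented scores for 6 respondents; the scales are hypothetical
new_anxiety = [10, 12, 9, 15, 11, 14]    # new anxiety scale
old_anxiety = [11, 13, 9, 14, 12, 15]    # established anxiety measure
shoe_size   = [8, 11, 9, 7, 10, 8]       # unrelated construct

convergent = pearson(new_anxiety, old_anxiety)    # expect high
discriminant = pearson(new_anxiety, shoe_size)    # expect near zero
```

A high `convergent` correlation supports convergent validity, while a low `discriminant` correlation supports discriminant validity: the new scale tracks the related measure but not the unrelated one.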

Advanced Psychometric Analysis

  • Factor analysis identifies the underlying structure of multi-item scales, supporting evaluation of construct validity
  • Item response theory (IRT) analyzes individual item properties, providing insights into item difficulty and discrimination
  • The multitrait-multimethod (MTMM) matrix approach simultaneously evaluates multiple aspects of construct validity
  • Differential item functioning (DIF) analysis detects potential bias in item performance across subgroups
  • Structural equation modeling (SEM) assesses complex relationships between latent variables and observed indicators

Bias and Error in Surveys

Sampling and Nonresponse Bias

  • Sampling bias occurs when the selected sample does not accurately represent the target population, limiting generalizability
  • Nonresponse bias arises when non-participants differ systematically from respondents, distorting findings
  • Coverage bias results from incomplete sampling frames that exclude portions of the target population
  • Self-selection bias occurs when respondents choose to participate based on personal characteristics or interests
  • Mitigation strategies include probability sampling, oversampling underrepresented groups, and nonresponse follow-up
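One of the probability-sampling mitigations above, proportional stratified sampling, can be sketched in a few lines; the sampling frame and strata below are invented:

```python
import random

def stratified_sample(frame, strata_key, fraction, seed=0):
    """Draw a proportional probability sample within each stratum."""
    rng = random.Random(seed)
    by_stratum = {}
    for unit in frame:
        by_stratum.setdefault(unit[strata_key], []).append(unit)
    sample = []
    for units in by_stratum.values():
        n = max(1, round(fraction * len(units)))  # at least one per stratum
        sample.extend(rng.sample(units, n))       # simple random draw
    return sample

# Invented frame: 100 urban and 20 rural households
frame = ([{"id": i, "region": "urban"} for i in range(100)]
         + [{"id": 100 + i, "region": "rural"} for i in range(20)])
sample = stratified_sample(frame, "region", fraction=0.10)  # 10 urban + 2 rural
```

Sampling within strata guarantees the rural minority appears in the sample in proportion to its share of the frame; to oversample an underrepresented group instead, the per-stratum fraction would be raised and design weights applied at analysis time.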

Response Biases

  • Social desirability bias leads respondents to give socially acceptable answers rather than their true opinions
  • Acquiescence bias leads respondents to agree with statements regardless of content
  • Extreme response bias involves consistently selecting extreme options on rating scales
  • Central tendency bias results in respondents avoiding extreme responses and clustering around the midpoint
  • Techniques to address these biases include balanced scales, forced-choice items, and randomized response techniques
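The randomized response technique can be illustrated with a simulated forced-response design: each respondent privately flips a fair coin and answers the sensitive question truthfully on heads, or simply says "yes" on tails. No single answer reveals anything, yet P(yes) = 0.5·π + 0.5, so the prevalence π can be recovered as 2·P(yes) − 1. The trait prevalence and sample size below are invented:

```python
import random

def forced_response_estimate(true_rate, n, seed=0):
    """Simulate a forced-response survey and recover the trait prevalence."""
    rng = random.Random(seed)
    yes = 0
    for _ in range(n):
        truthful = rng.random() < 0.5            # the private coin flip
        has_trait = rng.random() < true_rate     # invented ground truth
        if (truthful and has_trait) or not truthful:
            yes += 1                             # observed "yes" answer
    return 2 * (yes / n) - 1                     # invert P(yes) = 0.5*pi + 0.5

estimate = forced_response_estimate(true_rate=0.30, n=100_000)
```

Because the interviewer never learns whether a given "yes" was forced, respondents have little incentive to misreport, which is exactly how the technique blunts social desirability bias; the price is a larger variance than direct questioning.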

Measurement and Administration Errors

  • Order effects (primacy and recency) affect how respondents interpret and answer questions based on their position in the survey
  • Interviewer bias occurs when interviewer characteristics or behavior influence respondents' answers
  • Mode effects are differences in responses that arise from the administration method (online, telephone, in-person)
  • Satisficing leads respondents to give minimally acceptable answers rather than optimal ones
  • Context effects result from the influence of preceding questions on subsequent responses
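A common guard against order and context effects is to randomize question order independently for each respondent, so position-driven effects average out across the sample. A minimal sketch; seeding by respondent ID is an assumed convention for reproducibility, not a requirement:

```python
import random

def randomized_order(questions, respondent_id):
    """Independent per-respondent question order, reproducible by ID."""
    rng = random.Random(respondent_id)   # assumed convention: seed by ID
    order = list(questions)              # copy so the master list is untouched
    rng.shuffle(order)
    return order

# Each respondent sees the same set of questions in a (generally) different order
order_a = randomized_order(["q1", "q2", "q3", "q4"], respondent_id=7)
order_b = randomized_order(["q1", "q2", "q3", "q4"], respondent_id=8)
```

Randomization does not remove order effects for any individual respondent; it converts a systematic bias into noise that cancels in aggregate. It is only appropriate where the questionnaire has no deliberate funnel or skip-logic ordering.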