An Introduction to the Understanding America Study Internet Panel

by Laith Alattar, Matt Messel, and David Rogofsky
Social Security Bulletin, Vol. 78, No. 2, 2018

This article provides an overview of the Understanding America Study (UAS), a nationally representative Internet panel of approximately 6,000 adult respondents that is administered by the University of Southern California. The UAS, which began in 2014, represents one of the richest sources of panel data available in the United States. It includes over 50 survey modules on topics such as retirement planning, economic well-being, and psychological constructs. This article reviews the UAS methodology; describes how external researchers may commission UAS surveys and incorporate their own survey questions and randomized controlled trials; highlights selected publicly available data from UAS surveys on cognition, personality, financial literacy and behaviors, political views, and other topics; and discusses opportunities for external parties to work with UAS administrators in developing new surveys and future lines of research.


Laith Alattar and David Rogofsky are with the Office of Retirement Policy, Office of Retirement and Disability Policy (ORDP), Social Security Administration (SSA). Matt Messel is with the Office of Research, Evaluation, and Statistics, ORDP, SSA.

Acknowledgments: The authors thank Anya Olsen, John Murphy, Hector Ortiz, Arie Kapteyn, Marco Angrisani, and Tania Gutsche for their helpful comments and suggestions.

The findings and conclusions presented in the Bulletin are those of the authors and do not necessarily represent the views of the Social Security Administration.

Introduction

Selected Abbreviations
ALP American Life Panel
CDS Computerized Delivery Sequence
CESR Center for Economic and Social Research
CPS/ASEC Current Population Survey Annual Social and Economic Supplement
FMS Financial Management Survey
HRS Health and Retirement Study
LISS Longitudinal Internet Studies for the Social Sciences
RCT randomized controlled trial
SIS Sequential Importance Sampling
SSA Social Security Administration
UAS Understanding America Study
USC University of Southern California

The Understanding America Study (UAS) is a nationally representative Internet panel of approximately 6,000 respondents aged 18 or older that is administered by the Center for Economic and Social Research (CESR) at the University of Southern California (USC). The UAS, which began in 2014, is supported by the Social Security Administration (SSA) and the National Institute on Aging through a cooperative agreement. Panel members are selected through address-based sampling and are compensated for their participation. Respondents are provided with a tablet computer and Internet access, if needed, to complete the surveys. The UAS includes over 50 survey modules on topics such as retirement planning, economic well-being, and various personality, cognitive, and other psychological constructs. The UAS also includes modules that correspond topically with most of the modules that comprise the University of Michigan's Health and Retirement Study (HRS). Although federal agencies, corporations, and academic research centers have commissioned many of these surveys, the collected data are available to the public either immediately or after a brief embargo. The UAS represents one of the richest sources of panel data available in the United States. In addition to offering breadth and accessibility, the UAS allows researchers to incorporate their own survey questions and methodological experiments, thereby providing greater flexibility than many other Internet panels.1 Because many of the UAS surveys are regularly readministered, researchers can also use the UAS to conduct longitudinal panel analysis.2 Finally, the UAS allows researchers to conduct randomized controlled trials (RCTs) to evaluate the efficacy of a wide array of interventions. Because the UAS is conducted online, researchers can receive survey and intervention data relatively quickly. On average, UAS administrators deliver data (weighted to reflect the U.S. population) within 1 month of their collection.3 Overall, the richness, flexibility, and timeliness of the UAS present significant opportunities for federal agencies, nonprofit organizations, and academic centers conducting research and analysis aimed at developing and improving programs and services.

Nationally representative Internet-based panels such as the UAS exemplify a relatively recent phenomenon in survey methodology. The oldest existing panel of this kind, CentERpanel, originated in the Netherlands in 1991. That panel enabled clients to receive results within a week of a survey being released to the respondents, a faster turnaround than had been possible with telephone or mail probability surveys. Today, a number of nationally representative Internet-based probability panels exist alongside the UAS. The GfK KnowledgePanel, initiated in 1999 as Knowledge Networks, is the largest ongoing Internet-based panel, with 55,000 participants. Both the Longitudinal Internet Studies for the Social Sciences (LISS) panel in the Netherlands and the American Life Panel (ALP) in the United States began in 2006; they have 7,500 and 6,000 panel members, respectively. Those panels administer surveys covering a wide range of topics, from health status to economic well-being to political views.4

This article provides an overview of the UAS. It first outlines the UAS methodology, then describes the process by which external parties such as researchers, policymakers, and corporations may commission UAS surveys. Finally, it highlights selected publicly available data (including a nearly complete replication of the HRS) and surveys on cognition, personality, financial literacy, retirement planning, political views, and voting behaviors, among other topics. The article also discusses opportunities for external parties to work with UAS administrators and CESR researchers in developing new surveys and future lines of research.

Methodology

This section covers UAS sampling, recruitment and survey collection, weighting procedures, standard variables, and the scope and treatment of missing data.

Sampling

In contrast with surveys that recruit panel members with random-digit dialing and face-to-face area sampling methodologies, the UAS uses address-based sampling. Random-digit dialing involves generating a list of telephone numbers at random. Researchers can stratify numbers by area code, telephone exchange, and other geographic identifiers when available. Although random-digit dialing has been a common method for generating survey samples since the 1970s, critics have in recent years questioned its ability to cover sampling frames adequately.5 Furthermore, rates of landline telephone ownership decreased from 62 percent to 49 percent over the period 2012–2015, with 47 percent of U.S. households owning only cellular phones in 2015 (Blumberg and Luke 2016). Although some random-digit dialing samples now include cell phones, call-screening technologies and concerns about privacy among both cell phone and landline users may result in low response rates (Link and others 2008).

Face-to-face area sampling, in which researchers travel to households within a selected area, is an expensive alternative to random-digit dialing. Over the last decade, advancements in database technology have allowed compilers to create nationwide databases of addresses that researchers can use to construct sampling frames (Link and others 2009). The U.S. Postal Service created the most widely used database, the Computerized Delivery Sequence (CDS) file, which contains every U.S. postal address. In address-based sampling, researchers draw from one or more databases to recruit samples, often via mail.6 In recruiting participants for the Massachusetts Health Insurance Survey, Sherr and Dutwin (2009) found that address-based sampling produced a lower response rate than random-digit dialing (34.7 percent versus 42.0 percent, respectively), but it also cost less and reduced coverage bias.7 Additional limitations to address-based sampling include incomplete coverage of rural areas and the potential double counting of households with more than one mailing address.8 Yet the representativeness of address-based sampling continues to improve as survey methodologists address these issues (Iannacchione 2011; Iannacchione, Staab, and Redden 2003; Shook-Sa and others 2013). The UAS uses the CDS file, which includes 135 million residential addresses covering nearly 100 percent of U.S. households. The UAS also includes oversamples of specific populations, such as Native Americans and residents of Los Angeles County and California.9

In its exclusive use of address-based sampling, the UAS differs from other studies. Although the GfK KnowledgePanel uses only address-based sampling (with data from the CDS file) to recruit panel members today, it used random-digit dialing prior to 2009. The ALP has used both address-based sampling and random-digit dialing. For the latter, it employs alternative sampling frames for landline and cellphone-only households in order to maximize coverage. The LISS panel uses a population-based registry, which is available in the Netherlands but not in the United States.

Like the other survey panels, the UAS draws a probability sample, as opposed to the convenience samples of some Internet-based surveys (Hays, Liu, and Kapteyn 2015). Convenience samples involve selecting the participants who are the easiest to locate and recruit; for a probability sample, on the other hand, researchers select participants randomly from a study population. Although researchers can recruit a large number of participants through convenience sampling, such a sample may not accurately represent study populations (Craig and others 2013; Tourangeau, Conrad, and Couper 2013).

An important feature of the UAS sampling procedure is sequential sample batching. The first batch is a simple random sample of addresses drawn from the CDS file. Subsequent batches are based on Sequential Importance Sampling (SIS), an algorithm developed by CESR designers.10 SIS is a type of adaptive sampling (Groves and Heeringa 2006; Tourangeau and others 2017; Wagner 2013) that generates unequal sampling probabilities with desirable statistical properties. Specifically, before sampling an additional batch, the SIS algorithm computes the unweighted distributions of particular demographic characteristics (such as sex, age, marital status, and education) in the UAS at that time. It then assigns to each ZIP code a nonzero probability of being drawn, which is an increasing function of the degree of “desirability” of the ZIP code. The degree of desirability is a measure of how much, given its population characteristics, a ZIP code is expected to move the current demographic distributions in the UAS towards those of the U.S. population. For example, if at a particular juncture the UAS panel underrepresents women with a high school diploma, ZIP codes with a relatively high proportion of women with a high school diploma receive a higher probability of being sampled. The SIS is implemented iteratively. That is, after selecting a ZIP code, the distributions of demographics in the UAS are updated according to the expected contribution of this ZIP code towards the panel's representativeness, updated measures of desirability are computed, and new sampling probabilities for all other ZIP codes are defined. That procedure provides a list of ZIP codes to be sampled. From each ZIP code in the list, addresses are then sampled randomly from the CDS database.
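To make the iteration concrete, the sketch below illustrates the general logic of desirability-based ZIP code selection. It is a simplified illustration under assumed data structures, not CESR's production algorithm; the desirability function and all names are hypothetical.

```python
import random

# Illustrative sketch of one SIS draw (hypothetical, not CESR's code):
# ZIP codes rich in groups the panel currently underrepresents receive
# higher selection probabilities.

def desirability(panel_shares, zip_shares, target_shares):
    """Expected movement of the panel's demographic mix toward the
    population benchmark if this ZIP code is sampled."""
    score = 0.0
    for group, target in target_shares.items():
        gap = target - panel_shares.get(group, 0.0)   # > 0 if group is underrepresented
        score += gap * zip_shares.get(group, 0.0)     # reward ZIPs rich in that group
    return max(score, 1e-6)                           # every ZIP keeps a nonzero probability

def draw_zip(zip_demographics, panel_shares, target_shares):
    """Draw one ZIP code with probability proportional to its desirability."""
    zips = list(zip_demographics)
    weights = [desirability(panel_shares, zip_demographics[z], target_shares)
               for z in zips]
    return random.choices(zips, weights=weights, k=1)[0]
```

After each draw, the panel shares would be updated with the selected ZIP code's expected contribution and all desirabilities recomputed, mirroring the iterative procedure described above; addresses within the chosen ZIP codes are then sampled at random.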

Recruitment and Survey Collection

Administrators at CESR send an advance notification letter in English and Spanish to potential UAS respondents, followed by a mail survey to the randomly selected addresses, inviting residents aged 18 or older to participate. The mail survey includes a prepaid return envelope, a $5 incentive payment, and a promise of $15 for an individual who returns the completed survey by mail. The survey gathers demographic and economic information about the respondent and her or his household, as well as information about computer usage and other topics. At the end of the mail questionnaire, respondents may indicate their interest in participating in future surveys. If administrators do not receive a response within 2 weeks, they send a reminder post card. After another 2 weeks, they mail another questionnaire and provide the option to complete the survey online and an explanation that a different household member may complete the survey. If administrators receive no response within 3 weeks of mailing the second survey, they attempt to call the household, should a phone number be available.11

If a respondent returns the completed questionnaire and is not interested in participating in future surveys, administrators send a $15 payment, a thank-you letter, and a form inviting another household member to participate in the study. An individual who returns the survey and expresses interest in continued participation receives a brochure, a $15 prepaid debit card, and a welcome letter with information on how to start taking surveys online. The welcome letter notifies the individual that administrators will accept responses from all household members aged 18 or older who provide contact information. The letter also informs the individual that after logging into the UAS website and completing the “My Household” (demographic) survey, he or she will receive a bonus of $20. If the household does not have Internet service (as indicated in the mailed survey), the welcome letter will include a consent form (with return envelope) that permits administrators to provide a tablet and set up broadband Internet for the household. Once CESR receives the consent form, the UAS help desk calls the respondent to confirm his or her current address and the availability of broadband connectivity there. Participants are encouraged to use libraries or other free resources while they wait for their tablet or if they are hesitant to borrow equipment from the study. Tablets are set up per UAS specifications with a “quick link” to the survey site. Respondents whose participation lapses while in possession of a borrowed tablet are contacted and offered assistance to encourage them to resume participation.

When logging onto the UAS website, individuals are asked to complete an online consent form prior to beginning the My Household survey. They are also informed that the UAS has been granted a Certificate of Confidentiality by the Department of Health and Human Services. Households with at least one individual who submits My Household survey responses become part of the UAS panel. Thereafter, the UAS help desk invites respondents to participate in one or more surveys per month. The invitation includes a brief description of the survey, an estimate of the time it will take to complete the survey, the amount of compensation, and the deadline (if applicable). Panel members receive compensation on a monthly basis via a prepaid debit card provided by the survey team.

The UAS has a panel recruitment rate of 15–20 percent, similar to those of the GfK KnowledgePanel and the ALP.12,13 The response rate is only an estimate, as it is not possible to definitively code how many mailed surveys arrived at their intended destination. Because initial surveys are sent by priority mail, the majority of mailings to bad addresses are assumed to be returned and coded as nondeliverable, but it is impossible to know how many bad addresses do not lead to a returned mailing. The calculation of response rates is therefore conservative, as any nonreturned survey is assumed to have gone to a valid address.14
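Setting aside the finer distinctions among the AAPOR formulas, the conservative convention amounts to a stylized calculation (the figures below are hypothetical):

$RR = \dfrac{C}{M - U}$

where $M$ is the number of surveys mailed, $U$ the number returned as undeliverable, and $C$ the number of completed returns. For example, 1,700 completions from 10,000 mailings with 500 known-bad addresses yield $1{,}700 / (10{,}000 - 500) \approx 17.9$ percent; any bad address that does not generate a returned mailing remains in the denominator and pushes the estimate downward.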

Completion rates for individual UAS online surveys range between 70 percent and 95 percent.15 Panel members typically spend 30 minutes, at most, completing a single survey. They are compensated $20 for a 30-minute survey.16

The UAS team administers surveys via the Internet in English and Spanish, using the NubiS data collection tool developed by CESR. The advantages of the Internet over other modes of conducting surveys (such as face-to-face, mail, or telephone) include lower costs and the ability to obtain survey data more quickly. However, Couper and others (2007) find that Internet access is unevenly distributed across certain demographic categories.17 Internet usage is lower among Americans aged 65 or older and those with lower levels of education than it is among their younger and more educated counterparts (Table 1). In 2018, 66 percent of Americans aged 65 or older used the Internet, compared with 98 percent of those aged 18–29. Likewise, 65 percent of individuals with less than a high school diploma used the Internet, compared with 97 percent of those with a college degree (Anderson, Perrin, and Jiang 2018). The UAS addresses the variance in usage rates by providing Internet access and a tablet to any panel members who lack them.

Table 1. Internet usage rates of U.S. adults, by selected demographic characteristics, January 2018
Characteristic Percentage
Total 89
Sex
Men 89
Women 88
Age
18–29 98
30–49 97
50–64 87
65 or older 66
Race/ethnicity
Non-Hispanic white 89
Non-Hispanic black 87
Hispanic 88
Educational attainment
Less than high school diploma 65
High school diploma 84
Some college 93
Postsecondary degree 97
Household income
Less than $30,000 81
$30,000–$49,999 93
$50,000–$74,999 97
$75,000 or more 98
SOURCE: Anderson, Perrin, and Jiang (2018).

Weighting

Researchers use weighting to allow the characteristics (such as race, sex, age, or education) of a sample to more closely reflect those of a study population. Respondents with characteristics that are underrepresented (or overrepresented) relative to the population receive larger (or smaller) survey weights. Each UAS survey is separately weighted. The target population is typically noninstitutionalized U.S. residents aged 18 or older, although specific surveys may target particular segments of the population (for example, Medicare-eligible individuals). UAS surveys are weighted using a two-step process.

In the first step, statisticians create a base weight to address the fact that the SIS algorithm causes the probability of being sampled to vary from one ZIP code to another and from one household in a sampled ZIP code to another. Sampled ZIP codes are weighted to match their characteristics—such as Census region, urbanicity, and demographic composition (sex, age, education, race, and marital status)—with those of the ZIP codes covered by the Census Bureau's American Community Survey. This weight, denoted $w_1^b$, is generated via logit regression. Then, the ratio of the number of all households to the number of sampled households in the ZIP code is computed; this is denoted $w_2^b$. The base weight is a ZIP code–level weight defined by the product $w^b = w_1^b \cdot w_2^b$.

In the second step, statisticians generate poststratification weights to correct for differential survey nonresponse rates and to align the survey sample with the reference population in terms of a predefined set of demographic and economic variables (race, sex, age, education, household size, and total household income). The UAS uses estimates from the most recent available version of the Census Bureau's Current Population Survey Annual Social and Economic Supplement (CPS/ASEC) as the benchmark for population distributions of these variables. Specifically, UAS survey data collected from September 2015 to September 2016 are weighted using the 2015 CPS/ASEC, data collected from September 2016 to September 2017 are weighted using the 2016 CPS/ASEC, and so on. The poststratification weights are available for completed surveys and are part of the data file. Researchers may also request weights for ongoing surveys.18

Poststratification weights are created using a raking algorithm. The algorithm compares relative frequencies within the target population with relative frequencies in the survey sample by race, sex and age, sex and education, household size and total household income, census region, and urbanicity. When a researcher combines responses from two or more UAS surveys, the UAS team will provide weights unique to the combined data set based on the procedure described above. Alternatively, the UAS team can provide custom poststratification weights using specific raking factors chosen by the researcher.19
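The mechanics of raking (iterative proportional fitting) can be sketched briefly. The code below is a minimal illustration under assumed column names and margins; it omits the trimming and convergence checks a production weighting routine would include, and assumes every benchmark category appears in the sample.

```python
import pandas as pd

def rake(df, margins, base_weight="base_weight", iterations=50):
    """Iterative proportional fitting: repeatedly scale weights so the
    weighted distribution of each raking variable matches its population
    margin (for example, shares taken from the CPS/ASEC)."""
    w = df[base_weight].astype(float).copy()
    for _ in range(iterations):
        for var, target in margins.items():
            current = w.groupby(df[var]).sum() / w.sum()   # weighted sample shares
            ratio = {cat: target[cat] / current[cat] for cat in target}
            w = w * df[var].map(ratio)                     # pull each category toward its target
    return w

# Hypothetical usage with CPS/ASEC benchmark shares:
# df["weight"] = rake(df, {"sex": {"male": 0.483, "female": 0.517},
#                          "educ3": {"hs_or_less": 0.41, "some_college": 0.29,
#                                    "ba_plus": 0.30}})
```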

Standard Variables

In addition to survey weights, each UAS survey data set includes a set of standard variables. These include identifying variables, demographic variables, and survey metadata (for example, survey completion time and the panel member's interest in the survey).

Identifying variables. Each panel member receives an individual identifier and two household identifiers. The individual identifier (uasid) is assigned to panel members at recruitment and remains with them through each survey in which they participate. Researchers may use this variable to merge data from different surveys. The UAS defines a household as all individuals living at the same address. The first household identifier (uashhid) matches the individual identifier for the primary panel member within the household.20 Other panel members within the household are assigned the same household identifier. This identifier remains constant throughout a panel, so that researchers can always find the original household of each panel member. The second household identifier (survhhid) indicates the household in which a panel member lives at the time of the survey. This identifier may change (for example, if a panel member moves to another household).
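Because uasid persists across surveys, modules can be combined with a simple key merge. A minimal sketch follows; the file and module names are hypothetical placeholders.

```python
import pandas as pd

# Merge two hypothetical UAS modules person by person on the persistent
# individual identifier.
cognition = pd.read_stata("uas1_cognition.dta")
income = pd.read_stata("uas24_income_assets.dta")

merged = cognition.merge(income, on="uasid", how="inner",
                         suffixes=("_uas1", "_uas24"))
```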

Demographic variables. Each data set also includes current demographic information about the panel member. Every quarter, panel members must update the My Household survey to complete additional surveys.21 This survey covers a range of demographic information, which UAS administrators merge into all other surveys. Variables include sex, age, race and ethnicity, highest level of education, household size, household income, state of residence, marital status, citizenship, and place of birth, as well as additional variables related to employment. Table 2 summarizes demographic and employment characteristics reported in the My Household survey as of June 30, 2017. It shows the unweighted and weighted demographic characteristics of UAS panel members along with weighted figures from the 2016 CPS/ASEC, which serves as the U.S. population benchmark for the UAS surveys that began collecting data in September 2016. The sample includes 5,319 respondents from 9 nationally representative recruitment batches (therefore, it excludes the Los Angeles County and Native American oversamples).

Table 2. Demographic characteristics of the UAS 2017 panel members: Unweighted, weighted, and compared with the benchmark 2016 CPS/ASEC, as of June 30, 2017
Characteristic Number Unweighted percentage Percentage weighted to 2016 CPS/ASEC 2016 CPS/ASEC percentage a
Sex
Men 2,337 43.9 48.3 48.3
Women 2,982 56.1 51.7 51.7
Age
18–39 1,533 28.8 38.2 38.2
40–49 1,020 19.2 16.3 16.3
50–59 1,170 22.0 18.0 18.0
60 or older 1,596 30.0 27.5 27.5
Race/ethnicity
Non-Hispanic white 4,152 78.1 64.4 64.4
Non-Hispanic black 452 8.5 11.8 11.8
Hispanic 377 7.1 15.8 15.8
Other 338 6.3 8.0 8.0
Educational attainment
High school diploma or less 1,367 25.7 40.6 40.7
Some college, no degree 1,242 23.3 17.8 19.1
Associate's degree 813 15.3 10.7 9.5
Bachelor's degree 1,110 20.9 17.7 19.5
Postgraduate/professional degree 787 14.8 13.2 11.2
Household income
Less than $30,000 1,440 27.1 26.0 25.4
$30,000–$59,999 1,447 27.2 26.1 27.2
$60,000–$99,999 1,309 24.6 25.1 22.9
$100,000 or more 1,123 21.1 22.8 24.5
U.S. citizenship
Yes 5,244 98.6 96.9 91.6
No 75 1.4 3.1 8.4
Born in United States
Yes 5,014 94.3 90.3 82.7
No 305 5.7 9.7 17.3
Marital status
Married 3,202 60.2 56.0 53.1
Separated, divorced, or widowed 1,150 21.6 20.3 18.6
Never married 967 18.2 23.7 28.3
Number of persons in household
1 777 14.6 14.8 14.8
2 2,318 43.6 36.1 34.3
3–4 1,654 31.1 34.1 35.8
5 or more 570 10.7 15.0 15.1
Employment status
Working 3,135 59.0 61.1 59.6
Self-employed 402 12.2 11.0 10.1
Unemployed 320 6.0 6.5 3.2
Retired 964 18.1 15.8 17.7
Other 900 16.9 16.6 19.5
SOURCES: UAS 2017 panel; 2016 CPS/ASEC.
a. Values are weighted using person-level weights.

Individuals who are female, middle-aged (40–59), non-Hispanic white, married, U.S. citizens, or U.S.-born are more heavily represented in the UAS panel than in the U.S. population. In most cases, survey weighting minimizes the aggregate differences between the UAS panel and the U.S. population; by construction, the distributions of the raking factors align with their benchmarks. The alignment is exact for sex, age, and race/ethnicity because the algorithm uses the same categories as those reported in Table 2 to generate the poststratification weights. However, the raking algorithm uses three categories each for education, household income, and household size, rather than the four (or five) reported in Table 2; as a result, the weighted UAS distributions and the benchmark CPS/ASEC distributions differ slightly for these three variables. Even after weighting, UAS panel members are slightly more likely than the U.S. population to have postgraduate education, to be self-employed, or to be unemployed. Differences are greater still in the distributions by citizenship, place of birth, and marital status.

Timing. Researchers may wish to use UAS data on demographic characteristics that were collected in multiple surveys. The interval between the data collection and its availability can range from a few moments to several months, depending on the promptness of a given survey's respondents and the length of time between surveys. Each UAS survey contains timestamps to indicate when the panel member began and finished the survey. These timestamps can help researchers to establish temporal aspects of study variables, when relevant.
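For example, survey duration can be computed directly from the start and end timestamps; the file and column names below are hypothetical.

```python
import pandas as pd

df = pd.read_stata("uas_survey.dta")  # hypothetical file name
df["minutes_to_complete"] = (
    pd.to_datetime(df["end_time"]) - pd.to_datetime(df["start_time"])
).dt.total_seconds() / 60
```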

Missing Data

Demographic data collected in the UAS are relatively complete. Table 3 shows the frequency of missing data for key demographic variables. Variables such as sex, citizenship, and place of birth have no missing values out of 5,319 respondents. Variables such as age, race/ethnicity, educational attainment, household income, marital status, and state of residence each have fewer than 10 missing values. Data on household size were missing for 3.5 percent of respondents. For weighting purposes, missing demographic variables are first categorized (if continuous or taking more than 10 values) and then imputed using a sequential imputation procedure.22,23 Missing data on the respondent's sex are never imputed; such respondents do not receive a weight. In the data files, the extension “.e” represents questions that the respondent saw but did not answer; “.a” represents questions that the respondent never saw.24 Respondents may not have seen a question either because they intentionally or inadvertently skipped over it or because they began but did not finish the survey.

Table 3. Frequency of missing data on demographic characteristics in the UAS 2016 panel
Characteristic Missing data rate (%)
Sex 0.00
Age 0.08
Race/ethnicity 0.19
Educational attainment 0.04
Household income 0.17
U.S. citizenship 0.00
Born in United States 0.00
Marital status 0.04
State of residence 0.06
Number of persons in household 3.48
Employment status 0.04
SOURCE: UAS.
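When preparing UAS extracts for analysis, the two missingness codes usually need to be separated from substantive responses. A minimal sketch follows, assuming an extract in which unanswered items carry the literal string codes “.e” and “.a” (actual storage conventions may differ by file format):

```python
import numpy as np
import pandas as pd

def split_missing(series):
    """Recode '.e'/'.a' to NaN and count item nonresponse (seen but
    skipped) separately from items the respondent never saw."""
    skipped = (series == ".e").sum()   # saw the question but did not answer
    unseen = (series == ".a").sum()    # never saw the question
    cleaned = series.replace({".e": np.nan, ".a": np.nan})
    return cleaned, skipped, unseen
```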

External Research Examples and Commissioning Research

In addition to using UAS data that are already available, researchers, policymakers, and corporations may commission their own surveys or methodological experiments with the nationally representative UAS panel. The CESR research team will administer these surveys either once (for cross-sectional analysis) or multiple times (for panel analysis), depending on the research needs of the client. A number of government agencies and private entities have used the UAS to conduct primary research that expands academic and policy-relevant knowledge. For instance, SSA worked with CESR to develop two questionnaires for use in annual surveys measuring the public's knowledge of Social Security and identifying the communication channels by which individuals prefer to receive information about the agency and its programs. SSA plans to use these surveys to improve its public outreach and communication efforts. Among other entities, Princeton University and the Roybal Center for Health Decision Making and Financial Independence in Old Age have commissioned surveys through the UAS. Similarly, the Federal Reserve Bank of Boston began conducting its annual Survey of Consumer Payment Choice with the UAS panel in 2015. For that survey, the longitudinal structure of the UAS allows researchers to understand not only which payment instruments Americans use most frequently, but also how these payment behaviors change over time and in relation to microeconomic and macroeconomic phenomena.

When an individual or organization commissions a UAS survey, CESR designers provide support through programming and testing, finalizing the draft survey instrument, selecting the survey sample, translating the survey instrument into Spanish (if desired), collecting data, and providing a final weighted data set. CESR also provides other services: survey development and questionnaire design, item/survey testing, human-subject research advice, application development, visual displays, graphical interface, sample management design, data cleaning, and data analysis. Clients also gain access to NubiS, the web-based software developed by UAS programmers to conduct online surveys. The pricing structure of externally commissioned surveys depends on the survey length and sample size.

Each survey or methodological experiment requires approval from USC's human subjects institutional review board (IRB) before data collection may begin. CESR researchers submit the survey to the IRB on behalf of the investigator and act as the intermediary between the investigator and the board.

Investigators also have the opportunity to embargo data temporarily. By default, survey questions and data, including those commissioned by external investigators, are publicly available through the UAS website. However, to allow investigators to analyze and write results before public release, CESR researchers will provide them exclusive access to the data for a period generally not exceeding 6 months after survey completion.

Available Panel Data

CESR administers a number of core UAS surveys on an ongoing basis, typically with an annual or biennial frequency. Examples include surveys on Social Security program knowledge and preferred communication channels; HRS-based survey modules; and surveys on psychological (cognitive and personality) variables, financial management and knowledge, and political preferences and voting behaviors. Panel members completed the first wave of most of these surveys in 2016. In future years, they will complete successive waves developed by CESR and external clients.

Social Security Program Knowledge and Preferred Communication Channels

UAS survey 16 (UAS16) asks panel members about their knowledge of Social Security, and UAS survey 26 (UAS26) asks them about the channels through which they prefer to receive information from SSA. UAS16 and UAS26 expand on the Social Security module of the ALP and together address a wide range of subjects.

Yoong, Rabinovich, and Wah (2015) provide initial findings from UAS16, and Rabinovich and Yoong (2015) provide findings from UAS26.

Beyond the wide range of topics included in the two Social Security surveys, researchers and policymakers may expand understanding of the public's interaction with SSA and the programs it administers by matching these data to surveys covering related topics such as wealth or financial knowledge. CESR will work with SSA to develop the surveys and will administer them every 2 years.

UAS-HRS Surveys

Since its inception in 1992, the HRS has proven useful for studying both national trends and individual-level changes among Americans aged 50 or older.25 Studies using HRS data have played a crucial role in understanding changes in the health, wealth accumulation, and retirement planning of older Americans. The HRS has shed light on retirement planning and saving behavior, the role of health in labor force participation and retirement timing, and income and wealth trends in retirement, among myriad other research topics.26

The UAS extends this knowledge base by administering biennial HRS survey modules that collect a breadth of data on retirement planning and saving from panel members aged 18 or older. Over time, HRS-based UAS data may also inform researchers about health, wealth, and retirement-planning trends of Americans over the entire course of adulthood. Furthermore, researchers and policymakers using the UAS may explore the relationships between data collected in the HRS, other UAS surveys, and their own survey instruments. In this way, researchers may extend their analysis to a broad range of topics while developing surveys that are focused and succinct. Finally, researchers and policymakers may use HRS-based measures to test the effectiveness of interventions targeting behaviors such as retirement planning or financial decision making. For example, researchers can test or compare the efficacy of new interventions using RCTs because the UAS allows different versions of a survey to be administered to randomly selected panel subsamples.

The UAS-HRS surveys differ from the official HRS surveys in several important ways. First, the UAS-HRS surveys are administered to all panel members, not only those aged 50 or older. (Panel members are invited to participate in UAS-HRS surveys only after they have completed at least three other UAS surveys.) Second, the timing of UAS-HRS surveys differs from that of the official HRS surveys. For the latter, panel members enter the study as part of a cohort; they complete all survey modules at the same time, with the first wave of surveys conducted within a single calendar year and additional surveys completed every 2 years thereafter. In the UAS-HRS, panel members may complete survey modules at different times. They too must complete the modules every 2 years, but they do not necessarily complete them at the same time that other UAS panel members do. Box 1 shows the correspondence between official HRS modules and the first round of UAS-HRS surveys. Each round of UAS surveys reflects the most recent HRS wave; for example, the first UAS round corresponds with the 2014 HRS and the second round with the 2016 HRS.27

Box 1. Topics covered in UAS surveys and corresponding HRS modules
UAS survey number Topics HRS modules
20 Personal background; household characteristics; health history; cognitive abilities A, B, C, D
21 Family characteristics; health condition; caregiving; living arrangements E, F, G, H
22 Current job status; job history; health-related work impairments J, J2, K, L, M
23 Health insurance; health care service use; health event probabilities N, O, P
24 Income and assets Q, R
25 Wills, trusts, and life insurance policies T, U, V
SOURCE: UAS.
NOTE: HRS modules not included in the UAS cover physical measures and biomarkers (I), widowhood and divorce (S), and Internet use (W).

The UAS-HRS modules use variables that are named using a convention consistent with that of the RAND HRS data file and codebook (Chien and others 2015). CESR designers adopted this naming convention to enable an easy transition for individuals who are familiar with RAND HRS data to the HRS-based data in the UAS. Income data are reported at the individual level and wealth data are reported at the household level. Although each participating HRS household includes only one financial respondent, more than one UAS respondent may provide financial data for his or her household.

Surveys on Cognitive and Personality Variables

UAS survey 1 (UAS1) measures numeracy, risk perception, personality, and financial literacy. Previous research found that these constructs significantly predict patterns of financial, health, retirement, and other behaviors (for example, Banks and Oldfield 2007; Lusardi and Mitchell 2011a, 2011b). Nevertheless, few surveys have included cognitive or personality measures in a panel design.28 By collecting these measures on a regular basis, the UAS allows researchers to evaluate changes in cognition and personality over time, as well as to establish temporal patterns in cognition and personality as they relate to financial, health, and retirement planning and decision making. The UAS-HRS surveys, for instance, will query respondents every 2 years about current savings, saving plans, and expectations for retirement.

Numeracy. Numeracy refers to the “ability to understand numerical information” (Reyna and others 2009, 943) and plays an important role in financial and health care decision making. The UAS measures numeracy through a Rasch-based scale developed by Weller and others (2013). The UAS scale combines five items drawn from the numeracy scale of Lipkus, Samsa, and Rimer (2001) with one item from the Peters and others (2007) scale and two items from the Cognitive Reflection Test (Frederick 2005). A number of studies find that numeracy and cognitive reflection—the latter defined by Sinayev and Peters (2015, 1) as the “tendency to check and detect intuitive errors”—represent a similar underlying concept (Låg and others 2014; Liberali and others 2012;29 Weller and others 2013). The UAS website includes information on scale items and the development of the final scale score.
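A well-known Cognitive Reflection Test item (Frederick 2005) illustrates the construct: a bat and a ball cost $1.10 in total, and the bat costs $1.00 more than the ball. The intuitive answer for the ball's price, 10 cents, is wrong (the total would then be $1.20); detecting that impulse and correcting it to the right answer, 5 cents, is precisely the checking of intuitive errors that the scale is designed to capture.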

The UAS enables researchers to expand the study of numeracy and decision making across the life course. In particular, the UAS-HRS surveys contain information on health care and retirement saving and planning behaviors. The UAS also allows researchers and policymakers to test hypothetical health care or retirement interventions. Because the UAS samples are larger than those observed in many previous studies of numerical ability,30 researchers may use UAS to study numeracy in subsamples (such as young adults) or to test the effectiveness of interventions across the spectrum of numerical ability.

Risk perception. UAS1 also measures consistency in risk perception, a subtest of the larger adult decision-making competence scale (Bruine de Bruin, Parker, and Fischhoff 2007). Consistency in risk perception involves uniformly determining the probability of events over different time spans or in different contexts. For instance, it refers to an individual's ability to assess the risk of an event occurring within the next year versus the next 5 years, or of a specific event context (such as visiting the dentist to fill a cavity) versus a more general one (visiting the dentist for any reason). By testing consistency in risk perception, the UAS may open multiple avenues for research. Before the UAS, researchers tested the adult decision-making competence scale in controlled settings with relatively limited sample sizes. Over time, the UAS will allow for longitudinal measurement of a larger sample. Researchers may also be able to measure the consistency-in-risk-perception scale in relation to real-life financial, health, and retirement decisions by employing items from the UAS-HRS surveys. For example, policymakers could use these data to understand how changes in policy or the economy shape the investment strategies, health care decisions, or retirement plans of Americans based on differing levels of consistency in risk perception.

Personality. Alongside cognitive scales, a 44-item version of the “Big Five” personality inventory (John 1990) is included in the UAS. The inventory measures five personality traits: openness to experience, conscientiousness, agreeableness, extroversion, and neuroticism. A large body of research focuses on the relationship between Big Five personality profiles, job satisfaction, and career success (for example, Barrick and Mount 1991; Judge and others 1999; Judge, Heller, and Mount 2002; Seibert and Kraimer 2001; Soldz and Vaillant 1999; and Thoresen and others 2004). The UAS offers opportunities to expand the understanding of personality in relation to retirement, financial decision making, and health. In particular, the relationship between personality and financial decision making bears further exploration. The UAS allows researchers to match personality data with an array of self-reported financial decisions and facilitates the exploration of these personality/behavior relationships in respondents from young adulthood to beyond retirement age. The UAS also enables longitudinal studies linking measures of personality and health. Shanahan and others (2014) theorize about how personality relates to health over the life course, yet few studies have had the opportunity to study this relationship empirically. The UAS-HRS includes health-related items covering topics such as perceived health, disability, and health care expenditures.

Financial literacy. Financial literacy plays a central role in retirement planning, saving, and informed financial decision making (for example, Hilgert, Hogarth, and Beverly 2003; Lusardi and Mitchell 2011b; and Utkus and Young 2011). The UAS adopts a measurement of financial understanding that was developed for the ALP; prior research on financial literacy had relied on a limited set of questions and samples that excluded younger individuals (Lusardi and Mitchell 2011b). UAS1 adopts some of the basic questions from previous surveys on financial literacy31 and includes many additional items that test respondents' knowledge of stocks, bonds, and savings accounts, among other financial topics. Lusardi and Mitchell (2017) find that knowledge of these specific financial instruments, terms, and concepts predicts time devoted to retirement planning more strongly than does knowledge of more basic concepts such as interest rates, inflation, and risk diversification. The survey also includes a scale of respondents' confidence in their own financial knowledge.32

The UAS allows for additional research in financial literacy. The wide array of survey data available in the study will allow researchers to explore the relationship between financial knowledge and demographic, economic, cognitive, and personality variables. For instance, researchers may combine UAS data to study the ways in which financial literacy and personality interact to shape retirement saving behavior across the life course. Furthermore, the UAS may allow researchers and policymakers to test online financial education interventions.

Financial Management Survey (FMS)

CESR designed the FMS to provide updates to the 2012 Older Adult Survey, which was administered by the Federal Reserve Board of Governors to a sample of ALP respondents.33 The 2012 survey explored the financial well-being of Americans aged 40 or older in the wake of the Great Recession. Specifically, it investigated how older adults use financial products, how they make financial decisions and to whom they turn for advice, and the primary sources of their financial stress. The FMS, which is fielded as UAS survey 18, enables researchers not only to understand how the financial situation of Americans has changed since the Great Recession, but also to assess the financial well-being and decision making of individuals aged 18 or older. CESR will administer the FMS every 2 years, which will allow researchers to explore how households' financial status changes over the life cycle. Specific topics addressed in the FMS include use of financial products and services, including credit cards, mortgages, student loans, bank accounts, and alternative financial services (such as payday lenders); financial decisions, such as refinancing, investment, retirement planning, and planning for incapacity; confidence in financial decisions; and financial stress and well-being. Researchers can match FMS data to results of other UAS surveys, such as those on financial knowledge, financial well-being, and numeracy, further expanding the avenues for research.

Political Data

The UAS also collects data on respondents' political views through its USC Dornsife/Los Angeles Times Presidential Election “Daybreak” poll, which is funded by nonfederal sources. These data are collected continuously during each presidential election cycle. The UAS poll differs from most election polls in that it surveys the same individuals every week from July to November of the election year. First, CESR invites panel members to participate in the poll. Participants take a baseline survey between May and early July, in which they indicate the candidate for whom they voted in the previous presidential election, the U.S. congressional candidates for whom they voted in the previous midterm election, and whether they are currently registered to vote. A brief follow-up survey is administered to each participant every week until the election. CESR administers the weekly survey to one-seventh of the participants each day, and weights the responses to demographic characteristics from the Current Population Survey and to 2012 election data.

Each week, Daybreak poll participants indicate the percentage likelihood that (1) they will vote in the presidential election; (2) they will vote for the Democratic candidate, the Republican candidate, or another candidate; and (3) the Democratic candidate, the Republican candidate, or another candidate will win the election. Thus, the survey questions are probabilistic rather than verbal (such as “For which candidate will you vote?”), the format typical of most election surveys. In assessing ALP 2008 election data, Delavande and Manski (2010) found that probabilistic items predicted actual voting behavior more accurately in early August, whereas verbal questions predicted it more accurately in late October. However, responses to both probabilistic and verbal items largely agreed over the election cycle as a whole.
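The probabilistic format lends itself to a simple aggregation in which each respondent contributes the product of his or her turnout and candidate-choice probabilities. The sketch below is a minimal illustration under assumed variable names, simplified relative to the poll's actual estimator.

```python
import numpy as np

def expected_vote_share(weights, p_vote, p_candidate):
    """Weighted expected share for one candidate among expected voters:
    each respondent contributes P(votes) * P(votes for candidate)."""
    weights = np.asarray(weights, dtype=float)
    p_vote = np.asarray(p_vote, dtype=float) / 100.0        # percent to proportion
    p_candidate = np.asarray(p_candidate, dtype=float) / 100.0
    return np.sum(weights * p_vote * p_candidate) / np.sum(weights * p_vote)
```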

The Daybreak poll provides two broad research opportunities. First, it allows researchers to track changes in voter preference and likelihood of voting over time. Second, researchers can match political preference and voting data to results from other UAS surveys.

Discussion, Limitations, and Future Research

The UAS presents researchers with unique reach and flexibility in conducting survey-based and experimental research, including access to a large, nationally representative sample; customizable surveys and RCTs; and rich, publicly available data sets. The UAS also provides unique information to broaden SSA's understanding of U.S. retirement security and to enable the agency to help workers plan for retirement. In addition to the two surveys on Social Security program knowledge and preferred communication channels mentioned earlier, other UAS surveys will examine various aspects of the retirement-benefit claiming decision. Some UAS surveys also allow the agency to learn how American families save over time and how much they rely on Social Security income in retirement. Data from these surveys enable the agency to target program-knowledge outreach campaigns to key subgroups—particularly, to those most reliant on Social Security income. For SSA's disability programs, the UAS, in conjunction with the HRS, can identify patterns and predictors of impairments and functional limitations from young adulthood to old age. Additionally, the UAS is working to establish permissions and procedures for matching survey results with administrative data from SSA and the Centers for Medicare and Medicaid Services. To explore ways to reduce respondent burden and enhance accuracy, a CESR pilot study encourages respondents to use a financial aggregator service. The aggregator provides daily electronic financial transaction data to CESR, which can be compared against self-reported information.

Limitations

The UAS addresses a number of limitations of other Internet-based panels, such as accessibility and random sampling; yet certain challenges remain. Although an address-based sampling frame is comprehensive, it may fail to capture adequate numbers of respondents from population subgroups such as racial and ethnic minorities. In the past, UAS administrators have targeted ZIP codes with high proportions of Native American residents as part of a special-purpose sample. They continue to explore methods that ensure the inclusion of ample numbers of minority households in the UAS panel.

Given the breadth and volume of UAS surveys, another potential concern is survey fatigue. CESR and collaborating researchers ensure reasonable survey loads by monitoring the frequency and length of surveys administered to participants. For example, analysis of timestamp data might show that a given respondent tends to take longer to complete successive surveys, which may indicate incipient survey fatigue. A related concern is that panel members' willingness to take so many surveys (and to spend so much time taking them) may mean that respondents are more conscientious than the average American. UAS administrators can address this potential selection bias by measuring and controlling for self-reported conscientiousness and other relevant variables.

A final limitation of surveys that are repeated at regular intervals is that continued participation may trigger knowledge, awareness, and behavior in the respondent that might not otherwise have occurred, which might be seen as artificially altering the extent to which the panel members are representative of the population. Fortunately, the continuing expansion of the UAS panel gives researchers the option of limiting data analysis to “fresh” samples of participants with only a single exposure to a particular survey. Conversely, researchers may choose to capitalize on panel members' changing knowledge and incorporate it into their research variables.

Future Research

As the UAS panel expands, CESR and other researchers continue to develop surveys to address more complex research questions across and within larger population samples. For example, SSA and CESR researchers are working on using UAS data in the development of innovative indexes related to retirement. Chard, Rogofsky, and Yoong (2017) introduce the retirement planning index, which combines a set of positive retirement savings–related indicators from the UAS-HRS Income and Asset module. Other ongoing research also aims to develop a retirement satisfaction index, which will measure levels of satisfaction, regret, and well-being among retirees, focusing retrospectively on their retirement-related decisions.

The expansion of the UAS panel will also allow researchers to study specific segments of the population, such as low-income households or individuals with disabilities. Further, it will allow analysis of defined geographic areas. SSA researchers will analyze trends in Social Security program knowledge and preferred methods of communication across the agency's 10 administrative regions. SSA regional offices may use this research to better understand the populations that they serve. With this information, the agency may also tailor communication efforts and deliver them through more effective platforms. Because the sample sizes will be small in some SSA regions, however, it may be difficult for researchers to provide subanalysis at the regional level.

As noted earlier, the flexibility and continued expansion of the UAS also enable researchers to conduct RCTs to evaluate the efficacy of various program and communication interventions. For example, SSA and CESR are studying how the use of alternative terminology in discussing Social Security benefit claiming affects respondent understanding, claiming intentions, and other outcomes.

In addition, CESR specialists have started building a user-friendly public-use data file similar to that of the HRS. The core components of the UAS public-use file are the cognitive ability, financial knowledge, big five personality inventory, Social Security program knowledge, and Social Security preferred communication channel modules; the FMS; and data on key UAS-HRS topics. The public-use file will include longitudinal data for many of these components. In the future, the data file may, with sponsoring agency approval, also include federally funded data sets. Data file documentation will indicate when each of the various UAS modules and surveys was administered and, when applicable, readministered.

Future researchers may be able to match UAS panel results to SSA and Internal Revenue Service (IRS) administrative data. Work is under way to determine how many panel members will consent to provide their Social Security number to match UAS and Social Security administrative data. If enough panel members consent to match their UAS survey responses and SSA/IRS administrative data, CESR will create a restricted-use file and house it in a secure location. If matched data are available, researchers will send their project proposals to SSA and IRS. Researchers will also require approval from USC to use the restricted matched data. The approval procedure will be similar to that for obtaining restricted HRS data.

This article outlined the methodological features of the UAS, including its sampling and weighting procedures. It provided information on how researchers can customize their own surveys and incorporate them into the UAS panel. It also highlighted some of the recurring UAS surveys, such as UAS-HRS surveys and modules on cognitive and personality variables, financial management, financial knowledge, Social Security program knowledge and preferred communication channels, and political views.

Future articles will provide additional detail on aspects of the UAS panel such as the public-use data file, the retirement preparedness index, the retirement satisfaction index, and additions to the panel as it expands.

Notes

1 Some other studies, such as the RAND American Life Panel, allow such interactivity.

2 Panel analysis involves studying the same individuals over an extended period with a series of repeated observations. In the UAS, these observations include various survey modules. Although researchers regularly add new modules to the study, panel members also take many of the core surveys on a repeated follow-up basis. This allows researchers to understand how the knowledge and perspectives of panel members change over time or, in certain cases, after an experimental intervention.

3 Although receiving initial results in a month is typical, researchers may receive weighted data sooner, depending on how quickly panel members respond to the survey and when the survey is closed. If a high response rate is achieved within a few days of the survey release, the researcher may request the weighted data at that point.

4 For more information on CentERpanel, see https://www.centerdata.nl/en/databank/centerpanel-data-0; on GfK KnowledgePanel, see http://www.gfk.com/products-a-z/us/knowledgepanel-united-states/; on LISS, see https://www.lissdata.nl/about-panel/; and on ALP, see https://alpdata.rand.org/.

5 Sampling frames represent all units in a population that a researcher intends to study. For the UAS, the sampling frame includes individuals aged 18 or older living in the United States.

6 In some cases, however, researchers contact households via telephone after matching telephone numbers to addresses (Dekker and Murphy 2009).

7 Coverage bias occurs when a sampling methodology draws its sample from a population subset that differs from the entire population in a systematic way (for example, the income level of the subset differs substantially from that of the entire population).

8 This may occur if a household maintains seasonal residences, merges two apartment units at the same address, or uses a Post Office box in addition to a home mailing address.

9 CESR designers draw these oversamples to produce sample sizes that are statistically sufficient to support studies covering those specific populations. The oversamples of Native Americans and a subgroup of Los Angeles County residents with young children, recruited using information from state birth records, are omitted from the UAS weighting computations that allow each survey sample to be representative of the target population. Both groups are appropriately flagged in the data files. For more information on the construction of the oversamples, see https://uasdata.usc.edu/index.php.

10 The SIS algorithm is implemented to recruit respondents for the nationally representative main sample as well as for the oversamples of Los Angeles County and California residents. Different sampling procedures are adopted to recruit respondents for the Native American oversample and for the oversample of a subgroup of Los Angeles County residents with young children. Because of their specific sampling procedures, these two groups receive zero weight.

11 Administrators make up to 15 attempts to contact the household about completing the survey.

12 The UAS response rate is calculated using the American Association for Public Opinion Research's Response Rate Calculator.

13 The LISS panel, employing both telephone and face-to-face recruiting methods for its population registry-based sample, has an initial response rate of 45 percent.

14 For complete details on recruitment per sample wave, see https://uasdata.usc.edu/index.php.

15 For comparison, 2015 ALP surveys based on probability samples had completion rates of 60 percent or higher (Pollard and Baird 2017) and one of the GfK KnowledgePanel surveys had a completion rate of 85 percent (Callegaro and DiSogra 2008).

16 Some surveys take less than 30 minutes. The amount of compensation is proportional to the length of the survey.

17 Hays, Liu, and Kapteyn (2015) discuss other drawbacks of Internet-based surveys, such as respondents inadvertently giving the same response to consecutive items or, in the case of convenience panels, taking the same survey more than once.

18 Researchers may send the request to uas-weights-l@mymaillists.usc.edu.

19 For more information on the raking algorithm, refer to UAS documentation (https://uasdata.usc.edu/addons/documentation/UAS%20Weighting%20Procedures.pdf).
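Readers unfamiliar with raking may find a schematic example helpful. The following minimal Python sketch illustrates the general technique (iterative proportional fitting) on hypothetical raking dimensions; it is an illustration of the method in principle, not the UAS implementation, whose exact dimensions and trimming rules appear in the documentation cited above.

import numpy as np
import pandas as pd

def rake_weights(sample, targets, max_iter=100, tol=1e-6):
    # Iteratively adjust weights until the weighted distribution of each
    # raking dimension matches its population target shares.
    w = np.ones(len(sample))
    for _ in range(max_iter):
        max_change = 0.0
        for col, shares in targets.items():
            total = w.sum()
            factors = np.ones(len(sample))
            for category, share in shares.items():
                mask = (sample[col] == category).to_numpy()
                current = w[mask].sum() / total
                if current > 0:
                    factors[mask] = share / current
                    max_change = max(max_change, abs(share / current - 1.0))
            w *= factors
        if max_change < tol:  # stop once every margin matches its target
            break
    return w * len(sample) / w.sum()  # normalize to a mean weight of 1

# Hypothetical example: match sex and age-group margins to population shares.
sample = pd.DataFrame({
    "sex": ["F", "F", "M", "M", "M"],
    "age_group": ["18-39", "40+", "18-39", "40+", "40+"],
})
targets = {
    "sex": {"F": 0.51, "M": 0.49},
    "age_group": {"18-39": 0.45, "40+": 0.55},
}
weights = rake_weights(sample, targets)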

20 The primary panel member is the resident who first responded to the initial survey mailed to the address.

21 Items in the My Household survey are prepopulated with the respondent's answers from the previous survey iteration. If the information for a particular item has not changed, the panel member leaves the prepopulated answer in place.

22 In sequential imputation, the missing values of a given variable (for example, household income) are imputed using a regression of the observed cases of that variable on a set of fully observed variables (such as age or sex). Once the missing values of the first variable are imputed, the completed variable is used as a predictor in the regression imputation of a second variable. The process repeats until all variables have been imputed; a schematic sketch follows.
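As a concrete illustration of this procedure, the following minimal Python sketch imputes two variables in sequence using ordinary least squares. The variable names are hypothetical, and the actual UAS routine (including its choice of predictors and any stochastic components) is described in the documentation cited in the next note.

import numpy as np
import pandas as pd

def sequential_impute(data, order, predictors):
    # Impute variables one at a time; each newly completed variable joins
    # the predictor set for the variables imputed after it.
    data = data.copy()
    completed = list(predictors)
    for var in order:
        missing = data[var].isna()
        if missing.any():
            # Fit OLS on the observed cases of the current variable.
            X_obs = data.loc[~missing, completed].to_numpy(dtype=float)
            y_obs = data.loc[~missing, var].to_numpy(dtype=float)
            X_obs = np.column_stack([np.ones(len(X_obs)), X_obs])
            beta, *_ = np.linalg.lstsq(X_obs, y_obs, rcond=None)
            # Predict the missing cases from the fitted coefficients.
            X_mis = data.loc[missing, completed].to_numpy(dtype=float)
            X_mis = np.column_stack([np.ones(len(X_mis)), X_mis])
            data.loc[missing, var] = X_mis @ beta
        completed.append(var)  # the imputed variable now helps impute the next
    return data

# Hypothetical example: impute household income, then home value.
df = pd.DataFrame({
    "age": [25, 40, 58, 33, 47],
    "hh_income": [40000, np.nan, 90000, 52000, np.nan],
    "home_value": [np.nan, 310000, np.nan, 180000, 260000],
})
df = sequential_impute(df, order=["hh_income", "home_value"], predictors=["age"])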

23 For more information on sequential imputation and all other UAS weighting procedures, see https://uasdata.usc.edu/addons/documentation/UAS%20Weighting%20Procedures.pdf.

24 “.a” may also represent data that contain errors.

25 For more information on the HRS, see http://hrsonline.isr.umich.edu/.

26 The HRS website includes a full list of publications (https://hrs.isr.umich.edu/publications).

27 For the UAS-HRS survey codebooks, see https://uasdata.usc.edu/surveys; for the original HRS survey codebook, see http://hrsonline.isr.umich.edu/index.php?p=showcbk.

28 The HRS, which includes repeated measures of cognitive ability, is a notable exception.

29 The authors report that outcome in one of the two studies they conducted.

30 UAS1 has about 6,000 respondents versus between 100 and 200 respondents in previous studies.

31 UAS designers drew these questions from the National Council of Economic Education Survey, the Financial Industry Regulatory Authority's Investor Knowledge Quiz, the HRS module on financial literacy and planning, the Survey of Financial Literacy in Washington State, and the Survey of Consumers.

32 Lusardi and Mitchell (2007) observe that confidence generally exceeds financial literacy. Few studies, however, have investigated the relationship between financial literacy and confidence in financial decision making (see Asaad 2015).

33 Additional information about the Older Adult Survey may be found at https://www.federalreserve.gov/econresdata/older-adults-survey/July-2013-Appendix-A-Older-Adult-Survey-Methodology.htm.

References

Anderson, Monica, Andrew Perrin, and JingJing Jiang. 2018. “11% of Americans Don't Use the Internet. Who Are They?” Pew Research Center Fact Tank (March 5). http://www.pewresearch.org/fact-tank/2018/03/05/some-americans-dont-use-the-internet-who-are-they/.

Asaad, Colleen Tokar. 2015. “Financial Literacy and Financial Behavior: Assessing Knowledge and Confidence.” Financial Services Review 24(2): 101–117.

Banks, James, and Zoe Oldfield. 2007. “Understanding Pensions: Cognitive Function, Numerical Ability and Retirement Saving.” Fiscal Studies 28(2): 143–170.

Barrick, Murray R., and Michael K. Mount. 1991. “The Big Five Personality Dimensions and Job Performance: A Meta-Analysis.” Personnel Psychology 44(1): 1–26.

Blumberg, Stephen J., and Julian V. Luke. 2016. Wireless Substitution: Early Release of Estimates from the National Health Interview Survey, January–June 2016. Hyattsville, MD: National Center for Health Statistics.

Bruine de Bruin, Wändi, Andrew M. Parker, and Baruch Fischhoff. 2007. “Individual Differences in Adult Decision-Making Competence.” Journal of Personality and Social Psychology 92(5): 938–956.

Callegaro, Mario, and Charles DiSogra. 2008. “Computing Response Metrics for Online Panels.” Public Opinion Quarterly 72(5): 1008–1032.

Chard, Richard E., David Rogofsky, and Joanne Yoong. 2017. “Wealthy or Wise: How Knowledge Influences Retirement Savings Behavior.” Journal of Behavioral and Social Sciences 4(3): 164–180.

Chien, Sandy, Nancy Campbell, Chris Chan, Orla Hayden, Michael Hurd, Regan Main, Joshua Mallett, Craig Martin, Colleen McCullough, Erik Meijer, Michael Moldoff, Philip Pantoja, Susann Rohwedder, and Patricia St. Clair. 2015. RAND HRS Data Documentation, Version O. Santa Monica, CA: RAND Center for the Study of Aging.

Couper, Mick P., Arie Kapteyn, Matthias Schonlau, and Joachim Winter. 2007. “Noncoverage and Nonresponse in an Internet Survey.” Social Science Research 36(1): 131–148.

Craig, Benjamin M., Ron D. Hays, A. Simon Pickard, David Cella, Dennis A. Revicki, and Bryce B. Reeve. 2013. “Comparison of US Panel Vendors for Online Surveys.” Journal of Medical Internet Research 15(11): e260.

Dekker, Katie, and Whitney Murphy. 2009. “Address Based Sampling and Address Matching: Experience from REACH U.S.” In Proceedings of the American Statistical Association, Section on Survey Research Methods.

Delavande, Adeline, and Charles F. Manski. 2010. “Probabilistic Polling and Voting in the 2008 Presidential Election: Evidence from the American Life Panel.” Public Opinion Quarterly 74(3): 433–459.

Frederick, Shane. 2005. “Cognitive Reflection and Decision Making.” Journal of Economic Perspectives 19(4): 25–42.

Groves, Robert M., and Steven G. Heeringa. 2006. “Responsive Design for Household Surveys: Tools for Actively Controlling Survey Errors and Costs.” Journal of the Royal Statistical Society: Series A (Statistics in Society) 169(3): 439–457.

Hays, Ron D., Honghu Liu, and Arie Kapteyn. 2015. “Use of Internet Panels to Conduct Surveys.” Behavior Research Methods 47(3): 685–690.

Hilgert, Marianne A., Jeanne M. Hogarth, and Sondra G. Beverly. 2003. “Household Financial Management: The Connection between Knowledge and Behavior.” Federal Reserve Bulletin 89(7): 309–322.

Iannacchione, Vincent G. 2011. “The Changing Role of Address-Based Sampling in Survey Research.” Public Opinion Quarterly 75(3): 556–575.

Iannacchione, Vincent G., Jennifer M. Staab, and David T. Redden. 2003. “Evaluating the Use of Residential Mailing Addresses in a Metropolitan Household Survey.” Public Opinion Quarterly 67(2): 202–210.

John, Oliver P. 1990. “The 'Big Five' Factor Taxonomy: Dimensions of Personality in the Natural Language and in Questionnaires.” In Handbook of Personality: Theory and Research, edited by Lawrence A. Pervin (66–100). New York, NY: Guilford Press.

Judge, Timothy A., Daniel Heller, and Michael K. Mount. 2002. “Five-Factor Model of Personality and Job Satisfaction: A Meta-Analysis.” Journal of Applied Psychology 87(3): 530–541.

Judge, Timothy A., Chad A. Higgins, Carl J. Thoresen, and Murray R. Barrick. 1999. “The Big Five Personality Traits, General Mental Ability, and Career Success across the Life Span.” Personnel Psychology 52(3): 621–652.

Låg, Torstein, Lars Bauger, Martin Lindberg, and Oddgeir Friborg. 2014. “The Role of Numeracy and Intelligence in Health-Risk Estimation and Medical Data Interpretation.” Journal of Behavioral Decision Making 27(2): 95–108.

Liberali, Jordana M., Valerie F. Reyna, Sarah Furlan, Lilian M. Stein, and Seth T. Pardo. 2012. “Individual Differences in Numeracy and Cognitive Reflection, with Implications for Biases and Fallacies in Probability Judgment.” Journal of Behavioral Decision Making 25(4): 361–381.

Link, Michael W., Michael P. Battaglia, Martin R. Frankel, Larry Osborn, and Ali H. Mokdad. 2008. “A Comparison of Address-Based Sampling (ABS) Versus Random-Digit Dialing (RDD) for General Population Surveys.” Public Opinion Quarterly 72(1): 6–27.

Link, Michael W., Gail Daily, Charles D. Shuttles, Tracie L. Yancey, and H. Christine Bourquin. 2009. “Building a New Foundation: Transitioning to Address-Based Sampling after Nearly 30 Years of RDD.” Paper presented at the 64th annual conference of the American Association for Public Opinion Research, Hollywood, FL, May 14–17.

Lipkus, Isaac M., Greg Samsa, and Barbara K. Rimer. 2001. “General Performance on a Numeracy Scale among Highly Educated Samples.” Medical Decision Making 21(1): 37–44.

Lusardi, Annamaria, and Olivia S. Mitchell. 2007. “Financial Literacy and Retirement Preparedness: Evidence and Implications for Financial Education.” Business Economics 42(1): 35–44.

———. 2011a. “Financial Literacy and Planning: Implications for Retirement Wellbeing.” In Financial Literacy: Implications for Retirement Security and the Financial Marketplace, edited by Olivia S. Mitchell and Annamaria Lusardi (17–39). New York, NY: Oxford University Press.

———. 2011b. “Financial Literacy and Retirement Planning in the United States.” Journal of Pension Economics and Finance 10(4): 509–525.

———. 2017. “How Ordinary Consumers Make Complex Economic Decisions: Financial Literacy and Retirement Readiness.” Quarterly Journal of Finance 7(3).

Peters, Ellen, Judith Hibbard, Paul Slovic, and Nathan Dieckmann. 2007. “Numeracy Skill and the Communication, Comprehension, and Use of Risk-Benefit Information.” Health Affairs 26(3): 741–748.

Pollard, Michael, and Matthew D. Baird. 2017. The RAND American Life Panel: Technical Description. Santa Monica, CA: RAND Labor and Population. https://www.rand.org/content/dam/rand/pubs/research_reports/RR1600/RR1651/RAND_RR1651.pdf.

Rabinovich, Lila, and Joanne Yoong. 2015. “How Do People Want to Learn About Social Security?” CESR-Schaeffer Working Paper No. 2015-021. Los Angeles, CA: University of Southern California Center for Economic and Social Research.

Reyna, Valerie F., Wendy L. Nelson, Paul K. Han, and Nathan F. Dieckmann. 2009. “How Numeracy Influences Risk Comprehension and Medical Decision Making.” Psychological Bulletin 135(6): 943–973.

Seibert, Scott E., and Maria L. Kraimer. 2001. “The Five-Factor Model of Personality and Career Success.” Journal of Vocational Behavior 58(1): 1–21.

Shanahan, Michael J., Patrick L. Hill, Brent W. Roberts, Jacquelynne Eccles, and Howard S. Friedman. 2014. “Conscientiousness, Health, and Aging: The Life Course of Personality Model.” Developmental Psychology 50(5): 1407–1425.

Sherr, Susan, and David Dutwin. 2009. “Comparing Random Digit Dial (RDD) and United States Postal Service (USPS) Address-Based Sample Designs for a General Population Survey: The 2008 Massachusetts Health Insurance Survey.” In Proceedings of the American Statistical Association, Section on Survey Research Methods.

Shook-Sa, Bonnie E., Douglas B. Currivan, Joseph P. McMichael, and Vincent G. Iannacchione. 2013. “Extending the Coverage of Address-Based Sampling Frames Beyond the USPS Computerized Delivery Sequence File.” Public Opinion Quarterly 77(4): 994–1005.

Sinayev, Aleksandr, and Ellen Peters. 2015. “Cognitive Reflection vs. Calculation in Decision Making.” Frontiers in Psychology 6(532): 1–16.

Soldz, Stephen, and George E. Vaillant. 1999. “The Big Five Personality Traits and the Life Course: A 45-Year Longitudinal Study.” Journal of Research in Personality 33(2): 208–232.

Thoresen, Carl J., Jill C. Bradley, Paul D. Bliese, and Joseph D. Thoresen. 2004. “The Big Five Personality Traits and Individual Job Performance Growth Trajectories in Maintenance and Transitional Job Stages.” Journal of Applied Psychology 89(5): 835–853.

Tourangeau, Roger, Michael Brick, Sharon Lohr, and Jane Li. 2017. “Adaptive and Responsive Survey Designs: A Review and Assessment.” Journal of the Royal Statistical Society: Series A (Statistics in Society) 180(1): 203–223.

Tourangeau, Roger, Frederick G. Conrad, and Mick P. Couper. 2013. The Science of Web Surveys. New York, NY: Oxford University Press.

Utkus, Stephen P., and Jean A. Young. 2011. “Financial Literacy and 401(k) Loans.” In Financial Literacy: Implications for Retirement Security and the Financial Marketplace, edited by Olivia S. Mitchell and Annamaria Lusardi (59–75). New York, NY: Oxford University Press.

Wagner, James. 2013. “Adaptive Contact Strategies in Telephone and Face-to-Face Surveys.” Survey Research Methods 7(1): 45–55.

Weller, Joshua A., Nathan F. Dieckmann, Martin Tusler, C. K. Mertz, William J. Burns, and Ellen Peters. 2013. “Development and Testing of an Abbreviated Numeracy Scale: A Rasch Analysis Approach.” Journal of Behavioral Decision Making 26(2): 198–212.

Yoong, Joanne, Lila Rabinovich, and Saw Htay Wah. 2015. “What Do People Know About Social Security?” CESR-Schaeffer Working Paper No. 2015-022. Los Angeles, CA: University of Southern California Center for Economic and Social Research.