Test of Low-Literacy Video Guide for the Harvard Cancer Risk Index-G (HCRI-G)

 

Linda K. Larkey, Ph.D.
Graham Colditz, M.D., Ph.D.
Jeffrey Wilson, Ph.D.

Alicia Carson, B.A.

Keven Siegert, B.A.


Our study was designed to test the effectiveness of a video guide to assist patients at various levels of literacy in completing the Harvard Cancer Risk Index-General (HCRI-G). We tested the comparability of a video-assisted method to other standard methods for completing the HCRI-G. Seventy respondents completed three administrations of the survey, separated by washout periods. The video-assisted method showed concordance with the other two methods. Cancer risk scores computed by participants were in strong agreement with true scores for the video-assisted and personal-assisted methods (but not for self-administration). The video-assisted method is comparable to standard methods and may be an effective way for patients to receive cancer risk information in a primary care setting.

 

Background

Many models of health behavior change begin with the premise that people must believe they are at risk for a disease before they will change behavior. For example, the Health Belief Model and the Extended Parallel Process Model claim that one needs to perceive severity and personal susceptibility before one takes action against the threat of disease (Gutierrez-Ramirez et al., 1994; Witte, 1992). Studies of perceived cancer risk show strong relationships to many prevention and screening behaviors (McGarvey et al., 2003; Vernon, 1999), making risk perception a logical target for promotion of these behaviors.

Currently published methods of measuring and communicating cancer risk are specific to particular cancers, such as the Gail and Claus models for breast cancer (Euhus, 2001), and/or use genetic information for communicating very high risk (Hampel et al., 2000). These tools are useful in specific contexts, but for many people, perceived risk for cancer is not tied to a specific cancer site. Educating healthy individuals about cancer prevention behaviors requires a more general tool that cuts across cancer sites.

To communicate individualized risk for cancer in general, it is important to find a way to measure and communicate results in a simple yet meaningful way (Emmons et al., 1999). To fill this need, the Harvard Cancer Risk Index was adapted to assess overall risk for cancer. This adapted tool, the HCRI-General, combines the common risk factors and assigns weights to them according to the strength of effect predicted to contribute to risk across the most common cancer sites.

Because of its simplicity, the HCRI-G is intended for use as a risk communication and teaching tool. It may be particularly useful in contexts where a portable paper-and-pencil test is needed, such as in clinics and community centers serving low-income patients who may not regularly use (or have access to) more complex or Web-based health information.

One important setting for identifying and teaching patients at risk is the primary care office. Unfortunately, primary care physicians are burdened by many tasks in addition to diagnosing and treating presenting symptoms (Cohen et al., 1994). This is particularly true in health-care settings serving underserved and low-literate patients, where adding a paper-and-pencil assessment such as the HCRI-G could be perceived as inefficient by physicians or staff because of the time required to coach patients through completion.

Health information literature often surpasses the literacy level of patients, especially those who most need the information—patients of lower socioeconomic status with less initial knowledge about health and prevention (Davis et al., 1990). Even the simplest wording and layout may not serve people who cannot read. A video was developed to guide individuals through completion of the HCRI-G in a clinic setting serving indigent and low socioeconomic-status (SES) patients. Some of these patients had very low literacy levels.

Our study was designed to test the effectiveness of a video guide to assist patients at various levels of literacy in completing the HCRI-G. The strategy for testing this video was to compare individuals’ responses under video-guided administration with those under two other common methods of survey administration: a face-to-face interview and self-completion.

Development of Instrument — A team of interdisciplinary researchers led by Graham Colditz, M.D., Ph.D., at Harvard Medical School developed the Harvard Cancer Risk Index as a series of instruments to measure individuals’ risk for 13 different cancers relative to U.S. population averages (based on Surveillance, Epidemiology, and End Results data) (Colditz et al., 2000). Published work details the methods for developing and defining the instruments used to weight risks and estimate relative risk (Colditz et al., 2000; Kim et al., 2004). The set of instruments is available for public use at www.yourcancerrisk.harvard.edu.

To develop the single HCRI-G instrument, with the methodology of the HCRI as a foundation, risk factor weights were considered for a selection of the most prevalent cancers. The risk factors with the strongest contribution to the largest number of incident cancers were identified, resulting in 10-13 questions regarding modifiable and non-modifiable factors for men and for women (with the number of questions depending on age brackets and screening guidelines). The resulting instrument presents questions designed for binary (yes/no) responses, weighted according to contribution to risk (e.g., smoking more than 15 cigarettes per day receives a score of 8, while exercising less than three hours per week scores 2 points) (Figure 1).
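
For readers who want to see the scoring logic in computational form, the following minimal Python sketch illustrates how binary, weighted responses of this kind can be summed. Only the two weights mentioned above (smoking, 8 points; low physical activity, 2 points) come from the text; the item names, the example answers, and the omitted remaining items are illustrative placeholders, not the published instrument.

```python
# Illustrative sketch of a binary, weighted scoring scheme like the HCRI-G's.
# Only the two weights stated in the article are taken from the text; all
# other items and any risk-level cutoffs belong to the published survey and
# are not reproduced here.

ITEM_WEIGHTS = {
    "smokes_more_than_15_cigarettes_per_day": 8,  # weight stated in the text
    "exercises_less_than_3_hours_per_week": 2,    # weight stated in the text
    # ... remaining modifiable and non-modifiable items omitted
}

def total_risk_score(answers: dict) -> int:
    """Sum the weights of all items answered 'yes' (True)."""
    return sum(weight for item, weight in ITEM_WEIGHTS.items() if answers.get(item))

# Example: a respondent who smokes heavily but is physically active.
print(total_risk_score({
    "smokes_more_than_15_cigarettes_per_day": True,
    "exercises_less_than_3_hours_per_week": False,
}))  # -> 8
```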

Development of Video Guide — A 12-minute QuickTime video titled “What’s Your Cancer Risk?” (Larkey and Siegert, 2003) was developed to assist participants in completing the HCRI-G. To assist participants with limited reading abilities, each question and answer option was color-coded in the paper survey and presented in the video in visual and auditory format, including the color-coding and instructions for skip patterns. Introductory concepts about risk and specific scenes depicting risk-reducing behaviors were interspersed with images of the color-coded risk survey.


Figure 1. Physical activity scene from the “What’s Your Cancer Risk?” videotape.

Each question in the paper version was presented in a strip of color; the video references the color of each question so that a low-literate patient can locate the question and mark the answer appropriately. Skip patterns, that is, questions included or skipped depending on the answer to a prior question (e.g., if you are under 50, please skip to question 13; if you are 50 years of age or older, answer the next question), were pointed out visually and verbally in the video guide. Simple instructions for scoring and interpreting the scores were built into the survey and described in the video.

The video was designed initially by the research team. Team members first created a script that faithfully reflected the risk factor language from the HCRI-G. A biomedical communications expert then developed a professional script and storyboard around the core risk factor information, incorporating a series of visual images and action shots to illustrate the concepts and add interest without detracting from the task instructions. Pacing was an important consideration in the editing process, to give viewers the correct amount of time to complete each section in sync with the video. Each segment that required viewers to listen, decide, and mark their surveys included an appropriate pause.


Figure 2. Color-coded questions are highlighted and synchronized to the narration in the videotape.

Methods

We designed the study to test the comparability of the video-assisted method of administering the HCRI-G to two other methods of test taking, using a crossover design with a two- to four-week “wash-out” period between administrations. The baseline administration was given without assistance; that is, participants completed the HCRI-G as a self-administered paper-and-pencil test. The two subsequent administrations, the video-assisted and personal-assisted methods, were given in an order randomized by participant. The personal-assisted method used a trained interviewer who read the questions aloud and marked the responses given verbally by the participant.
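
The crossover assignment described above can be summarized in a short sketch: every participant starts with the self-administered baseline, and the order of the two assisted methods is then randomized, with washout periods between administrations. The Python sketch below is illustrative only; the function and variable names are ours, not part of the study protocol.

```python
import random

def assign_order(rng: random.Random) -> list:
    """Baseline is always self-administration; the two assisted methods
    follow in a randomly chosen order (2- to 4-week washouts in between)."""
    assisted = ["video-assisted", "personal-assisted"]
    rng.shuffle(assisted)                    # randomize order of the assisted methods
    return ["self-administered"] + assisted  # baseline always comes first

rng = random.Random(42)  # fixed seed so the illustration is reproducible
for pid in ["P001", "P002", "P003"]:
    print(pid, assign_order(rng))
```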

We hypothesized that participants’ answers given during the video-assisted administration would be consistent with answers given during the personal-assisted administration, since both methods provide guidance designed to help participants overcome literacy problems they may have with health-related materials. Concordance between these two methods would support the comparability of the video as an effective means of guiding administration of the survey. Scores for the self-administered method were hypothesized to be less correlated with the video-assisted method, because low-literate participants might give unreliable answers to items they did not fully comprehend when completing the survey on their own.

Target population and recruitment — Clients from a variety of agencies that serve low-income populations (including a GED [general equivalency diploma] tutoring program for adults, a homeless shelter, and an unemployment office providing job skills training) were prescreened for age and literacy. These recruitment sites were selected in anticipation of identifying participants who would score low on the Rapid Estimate of Adult Literacy in Medicine (REALM) (Emmons et al., 1999; Davis et al., 1993). The REALM is administered by asking participants to read aloud 66 medically-related terms listed in increasing difficulty of recognition and pronunciation. The words are commonly used in health education materials and serve as a valid proxy for comprehending written health information. Potential participants, 18 years of age or older and scoring 60 or lower on the REALM, were provided informed consent. Contact information for follow-up testing was requested. A $20 incentive was promised on the condition that all three tests be completed. Ninety-one participants consented to the study.
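
As an illustration of the screening rule described above (adults 18 years or older scoring 60 or lower out of the 66 REALM words), the following minimal Python sketch expresses the eligibility check; the function name and the example values are hypothetical.

```python
REALM_MAX = 66      # total number of medical terms on the REALM
REALM_CUTOFF = 60   # inclusion threshold used in the study

def eligible(age: int, realm_score: int) -> bool:
    """Return True if the person meets the study's age and literacy criteria."""
    if not 0 <= realm_score <= REALM_MAX:
        raise ValueError("REALM score must be between 0 and 66")
    return age >= 18 and realm_score <= REALM_CUTOFF

print(eligible(age=34, realm_score=41))  # True: adult, reads at or below cutoff
print(eligible(age=29, realm_score=64))  # False: reads above the cutoff
```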

Testing — For the first (baseline) administration, participants were given the survey and asked to complete it to the best of their ability (self-administration). Upon survey completion, participants were randomly assigned to receive either the personal-assisted or the video-assisted method first and were re-contacted after a two- to four-week washout period to complete the second test. After an additional two to four weeks, the alternate test was administered. When all tests were completed, scores were discussed with participants; the discussion included counseling for cancer prevention and screening.

Results

Of the 91 participants initially recruited, 70 attempted to complete all three administrations of the survey. The mean REALM score was 36.69 (standard deviation = 17.26), indicating a grade school literacy level (i.e., below high school level) (Shea et al., 2004). A wide range of literacy levels was represented, from 0, indicating no reading capability, to 56 correct responses out of a possible 66; 70 percent of scores fell between 34 and 51, equivalent to a sixth- to ninth-grade level. To assess the comparability of the video-assisted method to the other two methods, self-administration and personal-assisted, Cohen’s Kappa test and loglinear models for measures of agreement were used (Bishop et al., 1975).
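
For readers unfamiliar with the agreement statistic, the minimal Python sketch below shows how Cohen’s Kappa can be computed for a yes/no item cross-tabulated between two administrations. The 2x2 counts are invented for illustration only; they are not the study’s data.

```python
def cohens_kappa(table):
    """table[i][j] = count of respondents answering i on method A and j on method B,
    where index 0 = 'yes' and index 1 = 'no'."""
    n = sum(sum(row) for row in table)
    p_observed = sum(table[i][i] for i in range(2)) / n               # agreement actually seen
    row_totals = [sum(row) for row in table]
    col_totals = [sum(table[i][j] for i in range(2)) for j in range(2)]
    p_expected = sum(row_totals[k] * col_totals[k] for k in range(2)) / n ** 2  # chance agreement
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical agreement table: rows = video-assisted (yes, no),
# columns = personal-assisted (yes, no).
example = [[20, 5],
           [4, 21]]
print(round(cohens_kappa(example), 2))  # -> 0.64
```

With these invented counts the sketch returns a Kappa of 0.64, which falls in the range the text describes as good test/retest reliability.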

There were missing data across all three administrations of the survey, limiting complete analyses to the responses of between 45 and 55 participants on the nine questions administered to all ages and both genders. Missing data were examined for patterns among particular questions and among participants of particular ages or literacy levels. No single survey question was regularly skipped; the missing data were distributed among several questions. A pattern of missing data by participant was found for a small set of very low-literate participants who were unable to complete the first (self-administered) survey.

Concordance among Survey Administrations — Kappa statistics indicate a high level of agreement of scores across all three administrations; concordance between the video-assisted and the other methods is shown in Table 1. Kappa scores above .50 are considered evidence of good test/retest reliability; all but one of the comparisons exceed .50, and most are above .65. One notable exception is the physical activity question: in the video-assisted/personal-assisted comparison, Kappa = .41. Inspection of the frequency of “yes” and “no” responses shows fewer “no” responses on the personal-assisted method than on the video-assisted method.

There was not enough variability in the answers to the alcohol question to make computation of the Kappa statistic meaningful or McNemar’s tests useful. No participants claimed they drank “1 or more alcoholic drinks per day” on both the video-assisted and self-administration surveys; only one answered “yes” on both tests in the personal-assisted and video-assisted comparison.

Table 1:
Test of Comparability of Video-Assisted to Self-Administered and Personal-Assisted Methods of HCRI-G Administration

Concordance of Final Scores — Participants’ scores were totaled according to instructions on the survey (self-administration) or as described in the video (video-assisted). High correlations were found between the participant-computed score and the true calculated score for the assisted methods (.96 for video-assisted, .81 for personal-assisted), but the correlation between the respondent-calculated score and the true summed score was low for self-administration (.26). This indicates that participants calculated their scores most accurately when guided by the video or assisted by the interviewer, and much less accurately during self-administration.

Correlations of true scores between the video-assisted and the other modes of administration were as follows: self-administration with video-assisted, .89; personal-assisted with video-assisted, .76. These results indicate that the summed risk item values are highly reliable across methods.
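
As an illustration of the score-level comparison reported above, the short Python sketch below computes a Pearson correlation between participant-computed totals and staff-calculated (“true”) totals; the score vectors are invented for illustration and do not reproduce the study data.

```python
import numpy as np

# Hypothetical totals for eight respondents: their own summed scores versus
# the staff-calculated ("true") scores for the same surveys.
participant_totals = np.array([8, 12, 5, 15, 9, 11, 7, 14])
true_totals        = np.array([8, 13, 5, 15, 10, 11, 6, 14])

r = np.corrcoef(participant_totals, true_totals)[0, 1]  # Pearson correlation
print(f"Pearson r = {r:.2f}")
```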

Discussion

The video-assisted method of administering the HCRI-G shows moderate to high concordance with the two other common methods of completion, indicating that it yields comparable results. Self-administration and personal-assisted methods both showed consistently high Kappa values in comparisons with the video-assisted method, with the exception of one question regarding level of physical activity in the personal-assisted/video-assisted comparison.

Although it was expected that the self-administration responses of a literacy-compromised group might show less correspondence to the two assisted methods (video- and personal-assisted), the responses appeared to be comparable. This may have been due to the ease of comprehension of the survey under self-administration. The language used in the survey was greatly simplified, avoiding words that caused respondents to stumble during administration of the REALM. The REALM includes medical terms such as antibiotic, inflammatory, and jaundice, while the health concepts in the HCRI-G are expressed in lay language, such as “active 3 or more hours each week” or “Do you check your whole body for skin cancer?” Thus, participants in self-administration either completed all of the survey or none of it, indicating that they either could read well enough to answer the questions with fair comprehension or, if they could not read at all, did not fill in answers. This possibility is consistent with the dozen incomplete baseline surveys that corresponded to REALM scores of 14 or lower (only two other participants with such low scores attempted answers). Surveys with no answers at baseline (self-administration) dropped out of the analysis. The remaining comparisons were therefore likely among those who could read the survey well enough to comprehend it, yielding comparable scores between baseline (self-administration) and video-assisted administration.

The high correlations of true (staff-calculated) final scores between the video-assisted method and the other two administrations provide additional support for the comparability of outcomes across methods. If such an instrument were used not only as an educational tool but also to calculate risk levels across groups of study participants or among patients within a health-care system, the video could potentially be used reliably to guide survey administration. In busy primary care practice settings, or in research studies requiring uniform methods of administration, this could be a practical tool to support collection of cancer risk information.

Finally, it should be noted that the moderate and low health-literacy levels of participants, as indicated by the REALM scores, did not appear to prevent most participants from consistently completing the survey either alone or with assisted methods; only a handful of participants were completely unable to read and complete the survey. Where literacy levels range down to no reading skills whatsoever, using a video guide to assist completion may be an effective and more efficient alternative to devoting interviewer time. Survey scoring also seems to require some assistance across literacy levels, and the video method of assistance for calculating summed scores works well.

Conclusions

The HCRI-G is a brief tool that can be used to teach patients about their individual risk level and modifiable risk factors for cancer, and it is likely that adequate completion of the survey may be obtained by simple self-administration, even among fairly low-literate populations. Those who cannot read at all could benefit from assistance, either from a face-to-face interviewer or from a video.

The video presentation created to guide respondents through completing the HCRI-G may be an effective way to standardize delivery and minimize the use of staff time in a primary care practice setting. The video-assisted method of administering the HCRI-G is comparable to either self-administration or personal-assisted methods, with the advantage of assuring that those who cannot read at all are able to complete the survey. Final scores on the HCRI-G (as indicators of level of risk for cancer) are best computed with staff assistance, but satisfactory results are achieved with the video-assisted method of guiding participants through scoring their own surveys. Improved capability for completion by those who cannot read, combined with more accurate final score calculations, makes the video-assisted HCRI-G a valuable tool.

References

Bishop, Y. M. M., S. E. Fienberg, and P. W. Holland. 1975. Discrete Multivariate Analysis: Theory and Practice. Cambridge: MIT Press.

Colditz, G. A., K. A. Atwood, K. Emmons, R. R. Monson, W. C. Willett, D. Trichopoulos, and D. J. Hunter. 2000. "Harvard report on cancer prevention volume 4: Harvard Cancer Risk Index." Cancer Causes and Control 11:477-488.

Cohen, S. J., H. W. Halvorson, and Gosselink. 1994. "Changing physician behavior to improve disease prevention." Preventive Medicine 23:284-291.

Davis, T., M. Crouch, G. Wills, S. Miller, and D. Abdehou. 1990. "The gap between patient reading comprehension and the readability of patient education materials." Journal of Family Practice 31:533-538.

Davis, T. C., S. Long, R. Jackson, et al. 1993. "Rapid Estimate of Adult Literacy in Medicine: A shortened screening instrument." Family Medicine 25:391-395.

Emmons, K. M., S. Koch-Weser, K. Atwood, L. Conboy, R. Rudd, and G. Colditz. 1999. "A qualitative evaluation of the Harvard Cancer Risk Index." Journal of Health Communication 4(3): 181-193.

Euhus, D. M. 2001. "Understanding mathematical models for breast cancer risk assessment and counseling." Breast Journal 7(4): 224-32.

Gutierrez-Ramirez, A., R. B. Valdez, and O. Carter-Pokras. 1994. "Cancer." In Latino Health in the United States: A Growing Challenge, edited by C. W. Molina and M. Molina, 211-246. Washington, D.C.: American Public Health Association.

Hampel, H., and P. Peltomaki. 2000. "Hereditary colorectal cancer: risk assessment and management." Clinical Genetics 58(2): 89-97.

Kim, D. J., B. Rockhill, and G. A. Colditz. 2004. "Validation of the Harvard Cancer Risk Index: a prediction tool for individual cancer risk." Journal of Clinical Epidemiology 57:332-40.

Larkey, L. K., and K. Siegert. 2003. “What’s Your Cancer Risk?” Biomedical Communications, University of Arizona.

McGarvey, E. L., G. J. Clavet, J. B. Johnson 2nd, A. Butler, K. O. Cook, and B. Pennino. 2003. "Cancer screening practices and attitudes: comparison of low-income women in three ethnic groups." Ethnicity and Health 8:71-82.

Shea, J. A., B. B. Beers, V. J. McDonald, A. Quistberg, K. Ravenell, and D. A. Asch. 2004. "Assessing health literacy in African American and Caucasian adults: Disparities in Rapid Estimate of Adult Literacy in Medicine (REALM) scores." Family Medicine 36(8):575-81.

Vernon, S. W. 1999. "Risk perception and risk communication for cancer screening behaviors: a review." Journal of the National Cancer Institute Monographs 25:101-119.

Witte, K. 1992. "Putting the fear back into fear appeals: The Extended Parallel Process Model." Communication Monographs 59:329-349.

Authors

Linda K. Larkey, Ph.D.
Dr. Linda Larkey directs the Cancer Prevention and Integrative Medicine office of the Arizona Cancer Center, Scottsdale. Her doctoral degree is in Communication, from ASU, with primary research interests in communication of health messages to multicultural populations. She is a Research Assistant Professor of Family and Community Medicine and Public Health, and a member of the Arizona Cancer Center at the University of Arizona. Most recently, Dr. Larkey has begun applying her long-time personal interest in integrative medicine to research in cancer supportive care.

Graham Colditz, M.D., Ph.D.
Dr. Graham Colditz is Professor of Medicine at Harvard Medical School; Professor of Epidemiology at the Harvard School of Public Health; Senior Epidemiologist in the Department of Medicine, Channing Laboratory, Brigham and Women’s Hospital; Head of the Chronic Disease Epidemiology Group at Channing Laboratory; and Director, Harvard Center for Cancer Prevention. His research focuses on breast cancer incidence, breast cancer prevention, and screening.

Jeffrey Wilson, Ph.D.
Jeffrey Wilson, Director of the School of Health Management and Policy in the W.P. Carey School of Business, Arizona State University, has been awarded numerous grants from the National Science Foundation, U.S. Department of Agriculture, National Institutes of Health, Arizona Department of Health Services, and the Arizona Disease Research Commission. In 1977 he was assigned to serve as a biostatistician on the presidential committee on non-conventional practices in medicine.

Alicia Carson, B.A.
Alicia Carson, Program Coordinator for the Colorectal Risk Talks Project in the Cancer Prevention and Integrative Medicine Office, has worked with cancer prevention education with American Indian tribes in Arizona, Idaho, Oregon, and Washington.

Keven Siegert, B.A.
Keven Siegert, Director of the Media Center at the University of Arizona Health Sciences Phoenix Campus, is the current president of the Health and Science Communications Association, a member of the Arizona Health Information Network and Arizona Educational Development Advisory Council.

Copyright 2005, The Journal of Biocommunication, All Rights Reserved
