May 2003 • Volume 97, Number 5

Guest Editorial

You read the Journal of Visual Impairment & Blindness (JVIB) to stay ahead of the curve on research and practice, whatever your corner of vision-related services. You probably focus on the findings from research and on the ideas for practice. Ideally, you dig below those surface rewards and consider the methods the authors used to reach the conclusions they offer you. You weigh the evidence based partly on how it was gathered and analyzed, and partly on how it fits with other evidence you have good reason to trust.

Now all readers—teachers, counselors, administrators, as well as researchers—have another incentive not to skip the methods discussion in JVIB articles (of course, you never do that!). The reason stems from a trend in federal policymaking. Some of our field’s astute observers of the U.S. Department of Education—whose mandate includes vocational rehabilitation, special education, and independent living services for older persons—have become alarmed. They see one research model (“clinical trials,” a type of experimental design testing formal hypotheses) being promoted as the only way to do “scientific” research to be used in “evidence-based policy.”

Why be alarmed? Because our field relies heavily on valuable evidence from exploratory and qualitative methods—for example, case studies or surveys. Arguing the danger of over-reliance on clinical trials is too complex for this brief commentary (but see Darling-Hammond and Youngs, 2002). There is room for optimism as well, if one realizes that the new emphasis could lead to support that strengthens our research efforts. Also, a recent report by the General Accounting Office—the respected congressional watchdog agency—praises the role of qualitative (ethnographic) research in federal agencies.

My aim is simply to make JVIB readers aware that the issue of methods is not esoteric—it will affect federal funding not only for research, but for programs based on research. If nothing else, the topic gives me a timely framework to apply to this month’s articles. This editor was not involved in selecting those articles; indeed, I was surprised to find that experimental design was the method that three of the four articles emulated (Gompel et al.; Kim; and Blanco & Travieso). They are “quasi-experimental” because realistic limitations common in most research, not only in our field, constrain random assignment to control and treatment groups. By contrast, the article by Corn et al. is a rich description of a statewide low vision services delivery project. Project PAVE exemplifies a successful service model based on exploratory research that would be undervalued by an overly narrow definition of “evidence-based policy.”

Gompel et al. examined competing explanations for why children with low vision have difficulty reading. Their conclusion has important implications for teaching. They specified a creative research question and a plan that convincingly tested closely related causal explanations. Notably, the very feature that makes their methodology elegant—controlling factors other than those under study—is also the feature that distances such research from real-life conditions of practice, where factors are necessarily uncontrolled. That’s one reason an applied field needs both experimental and qualitative research.

It is refreshing that Kim’s study of assertiveness training was published even though most of the expected effects of the training did not appear. A common critique of scholarly publishing is that “negative findings” are rarely shared, even though it is important to know when plausible hypotheses fail to show predicted effects. “Non-findings” should prompt us to rethink our assumptions or, at least, how we test them.

Finally, Blanco and Travieso conducted a quasi-experimental study with a twist. They repeated a well-known experiment originally done with sighted persons, modifying it for groups of people who are blind, people who have low vision, and sighted people who were blindfolded. Their strategy powerfully undermines earlier conclusions about the role of vision in spatial perception.

In closing, the judgment about what research method is best should depend on the nature of the question and the extent of prior knowledge on the topic. In our effort to expand the frontiers of understanding, as with geographic frontiers, we need to explore the territory before we test hypotheses about placing major highways!

Corinne Kirchner, Ph.D.

Consulting Editor, Research, JVIB

Director, Policy Research and Program Evaluation, AFB



