See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/237131811

Research Methods for Social Work
Article · January 2009
Citations: 1,086 · Reads: 28,891

2 authors, including: Allen Rubin, University of Houston (71 publications, 2,454 citations)

All content following this page was uploaded by Allen Rubin on 26 August 2015. The user has requested enhancement of the downloaded file.
Licensed to: iChapters User
Research Methods for Social Work Seventh Edition
Allen Rubin University of Texas at Austin
Earl R. Babbie Chapman University
Australia • Brazil • Canada • Mexico • Singapore • Spain • United Kingdom • United States
Research Methods for Social Work, Seventh Edition Allen Rubin and Earl Babbie
Publisher: Linda Schreiber Acquisitions Editor: Seth Dobrin Assistant Editor: Arwen Petty Editorial Assistant: Rachel McDonald Media Editor: Dennis Fitzgerald Marketing Manager: Trent Whatcott
© 2011, 2008 Brooks/Cole, Cengage Learning ALL RIGHTS RESERVED. No part of this work covered by the copyright herein may be reproduced, transmitted, stored, or used in any form or by any means graphic, electronic, or mechanical, including but not limited to photocopying, recording, scanning, digitizing, taping, Web distribution, information networks, or information storage and retrieval systems, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without the prior written permission of the publisher.
Marketing Assistant: Darlene Macanan Marketing Communications Manager: Tami Strang Content Project Manager: Michelle Cole Creative Director: Rob Hugel Art Director: Caryl Gorska
For product information and technology assistance, contact us at Cengage Learning Customer & Sales Support, 1-800-354-9706. For permission to use material from this text or product, submit all requests online at cengage.com/permissions. Further permissions questions can be emailed to permissionrequest@cengage.com
Print Buyer: Paula Vang Rights Acquisitions Account Manager, Text: Bob Kauser Rights Acquisitions Account Manager, Image: Leitha Etheridge-Sims Production Service: Pre-Press PMG
Library of Congress Control Number: ����������
ISBN-13: ���-�-���-�����-�
ISBN-10: �-���-�����-�
Photo Researcher: Joshua Brown Copy Editor: Daniel Nighting Cover Designer: Lee Friedman Cover Image: ©Corbis Photography/Veer, ©Corbis Photography/Veer, ©ImageSource Photography/Veer, ©Glow Images/Getty, ©Corbis Photography/Veer, ©RubberBall Photography/Veer, ©Image Source Photography/Veer, ©RubberBall Photography/Veer, ©Corbis Photography, ©Pando Hall/Getty, ©Image 100 Photography/Veer, ©RubberBall Photography/Veer, ©Collage Photography/Veer, ©RubberBall Photography/Veer, ©RubberBall Photography/Veer, ©Corbis Photography/Veer, ©ImageSource Photography/Veer, ©ImageSource Photography/Veer, ©Corbis Photography/Veer, ©Pando Hall/Getty, ©ImageSource Photography/Veer, ©Allison Michael Orenstein/Getty, ©Pando Hall/Getty, and ©Somos Photography/Veer Compositor: Pre-Press PMG
Printed in the United States of America 1 2 3 4 5 6 7 13 12 11 10 09
Brooks/Cole 20 Davis Drive Belmont, CA 94002-3098 USA
Cengage Learning is a leading provider of customized learning solutions with office locations around the globe, including Singapore, the United Kingdom, Australia, Mexico, Brazil, and Japan. Locate your local office at www.cengage.com/global. Cengage Learning products are represented in Canada by Nelson Education, Ltd. To learn more about Brooks/Cole, visit www.cengage.com/brookscole. Purchase any of our products at your local college store or at our preferred online store www.ichapters.com.
Dedication To our wives CHRISTINA RUBIN SUZANNE BABBIE
Contents in Brief

PART 1 An Introduction to Scientific Inquiry in Social Work 1
Chapter 1 Why Study Research? 2
Chapter 2 Evidence-Based Practice 25
Chapter 3 Philosophy and Theory in Social Work Research 45

PART 2 The Ethical, Political, and Cultural Context of Social Work Research 73
Chapter 4 The Ethics and Politics of Social Work Research 74
Chapter 5 Culturally Competent Research 106

PART 3 Problem Formulation and Measurement 131
Chapter 6 Problem Formulation 132
Chapter 7 Conceptualization and Operationalization 164
Chapter 8 Measurement 187
Chapter 9 Constructing Measurement Instruments 214

PART 4 Designs for Evaluating Programs and Practice 243
Chapter 10 Causal Inference and Experimental Designs 244
Chapter 11 Quasi-Experimental Designs 271
Chapter 12 Single-Case Evaluation Designs 291
Chapter 13 Program Evaluation 318

PART 5 Data-Collection Methods with Large Sources of Data 349
Chapter 14 Sampling 350
Chapter 15 Survey Research 381
Chapter 16 Analyzing Existing Data: Quantitative and Qualitative Methods 407

PART 6 Qualitative Research Methods 435
Chapter 17 Qualitative Research: General Principles 436
Chapter 18 Qualitative Research: Specific Methods 456
Chapter 19 Qualitative Data Analysis 477

PART 7 Analysis of Quantitative Data 499
Chapter 20 Quantitative Data Analysis 500
Chapter 21 Inferential Data Analysis: Part 1 527
Chapter 22 Inferential Data Analysis: Part 2 549

PART 8 Writing Research Proposals and Reports 573
Chapter 23 Writing Research Proposals and Reports 574

Appendix A Using the Library 599
Appendix B Statistics for Estimating Sampling Error 607
Glossary 617
Bibliography 631
Index 643
Contents in Detail

Preface xv

PART 1 An Introduction to Scientific Inquiry in Social Work 1

Chapter 1 WHY STUDY RESEARCH? 2
Introduction 3
Agreement Reality 3
Experiential Reality 3
Science 4
The Utility of Scientific Inquiry in Social Work 4
Will You Ever Do Research? 5
Reviews of Social Work Effectiveness 5
Early Reviews 5
Studies of Specific Interventions 6
The Need to Critique Research Quality 7
Publication Does Not Guarantee Quality 7
Separating the Wheat from the Chaff 7
Answering Critics of Social Work 8
A Mental Health Example 9
Utility of Research in Applied Social Work Settings 10
Research Methods You May Someday Use in Your Practice 10
NASW Code of Ethics 11
Compassion and Professional Ethics 11
The Scientific Method 12
Observation 12
Objectivity 12
Replication 13
Other Ways of Knowing 13
Tradition 13
Authority 14
Common Sense 14
Popular Media 14
Recognizing Flaws in Unscientific Sources of Social Work Practice Knowledge 16
Inaccurate Observation 16
Overgeneralization 17
Selective Observation 17
Ex Post Facto Hypothesizing 18
Ego Involvement in Understanding 19
Other Forms of Illogical Reasoning 19
The Premature Closure of Inquiry 20
Pseudoscience 20
Keep an Open Mind 21
Main Points 21
Review Questions and Exercises 23
Internet Exercises 23
Additional Readings 24

Chapter 2 EVIDENCE-BASED PRACTICE 25
Introduction 26
Historical Background 26
The Nature of Evidence-Based Practice 27
Steps in Evidence-Based Practice 28
Step 1. Formulate a Question to Answer Practice Needs 28
Step 2. Search for the Evidence 30
Step 3. Critically Appraise the Relevant Studies You Find 34
Step 4. Determine Which Evidence-Based Intervention Is Most Appropriate for Your Particular Client(s) 36
Step 5. Apply the Evidence-Based Intervention 37
Step 6. Evaluation and Feedback 38
Distinguishing the EBP Process from Evidence-Based Practices 38
Controversies and Misconceptions about EBP 40
Main Points 42
Review Questions and Exercises 43
Internet Exercises 43
Additional Readings 44

Chapter 3 PHILOSOPHY AND THEORY IN SOCIAL WORK RESEARCH 45
Introduction 46
Ideology 46
Paradigms 47
Postmodernism 47
Contemporary Positivism 49
Interpretivism 50
Critical Social Science 51
Paradigmatic Flexibility in Research 52
Theory 53
Theory and Values 53
Utility of Theory in Social Work Practice and Research 54
Social Work Practice Models 55
Atheoretical Research Studies 56
Prediction and Explanation 56
The Components of Theory 57
The Relationship between Attributes and Variables 57
Two Logical Systems 60
Comparing Deduction and Induction 60
Probabilistic Knowledge 63
Two Causal Models of Explanation 64
Use of Nomothetic and Idiographic Research in Social Work Practice 64
Quantitative and Qualitative Methods of Inquiry 66
Mixed Methods 67
Objectivity and Subjectivity in Scientific Inquiry 68
Main Points 70
Review Questions and Exercises 71
Internet Exercises 72
Additional Readings 72

PART 2 The Ethical, Political, and Cultural Context of Social Work Research 73

Chapter 4 THE ETHICS AND POLITICS OF SOCIAL WORK RESEARCH 74
Introduction 75
Institutional Review Boards 75
Voluntary Participation and Informed Consent 76
No Harm to the Participants 78
Anonymity and Confidentiality 82
Deceiving Participants 83
Analysis and Reporting 84
Weighing Benefits and Costs 85
Right to Receive Services versus Responsibility to Evaluate Service Effectiveness 86
NASW Code of Ethics 88
IRB Procedures and Forms 89
Training Requirement 89
Expedited Reviews 89
Overzealous Reviewers 92
Four Ethical Controversies 92
Observing Human Obedience 93
Trouble in the Tearoom 94
"Welfare Study Withholds Benefits from 800 Texans" 94
Social Worker Submits Bogus Article to Test Journal Bias 96
Bias and Insensitivity Regarding Gender and Culture 98
The Politics of Social Work Research 99
Objectivity and Ideology 100
Social Research and Race 101
Main Points 103
Review Questions and Exercises 104
Internet Exercises 104
Additional Readings 105
Chapter 5 CULTURALLY COMPETENT RESEARCH 106
Introduction 107
Research Participants 107
Measurement 107
Data Analysis and Interpretation 107
Acculturation 108
Impact of Cultural Insensitivity on Research Climate 108
Developing Cultural Competence 109
Recruiting and Retaining the Participation of Minority and Oppressed Populations in Research Studies 111
Obtain Endorsement from Community Leaders 111
Use Culturally Sensitive Approaches Regarding Confidentiality 112
Employ Local Community Members as Research Staff 112
Provide Adequate Compensation 112
Alleviate Transportation and Child-Care Barriers 113
Choose a Sensitive and Accessible Setting 113
Use and Train Culturally Competent Interviewers 113
Use Bilingual Staff 114
Understand Cultural Factors Influencing Participation 114
Use Anonymous Enrollment with Stigmatized Populations 114
Utilize Special Sampling Techniques 115
Learn Where to Look 115
Connect with and Nurture Referral Sources 116
Use Frequent and Individualized Contacts and Personal Touches 116
Use Anchor Points 117
Use Tracking Methods 117
Culturally Competent Measurement 118
Culturally Competent Interviewing 118
Language Problems 120
Cultural Bias 121
Measurement Equivalence 122
Assessing Measurement Equivalence 124
Problematic Issues in Making Research More Culturally Competent 127
Main Points 128
Review Questions and Exercises 129
Internet Exercises 129
Additional Readings 129

PART 3 Problem Formulation and Measurement 131

Chapter 6 PROBLEM FORMULATION 132
Introduction 133
Purposes of Social Work Research 133
Exploration 133
Description 134
Explanation 135
Evaluation 135
Constructing Measurement Instruments 135
Multiple Purposes 136
Selecting Topics and Research Questions 136
Narrowing Research Topics into Research Questions 138
Attributes of Good Research Questions 139
Feasibility 140
Involving Others in Problem Formulation 142
Literature Review 143
Why and When to Review the Literature 143
How to Review the Literature 144
Searching the Web 144
Be Thorough 145
The Time Dimension 147
Cross-Sectional Studies 148
Longitudinal Studies 149
Units of Analysis 151
Individuals 152
Groups 153
Social Artifacts 154
Units of Analysis in Review 154
The Ecological Fallacy 155
Reductionism 157
Overview of the Research Process 158
Diagramming the Research Process 159
The Research Proposal 162
Main Points 162
Review Questions and Exercises 162
Internet Exercises 163
Additional Readings 163

Chapter 7 CONCEPTUALIZATION AND OPERATIONALIZATION 164
Introduction 165
Conceptual Explication 165
Developing a Proper Hypothesis 166
Differences between Hypotheses and Research Questions 166
Types of Relationships between Variables 166
Extraneous Variables 168
Mediating Variables 169
Operationally Defining Anything That Exists 170
Operational Definitions 170
Conceptions and Reality 172
Conceptualization 172
Indicators and Dimensions 174
Creating Conceptual Order 174
The Influence of Operational Definitions 175
Gender and Cultural Bias in Operational Definitions 176
Operationalization Choices 176
Range of Variation 176
Variations between the Extremes 177
A Note on Dimensions 178
Examples of Operationalization in Social Work 178
Existing Scales 179
Operationalization Goes On and On 183
A Qualitative Perspective on Operational Definitions 183
Main Points 185
Review Questions and Exercises 186
Internet Exercises 186
Additional Readings 186

Chapter 8 MEASUREMENT 187
Introduction 188
Common Sources of Measurement Error 188
Systematic Error 188
Random Error 191
Errors in Alternate Forms of Measurement 191
Avoiding Measurement Error 193
Reliability 194
Types of Reliability 196
Interobserver and Interrater Reliability 196
Test–Retest Reliability 196
Internal Consistency Reliability 197
Validity 198
Face Validity 198
Content Validity 200
Criterion-Related Validity 200
Construct Validity 201
Factorial Validity 202
An Illustration of Reliable and Valid Measurement in Social Work: The Clinical Measurement Package 203
Relationship between Reliability and Validity 208
Reliability and Validity in Qualitative Research 209
Who Decides What's Valid? 209
Qualitative Approaches to Reliability and Validity 209
Main Points 212
Review Questions and Exercises 213
Internet Exercises 213
Additional Readings 213

Chapter 9 CONSTRUCTING MEASUREMENT INSTRUMENTS 214
Introduction 215
Guidelines for Asking Questions 215
Questions and Statements 215
Open-Ended and Closed-Ended Questions 216
Make Items Clear 216
Avoid Double-Barreled Questions 216
Respondents Must Be Competent to Answer 218
Respondents Must Be Willing to Answer 218
Questions Should Be Relevant 218
Short Items Are Best 219
Avoid Words Like No or Not 219
Avoid Biased Items and Terms 219
Questions Should Be Culturally Sensitive 220
Questionnaire Construction 221
General Questionnaire Format 221
Formats for Respondents 222
Contingency Questions 222
Matrix Questions 223
Ordering Questions in a Questionnaire 224
Questionnaire Instructions 225
Pretesting the Questionnaire 226
A Composite Illustration 226
Constructing Composite Measures 229
Levels of Measurement 229
Item Selection 230
Handling Missing Data 230
Some Prominent Scaling Procedures 231
Likert Scaling 231
Semantic Differential 232
Constructing Qualitative Measures 232
Main Points 239
Review Questions and Exercises 240
Internet Exercises 240
Additional Readings 240

PART 4 Designs for Evaluating Programs and Practice 243

Chapter 10 CAUSAL INFERENCE AND EXPERIMENTAL DESIGNS 244
Introduction 245
Criteria for Inferring Causality 245
Internal Validity 247
Pre-experimental Pilot Studies 250
One-Shot Case Study 251
One-Group Pretest–Posttest Design 251
Posttest-Only Design with Nonequivalent Groups (Static-Group Comparison Design) 252
Experimental Designs 253
Randomization 258
Matching 260
Providing Services to Control Groups 260
Additional Threats to the Validity of Experimental Findings 261
Measurement Bias 261
Research Reactivity 262
Diffusion or Imitation of Treatments 263
Compensatory Equalization, Compensatory Rivalry, or Resentful Demoralization 265
Attrition (Experimental Mortality) 265
External Validity 267
Main Points 268
Review Questions and Exercises 269
Internet Exercises 270
Additional Readings 270

Chapter 11 QUASI-EXPERIMENTAL DESIGNS 271
Introduction 272
Nonequivalent Comparison Groups Design 272
Ways to Strengthen the Internal Validity of the Nonequivalent Comparison Groups Design 273
Multiple Pretests 273
Switching Replication 274
Simple Time-Series Designs 275
Multiple Time-Series Designs 278
Cross-Sectional Studies 281
Case-Control Studies 282
Practical Pitfalls in Carrying Out Experiments and Quasi-Experiments in Social Work Agencies 284
Fidelity of the Intervention 284
Contamination of the Control Condition 285
Resistance to the Case Assignment Protocol 285
Client Recruitment and Retention 285
Mechanisms for Avoiding or Alleviating Practical Pitfalls 286
Qualitative Techniques for Avoiding or Alleviating Practical Pitfalls 287
Main Points 289
Review Questions and Exercises 289
Internet Exercises 289
Additional Readings 290

Chapter 12 SINGLE-CASE EVALUATION DESIGNS 291
Introduction 292
Overview of the Logic of Single-Case Designs 292
Single-Case Designs in Social Work 294
Use of Single-Case Designs as Part of Evidence-Based Practice 295
Measurement Issues 296
Operationally Defining Target Problems and Goals 297
What to Measure 298
Triangulation 298
Data Gathering 298
Who Should Measure? 299
Sources of Data 299
Reliability and Validity 299
Direct Behavioral Observation 300
Unobtrusive versus Obtrusive Observation 300
Data Quantification Procedures 301
The Baseline Phase 302
Alternative Single-Case Designs 304
AB: The Basic Single-Case Design 304
ABAB: Withdrawal/Reversal Design 305
Multiple-Baseline Designs 307
Multiple-Component Designs 309
Data Analysis 311
Interpreting Ambiguous Results 311
Aggregating the Results of Single-Case Research Studies 313
B Designs 313
The Role of Qualitative Research Methods in Single-Case Evaluation 315
Main Points 316
Review Questions and Exercises 316
Internet Exercises 316
Additional Readings 317

Chapter 13 PROGRAM EVALUATION 318
Introduction 319
Purposes of Program Evaluation 319
Historical Overview 319
The Impact of Managed Care 320
The Politics of Program Evaluation 323
In-House versus External Evaluators 323
Utilization of Program Evaluation Findings 325
Logistical and Administrative Problems 326
Planning an Evaluation and Fostering Its Utilization 327
Types of Program Evaluation 329
Evaluating Outcome and Efficiency 329
Cost-Effectiveness and Cost–Benefit Analyses 330
Problems and Issues in Evaluating Goal Attainment 331
Monitoring Program Implementation 335
Process Evaluation 336
Evaluation for Program Planning: Needs Assessment 337
Focus Groups 340
Logic Models 341
An Illustration of a Qualitative Approach to Evaluation Research 342
Main Points 346
Review Questions and Exercises 347
Internet Exercises 347
Additional Readings 347

PART 5 Data-Collection Methods with Large Sources of Data 349

Chapter 14 SAMPLING 350
Introduction 351
President Alf Landon 352
President Thomas E. Dewey 353
President John Kerry 354
Nonprobability Sampling 355
Reliance on Available Subjects 355
Purposive or Judgmental Sampling 357
Quota Sampling 357
Snowball Sampling 358
Selecting Informants in Qualitative Research 358
The Logic of Probability Sampling 359
Conscious and Unconscious Sampling Bias 359
Representativeness and Probability of Selection 360
Random Selection 361
Can Some Randomly Selected Samples Be Biased? 362
Sampling Frames and Populations 362
Nonresponse Bias 363
Review of Populations and Sampling Frames 364
Sample Size and Sampling Error 365
Estimating the Margin of Sampling Error 365
Other Considerations in Determining Sample Size 367
Types of Probability Sampling Designs 367
Simple Random Sampling 368
Systematic Sampling 368
Stratified Sampling 369
Implicit Stratification in Systematic Sampling 371
Proportionate and Disproportionate Stratified Samples 372
Multistage Cluster Sampling 373
Multistage Designs and Sampling Error 373
Stratification in Multistage Cluster Sampling 374
Probability Proportionate to Size (PPS) Sampling 375
Illustration: Sampling Social Work Students 376
Selecting the Programs 376
Selecting the Students 377
Probability Sampling in Review 377
Avoiding Gender Bias in Sampling 377
Main Points 378
Review Questions and Exercises 379
Internet Exercises 379
Additional Readings 380

Chapter 15 SURVEY RESEARCH 381
Introduction 382
Topics Appropriate to Survey Research 383
Self-Administered Questionnaires 384
Mail Distribution and Return 384
Cover Letter 385
Monitoring Returns 385
Follow-up Mailings 387
Acceptable Response Rates 388
A Case Study 388
Interview Surveys 389
The Role of the Survey Interviewer 390
General Guidelines for Survey Interviewing 391
Coordination and Control 392
Telephone Surveys 394
Computer-Assisted Telephone Interviewing 395
Response Rates in Interview Surveys 396
Online Surveys 397
Advantages and Disadvantages of Online Surveys 397
Tips for Conducting Online Surveys 398
Survey Monkey 399
Comparison of Different Survey Methods 399
Strengths and Weaknesses of Survey Research 402
Main Points 405
Review Questions and Exercises 405
Internet Exercises 406
Additional Readings 406

Chapter 16 ANALYZING EXISTING DATA: QUANTITATIVE AND QUALITATIVE METHODS 407
Introduction 408
A Comment on Unobtrusive Measures 408
Secondary Analysis 408
The Growth of Secondary Analysis 409
Types and Sources of Data Archives 410
Sources of Existing Statistics 410
Advantages of Secondary Analysis 412
Limitations of Secondary Analysis 413
Illustrations of the Secondary Analysis of Existing Statistics in Research on Social Welfare Policy 416
Distinguishing Secondary Analysis from Other Forms of Analyzing Available Records 417
Content Analysis 419
Sampling in Content Analysis 420
Sampling Techniques 420
Coding in Content Analysis 421
Manifest and Latent Content 421
Conceptualization and the Creation of Code Categories 422
Counting and Record Keeping 423
Qualitative Data Analysis 424
Quantitative and Qualitative Examples of Content Analysis 424
Strengths and Weaknesses of Content Analysis 426
Historical and Comparative Analysis 427
Sources of Historical and Comparative Data 428
Analytic Techniques 429
Main Points 431
Review Questions and Exercises 432
Internet Exercises 432
Additional Readings 433

PART 6 Qualitative Research Methods 435

Chapter 17 QUALITATIVE RESEARCH: GENERAL PRINCIPLES 436
Introduction 437
Topics Appropriate for Qualitative Research 437
Prominent Qualitative Research Paradigms 438
Naturalism 438
Grounded Theory 438
Participatory Action Research 442
Case Studies 443
Qualitative Sampling Methods 445
Strengths and Weaknesses of Qualitative Research 448
Depth of Understanding 448
Flexibility 449
Cost 449
Subjectivity and Generalizability 449
Standards for Evaluating Qualitative Studies 451
Contemporary Positivist Standards 451
Social Constructivist Standards 452
Empowerment Standards 453
Research Ethics in Qualitative Research 453
Main Points 453
Review Questions and Exercises 454
Internet Exercises 455
Additional Readings 455

Chapter 18 QUALITATIVE RESEARCH: SPECIFIC METHODS 456
Introduction 457
Preparing for the Field 457
The Various Roles of the Observer 458
Relations to Participants: Emic and Etic Perspectives 461
Qualitative Interviewing 463
Informal Conversational Interviews 464
Interview Guide Approach 465
Standardized Open-Ended Interviews 467
Life History 468
Feminist Methods 468
Focus Groups 468
Recording Observations 470
Main Points 474
Review Questions and Exercises 475
Internet Exercises 475
Additional Readings 476

Chapter 19 QUALITATIVE DATA ANALYSIS 477
Introduction 478
Linking Theory and Analysis 478
Discovering Patterns 478
Grounded Theory Method 479
Semiotics 480
Conversation Analysis 482
Qualitative Data Processing 482
Coding 482
Memoing 485
Concept Mapping 486
Computer Programs for Qualitative Data 487
Leviticus as Seen through NUD*IST 488
Sandrine Zerbib: Understanding Women Film Directors 490
Main Points 497
Review Questions and Exercises 497
Internet Exercises 497
Additional Readings 497

PART 7 Analysis of Quantitative Data 499

Chapter 20 QUANTITATIVE DATA ANALYSIS 500
Introduction 501
Levels of Measurement 501
Nominal Measures 501
Ordinal Measures 502
Interval Measures 503
Ratio Measures 503
Implications of Levels of Measurement 503
Coding 504
Developing Code Categories 505
Codebook Construction 507
Data Entry 508
Data Cleaning 508
Univariate Analysis 509
Distributions 509
Central Tendency 509
Dispersion 512
Continuous and Discrete Variables 513
Detail versus Manageability 514
Collapsing Response Categories 514
Handling "Don't Know"s 515
Bivariate Analysis 516
Percentaging a Table 516
Constructing and Reading Bivariate Tables 517
Bivariate Table Formats 517
Multivariate Tables 519
Descriptive Statistics and Qualitative Research 520
Main Points 523
Review Questions and Exercises 524
Internet Exercises 526
Additional Readings 526

Chapter 21 INFERENTIAL DATA ANALYSIS: PART 1 527
Introduction 528
Chance as a Rival Hypothesis 528
Refuting Chance 529
Statistical Significance 529
Theoretical Sampling Distributions 530
Significance Levels 532
One-Tailed and Two-Tailed Tests 533
The Null Hypothesis 536
Type I and Type II Errors 536
The Influence of Sample Size 538
Measures of Association 538
Effect Size 540
Strong, Medium, and Weak Effect Sizes 544
Substantive Significance 545
Main Points 546
Review Questions and Exercises 547
Internet Exercises 548
Additional Readings 548

Chapter 22 INFERENTIAL DATA ANALYSIS: PART 2 549
Introduction 550
Meta-Analysis 550
Biased Meta-Analyses 551
Critically Appraising Meta-Analyses 552
Statistical Power Analysis 553
Selecting a Test of Statistical Significance 556
A Common Nonparametric Test: Chi-Square 557
Additional Nonparametric Tests 557
Common Bivariate Parametric Tests 558
Multivariate Analyses 559
How the Results of Significance Tests Are Presented in Reports and Journal Articles 562
Common Misuses and Misinterpretations of Inferential Statistics 562
Controversies in the Use of Inferential Statistics 566
Main Points 569
Review Questions and Exercises 569
Internet Exercises 570
Additional Readings 570

PART 8 Writing Research Proposals and Reports 573

Chapter 23 WRITING RESEARCH PROPOSALS AND REPORTS 574
Introduction 575
Writing Research Proposals 575
Finding a Funding Source 575
Grants and Contracts 576
Before You Start Writing the Proposal 577
Research Proposal Components 578
Cover Materials 578
Problem and Objectives 578
Literature Review 579
Conceptual Framework 580
Measurement 580
Study Participants (Sampling) 584
Design and Data-Collection Methods 584
Data Analysis 585
Schedule 585
Budget 585
Additional Components 585
Writing Social Work Research Reports 587
Some Basic Considerations 588
Audience 588
Form and Length of the Report 589
Aim of the Report 589
Avoiding Plagiarism 590
Organization of the Report 591
Title 591
Abstract 591
Introduction and Literature Review 591
Methods 592
Results 592
Discussion and Conclusions 593
References and Appendices 594
Additional Considerations When Writing Qualitative Reports 594
Main Points 595
Review Questions and Exercises 596
Internet Exercises 596
Additional Readings 597

Appendix A USING THE LIBRARY 599
Introduction 599
Getting Help 599
Reference Sources 599
Using the Stacks 599
Abstracts 600
The Card Catalog 600
Library of Congress Classification 600
Electronically Accessing Library Materials 602
Professional Journals 602

Appendix B STATISTICS FOR ESTIMATING SAMPLING ERROR 607
The Sampling Distribution of 10 Cases 607
Sampling Distribution and Estimates of Sampling Error 608
Confidence Levels and Confidence Intervals 610
Using a Table of Random Numbers 611
Table B-1: Random Numbers 614

Glossary 617
Bibliography 631
Index 643
Preface

After six successful editions of this text, we were surprised at how many excellent suggestions for improving it were made by colleagues who use this text or reviewed prior editions. Some of their suggestions pertained to improving the current content. Others indicated ways to expand certain areas, while trimming other areas to prevent the book from becoming too lengthy and expensive. We have implemented most of their suggestions, while also making some other changes to keep up with advances in the field. In our most noteworthy changes we did the following:

• Added quite a few graphics, photos, figures, and tables to many chapters for visual learners.
• In many chapters, to make lengthy parts of the narrative more readable, we added more transitional headings.
• To address concerns about the book's length and cost, we moved the appendix "A Learner's Guide to SPSS" to a separate booklet that instructors can choose whether or not to have bundled with the text for student purchase. That SPSS guide has been updated to SPSS 17.0.
• Expanded coverage of IRBs.
• Expanded coverage of the literature review, particularly regarding how to do it.
• Reorganized coverage of the two chapters on causal inference and experimental and quasi-experimental designs, and deleted coverage of the elaboration model. (Adding content on spurious relationships in Chapter 7 reduced the need for covering the elaboration model in Chapter 10.)
• Added content clarifying the value of pilot studies using pre-experimental designs.
• Added a section on B designs in Chapter 12 in light of the potential utility of these designs for practitioners engaged in the EBP process, whose aim is not to make causal inferences but instead to monitor client progress in achieving treatment goals to see if the chosen intervention—which has already had its effectiveness empirically supported in prior research—may or may not be the best fit for a particular client.
• Clarified differences in sampling between the level of confidence and the margin of error and between quota sampling and stratified sampling.
• Clarified how a scale can be incorporated as part of a survey questionnaire.
• Elaborated upon the use of random digit dialing and the problem of cell phones in telephone surveys.
• Increased our coverage of online surveys.
• Moved the material on the proportion under the normal curve exceeded by effect-size values from an appendix to the section in Chapter 21 on effect size.
• Expanded our coverage of meta-analysis.
• Discussed the disparate ways in which significance test results are presented in reports and journal articles.

The most significant new graphics we added are as follows:

• A figure showing the connections between paradigms, research questions, and research designs.
• A figure contrasting the emphases in quantitative and qualitative methods of inquiry.
• A figure depicting quantitative and qualitative examples for different research purposes.
• A figure showing how different research questions and designs would fit different research purposes.
• A box to illustrate the end product of conceptualization, showing the various indicators of the construct of PTSD and how clusters of indicators form dimensions.
• A figure illustrating a spurious relationship.
• Boxes summarizing actual published social work studies that illustrate the various experimental and quasi-experimental designs.
• Two new figures to help students comprehend the logic of quasi-experimental designs using multiple pretests or switching replications to better control for selection biases.

Although the above changes are the most noteworthy ones, most chapters were revised in additional ways (many of which reflect reviewer suggestions) that we hope instructors and students will find helpful. We believe and have been told by instructors that among this text's most important features have always been its comprehensive and deep coverage, and with each new edition we have sought to strengthen both. Research content can be difficult for students to grasp. We think student comprehension is not aided by a simplistic approach, so we explain things in depth and use multiple examples to illustrate the complex material and its relevance for practice. Moreover, taking this approach enhances the book's value to students in the long run. They seem to agree, and many students keep the book for their professional libraries rather than resell it at the end of the semester.

This text's comprehensive coverage of the range of research methodologies and all phases in the research process—particularly its extensive coverage of qualitative methods, culturally competent research, evidence-based practice, program and practice evaluation, and illustrations of practice applications—represents our effort to help courses reflect current curriculum policy statements guiding the accreditation standards of the Council on Social Work Education.

We are excited about this new edition of Research Methods for Social Work and think the new material we've added, along with the other modifications, will meet the needs of instructors and students who seek to keep up with advances in the field. We hope you'll find this new edition useful. We would like to receive any suggestions you might have for improving this book even more. Please write to us in care of academic.cengage.com, or e-mail us at arubin@mail.utexas.edu.
ANCILLARY PACKAGE
Practice-Oriented Study Guide

Instructors have the option of bundling this edition with the 7th edition of a Practice-Oriented Study Guide that parallels the organization of the main text but emphasizes its application to practice. The guide is designed to enhance student comprehension of the text material and its application to the problems that students are likely to encounter in social work practice. Each chapter of the Practice-Oriented Study Guide lists behavioral objectives for applying the chapter content to practice, a summary that focuses on the chapter’s practice applications, multiple-choice review questions that are generally asked in the context of practice applications (answers appear in an appendix along with cross-references to the relevant text material), exercises that involve practice applications that can be done in class (usually in small groups) or as homework, and practice-relevant discussion questions. A crossword puzzle appears at the end of each chapter of the Study Guide to provide students with an enjoyable way to test and strengthen their mastery of the important terminology in each chapter. Solutions to each puzzle appear in an appendix. In addition to enhancing student learning of research content, we hope that this Study Guide will significantly enhance the efforts we have made in the main text to foster student understanding of the relevance of research to practice and their consequent enthusiasm for research. We also expect that this Study Guide will be helpful to instructors by providing practice-relevant exercises that can be done in class or as homework.
SPSS 17.0 Booklet

Instructors also can opt to bundle our Learner’s Guide to SPSS with the text. That SPSS guide has been updated to SPSS 17.0.
Instructor’s Manual

As with previous editions, an Instructor’s Manual mirrors the organization of the main text, offering our suggestions of teaching methods. Each chapter of the manual lists an outline of relevant discussion, behavioral objectives, teaching suggestions and resources, and test items. This Instructor’s Manual is set up to allow instructors the freedom and flexibility needed to teach research methods courses. The test questions for each chapter include approximately 15 to 20 multiple-choice items, 10 to 12 true/false items, and several essay questions that may be used for exams or to stimulate class discussion. Page references to the text are given for the multiple-choice and true/false questions. Test items are also available on disk in DOS, Macintosh, and Windows formats.
GSS Data

We have sought to provide up-to-date computer—and particularly microcomputer—support for students and instructors. Because many excellent programs are now available for analyzing data, we have provided data to be used with those programs. Specifically, we are providing data from the National Opinion Research Center’s General Social Survey, thus offering students a variety of data gathered from respondents around the country in 1975, 1980, 1985, 1990, 1994 (no survey was done in 1995), and 2000. The data are accessible through our Book Companion website, described below.

Book Companion Website

Accessible through http://www.cengage.com/socialwork/rubin, the text-specific Companion Site offers chapter-by-chapter online quizzes, chapter outlines, crossword puzzles, flashcards (from the text’s glossary), web links, and review questions and exercises (from the ends of chapters in the text) that provide students with an opportunity to apply concepts presented in the text. Students can go to the Companion Site to access a primer for SPSS 17.0, as well as data from the GSS. The Instructor Companion Site features downloadable Microsoft® PowerPoint® slides.

ACKNOWLEDGMENTS

We owe special thanks to the following colleagues who reviewed this edition and made valuable suggestions for improving it: Kimberly Kotrla, Assistant Professor, Baylor University; Humberto Fabelo, Director of BSW Program and Associate Professor, Virginia Commonwealth University; Yoshie Sano, Assistant Professor, Washington State University, Vancouver; Robert J. Wolf, Associate Professor, Eastern Connecticut State University; Eileen M. Abel, Associate Professor, University of Central Florida; Amanda C. Healey, Old Dominion University; Needha M. Boutte-Queen, Chair, Texas Southern University. Edward Mullen, Professor, Columbia University, also made a helpful suggestion. Thanks also go to the following staff members at Cengage who helped with this edition: Rachel McDonald, Editorial Assistant; Arwen Petty, Assistant Editor; Trent Whatcott, Senior Marketing Manager; Seth Dobrin, Acquisitions Editor; Tami Strang, Marketing Communications Manager; and Michelle Cole, Content Project Manager.

Allen Rubin
Earl Babbie
APPENDIX A

Using the Library
INTRODUCTION

We live in a world filled with social science research reports. Our daily newspapers, magazines, professional journals, alumni bulletins, and club newsletters—virtually everything you pick up to read—can carry reports that deal with a particular topic. For formal explorations of a topic, of course, the best place to start is still a good college or university library. Today, there are two major approaches to finding library materials: the traditional paper system and the electronic route. Let’s begin with the traditional method and then examine the electronic option.

GETTING HELP

When you want to find something in the library, your best friends are the reference librarians, who are specially trained to find things in the library. Some libraries have specialized reference librarians—for the social sciences, humanities, government documents, and so forth. Find the librarian who specializes in your field. Make an appointment. Tell the librarian what you’re interested in. He or she will probably put you in touch with some of the many available reference sources.

REFERENCE SOURCES

You’ve probably heard the expression “information explosion.” Your library is one of the main battlefields. Fortunately, a large number of reference volumes offer a guide to the information that’s available.

• Books in Print: This volume lists all of the books currently in print in the United States, listed separately by author and by title. Out-of-print books often can be found in older editions of Books in Print.

• Readers’ Guide to Periodical Literature: This annual volume with monthly updates lists articles published in many journals and magazines. Because the entries are organized by subject matter, this is an excellent source for organizing your reading on a particular topic.

In addition to these general reference volumes, you’ll find a great variety of specialized references. Here are a few examples:

• Social Work Abstracts
• Sociological Abstracts
• Psychological Abstracts
• Social Science Index
• Social Science Citation Index
• Popular Guide to Government Publications
• New York Times Index
• Facts on File
• Editorial Research Reports
• Monthly Catalog of Government Publications
• Public Affairs Information Service Bulletin
• Biography Index
• Congressional Quarterly Weekly Report
• Library Literature
• Bibliographic Index

USING THE STACKS

Serious research usually involves using the stacks, where most of the library’s books are stored. This section provides information about finding books there.
The Card Catalog

In the traditional paper system, the card catalog is the main reference system for finding out where books are stored. Each book is described on three separate 3 × 5 cards. The cards are then filed in three alphabetic sets: one by author, another by title, and the third by subject matter. If you want to find a particular book, you can look it up in either the author file or the title file. If you only have a general subject area of interest, thumb through the subject catalog. Subject catalog cards typically have the following elements:

1. Subject heading (always in capital letters)
2. Author’s name (last name, first name)
3. Title of the book
4. Publisher
5. Date of publication
6. Number of pages in the book plus other information
7. Call number (This is needed to find a nonfiction book on the library shelves. A book of fiction generally carries no number and is found in alphabetical order by the author’s name.)

Library of Congress Classification

Here’s a useful strategy to use when you’re researching a topic. Once you’ve identified the call number for a particular book in your subject area, go to the stacks, find that book, and look over the other books on the shelves near it. Because the books are arranged by subject matter, this method will help you locate relevant books you didn’t know about. Alternatively, you may want to go directly to the stacks and look at books in your subject area. In most libraries, books are arranged and numbered according to a subject matter classification system developed by the Library of Congress. (Some follow the Dewey decimal system.) The following is a shortened list of some Library of Congress categories.

Library of Congress Classifications (partial)

A    GENERAL WORKS
B    PHILOSOPHY, PSYCHOLOGY, RELIGION
     B–BD   Philosophy
     BF     Psychology
     BL–BX  Religion
C    HISTORY–AUXILIARY SCIENCES
D    HISTORY (EXCEPT AMERICA)
E–F  HISTORY (AMERICA)
     E      United States
     E51–99 Indians of North America
G    GEOGRAPHY–ANTHROPOLOGY
     GN     Anthropology
H    SOCIAL SCIENCES
     HB–HJ  Economics and Business
     HM–HX  Sociology
J    POLITICAL SCIENCE
K    LAW
L    EDUCATION
M    MUSIC
N    FINE ARTS
P    LANGUAGE AND LITERATURE
Q    SCIENCE
R    MEDICINE
     RT     Nursing
S    AGRICULTURE—PLANT AND ANIMAL INDUSTRY
T    TECHNOLOGY
U    MILITARY SCIENCE
V    NAVAL SCIENCE
Z    BIBLIOGRAPHY AND LIBRARY SCIENCE
ABSTRACTS

Some publications present summaries of books and articles that help you locate a great many references easily and effectively. These summaries, called abstracts, are often prepared by the original authors. As you find relevant references, you can track down the original works and see the full details. In social work, the most relevant publication of these abstracts is Social Work Abstracts (formerly Social Work Research & Abstracts). The first step in using Social Work Abstracts is to look at the subject index to find general subject
headings related to your specific topic of interest. Examine the subtopics listed under the relevant general headings and look for topics that appear to be most directly related to your specific topic of interest. Beside each will be one or more numbers. Because the abstracts are presented in numerical order, you can use the listed numbers to locate the abstracts of potential interest to you. When you read the abstract, you will learn whether the study it summarizes is of sufficient likely relevance to warrant finding and reading the report in its entirety. If it is worth reading, then the abstract will provide the reference information you’ll need to find the full report, as well as where you can contact its author.

Let’s walk through this process. Suppose you are searching the literature for a valid scale to assess the degree of acculturation of foreign-born Chinese Americans. In using Social Work Abstracts, your first step would be to find a subject heading in the Subject Index that fits the focus of your search. If you looked for the heading “acculturation of foreign-born Chinese Americans,” you wouldn’t find it. It’s too specific. But if you looked for the broader heading “Acculturation,” you would find it in the alphabetized Subject Index between the two headings “Accountability” and “Activism,” as follows:

Accountability
   and Joint Reviews in England, 1083
   and school choice, 1243
Acculturation
   of Chinese Americans, 1081
   of Hispanic middle school students, 1231
   of Russian immigrants, 1430
   of West Indians, 1387
Activism
   judicial, 1366

Under the heading “Acculturation,” you would find four subheadings. The first, “of Chinese Americans,” is the one you’d want. The number beside it refers to the number of the abstract you’d want to examine. Because each issue of Social Work Abstracts lists the abstracts it contains in numerical order, you could just flip pages until you found the page that contains abstract number 1081.
Many of the abstracts in Social Work Abstracts are referenced under multiple subject headings. Suppose instead of the heading “Acculturation” you looked for the heading “Chinese Americans.” You would find it in the Subject Index between the headings:
“Children’s services” and “Citizen participation,” as follows:

Children’s services
   vouchers for, 1003
Chinese Americans
   and acculturation, 1081
   psychosocial issues in working with, 1426
Citizen participation
   in advocacy for persons with disabilities, 1219

Under the heading “Chinese Americans,” you would find two subheadings. The first subheading, “and acculturation,” is the one you’d want, and again you would be referred to abstract number 1081. You can see the names of the article’s coauthors, the title of the article, the journal in which it appeared, the volume and issue numbers of that journal, what pages the article appeared on, the date the article was published, a publication code number for that journal, an address for contacting the article’s lead author, and a summary of the article.

Social Work Abstracts also provides an Author Index. Suppose you learn the name of an author who had studied the assessment of acculturation of foreign-born Chinese Americans. You could look up her name in the alphabetized Author Index and find the numbers of the abstracts of works written by that author appearing in the volume of Social Work Abstracts you happen to be examining. For example, if the author’s name were R. Gupta, you would find abstract 1081 by examining the following section of the Author Index of the September 2002 issue of Social Work Abstracts:

Gumport, P. J., 1231
Gunther-Kellar, Y., 1003
Gupta, R., 1081
Gupta, R., 1398
Gurnack, A. M., 1122
Guzley, R. M., 1080
H
Hackworth, J., 1359

Gupta’s name is listed twice. That’s because Gupta authored two of the works abstracted in that issue of Social Work Abstracts. You’d want to look at all
of the abstracts listed for the person you look up in the Author Index; perhaps all of them would be of interest to you.
ELECTRONICALLY ACCESSING LIBRARY MATERIALS

In Chapters 2 and 6 we discussed how to use your computer to search online for literature. Instead of repeating that material here, we’ll just briefly remind you that library materials often can be accessed electronically. Although there are different types of computerized library systems, here’s a typical example of how they work. As you sit at a computer terminal in the library, at a computer lab, or at home, you can type the title of a book and in seconds see a video display of a catalog card. If you want to explore the book further, you can type an instruction at the terminal and see an abstract of the book. Alternatively, you might type a subject name and see a listing of all the books and articles written on that topic. You could skim through the list and indicate which ones you want to see. Most college libraries today provide online access to periodicals, books, and other library materials. Your library’s computerized system should allow you to see which materials are available online and whether paper copies of the materials you seek are available in your library. If your library holds those materials, the system may indicate their call numbers, whether the books you seek have been checked out and, if so, the due date for their return. As discussed in Chapters 2 and 6, your library may also provide a variety of Internet professional literature database services to help you search for literature online.
PROFESSIONAL JOURNALS

Despite the exciting advances occurring in computer-based systems and the great practical value of online database services and publications containing abstracts, you should not rely exclusively on them to locate journal articles that are pertinent to your interests. There is no guarantee that every reference of value to you will be identified in a computer search or a publication of abstracts. You should therefore augment your search by examining the tables of contents in recent issues of professional journals that are the most relevant to your particular interest. For example, if you are searching for studies on interventions for abused children, two of the various journals you may want to examine are Child Welfare and Children and Youth Services Review.

Examining recent issues of journals is less time-consuming than you might imagine. These issues ought to be available in the section of your library that contains unbound current periodicals. Once you locate the recent issues of the relevant journals (the last two years or so ought to suffice), it should take only a few minutes to thumb through the tables of contents looking for titles that have some potential bearing on your topic. Once you spot a relevant title, turn to the page on which the article begins. There you will find an abstract of the article; just like the abstracts that appear in publications of abstracts, this one should take only seconds to read and will help you determine if the article is pertinent enough to warrant reading in greater detail. Your examination of relevant journals can be expedited if your library’s computerized system offers an online service listing the tables of contents of thousands of journals. It might also provide a list of online journals—journals whose entire contents can be downloaded and read online. If you are uncertain about the professional journals that are pertinent to your topic, you might want to examine the list of journals reviewed in several issues of Social Work Abstracts. Each issue contains a list of the journals that have been reviewed for that issue. You might also want to get help with this from your reference librarian.
Just to start you thinking about some of the journals you might review, here’s a beginning list of some of the major journals related to social work, by subject area:

Aging and the Aged
Abstracts in Social Gerontology
Canadian Journal of Aging
Clinical Gerontologist
International Journal of Aging and Human Development
Journal of Aging & Physical Activity
Journal of Aging & Social Policy
Journal of Aging Studies
Journal of Applied Gerontology
Journal of Elder Abuse & Neglect
Journal of Gerontological Social Work
Journal of Gerontology
Journal of Housing for the Elderly
Journal of Nutrition for the Elderly
Journal of Nutrition, Health and Aging
Journal of Social Work in Long-Term Care
Journal of Women and Aging
Psychology & Aging
Quality in Aging: Policy, Practice, & Research in Social Work
The Gerontologist

Children and Adolescents
Adolescence
Child & Adolescent Social Work Journal
Children & Society
Child & Youth Services
Children and Youth Services Review
Children Today
International Journal of Adolescence & Youth
Journal of Adolescence
Journal of Adolescent & Interpersonal Violence & Trauma
Journal of Child & Adolescent Trauma
Journal of Children & Poverty
Journal of Youth & Adolescence
Residential Treatment for Children & Youth

Child Welfare
Adoption and Fostering
Adoption Quarterly
Child Abuse & Neglect
Child Care Quarterly
Child Maltreatment
Child Survivor of Traumatic Stress
Child Welfare
Family Preservation Journal
Journal of Child Abuse & the Law
Journal of Child Custody
Journal of Child Sexual Abuse

Cognitive or Behavioral Interventions
Behavior Modification
Behavior Research & Therapy
Behavior Therapy
Behavioural & Cognitive Psychotherapy
Child & Family Behavior Therapy
Cognitive & Behavioral Practice
Cognitive Therapy and Research
Journal of Applied Behavior Analysis

Communities
Community Development Journal
Journal of Community and Applied Social Psychology
Journal of Community Practice
Journal of Jewish Communal Service
Journal of Prevention & Intervention in the Community
Journal of Social Development in Africa

Crime and Delinquency
Canadian Journal of Criminology
Crime and Delinquency
Journal of Research in Crime and Delinquency
Journal of Offender Rehabilitation
Youth and Society
Youth Violence & Juvenile Justice

Cultural Diversity
Cultural Diversity & Ethnic Minority Psychology
Hispanic Journal of the Behavioral Sciences
Journal of Black Studies
Journal of Ethnic & Cultural Diversity in Social Work
Journal of Ethnicity in Substance Abuse
Journal of Immigrant & Refugee Studies

Domestic Violence or Trauma
Family Violence & Sexual Assault Bulletin
Journal of Aggression, Maltreatment & Trauma
Journal of Emotional Abuse
Journal of Family Violence
Journal of Interpersonal Violence
Journal of Threat Assessment
Journal of Trauma & Dissociation
Journal of Traumatic Stress
Sexual Abuse: A Journal of Research & Treatment
Stress, Trauma & Crisis
Trauma, Violence & Abuse
Traumatology
Violence Against Women
Violence & Victims

Families
American Journal of Family Therapy
Child & Family Social Work
Conflict Resolution Quarterly
Contemporary Family Therapy
Families in Society
Family Process
Family Relations
Family Therapy
Family Therapy Networker
Journal of Child & Family Studies
Journal of Divorce & Remarriage
Journal of Family Issues
Journal of Family Psychotherapy
Journal of Family Psychology
Journal of Family Therapy
Journal of Family Social Work
Journal of Marital and Family Therapy
Journal of Marriage & the Family
Journal of Sex & Marital Therapy
Marriage & Family Review

Gay, Lesbian, and Transgender Issues and Sexuality
Journal of Bisexuality
Journal of Gay & Lesbian Social Services
Journal of Gay & Lesbian Psychotherapy
Journal of Homosexuality
Journal of Lesbian Studies
Journal of Psychology & Human Sexuality
Sexuality Research & Social Policy

Group Work
Group Dynamics—Theory, Research, and Practice
Journal for Specialists in Group Work
Social Work with Groups

Health
AIDS & Public Policy Journal
Health and Social Work
Home Health Care Services Quarterly
Hospice Journal
Journal of Behavioral Health Services & Research
Journal of Health and Social Behavior
Journal of Health & Social Policy
Journal of HIV/AIDS Prevention & Education
Journal of HIV/AIDS & Social Services
Journal of Home Health Care Practice
Journal of Nephrology Social Work
Journal of Occupational Health Psychology
Journal of Psychosocial Oncology
Journal of Social Work in Disability & Rehabilitation
Journal of Social Work in Hospice & Palliative Care
Journal of Workplace & Behavioral Health
Social Work in Health Care
Social Work in Public Health

Mental Health
American Journal of Orthopsychiatry
American Journal of Psychotherapy
Archives of General Psychiatry
Clinical Social Work Journal
Community Mental Health Journal
Evidence-Based Mental Health
Journal of Psychotherapy Practice and Research
Mental Health Services Research
NAMI Advocate
Psychiatric Rehabilitation Journal
Psychoanalytic Social Work
Psychotherapy Networker
Psychotherapy Research
Schizophrenia Bulletin
Social Work in Mental Health

Mental Retardation
American Journal of Mental Deficiency
Journal of Mental Deficiency Research
Mental Retardation & Developmental Disabilities Research Reviews
Mental Retardation

Program Evaluation
Canadian Journal of Program Evaluation
Evaluation
Evaluation Review
New Directions for Evaluation
The American Journal of Evaluation

Qualitative Research
Grounded Theory Review
Qualitative Health Research
Qualitative Inquiry
Qualitative Research
Qualitative Social Work: Research and Practice
Qualitative Sociology

School Social Work
Children & Schools
Journal of School Violence
School Social Work Journal
Social Work in Education

Social Policy
Analyses of Social Issues & Public Policy
Australian Social Policy
Critical Social Policy
Global Social Policy
International Journal of Social Welfare
Journal of Aging & Social Policy
Journal of Children and Poverty
Journal of European Social Policy
Journal of Health & Social Policy
Journal of Mental Health Policy and Economics
Journal of Policy Analysis & Management
Journal of Policy Practice
Journal of Poverty
Journal of Social Distress and the Homeless
Journal of Sociology and Social Welfare
Policy & Practice of Public Human Service
Public Welfare
Social Policy
Social Policy & Society
Social Policy and Social Work
Social Policy Review
Social Work & Society
Urban Policy and Research

Social Work Research
Journal of Social Service Research
Journal of Social Work Research and Evaluation
Journal of the Society for Social Work & Research
Research on Social Work Practice
Social Work Research

Social Work (General)
Advances in Social Work
Australian Social Work
British Journal of Social Work
Canadian Social Work Review
Electronic Journal of Social Work
International Social Work
Irish Social Work
Journal of Baccalaureate Social Work
Journal of Evidence-Based Social Work
Journal of Social Work Practice
Smith College Studies in Social Work
Social Service Review
Social Work
Social Work Abstracts
The European Journal of Social Work
The Hong Kong Journal of Social Work

Spirituality & Religion
Journal of Religion & Abuse
Journal of Religion & Spirituality in Social Work
Social Work & Christianity

Substance Abuse
Advances in Alcohol and Substance Abuse
Alcoholism Treatment Quarterly
American Journal of Drug and Alcohol Abuse
International Journal of the Addictions
Journal of Addictions & Offender Counseling
Journal of Addictive Diseases
Journal of Chemical Dependency Treatment
Journal of Child & Adolescent Substance Abuse
Journal of Drug Education
Journal of Drug Issues
Journal of Ethnicity in Substance Abuse
Journal of Psychoactive Drugs
Journal of Social Work Practice in the Addictions
Journal of Studies on Alcohol
Journal of Substance Abuse Treatment
Substance Abuse

Women’s Issues
Affilia
Archives of Women’s Mental Health
Australian Feminist Studies
European Journal of Women’s Studies
Feminism & Psychology
Feminist Theory
Gender & Society
Indian Journal of Gender Studies
Journal of Feminist Family Therapy
Violence Against Women
Women & Criminal Justice
Women & Trauma

Other
Administration in Social Work
Journal of Applied Behavioral Science
Journal of Forensic Social Work
Journal of Human Behavior in the Social Environment
Journal of Progressive Human Services
Journal of Technology in Human Services
Nonprofit and Voluntary Sector Quarterly
Rural Social Work
Social Work & Social Sciences Review

No matter what approach you take to finding library materials, chances are there will be some documents you miss or that are not available in your library or online. If a document is not available at your particular library or via the web, then you can request an interlibrary loan, which is often free. Many libraries have loan agreements, but it might take some time before the document you need arrives at your library. If the document is located at another library nearby, then you may want to go there yourself to get it directly. The key to a good library search is to become well informed; so remember what we said earlier: When you want to find something in the library, your best friends are the reference librarians. Don’t be shy about seeking their assistance at various points in your search.
APPENDIX B
Statistics for Estimating Sampling Error

In Chapter 14 we noted that probability theory provides a statistical basis for estimating sampling error and selecting a sample size with an acceptable amount of likely sampling error. We also referred you to this appendix if you wished to examine the more mathematical aspects of how probability theory works. Probability theory enables us to estimate sampling error by way of the concept of sampling distributions. A single sample selected from a population will give an estimate of the population parameter. Other samples would give the same or slightly different estimates. Probability theory tells us about the distribution of estimates that would be produced by a large number of such samples. To see how this works, we’ll look at two examples of sampling distributions, beginning with a simple example in which our population consists of just 10 cases.
THE SAMPLING DISTRIBUTION OF 10 CASES

Suppose 10 people are in a group, and each person has a certain amount of money in his or her pocket. To simplify, let’s assume that one person has no money, another has one dollar, another has two dollars, and so forth up to the person with nine dollars. Figure B-1 presents the population of 10 people. Our task is to determine the average amount of money one person has—specifically, the mean number of dollars. If you simply add the money in Figure B-1, you’ll find that the total is $45, so the mean is $4.50. Our purpose in the rest of this exercise is to estimate that mean without actually observing all 10 individuals. We’ll do that by selecting random samples from the population and using the means of those samples to estimate the mean of the whole population.

Figure B-1 A Population of 10 People with $0–$9

To start, suppose we were to select—at random—a sample of only one person from the 10. Depending on which person we selected, we’d estimate the group’s mean as anywhere from $0 to $9. Figure B-2 displays those 10 possible samples. The 10 dots shown on the graph represent the 10 “sample” means we would get as estimates of the population. The dots’ distribution on the graph is called the sampling distribution. Obviously, selecting a sample of only one would not be a good idea, because we stand a strong chance of missing the true mean of $4.50 by quite a bit.
Figure B-2 The Sampling Distribution of Samples of 1 (number of samples, total = 10, plotted against the estimate of the mean, $0–$9; true mean = $4.50)
Figure B-3 The Sampling Distribution of Samples of 2 (number of samples, total = 45, plotted against the estimate of the mean, $0–$9; true mean = $4.50)
But what if we take samples of two each? As you can see from Figure B-3, increasing the sample size improves our estimations. We now have 45 possible samples: [$0 $1], [$0 $2], . . . [$7 $8], [$8 $9]. Moreover, some of those samples produce the same means. For example, [$0 $6], [$1 $5], and [$2 $4] all produce means of $3. In Figure B-3, the three dots shown above the $3 mean represent those three samples. The 45 sample means are not evenly distributed, as you can see. Rather, they are somewhat clustered around the true value of $4.50. Only two samples deviate by as much as four dollars from the true value ([$0 $1] and [$8 $9]), whereas five of the samples would give the true estimate of $4.50; another eight samples miss the mark by only 50 cents (plus or minus). Now suppose we select even larger samples. What do you suppose that will do to our estimates of the
mean? Figure B-4 presents the sampling distributions of samples of 3, 4, 5, and 6. The progression of sampling distributions is clear. Every increase in sample size improves the distribution of estimates of the mean. The limiting case in this procedure, of course, is to select a sample of 10: Only one sample of that size is possible—everyone—and it would give us the true mean of $4.50. As we will see shortly, this principle applies to actual sampling of meaningful populations. The larger the sample selected, the more accurate it is as an estimation of the population from which it was drawn.
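The counts cited above, and the narrowing spread of the sampling distributions as sample size grows, can be checked by brute force. The short script below is our own illustration (not part of the original text): it enumerates every possible sample of each size from the $0–$9 population and summarizes the resulting sampling distribution.

```python
from itertools import combinations
from statistics import mean, pstdev

population = range(10)        # ten people holding $0 through $9
true_mean = mean(population)  # $4.50

for n in range(1, 7):
    samples = list(combinations(population, n))
    means = [mean(s) for s in samples]
    # How many samples estimate the true mean of $4.50 exactly?
    hits = sum(1 for m in means if m == true_mean)
    # Spread (standard deviation) of the sampling distribution:
    spread = pstdev(means)
    print(f"size {n}: {len(samples):3d} samples, {hits} exact hits, spread {spread:.2f}")
```

Running it confirms the text: 45 samples of size 2, of which 5 hit $4.50 exactly and 8 more miss by only 50 cents, and the spread of the sample means shrinks with every increase in sample size.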
SAMPLING DISTRIBUTION AND ESTIMATES OF SAMPLING ERROR

Let's turn now to a more realistic sampling situation and see how the notion of sampling distribution applies, using a simple example that involves a population much larger than 10. Let's assume for the moment that we wish to study the adult population of a small town in a rural region. We want to determine whether residents would approve or disapprove of the establishment there of a community-based residential facility for formerly institutionalized, chronically mentally disabled individuals.

The study population will be that aggregation of, say, 20,000 adults as identified in the city directory: the sampling frame. (As we discuss in Chapter 14, sampling frames are the lists of elements from which a sample is selected.) The elements will be the town's adult residents. The variable under consideration will be attitudes toward the facility; it is a binomial variable—approve and disapprove. (The logic of probability sampling applies to the examination of other types of variables, such as mean income, but the computations are somewhat more complicated. Consequently, this introduction focuses on binomials.)

We'll select a random sample of, say, 100 residents to estimate the entire population of the town. The horizontal axis of Figure B-5 presents all possible values of this parameter in the population—from zero percent approval to 100 percent approval. The midpoint of the axis—50 percent—represents one-half the residents approving the facility and the other half disapproving.

To choose our sample, we give each resident in the directory a number and select 100 random numbers from a table of random numbers. (How to use a table of random numbers, such as the one in Table B-1, is explained in the box "Using a Table of Random
[Figure B-4 The Sampling Distributions of Samples of 3, 4, 5, and 6: four histograms (A. samples of 3; B. samples of 4; C. samples of 5; D. samples of 6) showing all possible sample means tightening around the true mean of $4.50 as sample size grows]
[Figure B-5 Range of Possible Sample Study Results: an axis running from 0 to 100 percent of residents approving of the facility, with the 50 percent midpoint marked]
Numbers.” A speedier alternative is to use a computer software program that can select cases randomly.) Then we interview the 100 residents whose numbers have been selected and ask for their attitudes toward the facility: whether they approve or disapprove.
Suppose this operation gives us 48 residents who approve of the facility and 52 who disapprove. This summary description of a variable in a sample is called a statistic. We present this statistic by placing a dot on the x-axis at the point that represents 48 percent. Now let’s suppose we select another sample of 100 residents in exactly the same fashion and measure their approval or disapproval of the facility. Perhaps 51 residents in the second sample approve of the facility. We place another dot in the appropriate place on the x-axis. Repeating this process once more, we may
USING A TABLE OF RANDOM NUMBERS

Suppose you want to select a simple random sample of 100 people (or other units) out of a population totaling 980.

1. To begin, number the members of the population: in this case, from 1 to 980. Now the problem is to select 100 random numbers. Once you've done that, your sample will consist of the people having the numbers you've selected. (Note: It's not essential to actually number them, as long as you're sure of the total. If you have them in a list, for example, you can always count through the list after you've selected the numbers.)

2. The next step is to determine the number of digits you will need in the random numbers you select. In our example, there are 980 members of the population, so you will need three-digit numbers to give everyone a chance of selection. (If there were 11,825 members of the population, you'd need to select five-digit numbers.) Thus, we want to select 100 random numbers in the range from 001 to 980.

3. Now turn to the first page of Table B-1, the table of random numbers. Notice there are several rows and columns of five-digit numbers, and there are two pages. The table represents a series of random numbers in the range from 00001 to 99999. To use the table for your hypothetical sample, you have to answer these questions:
   a. How will you create three-digit numbers out of five-digit numbers?
   b. What pattern will you follow in moving through the table to select your numbers?
   c. Where will you start?
   Each of these questions has several satisfactory answers. The key is to create a plan and follow it. Here's an example.

4. To create three-digit numbers from five-digit numbers, let's agree to select five-digit numbers from the table but consider only the left-most three digits in each case. If we picked the first number on the first page—10480—we would only consider the 104. (We could agree to take the digits furthest to the right, 480, or the middle three digits, 048, and either of those plans would work.) The key is to make a plan and stick with it. For convenience, let's use the left-most three digits.

5. We can also choose to progress through the table any way we want: down the columns, up them, across to the right or to the left, or diagonally. Again, any of these plans will work just fine so long as we stick to it. For convenience, let's agree to move down the columns. When we get to the bottom of one column, we'll go to the top of the next; when we exhaust a given page, we'll start at the top of the first column of the next page.

6. Now, where do we start? You can close your eyes and stick a pencil into the table and start wherever the pencil point lands. (We know it doesn't sound scientific, but it works.) Or, if you're afraid you'll hurt the book or miss it altogether, close your eyes and make up a column number and a row number. ("I'll pick the number in the fifth row of column 2.") Start with that number. If you prefer more methodological purity, you might use the first two numbers on a dollar bill, which are randomly distributed, to determine the row and column on which to start.

7. Let's suppose we decide to start with the fifth number in column 2. If you look on the first page of the table, you'll see that the starting number is 39975. We have selected 399 as our first random number, and we have 99 more to go. Moving down the second column, we select 069, 729, 919, 143, 368, 695, 409, 939, and so forth. At the bottom of column 2, we select number 649 and continue to the top of column 3: 015, 255, and so on.

8. See how easy it is? But trouble lies ahead. When we reach column 5, we are speeding along, selecting 816, 309, 763, 078, 061, 277, 988. . . . Wait a minute! There are only 980 members of the population. How can we pick number 988? The solution is simple: Ignore it. Any time you come across a number that lies outside your range, skip it and continue on your way: 188, 174, and so forth. The same solution applies if the same number comes up more than once. If you select 399 again, for example, just ignore it the second time.

9. That's it. You keep up the procedure until you've selected 100 random numbers. Returning to your list, your sample consists of person number 399, person number 69, person number 729, and so forth.
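In practice, the skip-and-continue rules in steps 8 and 9 are exactly what a computer does when it samples for you. A minimal sketch (our own illustration in Python; the function name and seeded generator are not part of the text):

```python
import random

def draw_sample_ids(population_size, sample_size, rng):
    """Select sample_size distinct IDs in 1..population_size, mimicking a
    walk through a random-number table: candidates outside the range and
    repeats are simply skipped."""
    chosen, seen = [], set()
    while len(chosen) < sample_size:
        candidate = rng.randint(1, 999)          # a three-digit "table" number
        if candidate > population_size or candidate in seen:
            continue                             # out of range or a repeat: ignore it
        seen.add(candidate)
        chosen.append(candidate)
    return chosen

sample = draw_sample_ids(980, 100, random.Random(0))
```

The result is 100 distinct person numbers, all within the 1-to-980 range, just as the manual procedure guarantees.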
Table B-1 Random Numbers

[A two-page table of five-digit random numbers; the column layout does not survive reproduction here and the digits are omitted.]
Abridged from Handbook of Tables for Probability and Statistics, Second Edition, edited by William H. Beyer (Cleveland: The Chemical Rubber Company, 1968). Used by permission of The Chemical Rubber Company.
[Figure B-6 Results Produced by Three Hypothetical Studies: Sample 1 (48%), Sample 2 (51%), and Sample 3 (52%) plotted on the axis from 0 to 100 percent of residents approving of the facility]
discover that 52 residents in the third sample approve of the facility. Figure B-6 presents the three different sample statistics that represent the percentages of residents in each of the three random samples who approved of the facility.

The basic rule of random sampling is that such samples drawn from a population give estimates of the parameter that pertains in the total population. Each random sample, then, gives us an estimate of the percentage of residents in the town population who approve of the facility. Unhappily, however, we have selected three samples and now have three separate estimates.

To resolve this dilemma, let's draw more and more samples of 100 residents each, question each sample about its approval or disapproval of the facility, and plot the new sample statistics on our summary graph. In drawing many such samples, we discover that some of the new samples provide duplicate estimates, as in Figures B-3 and B-4 for the previous example with a population of 10 cases. Figure B-7 shows the sampling distribution of, say, hundreds of samples. This is often referred to as a normal curve.
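This thought experiment can be rehearsed by simulation. The sketch below (our illustration, assuming a true parameter of 50 percent approval) draws 1,000 samples of 100 residents each and tallies how the sample statistics cluster:

```python
import random

def sample_statistic(rng, parameter=0.5, n=100):
    """Percent approving in one random sample of n residents."""
    approvals = sum(1 for _ in range(n) if rng.random() < parameter)
    return 100 * approvals / n

rng = random.Random(42)
estimates = [sample_statistic(rng) for _ in range(1000)]

# Most estimates land near the true value of 50 percent:
near_center = sum(1 for e in estimates if 45 <= e <= 55) / len(estimates)
```

Plotting `estimates` as a histogram produces the bell shape of Figure B-7; most of the estimates fall between 45 and 55 percent, and their average sits very close to the true parameter.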
[Figure B-7 The Sampling Distribution: hundreds of sample estimates forming a curve centered near 50 percent of residents approving of the facility]
Note that by increasing the number of samples selected and interviewed, we have also increased the range of estimates that are provided by the sampling operation. In one sense, we have increased our dilemma in attempting to guess the parameter in the population. Probability theory, however, provides certain important rules about the sampling distribution in Figure B-7. First, if many independent random samples are selected from a population, then the sample statistics provided by those samples will be distributed around the population parameter in a known way. Thus, although Figure B-7 shows a wide range of estimates, more of them are in the vicinity of 50 percent than elsewhere in the graph. Probability theory tells us, then, that the true value is in the vicinity of 50 percent. Second, probability theory gives us a formula for estimating how closely the sample statistics are clustered around the true value. To put it another way,
probability theory enables us to estimate the sampling error—the degree of error to be expected for a given sample design. This formula contains three factors: the parameter, the sample size, and the standard error (a measure of sampling error):

s = √(P × Q / n)
The symbols P and Q in the formula equal the population parameters for the binomial: If 60 percent of the residents approve of the facility and 40 percent disapprove, then P and Q are 60 percent and 40 percent, respectively, or .6 and .4. Note that Q = 1 − P and P = 1 − Q. The symbol n equals the number of cases in each sample, and s is the standard error.

Let's assume that the population parameter in the hypothetical small town is 50 percent approving of the facility and 50 percent disapproving. Recall that we have been selecting samples of 100 cases each. When these numbers are put into the formula, we find that the standard error equals .05, or 5 percent.

In probability theory, the standard error is a valuable piece of information because it indicates the extent to which the sample estimates will be distributed around the population parameter. If you are familiar with the standard deviation in statistics, you may recognize that the standard error in this case is the standard deviation of the sampling distribution. (We discuss the meaning of the standard deviation in Chapter 20.)

Specifically, probability theory indicates that certain proportions of the sample estimates will fall within specified increments—each equal to one standard error—from the population parameter. Approximately 34 percent (.3413) of the sample estimates will fall within one standard error increment above the population parameter, and another 34 percent will fall within one standard error below the parameter. In our example, the standard error increment is 5 percent, so we know that 34 percent of our samples will give estimates of resident approval between 50 percent (the parameter) and 55 percent (one standard error above); another 34 percent of the samples will give estimates between 50 percent and 45 percent (one standard error below the parameter). Taken together, then, we know that roughly two-thirds (68 percent) of the samples will give estimates within ±5 percent of the parameter.
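That arithmetic is easy to check directly. A quick sketch of the standard-error formula (our own illustration):

```python
import math

def standard_error(p, n):
    """s = sqrt(P * Q / n), where Q = 1 - P."""
    return math.sqrt(p * (1 - p) / n)

s = standard_error(0.5, 100)        # .05, i.e., 5 percentage points
interval_68 = (0.5 - s, 0.5 + s)    # roughly two-thirds of samples land here
```

With P = .5 and n = 100, the function returns the .05 standard error used in the text, and the one-standard-error band runs from 45 to 55 percent.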
Moreover, probability theory dictates that roughly 95 percent of the samples will fall within plus or minus two standard errors of the true value, and
[Figure B-8 Standard Deviation Proportions of the Normal Curve: ±1 SD contains 68.26 percent of the samples, ±2 SD contains 95.44 percent, and ±3 SD contains 99.74 percent]
99.9 percent of the samples will fall within plus or minus three standard errors. In our current example, then, we know that only one sample out of a thousand would give an estimate lower than 35 percent approval or higher than 65 percent.

Figure B-8 graphically illustrates a normal (bell-shaped) curve with the standard deviation proportions that apply to any normal curve. The normal curve represents the sampling distribution—how an infinite number of randomly drawn samples would be distributed. The mean of the curve is the true population parameter. The proportion of samples that fall within one, two, or three standard errors of the population parameter is constant for any random sampling procedure such as the one just described—if a large number of samples are selected. The size of the standard error in any given case, however, is a function of the population parameter and the sample size.

If we return to the formula for a moment, we note that the standard error will increase as a function of an increase in the quantity P times Q. Note further that this quantity reaches its maximum in the situation of an even split in the population:

If P = .5, PQ = .25
If P = .6, PQ = .24
If P = .8, PQ = .16
If P = .99, PQ = .0099
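The four products in the list are easy to confirm with a one-line check (our own illustration):

```python
# P * Q peaks at an even split and shrinks toward the extremes:
pq = {p: round(p * (1 - p), 4) for p in (0.5, 0.6, 0.8, 0.99)}
```

The dictionary reproduces the values in the text, with the largest product at P = .5.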
By extension, if P is either 0.0 or 1.0 (either zero percent or 100 percent approve of the facility), then the standard error will be zero. If everyone in the population has the same attitude (no variation), then every sample will give exactly that estimate.

The standard error is also a function of the sample size—and an inverse function. As the sample size increases, the standard error decreases, and the several samples will cluster closer to the true value. Another rule of thumb is evident in the formula: Because of the square root, the standard error is cut in half when the sample size quadruples. In our current example, samples of 100 produce a standard error of 5 percent; to reduce the standard error to 2.5 percent, we must increase the sample size to 400.

All of this information is provided by established probability theory as it relates to the selection of large numbers of random samples. (If you've taken a statistics course, you may know this as the central limit theorem.) If the population parameter is known and a large number of random samples are selected, then we can predict how many of the sample estimates will fall within specified intervals from the parameter.

Be clear that this discussion only illustrates the logic of probability sampling and does not describe the way research is actually conducted. Usually, we do not know the parameter: We conduct a sample survey to estimate that value. Moreover, we don't actually select large numbers of samples: We select only one sample. Nevertheless, the preceding discussion of probability theory provides the basis for inferences about the typical social research situation. Knowing what it would be like to select thousands of samples allows us to make assumptions about the one sample we do select and study.
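The quadrupling rule and the no-variation case can both be demonstrated from the formula (our sketch, reusing the standard-error expression from earlier in this appendix):

```python
import math

def standard_error(p, n):
    """s = sqrt(P * Q / n); note the inverse square-root dependence on n."""
    return math.sqrt(p * (1 - p) / n)

se_100 = standard_error(0.5, 100)         # 5 percent with samples of 100
se_400 = standard_error(0.5, 400)         # quadrupling n cuts the error in half
no_variation = standard_error(1.0, 100)   # unanimous population: zero error
```

The ratio of the two standard errors is exactly 2, and a population with no variation yields a standard error of zero.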
CONFIDENCE LEVELS AND CONFIDENCE INTERVALS

Whereas probability theory specifies that 68 percent of that fictitious large number of samples would produce estimates that fall within one standard error of the parameter, we turn the logic around and infer that any single random sample estimate has a 68 percent chance of falling within that range. This observation leads us to the two key components of sampling error estimates: confidence level and confidence interval. We express the accuracy of our sample statistics in terms of a level of confidence that the statistics fall
within a specified interval from the parameter. For example, we are 68 percent confident that our sample estimate is within one standard error of the parameter. Or we may say that we are 95 percent confident that the sample statistic is within two standard errors of the parameter, and so forth. Quite reasonably, our confidence increases as the margin for error is extended. We are virtually positive (99.74 percent) that we are within three standard errors of the true value.

Although we may be confident (at some level) of being within a certain range of the parameter, we have already noted that we seldom know what the parameter is. To resolve this dilemma, we substitute our sample estimate for the parameter in the formula; lacking the true value, we substitute the best available guess.

The result of these inferences and estimations is that we are able to estimate a population parameter as well as the expected degree of error on the basis of one sample drawn from a population. Beginning with the question "What percentage of the town population approves of the facility?" we could select a random sample of 100 residents and interview them. We might then report that our best estimate is that 50 percent of the population approves of the facility and that we are 95 percent confident that between 40 and 60 percent (plus or minus two standard errors) approves. The range from 40 to 60 percent is called the confidence interval. (At the 68 percent confidence level, the confidence interval would be 45 percent to 55 percent.)

The logic of confidence levels and confidence intervals also provides the basis for determining the appropriate sample size for a study. Once you have decided on the degree of sampling error you can tolerate, you'll be able to calculate the number of cases needed in your sample.
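Under the approximation used in this appendix (about two standard errors for 95 percent confidence), both the interval and the required sample size can be sketched as follows; the function names are our own:

```python
import math

def confidence_interval(p_hat, n, z=2.0):
    """Approximate interval: substitute the sample estimate for the parameter."""
    s = math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - z * s, p_hat + z * s)

def required_sample_size(margin, p=0.5, z=2.0):
    """Smallest n such that z standard errors fit inside the desired margin."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

ci = confidence_interval(0.5, 100)       # roughly 40 to 60 percent
n_needed = required_sample_size(0.05)    # at least 400 cases for +/- 5 points
```

With a 50 percent estimate from a sample of 100, the interval runs from 40 to 60 percent, and holding the margin to 5 points at this confidence level requires a sample of 400.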
Thus, for example, if you want to be 95 percent confident that your study findings are accurate within plus or minus 5 percentage points of the population parameters, then you should select a sample of at least 400.

The foregoing discussion has considered only one type of statistic: the percentages produced by a binomial or dichotomous variable. The same logic, however, would apply to the examination of other statistics, such as mean income.

Two cautions are in order here. First, the survey uses of probability theory as discussed here are not wholly justified technically. The theory of sampling distribution makes assumptions that almost never apply in survey conditions. The exact proportion of samples contained within specified increments of standard errors, for example, mathematically assumes an
infinitely large population, an infinite number of samples, and sampling with replacement—that is, every sampling unit selected is "thrown back into the pot" and could be selected again. Second, our discussion has greatly oversimplified the inferential jump from the distribution of several samples to the probable characteristics of one sample.

These cautions are offered as perspective. Researchers often appear to overestimate the precision of estimates produced by using probability theory. As has been mentioned elsewhere in this appendix and throughout the book, variations in sampling techniques and nonsampling factors may further reduce the legitimacy of such estimates. For example, those selected in a sample who fail or refuse to participate
further detract from the representativeness of the sample. Nevertheless, the calculations discussed in this appendix can be extremely valuable to you in understanding and evaluating your data. Although the calculations do not provide estimates that are as precise as some researchers might assume, they can be quite valid for practical purposes. They are unquestionably more valid than less rigorously derived estimates based on less rigorous sampling methods. Most important, you should be familiar with the basic logic underlying the calculations. If you are so informed, then you will be able to react sensibly to your own data and those reported by others.
Glossary
AB design The simplest single-case evaluation design that includes one baseline phase (A) and one intervention phase (B). This is a popular design among practitioners and researchers because it involves only one baseline phase and therefore poses the least conflict with service delivery priorities. It has less control for history, however, than most alternative single-case evaluation designs. See Chapter 12.
transient or homeless participants—for future follow-up sessions or interviews. See Chapter 5.
ABAB withdrawal/reversal design A single-case evaluation design that adds a second baseline phase (A) and a second intervention phase (B). This design assumes that if the intervention caused the improvement in the target problem during the first intervention period, then the target problem will reverse toward its original baseline level during the second baseline. When the intervention is reintroduced, the target problem should start improving again. The basic inferential principle here is that if shifts in the trend or level of the target problem occur successively each time the intervention is introduced or withdrawn, then it is not plausible that history explains the change. See Chapter 12.
anonymous enrollment A method of recruiting members of hidden and oppressed populations to participate in research studies; the method emphasizes techniques that enable prospective participants to feel safer in responding to recruitment efforts and participating in studies. See Chapter 5.
anonymity An arrangement that makes it impossible for a researcher to link any research data with a given research participant. Distinguished from confidentiality, in which the researcher is able to identify a given person's responses but essentially promises not to do so publicly. See Chapter 4.
area probability sample A form of multistage cluster sample in which geographic areas such as census blocks or tracts serve as the first-stage sampling unit. Units selected in the first stage of sampling are then listed—all the households on each selected block would be written down after a trip to the block—and such lists would be subsampled. See Chapter 14.
abstract A separate page at the beginning of a research proposal or report that briefly summarizes the proposed or completed study. See Chapter 23.
assent form A brief consent form that a child can understand and sign before participating in a study; it uses simpler language than consent forms for adults about the features of the study that might affect their decision about whether they want to participate in it. See consent form and Chapter 4.
accidental sampling See availability sampling. acculturation The process in which a group or individual changes after coming into contact with a majority culture, taking on its language, values, attitudes, and lifestyle preferences. See Chapter 5.
attributes Characteristics of persons or things. See variables and Chapters 3 and 7. attrition A threat to the validity of an experiment that occur s when participants drop out of an exp eriment before it is completed. Also called experimental mortality. See Chapter 10.
acquiescent response set A source of measurement error in which people agree or disagree with most or all statements regardless of their content. See Chapter 8.
auditing A strategy for improving the trustworthiness of qualitative research �ndings in wh ich the researcher leaves a paper trail of �eld notes, transcripts of interviews, journals, and memos documenting dec isions made along the way, and so on. This enables an impartial and qualitatively adept investigator who is not part of the study to scrutinize what was done in order to determine if efforts to control for bias and reactivity were thorough, if the procedures used were justi�able, and if the interpretations �t the data that were collected. See Chapter 17.
agency tracking Asking service providers or other community agencies whether they have been in recent contact with research participants—particularly those who are transient or homeless— whom you are unable to locate and whom you need to contact for further sessions or interviews. See Chapter 5. alternative treatment design with pretest An experiment that compares the effectiveness of two alternative treatments. Participants are assigned randomly to two experimental groups, each of which receives a different intervention being evaluated, and to a control group that does not receive any intervention. Each group is tested on the dependent variable before and after the experimental groups receive the intervention. See Chapter 10.
availability sampling A sampling method that selects elements simply because of their ready availability and convenience. Frequently used in social work because it is usually less expensive than other methods and because other methods may not be feasible for a particular type of study or population. See Chapter 14.
analysis of variance A form of data analysis in which the variance of a dependent variable is examined for the whole sample and for separate subgroups created on the basis of one or more independent variables. See Chapter 22.
available records A source of data for a study in which the information of concern already has been gathered by others. For example, an evaluation of a statewide dropout prevention program may use available school records on d ropout rates. See Chapters 7 and 16.
anchor points Pieces of information about the various places you may be able to �nd particular research participants— particularly
617
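The multistage logic behind an area probability sample (and cluster sampling generally) can be sketched in a few lines of Python. This is a hypothetical illustration only; the block names and household counts are invented, not taken from the text:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# A hypothetical sampling frame: 100 census blocks, each with 20 households.
blocks = {f"block_{i}": [f"block_{i}_hh_{j}" for j in range(20)]
          for i in range(100)}

# Stage 1: randomly select 10 blocks (the first-stage sampling units).
selected_blocks = random.sample(list(blocks), 10)

# Stage 2: list the households on each selected block, then subsample 5 per block.
sample = []
for block in selected_blocks:
    sample.extend(random.sample(blocks[block], 5))

print(len(sample))  # 10 blocks x 5 households = 50 households
```

Because every block and every household within a selected block has a known chance of selection, the resulting sample supports the probability-sampling inferences discussed in Chapter 14.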
GLOSSARY
average An ambiguous term that generally suggests typical or normal. Mean, median, and mode are specific examples of mathematical averages, or measures of central tendency. See Chapter 20.
back-translation A method used when translating instruments from one language into another. The steps are: (1) a bilingual person translates the instrument and its instructions to a target language, (2) another bilingual person translates from the target language back to the original language (not seeing the original version of the instrument), (3) the original instrument is compared to the back-translated version, and (4) items with discrepancies are further modified. See Chapter 5.
baseline The phase of a single-case evaluation design that consists of repeated measures before a new intervention or policy is introduced. See Chapter 12.
bias (1) That quality of a measurement device that tends to result in a misrepresentation of what is being measured in a particular direction. For example, the questionnaire item "Don't you agree that the president is doing a good job?" would be biased because it would generally encourage more favorable responses. See Chapters 8 and 9 for more on this topic. (2) The thing inside a person that makes other people or groups seem consistently better or worse than they really are.
binomial variable A variable that has only two attributes is binomial. "Gender" would be an example, having the attributes "male" and "female."
bivariate analysis The analysis of two variables simultaneously to determine the empirical relationship between them. The construction of a simple percentage table or the computation of a simple correlation coefficient would be examples of bivariate analyses. See Chapter 20.
CA See conversation analysis.
case-control design A design for evaluating interventions that compares groups of cases that have had contrasting outcomes and then collects retrospective data about past differences that might explain the difference in outcomes. It relies on multivariate statistical procedures. See Chapter 11.
case-oriented analysis An idiographic qualitative data analysis method that focuses on attempting to understand a particular case fully. See Chapter 19.
case study An idiographic examination of a single individual, family, group, organization, community, or society using a full variety of evidence regarding that case. See Chapter 17.
causal inference An inference derived from a research design and findings that logically imply that the independent variable really has a causal impact on the dependent variable. See Chapter 10.
census An enumeration of the characteristics of some population. A census is often similar to a survey, with the difference that the census collects data from all members of the population and the survey is limited to a sample. See Chapter 15.
chi-square A statistical significance test used when both the independent and dependent variables are nominal level. See Chapter 22.
client logs A qualitative or quantitative method that can be used as part of case studies or single-case evaluations in which clients keep journals of events that are relevant to their problems. See Chapter 17.
clinical significance The term used for substantive significance in clinical outcome studies. See also substantive significance and Chapter 21.
closed-ended questions Unlike in open-ended questions, the respondent is asked to select an answer from among a list provided by the researcher. See Chapter 9.
cluster sample A sample drawn using cluster sampling procedures. See Chapter 14.
cluster sampling A multistage sampling procedure in which natural groups (clusters) are sampled initially, with the members of each selected group being subsampled afterward. For example, we might select a sample of U.S. colleges and universities from a directory, get lists of the students at all the selected schools, and then draw samples of students from each. This procedure is discussed in Chapter 14.
codebook The document used in data processing and analysis that tells the location of different data items in a data file. Typically, the codebook identifies the locations of data items and the meaning of the codes used to represent different attributes of variables. See Chapter 20.
coding The process whereby raw data are transformed into a standardized form that is suitable for machine processing and analysis. See Chapters 19 and 20.
coefficient alpha A statistic for depicting the internal consistency reliability of an instrument; it represents the average of the correlations between the subscores of all possible subsets of half of the items on the instrument. See Chapter 8.
cohort study A study in which some specific group is studied over time, although data may be collected from different members in each set of observations. For example, a study of the professional careers of students earning their social work degrees in 1990, in which questionnaires were sent every five years, would be a cohort study. See Chapter 6.
community forum An approach to needs assessment that involves holding a meeting where concerned members of the community can express their views and interact freely about their needs. See Chapter 13.
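The internal-consistency idea behind coefficient alpha can be made concrete with the conventional computing formula, alpha = (k / (k − 1)) × (1 − sum of item variances / variance of total scores), using only the standard library. This is a rough sketch with an invented four-respondent data set, not a formula from the text:

```python
from statistics import pvariance

def coefficient_alpha(item_scores):
    """Coefficient (Cronbach's) alpha for a list of per-item score lists."""
    k = len(item_scores)                                 # number of items on the scale
    totals = [sum(resp) for resp in zip(*item_scores)]   # each respondent's total score
    item_var = sum(pvariance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

# Three perfectly consistent items (each respondent answers identically on all
# three) yield an alpha of 1.0, the maximum internal consistency.
items = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
print(coefficient_alpha(items))  # approximately 1.0
```

In practice, of course, items correlate imperfectly and alpha falls below 1.0; values around .80 or higher are usually taken as adequate reliability.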
compensatory equalization A threat to the validity of an evaluation of an intervention's effectiveness that occurs when practitioners in the comparison routine-treatment condition compensate for the differences in treatment between their group and the experimental group by providing enhanced services that go beyond the routine-treatment regimen for their clients, thus potentially blurring the true effects of the tested intervention. See Chapter 10.
compensatory rivalry A threat to the validity of an evaluation of an intervention's effectiveness that occurs when practitioners in the comparison routine-treatment condition decide to compete with the therapists in the other unit. They may start reading more, attending more continuing education workshops, and increasing their therapeutic contact with clients. Their extra efforts might improve their effectiveness and thus blur the true effects of the tested intervention. See Chapter 10.
computer-assisted telephone interviewing (CATI) Interviewing over the phone by reading questions from a computer screen and immediately entering responses into the computer. See Chapter 15.
concept A mental image that symbolizes an idea, an object, an event, or a person. See Chapter 3.
concept mapping A qualitative data analysis method in which relationships among concepts are examined and diagrammed in a graphical format. See Chapter 19.
conceptual equivalence Instruments and observed behaviors having the same meanings across cultures. See Chapter 5.
conceptualization The mental process whereby fuzzy and imprecise notions (concepts) are made more specific and precise. So you want to study prejudice. What do you mean by "prejudice"? Are there different kinds? What are they? See Chapter 7.
concurrent validity A form of criterion-related validity examining a measure's correspondence to a criterion that is known concurrently. See Chapter 8.
confidence interval The range of values within which a population parameter is estimated to lie. A survey, for example, may show 40 percent of a sample favoring candidate A (poor devil). Although the best estimate of the support existing among all voters would also be 40 percent, we would not expect it to be exactly that. We might, therefore, compute a confidence interval (for example, from 35 to 45 percent) within which the actual percentage of the population probably lies. Note that it's necessary to specify a confidence level in connection with every confidence interval. See Appendix B.
confidence level The estimated probability that a population parameter lies within a given confidence interval. Thus, we might be 95 percent confident that between 35 and 45 percent of all voters favor candidate A. See Appendix B.
confidentiality A promise by the researcher not to publicly identify a given research participant's data. Distinguished from anonymity, which makes it impossible for a researcher to link any research data with a given research participant. See Chapter 4.
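The candidate-A example can be made concrete with the standard normal-approximation confidence interval for a sample proportion, p ± z × sqrt(p(1 − p)/n). This is a textbook formula, but the sample size of 384 below is invented purely for illustration:

```python
import math

def proportion_ci(p, n, z=1.96):
    """Normal-approximation confidence interval for a sample proportion.

    z = 1.96 corresponds to a 95 percent confidence level.
    """
    margin = z * math.sqrt(p * (1 - p) / n)
    return p - margin, p + margin

# 40 percent of a hypothetical sample of 384 voters favor candidate A.
low, high = proportion_ci(0.40, 384)
print(f"95% CI: {low:.1%} to {high:.1%}")  # prints "95% CI: 35.1% to 44.9%"
```

Widening the confidence level (say, z = 2.58 for 99 percent) widens the interval; increasing the sample size narrows it, which is the trade-off discussed in Appendix B.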
consent form A form that human subjects sign before participating in a study that provides full information about the features of the study that might affect their decision about whether to participate—particularly regarding its procedures, potential harm, and anonymity and confidentiality. See Chapter 4.
constant comparative method A qualitative data analysis method in which the researcher looks for patterns in inductive observations, develops concepts and working hypotheses based on those patterns, seeks out more cases and conducts more observations, and then compares those observations against the concepts and hypotheses developed from the earlier observations. The selection of new cases is guided by theoretical sampling concepts in which new cases are selected that seem to be similar to those generated by previously detected concepts and hypotheses. Once the researcher perceives that no new insights are being generated from the observation of similar cases, a different type of case is selected and the same process is repeated. Additional cases similar to this new type of case are selected until no new insights are being generated. This cycle of exhausting similar cases and then seeking a different category of cases is repeated until the researcher believes that further seeking of new types of cases will not alter the findings. See Chapter 19.
construct validity The degree to which a measure relates to other variables as expected within a system of theoretical relationships and as reflected by the degree of its convergent validity and discriminant validity. See also convergent validity, discriminant validity, and Chapter 8.
contemporary positivism A paradigm that recognizes the virtual impossibility of being completely objective yet assumes that there is an objective answer to research questions and that it is worth trying to investigate things as objectively as possible to attempt to maximize the accuracy of answers to research questions. See Chapter 3.
content analysis A research method for studying virtually any form of communication, consisting primarily of coding and tabulating the occurrences of certain forms of content that are being communicated. See Chapter 16.
content validity The degree to which a measure covers the range of meanings included within the concept. See Chapter 8.
contingency question A survey question that is to be asked of only some of the respondents, depending on their responses to some other question. For example, all respondents might be asked whether they belong to the Cosa Nostra, and only those who said yes would be asked how often they go to company meetings and picnics. The latter would be a contingency question. See Chapter 9 for illustrations of this topic.
contingency table Any table format for presenting the relationships among variables in the form of percentage distributions. See Chapter 20.
control group In experimentation, a group of participants who do not receive the intervention being evaluated and who should resemble the experimental group in all other respects. The comparison of the control and experimental groups at the end of the experiment points to the effect of the tested intervention. See Chapter 10.
control variable A variable that is held constant in an attempt to further clarify the relationship between two other variables. Having discovered a relationship between education and prejudice, for example, we might hold gender constant by examining the relationship between education and prejudice among men only and then among women only. In this example, "gender" would be the control variable. See Chapter 7, and also Chapter 10 to see the importance of the proper use of control variables in analysis.
convenience sampling See availability sampling.
convergent validity The degree to which scores on a measure correspond to scores on other measures of the same construct. See also construct validity, discriminant validity, and Chapter 8.
conversation analysis (CA) A qualitative data analysis approach that aims to uncover the implicit assumptions and structures in social life through an extremely close scrutiny of the way we converse with one another. See Chapter 19.
cost–benefit analysis An assessment of program efficiency in which an attempt is made to monetize the benefits associated with a program's outcome and thus see if those monetary benefits exceed program costs. See Chapter 13.
cost-effectiveness analysis An assessment of program efficiency in which the only monetary considerations are the costs of the program; the monetary benefits of the program's effects are not assessed. Cost-effectiveness analysis looks at the cost per unit of outcome without monetizing the outcome. See Chapter 13.
criterion-related validity The degree to which a measure relates with some external criterion. For example, the validity of the college board exam is shown in its ability to predict the college
success of students. See known groups validity, concurrent validity, predictive validity, and Chapter 8.
critical region Those values in the statistically significant zone of a theoretical sampling distribution. See Chapter 21.
critical social science A paradigm distinguished by its focus on oppression and its commitment to use research procedures to empower oppressed groups. See Chapter 3.
cross-case analysis A qualitative data analysis method that is an extension of case-oriented analysis, in which the researcher turns to other subjects, looking into the full details of their lives as well but paying special note to the variables that seemed important in the first case. Some subsequent cases will closely parallel the first one in the apparent impact of particular variables. Other cases will bear no resemblance to the first. These latter cases may require the identification of other important variables, which may invite the researcher to explore why some cases seem to reflect one pattern whereas others reflect another. See case-oriented analysis and Chapter 19.
cross-sectional study A study based on observations that represent a single point in time. Contrasted with a longitudinal study. See Chapters 6 and 11.
cultural bias A source of measurement error or sampling error stemming from researcher ignorance or insensitivity regarding how cultural differences can influence measurement or generalizations made to the entire population when certain minority groups are inadequately represented in the sample. A measurement procedure is culturally biased when it is administered to a minority culture without adjusting for the ways in which the minority culture's unique values, attitudes, lifestyles, or limited opportunities alter the accuracy or meaning of what is really being measured. See Chapters 5 and 8.
cultural competence A researcher's ability to obtain and provide information that is relevant, useful, and valid for minority and oppressed populations. Cultural competence involves knowledge about the minority culture's historical experiences, traditions, values, family systems, socioeconomic issues, and attitudes about social services and social policies; awareness of how one's own attitudes are connected to one's own cultural background and how they may differ from the worldview of members of the minority culture; and skills in communicating effectively both verbally and nonverbally with members of the minority culture and establishing rapport with them. See Chapter 5.
culturally competent research Being aware of and appropriately responding to the ways in which cultural factors and cultural differences should influence what we investigate, how we investigate, and how we interpret our findings—thus resulting in studies that are useful and valid for minority and oppressed populations. See Chapter 5.
curvilinear relationship A relationship between two variables that changes in nature at different values of the variables. For example, a curvilinear relationship might exist between amount of social work practice experience and practice effectiveness, particularly if we assume that practitioners with a moderate amount of experience are more effective than those with none and at least as effective as those nearing retirement. See Chapter 7.
deduction The logical model in which specific expectations of hypotheses are developed on the basis of general principles. Starting from the general principle that all deans are meanies, you might anticipate that Dean Moe won't let you change courses. That anticipation would be the result of deduction. See also induction and Chapter 3.
dependent variable That variable that is assumed to depend on, or be caused by, another (called the independent variable). If you find that income is partly a function of amount of formal education, then income is being treated as a dependent variable. See Chapters 3 and 7.
descriptive statistics Statistical computations that describe either the characteristics of a sample or the relationship among variables in a sample. Descriptive statistics merely summarize a set of sample observations, whereas inferential statistics move beyond the description of specific observations to make inferences about the larger population from which the sample observations were drawn. See Chapter 20.
deviant case sampling A type of nonprobability sampling in which cases selected for observation are those that are not thought to fit the regular pattern. For example, the deviant cases might exhibit a much greater or lesser extent of something. See Chapters 14 and 17.
dichotomous variable A variable that has only two categories. See also binomial variable.
diffusion (or imitation) of treatments A threat to the validity of an evaluation of an intervention's effectiveness that occurs when practitioners who are supposed to provide routine services to a comparison group implement aspects of the experimental group's intervention in ways that tend to diminish the planned differences in the interventions received by the groups being compared. See Chapter 10.
dimension A specifiable aspect or facet of a concept.
direct behavioral observation A source of data, or type of data collection, in which researchers watch what people do rather than rely on what they say about themselves or what others say about them. See Chapters 7 and 8.
direct observation A way to operationally define variables based on observing actual behavior. See also direct behavioral observation and Chapters 7 and 8.
discriminant validity The degree to which scores on an instrument correspond more highly to measures of the same construct than they do to scores on measures of other constructs. See also convergent validity, construct validity, and Chapter 8.
dismantling studies Experiments designed to test not only whether an intervention is effective, but also which components of the intervention may or may not be necessary to achieve its effects. Participants are assigned randomly to groups that either receive the entire intervention package, separate components of it, or a control condition, and are tested on a dependent variable before and after the intervention components are provided. See Chapter 10.
dispersion The distribution of values around some central value such as an average. The range is a simple example of a measure of dispersion. Thus, we may report that the mean age of a group is 37.9, and the range is from 12 to 89. See Chapter 20.
disproportionate stratified sampling A sampling method aimed at ensuring that enough cases of certain minority groups are
selected to allow for subgroup comparisons within each of those minority groups. See Chapters 5 and 14.
double-barreled question Asking for a single answer to a question that really contains multiple questions; for example, "Should taxes be raised so welfare funding can be increased?" See Chapter 9.
ecological fallacy Erroneously drawing conclusions about individuals based solely on the observation of groups. See Chapter 6.
effect size A statistic that portrays the strength of association between variables. Effect-size statistics might refer to various measures of proportion of dependent variable variation explained or specifically to the difference between the means of two groups divided by the standard deviation. The latter is usually called the effect size, ES, or Cohen's d. See Chapter 21.
element That unit in a sample about which information is collected and that provides the basis of analysis. Typically, in survey research, elements are people or certain types of people. See Chapter 14.
emic perspective Trying to adopt the beliefs, attitudes, and other points of view shared by the members of the culture being studied. See Chapter 18.
empirical support Observations that are consistent with what we would expect to experience if a theory is correct or an intervention is effective. See Chapters 1, 2, and 3.
EPSEM See equal probability of selection method.
equal probability of selection method (EPSEM) A sample design in which each member of a population has the same chance of being selected into the sample. See Chapter 14.
ES See effect size.
ethnocentrism The belief in the superiority of one's own culture. See Chapter 5.
ethnography A qualitative research approach that focuses on providing a detailed and accurate description of a culture from the viewpoint of an insider rather than the way the researcher understands things. See Chapter 17.
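The second effect-size statistic described above, Cohen's d (the difference between two group means divided by the pooled standard deviation), can be sketched as follows. The outcome scores for the two groups are invented for illustration:

```python
from statistics import mean, stdev
import math

def cohens_d(group1, group2):
    """Effect size d: difference between group means over the pooled SD."""
    n1, n2 = len(group1), len(group2)
    pooled_sd = math.sqrt(((n1 - 1) * stdev(group1) ** 2 +
                           (n2 - 1) * stdev(group2) ** 2) / (n1 + n2 - 2))
    return (mean(group1) - mean(group2)) / pooled_sd

# Hypothetical outcome scores for an experimental and a control group.
treated = [14, 16, 15, 17, 18]
control = [12, 13, 11, 14, 12]
print(round(cohens_d(treated, control), 2))  # d of about 2.61 in this toy example
```

A d of that size would be unusually large in real outcome studies; conventional benchmarks treat roughly .2 as small, .5 as medium, and .8 as large.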
etic perspective Maintaining objectivity as an outsider and raising questions about the culture being observed that wouldn't occur to members of that culture. See Chapter 18.
evidence-based practice Using the best scientific evidence available in deciding how to intervene with individuals, families, groups, or communities. See Chapter 2.
existing statistics analysis Research involving the analysis of statistical information in official government or agency documents and reports. See Chapter 16.
experimental demand characteristics Research participants learn what experimenters want them to say or do, and then they cooperate with those "demands" or expectations. See Chapter 11.
experimental design A research method that attempts to provide maximum control for threats to internal validity by: (1) randomly assigning individuals to experimental and control groups, (2) introducing the independent variable (which typically is a program or intervention method) to the experimental group while withholding it from the control group, and (3) comparing the amount of experimental and control group change on the dependent variable. See Chapter 10.
experimental group In experiments, a group of participants who receive the intervention being evaluated and who should resemble the control group in all other respects. The comparison of the experimental group and the control group at the end of the experiment points to the effect of the tested intervention. See Chapter 10.
experimental mortality A threat to the validity of an experiment that occurs when participants drop out of an experiment before it is completed. Also called attrition. See Chapter 11.
experimenter expectancies Research participants learn what experimenters want them to say or do, and then they cooperate with those "demands" or expectations. See Chapter 10.
external evaluators Program evaluators who do not work for the agency being evaluated but instead work for external agencies such as government or regulating agencies, private research consultation firms, or universities. See Chapter 13.
external validity Refers to the extent to which we can generalize the findings of a study to settings and populations beyond the study conditions. See Chapter 10.
extraneous variable See control variable.
face validity That quality of an indicator that makes it seem a reasonable measure of some variable. That the frequency of church attendance is some indication of a person's religiosity seems to make sense without a lot of explanation: It has face validity. See Chapter 8.
factor analysis A statistical procedure that identifies which subsets of variables or items on a scale correlate with each other more than with other subsets. In so doing, it identifies how many dimensions a scale contains and which items cluster on which dimensions. See Chapter 8.
factorial validity Whether the number of constructs and the items that make up those constructs on a measurement scale are what the researcher intends. See Chapter 8.
field tracking Talking with people on the streets about where to find research participants—particularly those who are homeless—to secure their participation in future sessions or interviews. See Chapter 5.
file drawer effect A term based on the notion that authors of studies with findings that don't support the effectiveness of an intervention will just file their studies away rather than submit them for publication. See Chapter 22.
focus groups An approach to needs assessment in which a small group of people are brought together to engage in a guided discussion of a specified topic. See Chapters 13 and 18.
formative evaluation A type of program evaluation not concerned with testing the success of a program, but focusing instead on obtaining information that is helpful in planning the program and improving its implementation and performance. See Chapter 13.
frequency distribution A description of the number of times the various attributes of a variable are observed in a sample. The report that 53 percent of a sample were men and 47 percent were women would be a simple example of a frequency distribution. Another example would be the report that 15 of the cities studied had populations under 10,000, 23 had populations between 10,000 and 25,000, and so forth. See Chapter 20.
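A frequency distribution like the gender example above can be tabulated directly with the standard library. The 100-person sample below is invented for illustration:

```python
from collections import Counter

# Hypothetical sample: 53 men and 47 women.
sample = ["man"] * 53 + ["woman"] * 47
freq = Counter(sample)

# Report each attribute's count and its percentage of the sample.
for attribute, count in freq.items():
    print(f"{attribute}: {count} ({count / len(sample):.0%})")
# man: 53 (53%)
# woman: 47 (47%)
```

The same tabulation works for grouped data (such as the city-population example) once the raw values are recoded into categories.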
gender bias The unwarranted generalization of research findings to the population as a whole when one gender is not adequately represented in the research sample. See Chapter 14.
generalizability That quality of a research finding that justifies the inference that it represents something more than the specific observations on which it was based. Sometimes, this involves the generalization of findings from a sample to a population. Other times it is a matter of concepts: If you are able to discover why people commit burglaries, can you generalize that discovery to other crimes as well? See Chapter 17.
generalization of effects A rival explanation in a multiple-baseline design that occurs when an intervention that is intended to apply to only one behavior or setting affects other behaviors or settings that are still in baseline. See Chapter 12.
generalize To infer that the findings of a particular study represent causal processes or apply to settings or populations beyond the study conditions. See Chapter 17.
going native A risk in qualitative field research that occurs when researchers overidentify with their respondents and lose their objective, analytic stance or their own sense of identity. See Chapters 17 and 18.
grounded theory A qualitative research approach that begins with observations and looks for patterns, themes, or common categories. See Chapters 17, 18, and 19.
grounded theory method (GTM) A qualitative methodology for building theory from data by beginning with observations and looking for patterns, themes, or common categories in those observations. See Chapters 17 and 19.
GTM See grounded theory method.
hermeneutics A qualitative research approach in which the researcher mentally tries to take on the circumstances, views, and feelings of those being studied in order to interpret their actions appropriately. See Chapter 16.
historical and comparative research A research method that traces the development of social forms over time and compares those developmental processes across cultures, seeking to discover common patterns that recur in different times and places. See Chapter 16.
history A threat to internal validity referring to extraneous events that coincide in time with the manipulation of the independent variable. See Chapters 10 and 12.
hypothesis A tentative and testable prediction about how changes in one thing are expected to explain and be accompanied by changes in something else. A statement of something that ought to be observed in the real world if a theory is correct. See deduction and also Chapters 3, 6, and 7.
hypothesis testing The determination of whether the expectations that a hypothesis represents are actually found to exist in the real world. See Chapters 3 and 6.
ideology A closed system of beliefs and values that shapes the understanding and behavior of those who believe in it. See Chapter 3.
idiographic An approach to explanation in which we attempt to explain a single case fully, using as many idiosyncratic, explanatory factors as may be necessary. We might explain why Uncle Ed is such a bigot by talking about what happened to him that summer at the beach, what his college roommate did to him, and so on. This kind of explanation won't necessarily help us understand bigotry in general, but we'd feel we really understood Uncle Ed. By contrast, see nomothetic. See Chapter 3.
independent variable A variable whose values are not problematical in an analysis but are taken as simply given. An independent variable is presumed to cause or explain a dependent variable. If we discover that religiosity is partly a function of gender—women are more religious than men—gender is the independent variable and religiosity is the dependent variable. Note that any given variable might be treated as independent in one part of an analysis and dependent in another part of the analysis. Religiosity might become an independent variable in the explanation of crime. See Chapters 3 and 7.
index A type of composite measure that summarizes several specific observations and represents some more general dimension. See Chapter 9.
induction The logical model in which general principles are developed from specific observations. Having noted that Jews and Catholics are more likely to vote Democratic than are Protestants, you might conclude that religious minorities in the United States are more affiliated with the Democratic Party and explain why. That would be an example of induction. See also deduction and Chapter 3.
inference A conclusion that can be logically drawn in light of a research design and findings. See Chapter 10.
inferential statistics The body of statistical computations that is relevant to making inferences from findings based on sample observations to some larger population. See also descriptive statistics and Chapters 21 and 22.
informal conversational interview An unplanned and unanticipated interaction between an interviewer and a respondent that occurs naturally during the course of fieldwork observation. It is the most open-ended form of interviewing, and the interviewee might not think of the interaction as an interview. Flexibility to pursue relevant information in whatever direction seems appropriate is emphasized, and questions should be generated naturally and spontaneously from what is observed at a particular point in a particular setting or from what individuals in that setting happen to say. See Chapter 18.
informant Someone who is well versed in the social phenomenon that you wish to study and willing to tell you what he or she knows. If you were planning par ticipant observation among the members of a religious sect, then you would do well to make friends with someone who already knows about the members— possibly even a sect member—who could give you background information about them. Not to be confused with a respondent. See Chapters 14 and 18. in-house evaluators Program evaluators who work for the agency being evaluated and therefore may be under pressure to produce biased studies or results that portray the agency favorably. See Chapter 13. institutional review board (IRB) An independent panel of professionals that is required to approve the ethics of research involving human subjects. See Chapter 4. internal consistency reliability A practical and commonly used approach to assessing reliability that examines the homogeneity of a measurement instrument by dividing the instrument
into equivalent halves and then calculating the correlation of the scores of the two halves. See Chapter 8.
internal invalidity Refers to the possibility that the conclusions drawn from experimental results may not accurately reflect what went on in the experiment itself. See Chapter 10 and also external invalidity.
internal validity The degree to which an effect observed in an experiment was actually produced by the experimental stimulus and not the result of other factors. See Chapter 10 and external validity.
interobserver reliability See interrater reliability.
interpretation A technical term used in connection with the elaboration model. It represents the research outcome in which a control variable is discovered to be the mediating factor through which an independent variable affects a dependent variable. See Chapter 10.
interpretivism An approach to social research that focuses on gaining an empathic understanding of how people feel inside, seeking to interpret individuals' everyday experiences, deeper meanings and feelings, and idiosyncratic reasons for their behaviors. See Chapter 3.
interrater reliability The extent of consistency among different observers in their judgments, as reflected in the percentage of agreement or degree of correlation in their independent ratings. See Chapter 8.
interrupted time-series with a nonequivalent comparison group time-series design The most common form of multiple time-series design, in which an experimental group and a control group are measured at multiple points in time before and after an intervention is introduced to the experimental group. See Chapter 11.
interval measure A level of measurement that describes a variable whose attributes are rank-ordered and have equal distances between adjacent attributes. The Fahrenheit temperature scale is an example of this, because the distance between 17° and 18° is the same as that between 89° and 90°.
See also nominal measure, ordinal measure, ratio measure, and Chapter 20.
intervening variable See mediating variable.
intervention fidelity The degree to which an intervention being evaluated is actually delivered to clients as intended. See Chapter 11.
interview A data-collection encounter in which one person (an interviewer) asks questions of another (a respondent). Interviews may be conducted face-to-face or by telephone. See Chapters 15 and 18 for more information on interviewing.
interview guide approach A semistructured form of qualitative interviewing that lists in outline form the topics and issues that the interviewer should cover in the interview, but allows the interviewer to adapt the sequencing and wording of questions to each particular interview. See Chapter 18.
inverse relationship See negative relationship.
IRB See institutional review board.
judgmental sample A type of nonprobability sample in which we select the units to be observed on the basis of our own judgment about which ones will be the most useful or representative. Another name for this is purposive sample. See Chapter 14 for more details.
key informants An approach to needs assessment that is based on expert opinions of individuals who are presumed to have special knowledge about a target population's problems or needs. See Chapter 13.
known groups validity A form of criterion-related validity that pertains to the degree to which an instrument accurately differentiates between groups that are known to differ in respect to the variable being measured. See Chapter 8.
latent content As used in connection with content analysis, the underlying meaning of communications as distinguished from their manifest content. See Chapter 16.
level of significance See significance level.
life history (or life story or oral history interviews) A qualitative research method in which researchers ask open-ended questions to discover how the participants in a study understand the significant events and meanings in their own lives. See Chapter 18.
life story See life history.
Likert scale A type of composite measure developed by Rensis Likert in an attempt to improve the levels of measurement in social research through the use of standardized response categories in survey questionnaires. "Likert items" use such response categories as strongly agree, agree, disagree, and strongly disagree. Such items may be used in the construction of true Likert scales and also be used in the construction of other types of composite measures. See Chapter 9.
linguistic equivalence (or translation equivalence) The result of a successful translation and back-translation of an instrument originally developed for the majority language, but which will be used with research participants who don't speak the majority language. See Chapter 5.
logic model A graphic portrayal that depicts the essential components of a program, shows how those components are linked to short-term process objectives, specifies measurable indicators of success in achieving short-term objectives, conveys how those short-term objectives lead to long-term program outcomes, and identifies measurable indicators of success in achieving long-term outcomes. See Chapter 13.
longitudinal study A study design that involves the collection of data at different points in time, as contrasted with a cross-sectional study. See Chapter 6.
mail tracking A method of locating and contacting research participants by mailing reminder notices about impending interviews or about the need to call in to update any changes in how they can be contacted. It might also include sending birthday cards, holiday greetings, and certificates of appreciation for participation. See Chapter 5.
managed care A variety of arrangements that try to control the costs of health and human services by having a large organization that pays for the cost of services for many people contract with care providers who agree to provide that care at reduced costs. Managed care is thought to have contributed to the growth of program evaluation. See Chapter 13.
manifest content In connection with content analysis, the concrete terms contained in a communication, as distinguished from latent content. See Chapter 16.
matching In connection with experiments, the procedure whereby pairs of subjects are matched on the basis of their similarities on one or more variables, and one member of the pair is assigned to the experimental group and the other to the control group. See Chapter 11.
maturation A threat to internal validity referring to aging effects or developmental changes that influence the dependent variable. See Chapters 10 and 11.
mean An average, computed by summing the values of several observations and dividing by the number of observations. If you now have a grade point average of 4.0 based on 10 courses and you get an F in this course, then your new grade point average (the mean) will be 3.6. See Chapter 20.
measurement equivalence The degree to which instruments or observed behaviors have the same meaning across cultures, relate to referent theoretical constructs in the same way across cultures, and have the same causal linkages across cultures. See Chapter 5.
median Another average; it represents the value of the "middle" case in a rank-ordered set of observations. If the ages of five men are 16, 17, 20, 54, and 88, then the median would be 20 (the mean would be 39). See Chapter 20.
mediating variable (or intervening variable) The mechanism by which an independent variable affects a dependent variable. See Chapter 7.
member checking A strategy for improving the trustworthiness of qualitative research findings in which researchers ask the participants in their research to confirm or disconfirm the accuracy of the research observations and interpretations. Do the reported observations and interpretations ring true and have meaning to the participants? See Chapter 17.
memoing A qualitative data analysis technique used at several stages of data processing to capture code meanings, theoretical ideas, preliminary conclusions, and other thoughts that will be useful during analysis. See Chapter 19.
meta-analysis A procedure for calculating the average strength of association between variables (that is, the mean effect size) across previously completed research studies in a particular field. See Chapter 22.
metric equivalence (or psychometric equivalence or scalar equivalence) Scores on a measure being comparable across cultures. See Chapter 5.
mode The most frequently observed value or attribute. If a sample contains 1,000 Protestants, 275 Catholics, and 33 Jews, then Protestant is the modal category. See Chapter 20.
moderating variable A variable that influences the strength or direction of a relationship between independent and dependent variables. See Chapter 7.
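The three averages defined in the mean, median, and mode entries can be checked with Python's standard statistics module. This is only an illustrative sketch; the grades, ages, and religious-affiliation counts below simply repeat the worked examples given in those entries.

```python
# Illustrating the glossary's mean, median, and mode examples
# with Python's standard-library statistics module.
import statistics

# mean: a 4.0 GPA over 10 courses plus an F (0.0) in an 11th course
grades = [4.0] * 10 + [0.0]
print(round(statistics.mean(grades), 1))   # 3.6

# median vs. mean: the five ages from the median entry
ages = [16, 17, 20, 54, 88]
print(statistics.median(ages))             # 20
print(statistics.mean(ages))               # 39

# mode: the most frequently observed attribute (works for nominal data)
sample = ["Protestant"] * 1000 + ["Catholic"] * 275 + ["Jew"] * 33
print(statistics.mode(sample))             # Protestant
```

Note how the extreme ages of 54 and 88 pull the mean far above the median, which is why the median is often preferred for skewed distributions.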
multiple-baseline design A type of single-case evaluation design that attempts to control for extraneous variables by having more than one baseline and intervention phase. See Chapter 12.
multiple-component design A type of single-case evaluation design that attempts to determine which parts of an intervention package really account for the change in the target problem. See Chapter 12.
multiple regression analysis A multivariate statistical procedure that shows the overall correlation between a set (or sets) of independent variables and an interval- or ratio-level dependent variable. See Chapter 22.
multiple time-series designs A form of time-series analysis in which both an experimental group and a nonequivalent comparison group are measured at multiple points in time before and after an intervention is introduced to the experimental group. See Chapter 11.
multivariate analysis The analysis of the simultaneous relationships among several variables. Examining simultaneously the effects of age, sex, and social class on religiosity would be an example of multivariate analysis. See Chapters 10, 20, and 22.
naturalism A qualitative research paradigm that emphasizes observing people in their natural, everyday social settings and reporting their stories the way they tell them. See Chapter 17.
needs assessment Systematically researching diagnostic questions for program planning purposes. For example, community residents might be surveyed to assess their need for new childcare services. See Chapter 13.
negative case analysis A strategy for improving the trustworthiness of qualitative research findings in which researchers show they have searched thoroughly for disconfirming evidence—looking for deviant cases that do not fit the researcher's interpretations. See Chapter 17.
negative relationship A relationship between two variables in which one variable increases in value as the other variable decreases. For example, we might expect to find a negative relationship between the level of utilization of community-based aftercare services and rehospitalization rates. See Chapter 7.
nominal measure A level of measurement that describes a variable whose different attributes differ only categorically and not metrically, as distinguished from ordinal, interval, or ratio measures. Gender would be an example of a nominal measure. See Chapters 9 and 20.
nomothetic An approach to explanation in which we attempt to discover factors that can offer a general, though imperfect, explanation of some phenomenon. For example, we might note that education seems to reduce prejudice in general. Even though we recognize that some educated people are prejudiced and some uneducated people are not, we have learned some of what causes prejudice or tolerance in general. By contrast, see idiographic. See Chapter 3.
nondirectional hypotheses Predicted relationships between variables that do not specify whether the predicted relationship will be positive or negative. See Chapter 21.
nonequivalent comparison groups design A quasi-experimental design in which the researcher finds two existing groups that appear to be similar and measures change on a dependent variable before and after an intervention is introduced to one of the groups. See Chapter 11.
nonparametric tests Tests of statistical significance that have been created for use when not all of the assumptions of parametric statistics can be met. Chi-square is the most commonly used nonparametric test. See Chapter 22.
nonprobability sample A sample selected in some fashion other than those suggested by probability theory. Examples include judgmental (purposive), quota, and snowball samples. See Chapters 14 and 17.
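The chi-square statistic, identified in the nonparametric tests entry as the most commonly used nonparametric test, can be computed directly from a contingency table. The sketch below shows the textbook formula (summing squared observed-minus-expected deviations over expected counts); the 2 × 2 table of counts is invented purely for illustration.

```python
# A hand-computed chi-square statistic for a hypothetical 2x2
# contingency table; the counts are invented for illustration.
def chi_square(observed):
    """Chi-square: sum of (observed - expected)^2 / expected,
    with expected counts derived from the row and column totals."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (obs - expected) ** 2 / expected
    return chi2

# e.g., treatment vs. comparison group by improved / not improved
table = [[30, 20],
         [10, 40]]
print(round(chi_square(table), 2))  # 16.67
```

A large chi-square value such as this one indicates that the observed cell counts depart substantially from the counts expected if the row and column variables were unrelated.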
novelty and disruption effects A form of research reactivity in experiments in which the sense of excitement, energy, and enthusiasm among recipients of an evaluated intervention—and not the intervention itself—causes the desired change in their behavior. See Chapter 10.
NUD*IST A computer program designed to assist researchers in the analysis of qualitative data. See Chapter 19.
null hypothesis In connection with hypothesis testing and tests of statistical significance, the hypothesis that suggests there is no relationship between the variables under study. You may conclude that the two variables are related after having statistically rejected the null hypothesis. See Chapters 21 and 22.
observations Information we gather by experience in the real world that helps us build a theory or verify whether it is correct when testing hypotheses. See Chapter 3.
obtrusive observation Occurs when the participant is keenly aware of being observed and thus may be predisposed to behave in socially desirable ways and in ways that meet experimenter expectancies. See Chapters 10, 11, 12, and 16.
one-group pretest–posttest design A pre-experimental design, with low internal validity, that assesses a dependent variable before and after a stimulus is introduced but does not attempt to control for alternative explanations of any changes in scores that are observed. See Chapters 10 and 11.
one-shot case study A pre-experimental research design, with low internal validity, that simply measures a single group of subjects on a dependent variable at one point in time after they have been exposed to a stimulus. See Chapters 10 and 11.
one-tailed tests of significance Statistical significance tests that place the entire critical region at the predicted end of the theoretical sampling distribution and thus limit the inference of statistical significance to findings that are only in the critical region of the predicted direction. See Chapter 21.
online surveys Surveys conducted via the Internet—either by e-mail or through a website. See Chapter 15.
open coding A qualitative data-processing method in which, instead of starting out with a list of code categories derived from theory, one develops code categories through close examination of qualitative data. During open coding, the data are broken down into discrete parts, closely examined, and compared for similarities and differences. Questions are asked about the phenomena as reflected in the data. Through this process, one's own and others' assumptions about phenomena are questioned or explored, leading to new discoveries. See Chapter 19.
open-ended questions Questions for which respondents are asked to provide their own answer, rather than selecting from among a list of possible responses provided by the researcher as for closed-ended questions. See Chapter 9.
operational definition The concrete and specific definition of something in terms of the operations by which observations are to be categorized. The operational definition of "earning an A in this course" might be "correctly answering at least 90 percent of the final exam questions." See Chapters 7 and 12.
operationalization One step beyond conceptualization. Operationalization is the process of developing operational definitions. See Chapter 7.
oral history interviews See life history.
ordinal measure A level of measurement describing a variable whose attributes may be rank-ordered along some dimension. An example would be measuring "socioeconomic status" by the attributes high, medium, and low. See also nominal measure, interval measure, and ratio measure and Chapters 9 and 20.
panel attrition A problem facing panel studies, based on the fact that some respondents who are studied in the first wave of the survey may not participate later. See Chapter 6.
panel studies Longitudinal studies in which data are collected from the same sample (the panel) at several points in time. See Chapter 6.
PAR See participatory action research.
paradigm (1) A model or frame of reference that shapes our observations and understandings. For example, "functionalism" leads us to examine society in terms of the functions served by its constituent parts, whereas "interactionism" leads us to focus attention on the ways people deal with each other face-to-face and arrive at shared meanings for things. (2) Almost a quarter. See Chapter 3.
parallel-forms reliability Consistency of measurement between two equivalent measurement instruments. See Chapter 8.
parameter A summary statistic describing a given variable in a population, such as the mean income of all families in a city or the age distribution of the city's population. See Chapter 14 and Appendix B.
parametric tests Tests of statistical significance that assume that at least one variable being studied has an interval or ratio level of measurement, that the sample distribution of the relevant parameters of those variables is normal, and that the different groups being compared have been randomly selected and are independent of one another. Commonly used parametric tests are the t-test, analysis of variance, and Pearson product-moment correlation. See Chapter 22.
participatory action research (PAR) A qualitative research paradigm in which the researcher's function is to serve as a resource to those being studied—typically, disadvantaged groups—as an opportunity for them to act effectively in their own interest. The disadvantaged participants define their problems, define the remedies desired, and take the lead in designing the research that will help them realize their aims. See Chapter 17.
passage of time A threat to internal validity referring to changes in a dependent variable that occur naturally as time passes and not because of the independent variable. See Chapters 10 and 11.
path analysis A statistical procedure, based on regression analysis, that provides a graphic picture of a causal model for understanding relationships between variables. See Chapter 22.
Pearson product-moment correlation (r) A parametric measure of association, ranging from –1.0 to +1.0, used when both the independent and dependent variables are at the interval or ratio level of measurement. See Chapter 22.
peer debriefing and support A strategy for improving the trustworthiness of qualitative research findings in which teams of investigators meet regularly to give each other feedback, emotional support, alternative perspectives, and new ideas about how they are collecting data or about problems, and about meanings in the data already collected. See Chapter 17.
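The Pearson product-moment correlation defined above can be computed from its textbook definition (covariance divided by the product of the standard deviations). The sketch below is illustrative only; the two interval-level variable lists are invented, not drawn from any study in the book.

```python
# A minimal sketch of the Pearson product-moment correlation (r),
# computed from its definition; the two small variable lists are
# invented for illustration only.
import math

def pearson_r(x, y):
    """Pearson r: sum of cross-products of deviations, divided by
    the product of the square roots of the sums of squared deviations."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cross = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    ss_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    ss_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cross / (ss_x * ss_y)

# hypothetical interval-level scores for five cases
hours_of_service = [2, 4, 6, 8, 10]
symptom_score    = [9, 7, 6, 4, 1]

r = pearson_r(hours_of_service, symptom_score)
print(round(r, 3))       # close to -1.0: a strong negative relationship
print(round(r ** 2, 3))  # squaring r gives the proportion of variation explained
```

Squaring the result connects this entry to r², defined later in the glossary, as the proportion of variation in the dependent variable explained by the independent variable.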
phone tracking A method of locating and contacting research participants—particularly those who are transient or homeless—to secure their participation in future sessions or interviews. This method involves repeated telephoning of anchor points in advance to schedule an interview and providing participants a toll-free number where they can leave messages about appointment changes or changes in how to locate them, incentives for leaving such messages, and a card that lists appointment times and the research project's address and telephone number. See Chapter 5.
placebo control group design An experimental design that controls for placebo effects by randomly assigning subjects to an experimental group and two control groups and exposing one of the control groups to a stimulus that is designed to resemble the special attention received by subjects in the experimental group. See placebo effects and Chapter 10.
placebo effects Changes in a dependent variable that are caused by the power of suggestion among participants in an experimental group that they are receiving something special that is expected to help them. These changes would not occur if they received the experimental intervention without that awareness. See Chapter 10.
plagiarism Presenting someone else's words or thoughts as though they were your own; constitutes intellectual theft. See Chapter 23.
population The group or collection that a researcher is interested in generalizing about. More formally, it is the theoretically specified aggregation of study elements. See Chapter 14.
positive relationship A relationship between two variables in which one variable increases in value as the other variable also increases in value (or one decreases as the other decreases). For example, we might expect to find a positive relationship between rate of unemployment and extent of homelessness. See Chapter 7.
positivism A paradigm introduced by Auguste Comte that held that social behavior could be studied and understood in a rational, scientific manner—in contrast to explanations based in religion or superstition. See Chapter 3.
possible-code cleaning Examining the distribution of responses to each item in a data set to check for errors in data entered into a computer by looking for impossible code categories that have some responses and then correcting the errors. See Chapter 20.
postmodernism A paradigm that rejects the notion of a knowable objective social reality. See Chapter 3.
posttest-only control group design A variation of the classical experimental design that avoids the possible testing effects associated with pretesting by testing only after the experimental group receives the intervention, based on the assumption that the process of random assignment provides for equivalence between the experimental and control groups on the dependent variable before the exposure to the intervention. See also pretest–posttest control group design. See Chapter 10.
posttest-only design with nonequivalent groups A pre-experimental design that involves two groups that may not be comparable, in which the dependent variable is assessed after the independent variable is introduced for one of the groups. See Chapter 11.
PPS See probability proportionate to size.
practice models Guides to help us organize our views about social work practice that may reflect a synthesis of existing theories. See Chapter 3.
PRE See proportionate reduction of error.
predictive validity A form of criterion-related validity involving a measure's ability to predict a criterion that will occur in the future. See Chapter 8.
pre-experimental designs Pilot study designs for evaluating the effectiveness of interventions; they do not control for threats to internal validity. See Chapters 10 and 11.
pretest–posttest control group design The classical experimental design in which subjects are assigned randomly to an experimental group that receives an intervention being evaluated and to a control group that does not receive it. Each group is tested on the dependent variable before and after the experimental group receives the intervention. See Chapter 10.
pretesting Testing out a scale or questionnaire in a dry run to see if the target population will understand it and not find it too unwieldy, as well as to identify any needed modifications. See Chapters 5 and 9.
probabilistic knowledge Knowledge based on probability that enables us to say that if A occurs, then B is more likely to occur. It does not enable us to say that B will occur, or even that B will probably occur. See Chapter 3.
probability proportionate to size (PPS) This refers to a type of multistage cluster sample in which clusters are selected, not with equal probabilities (see equal probability of selection method) but with probabilities proportionate to their sizes—as measured by the number of units to be subsampled. See Chapter 14.
probability sample The general term for a sample selected in accord with probability theory, typically involving some random selection mechanism. Specific types of probability samples include area probability sample, EPSEM, PPS, simple random sample, and systematic sample. See Chapter 14.
probability sampling The use of random sampling techniques that allow a researcher to make relatively few observations and generalize from those observations to a much wider population. See Chapter 14.
probe A technique employed in interviewing to solicit a more complete answer to a question; this nondirective phrase or question is used to encourage a respondent to elaborate on an answer. Examples include "Anything more?" and "How is that?" See Chapters 15 and 18 for discussions of interviewing.
prolonged engagement A strategy for improving the trustworthiness of qualitative research findings that attempts to reduce the impact of reactivity and respondent bias by forming a long and trusting relationship with respondents and by conducting lengthy interviews or a series of follow-up interviews with the same respondent. This improves the likelihood that the respondent ultimately will disclose socially undesirable truths, and improves the researcher's ability to detect distortion. See Chapter 17.
proportionate reduction of error (PRE) The proportion of errors reduced in predicting the value for one variable based on knowing the value for the other. The stronger the relationship is, the more our prediction errors will be reduced. See Chapter 21.
pseudoscience Fake science about an area of inquiry or practice that has the surface appearance of being scientific, but upon careful inspection can be seen to violate one or more principles of the scientific method or contain fallacies against which the scientific method attempts to guard. See Chapter 1.
psychometric equivalence See metric equivalence and Chapter 5.
purposive sample See judgmental sample and Chapters 14 and 17.
purposive sampling Selecting a sample of observations that the researcher believes will yield the most comprehensive understanding of the subject of study, based on the researcher's intuitive feel for the subject that comes from extended observation and reflection. See Chapters 14 and 17.
qualitative analysis The nonnumerical examination and interpretation of observations for the purpose of discovering underlying meanings and patterns of relationships. This is most typical of field research and historical research. See Chapter 19.
qualitative interview An interaction between an interviewer and a respondent in which the interviewer usually has a general plan of inquiry but not a specific set of questions that must be asked in particular words and in a particular order. Ideally, the respondent does most of the talking. See Chapter 18.
qualitative research methods Research methods that emphasize depth of understanding and the deeper meanings of human experience, and that aim to generate theoretically richer, albeit more tentative, observations. Commonly used qualitative methods include participant observation, direct observation, and unstructured or intensive interviewing. See Chapters 3, 17, 18, and 19.
quantitative analysis The numerical representation and manipulation of observations for the purpose of describing and explaining the phenomena that those observations reflect. See especially Chapter 20 and also the remainder of Part 7.
quantitative methods Research methods that emphasize precise, objective, and generalizable findings. See Chapter 3.
quasi-experimental design Design that attempts to control for threats to internal validity and thus permits causal inferences but is distinguished from true experiments primarily by the lack of random assignment of subjects. See Chapter 11.
questionnaire A document that contains questions and other types of items that are designed to solicit information appropriate to analysis. Questionnaires are used primarily in survey research and also in experiments, field research, and other modes of observation. See Chapters 9 and 15.
quota sampling A type of nonprobability sample in which units are selected into the sample on the basis of prespecified characteristics so that the total sample will have the same distribution of characteristics as are assumed to exist in the population being studied. See Chapters 14 and 17.
r² The proportion of variation in the dependent variable that is explained by the independent variable. See Chapter 21.
random error A measurement error that has no consistent pattern of effects and that reduces the reliability of measurement. For example, asking questions that respondents do not understand will yield inconsistent (random) answers. See Chapter 8.
random selection A probability sampling procedure in which each element has an equal chance of selection independent of any other event in the selection process. See Chapter 14.
randomization A technique for assigning experimental participants to experimental groups and control groups at random. See Chapter 10 and Appendix B.
randomized clinical trials (RCTs) Experiments that use random means (such as a coin toss) to assign clients who share similar
problems or diagnoses into groups that receive different interventions. If the predicted difference in outcome is found between the groups, it is not plausible to attribute the difference to a priori differences between two incomparable groups. See Chapters 2 and 11.
range A measure of dispersion that is composed of the highest and lowest values of a variable in some set of observations. In your class, for example, the range of ages might be from 20 to 37. See Chapter 20.
rates under treatment An approach to needs assessment based on the number and characteristics of clients already using a service in a similar community. See Chapter 13.
ratio measure A level of measurement that describes a variable whose attributes have all the qualities of nominal, ordinal, and interval measures and also are based on a "true zero" point. Age would be an example of a ratio measure. See Chapters 9 and 20.
reactivity A process in which change in a dependent variable is induced by research procedures. See Chapters 11 and 12.
recall bias A common limitation in case-control designs that occurs when a person's current recollections of the quality and value of past experiences are tainted by knowing that things didn't work out for them later in life. See Chapter 11.
reductionism A fault of some researchers: a strict limitation (reduction) of the kinds of concepts to be considered relevant to the phenomenon under study. See Chapter 6.
reification The process of regarding as real things that are not real. See Chapter 7.
relationship Variables that change together in a consistent, predictable fashion. See Chapters 3 and 7.
reliability That quality of a measurement method that suggests that the same data would have been collected each time in repeated observations of the same phenomenon.
In the context of a survey, we would expect that the question "Did you attend church last week?" would have higher reliability than the question "About how many times have you attended church in your life?" This is not to be confused with validity. See Chapter 8.
reminder calls Telephoning research participants to remind them of their scheduled treatment or assessment sessions in a study. See Chapter 5.
replication (1) Generally, the duplication of a study to expose or reduce error or the reintroduction or withdrawal of an intervention to increase the internal validity of a quasi-experiment or single-case design evaluation. See Chapters 1, 3, 11, and 12. (2) One possible result in the elaboration model that occurs when an original bivariate relationship appears to be essentially the same in the multivariate analysis as it was in the bivariate analysis. See elaboration model and Chapter 10.
representativeness That quality of a sample of having the same distribution of characteristics as the population from which it was selected. By implication, descriptions and explanations derived from an analysis of the sample may be assumed to represent similar ones in the population. Representativeness is enhanced by probability sampling and provides for generalizability and the use of inferential statistics. See Chapter 14.
request for proposals (RFP) An announcement put out by funding sources that identifies the research questions and types of designs the funding source would like to fund, encourages
researchers to submit proposals to carry out such research, specifies the maximum size of the research grant, and provides other information about the source's expectations and funding process. See Chapter 23.
research contract Type of funding that provides great specificity regarding what the funding source wants to have researched and how the research is to be conducted. Unlike a research grant, a research contract requires that the research proposal conform precisely to the funding source's specifications. See Chapter 23.
research design A term often used in connection with whether logical arrangements permit causal inferences; also refers to all the decisions made in planning and conducting research. See Chapter 10.
research grant Type of funding that usually identifies some broad priority areas the funding source has and provides researchers considerable leeway in the specifics of what they want to investigate within that area and how they want to investigate it. See Chapter 23.
research reactivity A process in which change in a dependent variable is induced by research procedures. See Chapters 10, 11, and 12.
resentful demoralization A threat to the validity of an evaluation of an intervention's effectiveness that occurs when practitioners or clients in the comparison routine-treatment condition become resentful and demoralized because they did not receive the special training or the special treatment. Consequently, their confidence or motivation may decline and may explain their inferior performance on outcome measures. See Chapter 10.
respondent A person who provides data for analysis by responding to a survey questionnaire or to an interview. See Chapters 15 and 18.
response rate The number of persons who participate in a survey divided by the number selected in the sample, in the form of a percentage. This is also called the "completion rate" or, in self-administered surveys, the "return rate"—the percentage of questionnaires sent out that are returned.
See Chapter 15.
sample That part of a population from which we have data. See Chapter 14.
sampling The process of selecting a sample. See Chapter 14.
sampling error The degree of error to be expected for a given sample design, as estimated according to probability theory. See Chapter 14 and Appendix B.
sampling frame That list or quasi-list of units that compose a population from which a sample is selected. If the sample is to be representative of the population, then it's essential that the sampling frame include all (or nearly all) members of the population. See Chapter 14.
sampling interval The standard distance between elements selected from a population for a sample. See Chapter 14.
sampling ratio The proportion of elements in the population that are selected to be in a sample. See Chapter 14.
sampling unit That element or set of elements considered for selection in some stage of sampling. See Chapter 14.
scalar equivalence See metric equivalence and Chapter 5.
scale A type of composite measure composed of several items that have a logical or empirical structure among them. See Chapter 9.
scientific method An approach to inquiry that attempts to safeguard against errors commonly made in casual human inquiry. Chief features include viewing all knowledge as provisional and subject to refutation, searching for evidence based on systematic and comprehensive observation, pursuing objectivity in observation, and replication. See Chapter 1.
secondary analysis A form of research in which the data collected and processed by one researcher are reanalyzed—often for a different purpose—by another. This is especially appropriate in the case of survey data. Data archives are repositories or libraries for the storage and distribution of data for secondary analysis. See Chapter 16.
selection bias A threat to internal validity referring to the assignment of research participants to groups in a way that does not maximize their comparability regarding the dependent variable. See Chapters 10 and 11.
self-mailing questionnaire A mailed questionnaire that requires no return envelope: When the questionnaire is folded a particular way, the return address appears on the outside. The respondent therefore doesn't have to worry about losing the envelope. See Chapter 15.
self-report scales A source of data in which research subjects all respond in writing to the same list of written questions or statements that has been devised to measure a particular construct. For example, a self-report scale to measure marital satisfaction might ask how often one is annoyed with one's spouse, is proud of the spouse, has fun with the spouse, and so on. See Chapters 7, 8, and 12.
self-reports A way to operationally define variables according to what people say about their own thoughts, views, or behaviors. See Chapters 7 and 8.
semantic differential A scaling format that asks respondents to choose between two opposite positions. See Chapter 9.
semiotics The science of symbols and meanings, commonly associated with content analysis and based on language, that examines the agreements we have about the meanings associated with particular signs. See Chapter 19.
sensitivity The ability of an instrument to detect subtle differences. See Chapter 8.
significance level The probability level that is selected in advance to serve as a cutoff point to separate findings that will and will not be attributed to chance. Findings at or below the selected probability level are deemed to be statistically significant. See Chapter 21.
simple interrupted time-series design A quasi-experimental design in which no comparison group is utilized and that attempts to develop causal inferences based on a comparison of trends over multiple measurements before and after an intervention is introduced. See Chapter 11.
simple random sample (SRS) A type of probability sample in which the units that compose a population are assigned numbers. A set of random numbers is then generated, and the units having those numbers are included in the sample. Although probability theory and the calculations it provides assume this basic sampling method, it's seldom used for practical reasons. An equivalent alternative is the systematic sample (with a random start). See Chapter 14.
single-case evaluation design A time-series design used to evaluate the impact of an intervention or a policy change on individual cases or systems. See Chapter 12.
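The simple random sample, systematic sample, and sampling interval entries describe concrete selection procedures. A minimal Python sketch of both (the student population and function names are illustrative, not from the text):

```python
import random

def simple_random_sample(population, n, seed=None):
    """Simple random sample: every element has an equal chance of selection."""
    rng = random.Random(seed)
    return rng.sample(population, n)

def systematic_sample(population, n, seed=None):
    """Systematic sample: every kth element after a random start, where the
    sampling interval k = population size divided by desired sample size."""
    rng = random.Random(seed)
    k = len(population) // n
    start = rng.randrange(k)
    return [population[start + i * k] for i in range(n)]

students = [f"student_{i}" for i in range(500)]
print(len(simple_random_sample(students, 20, seed=1)))  # 20
print(len(systematic_sample(students, 20, seed=1)))     # 20
```

As the glossary notes, within certain constraints (e.g., no cyclical ordering in the list) the two procedures are functional equivalents, and the systematic version is usually easier to carry out by hand.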
snowball sample A nonprobability sample that is obtained by asking each person interviewed to suggest additional people for interviewing. See Chapters 14 and 17.
snowball sampling A nonprobability sampling method often employed in qualitative research. Each person interviewed may be asked to suggest additional people for interviewing. See Chapters 5, 14, and 17.
social desirability bias A source of systematic measurement error involving the tendency of people to say or do things that will make them or their reference group look good. See Chapter 8.
social indicators An approach to needs assessment based on aggregated statistics that reflect conditions of an entire population. See Chapter 13.
Solomon four-group design An experimental design that assesses testing effects by randomly assigning subjects to four groups, introducing the intervention being evaluated to two of them, conducting both pretesting and posttesting on one group that receives the intervention and one group that does not, and conducting posttesting only on the other two groups. See Chapter 10.
spurious relationship A relationship between two variables that are no longer related when a third variable is controlled; the third variable explains away the original relationship. Thus, the relationship between number of storks and number of human births in geographic areas is spurious because it is explained away by the fact that areas with more humans are more likely to have a zoo or a larger zoo. See Chapters 7 and 10.
standard deviation A descriptive statistic that portrays the dispersion of values around the mean. It's the square root of the averaged squared differences between each value and the mean. See Chapter 20.
standardized open-ended interviews The most highly structured form of qualitative interviews, which are conducted in a consistent, thorough manner. Questions are written out in advance exactly the way they are to be asked in the interview, reducing the chances that variations in responses are being caused by changes in the way interviews are being conducted. See Chapter 18.
static-group comparison design A cross-sectional design for comparing different groups on a dependent variable at one point in time. The validity of this design will be influenced by the extent to which it contains multivariate controls for alternative explanations for differences among the groups. See Chapters 10 and 11.
statistic A summary description of a variable in a sample. See Appendix B.
statistical power analysis Assessment of the probability of avoiding Type II errors. See Chapter 22.
statistical regression A threat to internal validity referring to the tendency for extreme scores at pretest to become less extreme at posttest. See Chapter 10.
statistical significance A general term that refers to the unlikelihood that relationships observed in a sample could be attributed to sampling error alone. See tests of statistical significance and Chapter 21.
stratification The grouping of the units that compose a population into homogeneous groups (or "strata") before sampling. This procedure, which may be used in conjunction with simple random, systematic, or cluster sampling, improves the representativeness of a sample, at least in terms of the stratification variables. See Chapter 14.
stratified sampling A probability sampling procedure that uses stratification to ensure that appropriate numbers of elements are drawn from homogeneous subsets of that population. See stratification and Chapter 14.
study population The aggregation of elements from which the sample is actually selected. See Chapter 14.
substantive significance The importance, or meaningfulness, of a finding from a practical standpoint. See Chapter 21.
summative evaluation A type of program evaluation focusing on the ultimate success of a program and decisions about whether it should be continued or chosen from among alternative options. See Chapter 13.
switching replication A way to detect selection bias in a quasi-experiment that involves administering the treatment to the comparison group after the first posttest. If we replicate in that group—in a second posttest—the improvement made by the experimental group in the first posttest, then we reduce doubt as to whether the improvement at the first posttest was merely a function of a selection bias. If our second posttest results do not replicate the improvement made by the experimental group in the first posttest, then the difference between the groups at the first posttest can be attributed to the lack of comparability between the two groups.
systematic error An error in measurement with a consistent pattern of effects. For example, when child welfare workers ask abusive parents whether they have been abusing their children, they may get biased answers that are consistently untrue because parents do not want to admit to abusive behavior. Contrast this to random error, which has no consistent pattern of effects. See Chapter 8.
systematic sample A type of probability sample in which every kth unit in a list is selected for inclusion in the sample—for example, every 25th student in the college directory of students. We compute k by dividing the size of the population by the desired sample size; the result is called the sampling interval. Within certain constraints, systematic sampling is a functional equivalent of simple random sampling and usually easier to do. Typically, the first unit is selected at random. See Chapter 14.
t-test A test of the statistical significance of the difference between the means of two groups. See Chapter 22.
test–retest reliability Consistency, or stability, of measurement over time. See Chapter 8.
tests of statistical significance A class of statistical computations that indicate the likelihood that the relationship observed between variables in a sample can be attributed to sampling error only. See inferential statistics and Chapter 21.
theoretical sampling distribution The distribution of outcomes produced by an infinite number of randomly drawn samples or random subdivisions of a sample. This distribution identifies the proportion of times that each outcome of a study could be expected to occur as a result of chance. See Chapter 21.
theory A systematic set of interrelated statements intended to explain some aspect of social life or enrich our sense of how people conduct and find meaning in their daily lives. See Chapter 3.
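The standard deviation entry defines the statistic as the square root of the averaged squared differences from the mean. A short Python illustration, using made-up class ages like those in the range entry (20 to 37):

```python
import math

def mean(values):
    return sum(values) / len(values)

def standard_deviation(values):
    """Square root of the averaged squared differences between each value
    and the mean (the population formula the glossary describes)."""
    m = mean(values)
    return math.sqrt(sum((v - m) ** 2 for v in values) / len(values))

ages = [20, 22, 25, 29, 37]                # hypothetical class ages
print(round(mean(ages), 1))                # 26.6
print(round(standard_deviation(ages), 2))  # 6.02
print((min(ages), max(ages)))              # the range: (20, 37)
```

Note that the glossary's range is reported as the pair of extreme values ("from 20 to 37"), not as their difference.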
time-series designs A set of quasi-experimental designs in which multiple observations of a dependent variable are conducted before and after an intervention is introduced. See Chapter 11.
translation equivalence See linguistic equivalence, translation validity, and Chapter 5.
translation validity Successful translation of a measure into the language of respondents who are not fluent in the majority language, thus attaining linguistic equivalence. See Chapter 5.
trend studies Longitudinal studies that monitor a given characteristic of some population over time. An example would be annual canvasses of schools of social work to identify trends over time in the number of students who specialize in direct practice, generalist practice, and administration and planning. See Chapter 6.
triangulation The use of more than one imperfect data-collection alternative in which each option is vulnerable to different potential sources of error. For example, instead of relying exclusively on a client's self-report of how often a particular target behavior occurred during a specified period, a significant other (teacher, cottage parent, and so on) is asked to monitor the behavior as well. See Chapters 8, 12, and 17.
two-tailed tests of significance Statistical significance tests that divide the critical region at both ends of the theoretical sampling distribution and add the probability at both ends when calculating the level of significance. See Chapter 21.
Type I error An error we risk committing whenever we reject the null hypothesis. It occurs when we reject a true null hypothesis. See Chapter 21.
Type II error An error we risk committing whenever we fail to reject the null hypothesis. It occurs when we fail to reject a false null hypothesis. See Chapters 21 and 22.
units of analysis The "what" or "whom" being studied. In social science research, the most typical units of analysis are individual people. See Chapter 6.
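The significance-testing entries (significance level, Type I error, Type II error) reduce to a single decision rule once a probability level is chosen in advance. A minimal sketch of that rule (the function name and the conventional .05 cutoff are illustrative):

```python
def decide(p_value, alpha=0.05):
    """Reject the null hypothesis when the p-value is at or below the
    preselected significance level. Rejecting a true null hypothesis is a
    Type I error; failing to reject a false one is a Type II error."""
    if p_value <= alpha:
        return "reject null (statistically significant)"
    return "fail to reject null"

print(decide(0.03))  # reject null (statistically significant)
print(decide(0.20))  # fail to reject null
```

Lowering alpha reduces the risk of a Type I error but, as the statistical power analysis entry implies, raises the risk of a Type II error.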
univariate analysis The analysis of a single variable for purposes of description. Frequency distributions, averages, and measures of dispersion would be examples of univariate analysis, as distinguished from bivariate and multivariate analysis. See Chapter 20.
unobtrusive observation Unlike in obtrusive observation, the participant does not notice the observation and is therefore less influenced to behave in socially desirable ways and ways that meet experimenter expectancies. See Chapters 10, 11, 12, and 16.
validity A descriptive term used of a measure that accurately reflects the concept that it's intended to measure. For example, your IQ would seem a more valid measure of your intelligence than would the number of hours you spend in the library. Realize that the ultimate validity of a measure can never be proven, but we may still agree to its relative validity, content validity, construct validity, internal validation, and external validation. This must not be confused with reliability. See Chapter 8.
variable-oriented analysis A qualitative data analysis method that focuses on interrelations among variables, with the people observed being the primary carriers of those variables. See Chapter 19.
variables Logical groupings of attributes. The variable "gender" contains the attributes "male" and "female." See Chapters 3 and 7.
verstehen The German word meaning "understanding," used in qualitative research in connection to hermeneutics, in which the researcher tries mentally to take on the circumstances, views, and feelings of those being studied to interpret their actions appropriately. See Chapter 16.
weighting A procedure employed in connection with sampling whereby units selected with unequal probabilities are assigned weights in such a manner as to make the sample representative of the population from which it was selected. See Chapter 14. withdrawal/reversal design See ABAB withdrawal/reversal design.
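The weighting entry describes compensating for unequal selection probabilities. One common way to do this, sketched here with hypothetical values and probabilities, is to weight each unit by the inverse of its selection probability when computing an estimate such as a mean:

```python
def weighted_mean(values, selection_probs):
    """Weight each observation by the inverse of its selection probability
    so that over- and under-sampled units balance out in the estimate."""
    weights = [1.0 / p for p in selection_probs]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Two oversampled units (p = 0.5) and one undersampled unit (p = 0.1):
print(weighted_mean([10, 10, 50], [0.5, 0.5, 0.1]))  # ≈ 38.57
```

The undersampled unit receives a weight of 10 versus 2 for the oversampled units, pulling the estimate toward the part of the population the sample underrepresents.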
Bibliography
Abramovitz, Robert, Andre Ivanoff, Rami Mosseri, and Anne O'Sullivan. 1997. "Comments on Outcomes Measurement in Mental and Behavioral Health," pp. 293–296 in Edward J. Mullen and Jennifer L. Magnabosco (eds.), Outcomes Measurement in the Human Services: Cross-Cutting Issues and Methods. Washington, DC: NASW Press.
Acker, J., K. Barry, and J. Esseveld. 1983. "Objectivity and Truth: Problems in Doing Feminist Research," Women's Studies International Forum, 6, 423–435.
Adler, Patricia A., and Peter Adler. 1994. "Observational Techniques," pp. 377–392 in Norman K. Denzin and Yvonna S. Lincoln (eds.), Handbook of Qualitative Research. Thousand Oaks, CA: Sage.
Aiken, L. R. 1985. Psychological Testing and Assessment. Rockleigh, NJ: Allyn & Bacon.
Alexander, Leslie B., and Phyllis Solomon (eds.). 2006. The Research Process in the Human Services: Behind the Scenes. Belmont, CA: Thomson Brooks/Cole.
Allen, James, and James A. Walsh. 2000. "A Construct-Based Approach to Equivalence: Methodologies for Cross-Cultural/Multicultural Personality Assessment Research," pp. 63–85 in Richard Dana (ed.), Handbook of Cross-Cultural Personality Assessment. Mahwah, NJ: Lawrence Erlbaum Associates.
Allen, Katherine R., and Alexis J. Walker. 1992. "A Feminist Analysis of Interviews with Elderly Mothers and Their Daughters," pp. 198–214 in Jane Gilgun, Kerry Daly, and Gerald Handel (eds.), Qualitative Methods in Family Research. Thousand Oaks, CA: Sage.
Altheide, David L., and John M. Johnson. 1994. "Criteria for Assessing Interpretive Validity in Qualitative Research," pp. 485–499 in Norman K. Denzin and Yvonna S. Lincoln (eds.), Handbook of Qualitative Research. Thousand Oaks, CA: Sage.
Alvidrez, Jennifer, Francisca Azocar, and Jeanne Miranda. 1996. "Demystifying the Concept of Ethnicity for Psychotherapy Researchers," Journal of Consulting and Clinical Psychology, 64(5), 903–908.
American Psychiatric Association Task Force for the Handbook of Psychiatric Measures. 2000.
Handbook of Psychiatric Measures. Washington, DC: American Psychiatric Association.
Anastasi, A. 1988. Psychological Testing. New York: Macmillan.
Andrulis, R. S. 1977. Adult Assessment: A Source Book of Tests and Measures of Human Behavior. Springfield, IL: Thomas.
Aneshenshel, Carol S., Rosina M. Becerra, Eve P. Fiedler, and Roberleigh A. Schuler. 1989. "Participation of Mexican American Female Adolescents in a Longitudinal Panel Survey," Public Opinion Quarterly, 53 (Winter), 548–562.
Areán, Patricia A., and Dolores Gallagher-Thompson. 1996. "Issues and Recommendations for the Recruitment and Retention of Older Ethnic Minority Adults into Clinical Research," Journal of Consulting and Clinical Psychology, 64(5), 875–880.
Arnold, Bill R., and Yolanda E. Matus. 2000. "Test Translation and Cultural Equivalence Methodologies for Use with Diverse Populations," pp. 121–135 in Israel Cuéllar and Freddy A. Paniagua (eds.), Handbook of Multicultural Mental Health: Assessment and Treatment of Diverse Populations. San Diego, CA: Academic Press.
Asch, Solomon E. 1958. "Effects of Group Pressure upon the Modification and Distortion of Judgments," pp. 174–183 in Eleanor E. Maccoby, Theodore M. Newcomb, and Eugene L. Hartley (eds.), Readings in Social Psychology, 3rd ed. New York: Holt, Rinehart and Winston.
Asher, Ramona M., and Gary Alan Fine. 1991. "Fragile Ties: Sharing Research Relationships with Women Married to Alcoholics," pp. 196–205 in William B. Shaffir and Roberta A. Stebbins (eds.), Experiencing Fieldwork: An Inside View of Qualitative Research. Newbury Park, CA: Sage.
Babbie, Earl R. 1966. "The Third Civilization," Review of Religious Research (Winter), 101–102.
———. 1985. You Can Make a Difference. New York: St. Martin's Press.
———. 1986. Observing Ourselves: Essays in Social Research. Belmont, CA: Wadsworth.
———. 1990. Survey Research Methods. Belmont, CA: Wadsworth.
———, Fred Halley, and Jeanne Zaino. 2000. Adventures in Social Research. Newbury Park, CA: Pine Forge Press.
Baker, Vern, and Charles Lambert. 1990. "The National Collegiate Athletic Association and the Governance of Higher Education," Sociological Quarterly, 31(3), 403–421.
Banfield, Edward. 1968. The Unheavenly City: The Nature and Future of Our Urban Crisis. Boston: Little, Brown.
Barlow, David H., and Michel Hersen. 1984. Single Case Experimental Designs: Strategies for Studying Behavior Change, 2nd ed. New York: Pergamon Press.
Bartko, John J., William T. Carpenter, and Thomas H. McGlashan. 1988. "Statistical Issues in Long-Term Follow-Up Studies," Schizophrenia Bulletin, 14(4), 575–587.
Baxter, Ellen, and Kim Hopper. 1982. "The New Mendicancy: Homeless in New York City," American Journal of Orthopsychiatry, 52(3), 393–407.
Bednarz, Marlene. 1996. "Push polls statement." Report to the AAPORnet listserv, April 5. Online. Available: mbednarz@umich.edu.
Beebe, Linda. 1993. Professional Writing for the Human Services. Washington, DC: NASW Press.
Beere, C. A. 1979. Women and Women's Issues: A Handbook of Tests and Measures.
San Francisco: Jossey-Bass.
———. 1990. Sex and Gender Issues: A Handbook of Tests and Measures. New York: Greenwood Press.
Belcher, John. 1991. "Understanding the Process of Social Drift Among the Homeless: A Qualitative Analysis." Paper presented at the Research Conference on Qualitative Methods in Social Work Practice Research, Nelson A. Rockefeller Institute of Government, State University of New York at Albany, August 24.
Bellah, Robert N. 1970. "Christianity and Symbolic Realism," Journal for the Scientific Study of Religion, 9, 89–96.
———. 1974. "Comment on the Limits of Symbolic Realism," Journal for the Scientific Study of Religion, 13, 487–489.
Bennet, Carl A., and Arthur A. Lumsdaine (eds.). 1975. Evaluation and Experiment. New York: Academic Press.
Benton, J. Edwin, and John Daly. 1991. "A Question Order Effect in a Local Government Survey," Public Opinion Quarterly, 55, 640–642.
Berg, Bruce L. 1989. Qualitative Research Methods for the Social Sciences, 1st ed. Boston: Allyn & Bacon.
———. 1998. Qualitative Research Methods for the Social Sciences, 3rd ed. Boston: Allyn & Bacon.
Bernstein, Ira H., and Paul Havig. 1999. Computer Literacy: Getting the Most from Your PC. Thousand Oaks, CA: Sage.
Beutler, Larry E., Michael T. Brown, Linda Crothers, Kevin Booker, and Mary Katherine Seabrook. 1996. "The Dilemma of Factitious Demographic Distinctions in Psychological Research," Journal of Consulting and Clinical Psychology, 64(5), 892–902.
Beveridge, W. I. B. 1950. The Art of Scientific Investigation. New York: Vintage Books.
Bian, Yanjie. 1994. Work and Inequality in Urban China. Albany: State University of New York Press.
Biggerstaff, M. A., P. M. Morris, and A. Nichols-Casebolt. 2002. "Living on the Edge: Examination of People Attending Food Pantries and Soup Kitchens," Social Work, 47(3), 267–277.
Billups, James O., and Maria C. Julia. 1987. "Changing Profile of Social Work Practice: A Content Analysis," Social Work Research and Abstracts, 23(4), 17–22.
Black, Donald. 1970. "Production of Crime Rates," American Sociological Review, 35 (August), 733–748.
Blair, Johnny, Shanyang Zhao, Barbara Bickart, and Ralph Kuhn. 1995. Sample Design for Household Telephone Surveys: A Bibliography 1949–1995. College Park: Survey Research Center, University of Maryland.
Blalock, Hubert M. 1972. Social Statistics. New York: McGraw-Hill.
Blaunstein, Albert, and Robert Zangrando (eds.). 1970. Civil Rights and the Black American. New York: Washington Square Press.
Blood, R. O., and W. Wolfe. 1960. Husbands and Wives: The Dynamics of Married Living. New York: The Free Press.
Bloom, Martin, Joel Fischer, and John G. Orme. 2006. Evaluating Practice: Guidelines for the Accountable Professional, 5th ed. Boston: Allyn & Bacon.
Blythe, Betty J., and Scott Briar. 1987. "Direct Practice Effectiveness," Encyclopedia of Social Work, 18th ed., vol. 1, pp. 399–408.
Silver Spring, MD: National Association of Social Workers.
Bogdan, Robert, and Steven J. Taylor. 1990. "Looking at the Bright Side: A Positive Approach to Qualitative Policy and Evaluation Research," Qualitative Sociology, 13(2), 183–192.
Bohrnstedt, George W. 1983. "Measurement," pp. 70–121 in Peter H. Rossi, James D. Wright, and Andy B. Anderson (eds.), Handbook of Survey Research. New York: Academic Press.
Boone, Charlotte R., Claudia J. Coulton, and Shirley M. Keller. 1981. "The Impact of Early and Comprehensive Social Work Services on Length of Stay," Social Work in Health Care (Fall), 1–9.
Booth, C. 1970. The Life and Labour of the People of London. New York: AMS Press. (Original work published 1891–1903.)
Botein, B. 1965. "The Manhattan Bail Project: Its Impact in Criminology and the Criminal Law Process," Texas Law Review, 43, 319–331.
Boyd, L., J. Hylton, and S. Price. 1978. "Computers in Social Work Practice: A Review," Social Work, 23(5), 368–371.
Bradburn, Norman M., and Seymour Sudman. 1988. Polls and Surveys: Understanding What They Tell Us. San Francisco: Jossey-Bass.
Briar, Scott. 1973. "Effective Social Work Intervention in Direct Practice: Implications for Education," pp. 17–30 in Facing the Challenge: Plenary Session Papers from the
19th Annual Program Meeting. Alexandria, VA: Council on Social Work Education.
———. 1987. "Direct Practice: Trends and Issues," Encyclopedia of Social Work, 18th ed., vol. 1, pp. 393–398. Silver Spring, MD: National Association of Social Workers.
Briggs, H. E., and T. L. Rzepnicki (eds.). 2004. Using Evidence in Social Work Practice: Behavioral Perspectives. Chicago: Lyceum Books.
Brodsky, S. L., and H. O. Smitherman. 1983. Handbook of Scales for Research in Crime and Delinquency. New York: Plenum Press.
Brownlee, K. A. 1975. "A Note on the Effects of Nonresponse on Surveys," Journal of the American Statistical Association, 52(227), 29–32.
Buckingham, R., S. Lack, B. Mount, L. MacLean, and J. Collins. 1976. "Living with the Dying," Canadian Medical Association Journal, 115, 1211–1215.
Burnette, Denise. 1994. "Managing Chronic Illness Alone in Late Life: Sisyphus at Work," pp. 5–27 in Catherine Reissman (ed.), Qualitative Studies in Social Work Research. Thousand Oaks, CA: Sage.
———. 1998. "Conceptual and Methodological Considerations in Research with Non-White Ethnic Elders," pp. 71–91 in Miriam Potocky and Antoinette Y. Rodgers-Farmer (eds.), Social Work Research with Minority and Oppressed Populations: Methodological Issues and Innovations. New York: Haworth Press.
Buros, O. 1978. Eighth Mental Measurements Yearbook. Highland Park, NJ: Gryphon Press.
Campbell, Donald T. 1971. "Methods for the Experimenting Society." Paper presented at the meeting of the Eastern Psychological Association, New York, and at the meeting of the American Psychological Association, Washington, DC.
———, and Julian Stanley. 1963. Experimental and Quasi-Experimental Designs for Research. Chicago: Rand McNally.
Campbell, Patricia B. 1983. "The Impact of Societal Biases on Research Methods," pp. 197–213 in Barbara L. Richardson and Jeana Wirtenberg (eds.), Sex Role Research. New York: Praeger.
Carmines, Edward G., and Richard A. Zeller. 1979.
Reliability and Validity Assessment. Beverly Hills, CA: Sage. Cauce, Ana Mari, Nora Coronado, and Jennifer Watson. 1998. “Conceptual, Methodological, and Statistical Issues in Culturally Competent Research,” pp. 305–329 in Mario Hernandez and Mareasa R. Isaacs, Promoting Cultural Com petence in Chi ldren’s Mental Health Service s. Baltimore: Paul H. Brookes. Cautela, J. R. 1981. Behavior Analysis Forms for Clinical Intervention (Vol. 2). Champaign, IL: Research Press. Census Bureau. See U.S. Bureau of the Census. Chaffee, Steven, and Sun Yuel Choe. 1980. “Time of Decision and Media Use During the Ford-Carter Campaign,” Public Opinion Quarterly (Spring), 53–69. Chambless, C. M., M. J. Baker, D. H. Baucom, L. E. Beutler, K. S. Calhoun, P. Crits-Christoph et al. 1998. “Update on Empirically Validated Therapies II,” Clinical Psychologist, 51, 3–16. Chambless, D. L., W. C. Sanderson, V. Shoham, S. B. Johnson, K. S. Pope, P. Crits-Christoph et al. 1996. “An Update on Empirically Validated Therapies,” Clinical Psychologist, 49, 5–18. Chronicle of Higher Education. 1988. “Scholar Who Submitted Bogus Ar ticle to Journals May Be Disciplined,” Nov. 2, pp. A1, A7. Chun, Ki-Taek, S. Cobb, and J. R. French. 1975. Measures for Psychological Assessment: A Guid e to 3,00 0 Original Sources and Their Applications. Ann A rbor, MI: Institute for Social Research.
Ciarlo, J. A., T. R. Brown, D. W. Edwards, T. J. Kiresuk, and F. L. Newman. 1986. Assessing Mental Health Treatment Outcome Measurement Techniques. Rockville, MD: National Institute of Mental Health [DHHS Publication No. (ADM) 86–1301].
Cohen, Jacob. 1977. Statistical Power Analysis for the Behavioral Sciences. New York: Academic Press.
———. 1988. Statistical Power Analysis for the Behavioral Sciences, 2nd ed. New York: Lawrence Erlbaum Associates.
Coleman, James. 1966. Equality of Educational Opportunity. Washington, DC: U.S. Government Printing Office.
Compton, Beulah R., and Burt Galaway. 1994. Social Work Processes. Homewood, IL: Dorsey Press.
Comrey, A., T. Barker, and E. Glaser. 1975. A Sourcebook for Mental Health Measures. Los Angeles: Human Interaction Research Institute.
Comstock, Donald. 1980. “Dimensions of Influence in Organizations,” Pacific Sociological Review (January), 67–84.
Conoley, J. C., and J. J. Kramer. 1995. The 12th Mental Measurements Yearbook. Lincoln, NE: Buros Institute of Mental Measurements.
Conrad, Kendon J., Frances L. Randolph, Michael W. Kirby, and Richard R. Bebout. 1999. “Creating and Using Logic Models: Four Perspectives,” Alcoholism Treatment Quarterly, 17(1/2), 17–31.
Cook, Elizabeth. 1995. Communication to the METHODS listserv, April 25, from Michel de Seve ([email protected]) to Cook ([email protected]).
Cook, Thomas D., and Donald T. Campbell. 1979. Quasi-Experimentation: Design and Analysis Issues for Field Settings. Chicago: Rand McNally.
Cooper, Harris M. 1989. Integrating Research: A Guide for Literature Reviews. Newbury Park, CA: Sage.
Cooper-Stephenson, Cynthia, and Athanasios Theologides. 1981. “Nutrition in Cancer: Physicians’ Knowledge, Opinions, and Educational Needs,” Journal of the American Dietetic Association (May), 472–476.
Corcoran, Jacqueline. 2000. Evidence-Based Social Work Practice with Families: A Lifespan Approach. New York: Springer.
———. 2003. Clinical Applications of Evidence-Based Family Interventions. Oxford: Oxford University Press.
Corcoran, K. J., and J. Fischer. 2000a. Measures for Clinical Practice: Vol. 1. Couples, Families, Children, 3rd ed. New York: The Free Press.
———. 2000b. Measures for Clinical Practice: Vol. 2. Adults, 3rd ed. New York: The Free Press.
Corcoran, Kevin, and Wallace J. Gingerich. 1994. “Practice Evaluation in the Context of Managed Care: Case-Recording Methods for Quality Assurance Reviews,” Research on Social Work Practice, 4(3), 326–337.
Coulton, Claudia, Shanta Pandey, and Julia Chow. 1990. “Concentration of Poverty and the Changing Ecology of Low-Income, Urban Neighborhoods: An Analysis of the Cleveland Area,” Social Work Research and Abstracts, 26(4), 5–16.
Couper, Mick P. 2001. “Web Surveys: A Review of Issues and Approaches,” Public Opinion Quarterly, 64(4), 464–494.
Cournoyer, B., and G. T. Powers. 2002. “Evidence-Based Social Work: The Quiet Revolution Continues,” pp. 798–807 in Albert R. Roberts and Gilbert J. Greene (eds.), Social Workers’ Desk Reference. New York: Oxford University Press.
Cowger, Charles D. 1984. “Statistical Significance Tests: Scientific Ritualism or Scientific Method?” Social Service Review, 58(3), 358–372.
———. 1985. “Author’s Reply,” Social Service Review, 59(3), 520–522.
———. 1987. “Correcting Misuse Is the Best Defense of Statistical Tests of Significance,” Social Service Review, 61(1), 170–172.
Crawford, Kent S., Edmund D. Thomas, and Jeffrey J. Fink. 1980. “Pygmalion at Sea: Improving the Work Effectiveness of Low Performers,” Journal of Applied Behavioral Science (October–December), 482–505.
Crowe, Teresa V. 2002. “Translation of the Rosenberg Self-Esteem Scale into American Sign Language: A Principal Components Analysis,” Social Work Research, 26(1), 57–63.
Cuéllar, Israel, and Freddy A. Paniagua (eds.). 2000. Handbook of Multicultural Mental Health: Assessment and Treatment of Diverse Populations. San Diego, CA: Academic Press.
Curtin, Richard, Stanley Presser, and Eleanor Singer. 2000. “The Effects of Response Rate Changes on the Index of Consumer Sentiment,” Public Opinion Quarterly, 64(4), 413–428.
———. 2005. “Changes in Telephone Survey Nonresponse over the Past Quarter Century,” Public Opinion Quarterly, 69(1), 87–98.
Cummerton, J. 1983. “A Feminist Perspective on Research: What Does It Help Us to See?” Paper presented at the Annual Program Meeting of the Council on Social Work Education, Fort Worth, Texas.
Dahlstrom, W. Grant, and George S. Welsh. 1960. An MMPI Handbook. Minneapolis: University of Minnesota Press.
Dallas Morning News. 1990. “Welfare Study Withholds Benefits from 800 Texans,” February 11, 1.
Daly, Kerry. 1992. “The Fit Between Qualitative Research and Characteristics of Families,” pp. 3–11 in Jane Gilgun, Kerry Daly, and Gerald Handel (eds.), Qualitative Methods in Family Research. Thousand Oaks, CA: Sage.
Dana, Richard (ed.). 2000. Handbook of Cross-Cultural Personality Assessment. Mahwah, NJ: Lawrence Erlbaum Associates.
Davis, Fred. 1973. “The Martian and the Convert: Ontological Polarities in Social Research,” Urban Life, 2(3), 333–343.
DeFleur, Lois. 1975. “Biasing Influences on Drug Arrest Records: Implications for Deviance Research,” American Sociological Review (February), 88–103.
De Maria, W. 1981. “Empiricism: An Impoverished Philosophy for Social Work Research,” Australian Social Work, 34, 3–8.
Denzin, Norman K., and Yvonna S. Lincoln. 1994. Handbook of Qualitative Research. Thousand Oaks, CA: Sage.
DePanfilis, Diane, and Susan J. Zuravin. 2002. “The Effect of Services on the Recurrence of Child Maltreatment,” Child Abuse & Neglect, 26, 187–205.
Dillman, Don A. 1978. Mail and Telephone Surveys: The Total Design Method. New York: Wiley.
———. 2000. Mail and Internet Surveys: The Tailored Design Method, 2nd ed. New York: Wiley.
Donald, Marjorie N. 1960. “Implications of Nonresponse for the Interpretation of Mail Questionnaire Data,” Public Opinion Quarterly, 24(1), 99–114.
Draguns, Juris G. 2000. “Multicultural and Cross-Cultural Assessment: Dilemmas and Decisions,” pp. 37–84 in Gargi Roysircar Sodowsky and James C. Impara (eds.), Multicultural Assessment in Counseling and Clinical Psychology. Lincoln, NE: Buros Institute of Mental Measurements.
DuBois, B. 1983. “Passionate Scholarship: Notes on Values, Knowing and Method in Feminist Social Science,” pp. 105–116 in G. Bowles and R. Duelli-Klein (eds.), Theories of Women’s Studies. London: Routledge & Kegan Paul.
Duelli-Klein, R. 1983. “How to Do What We Want to Do: Thoughts about Feminist Methodology,” pp. 88–104 in G. Bowles and R. Duelli-Klein (eds.), Theories of Women’s Studies. London: Routledge & Kegan Paul.
Duneier, Mitchell. 1999. Sidewalk. New York: Farrar, Straus and Giroux.
Edmond, Tonya, Allen Rubin, and Kathryn Wambach. 1999. “The Effectiveness of EMDR with Adult Female Survivors of Childhood Sexual Abuse,” Social Work Research (June), 103–116.
Eichler, Margrit. 1988. Nonsexist Research Methods. Boston: Allen & Unwin.
Einstein, Albert. 1940. “The Fundamentals of Theoretical Physics,” Science (May 24), 487.
Elder, Glen H., Jr., Eliza K. Pavalko, and Elizabeth C. Clipp. 1993. Working with Archival Data: Studying Lives. Newbury Park, CA: Sage.
Emerson, Robert M. (ed.). 1988. Contemporary Field Research. Boston: Little, Brown.
England, Suzanne E. 1994. “Modeling Theory from Fiction and Autobiography,” pp. 190–213 in Catherine K. Reissman (ed.), Qualitative Studies in Social Work Research. Thousand Oaks, CA: Sage.
Epstein, Irwin. 1985. “Quantitative and Qualitative Methods,” pp. 263–274 in Richard M. Grinnell (ed.), Social Work Research and Evaluation. Itasca, IL: Peacock.
Epstein, W. M. 2004. “Confirmational Response Bias and the Quality of the Editorial Processes among American Social Work Journals,” Research on Social Work Practice, 14(6), 450–458.
Evans, William. 1996. “Computer-Supported Content Analysis: Trends, Tools, and Techniques,” Social Science Computer Review, 14(3), 269–279.
Feick, Lawrence F. 1989. “Latent Class Analysis of Survey Questions That Include Don’t Know Responses,” Public Opinion Quarterly, 53, 525–547.
Festinger, L., H. W. Riecken, and S. Schachter. 1956. When Prophecy Fails. Minneapolis: University of Minnesota Press.
Fischer, Joel. 1973. “Is Social Work Effective: A Review,” Social Work, 18(1), 5–20.
———. 1978. Effective Casework Practice: An Eclectic Approach. New York: McGraw-Hill.
———. 1990. “Problems and Issues in Meta-analysis,” pp. 297–325 in Lynn Videka-Sherman and William J. Reid (eds.), Advances in Clinical Social Work Research. Silver Spring, MD: NASW Press.
Foa, E. B., T. M. Keane, and M. J. Friedman. 2000. Effective Treatments for PTSD. New York: Guilford Press.
Fong, Rowena, and Sharlene Furuto (eds.). 2001. Culturally Competent Practice: Skills, Interventions, and Evaluations. Boston: Allyn & Bacon.
Fowler, Floyd J., Jr. 1995. Improving Survey Questions: Design and Evaluation. Thousand Oaks, CA: Sage.
Frankfort-Nachmias, Chava, and Anna Leon-Guerrero. 1997. Social Statistics for a Diverse Society. Thousand Oaks, CA: Pine Forge Press.
Franklin, Cynthia, and Paula Nurius. 1998. Constructivism in Practice: Methods and Challenges. Milwaukee, WI: Families International.
Fredman, N., and R. Sherman. 1987. Handbook of Measurements for Marriage and Family Therapy. New York: Brunner/Mazel.
Gage, N. 1989. “The Paradigm Wars and Their Aftermath: A ‘Historical’ Sketch of Research on Teaching Since 1989,” Educational Researcher, 18, 4–10.
Gall, John. 1975. Systemantics: How Systems Work and Especially How They Fail. New York: Quadrangle.
Gallup, George. 1984. “Where Parents Go Wrong,” San Francisco Chronicle (December 13), 7.
Gambrill, E. 1999. “Evidence-Based Practice: An Alternative to Authority-Based Practice,” Families in Society, 80, 341–350.
———. 2001. “Educational Policy and Accreditation Standards: Do They Work for Clients?” Journal of Social Work Education, 37, 226–239.
Garant, Carol. 1980. “Stalls in the Therapeutic Process,” American Journal of Nursing (December), 2166–2167.
Gaventa, J. 1991. “Towards a Knowledge Democracy: Viewpoints on Participatory Research in North America,” pp. 121–131 in O. Fals-Borda and M. A. Rahman (eds.), Action and Knowledge: Breaking the Monopoly with Participatory Action-Research. New York: Apex Press.
Gibbs, Leonard, and Eileen Gambrill. 1999. Critical Thinking for Social Workers: Exercises for the Helping Professions. Thousand Oaks, CA: Pine Forge Press.
———. 2002. “Evidence-Based Practice: Counterarguments to Objections,” Research on Social Work Practice, 12, 452–476.
Gilgun, Jane. 1991. “Hand into Glove: The Grounded Theory Approach and Social Work Practice Research.” Paper presented at the Research Conference on Qualitative Methods in Social Work Practice Research, Nelson A. Rockefeller Institute of Government, State University of New York at Albany, August 24.
———, Kerry Daly, and Gerald Handel (eds.). 1992. Qualitative Methods in Family Research. Thousand Oaks, CA: Sage.
Ginsberg, Leon. 1995. Social Work Almanac, 2nd ed. Washington, DC: NASW Press.
Giuli, Charles A., and Walter W. Hudson. 1977. “Assessing Parent–Child Relationship Disorders in Clinical Practice: The Child’s Point of View,” Journal of Social Service Research, 1(1), 77–92.
Glaser, Barney, and Anselm Strauss. 1967. The Discovery of Grounded Theory: Strategies for Qualitative Research. Chicago: Aldine.
Glisson, Charles. 1987. “Author’s Reply,” Social Service Review, 61(1), 172–176.
Glock, Charles Y., Benjamin B. Ringer, and Earl R. Babbie. 1967. To Comfort and to Challenge. Berkeley: University of California Press.
Goffman, Erving. 1961. Asylums: Essays on the Social Situation of Mental Patients and Other Inmates. Chicago: Aldine.
———. 1963. Stigma: Notes on the Management of a Spoiled Identity. Englewood Cliffs, NJ: Prentice Hall.
———. 1974. Frame Analysis. Cambridge, MA: Harvard University Press.
———. 1979. Gender Advertisements. New York: Harper & Row.
Gold, Raymond L. 1969. “Roles in Sociological Field Observation,” pp. 30–39 in George J. McCall and J. L. Simmons (eds.), Issues in Participant Observation. Reading, MA: Addison-Wesley.
Goldberg, W., and M. Tomlanovich. 1984. “Domestic Violence Victims in the Emergency Department,” Journal of the American Medical Association, 251(24) (June 22–29), 3259–3264.
Goldman, B. A., and J. C. Busch. 1982. Directory of Unpublished Experimental Measures (Vol. 3). New York: Human Sciences Press.
Goldstein, Eda G. 1995. “Psychosocial Approach,” Encyclopedia of Social Work, 19th ed., vol. 3, pp. 1948–1954. Washington, DC: National Association of Social Workers.
Gorey, K. M. 1996. “Effectiveness of Social Work Intervention Research: Internal Versus External Evaluations,” Social Work Research, 20, 119–128.
Gottlieb, Naomi, and M. Bombyk. 1987. “Strategies for Strengthening Feminist Research,” Affilia (Summer), 23–35.
Goyder, John. 1985. “Face-to-Face Interviews and Mailed Questionnaires: The Net Difference in Response Rate,” Public Opinion Quarterly, 49, 234–252.
Graham, Laurie, and Richard Hogan. 1990. “Social Class and Tactics: Neighborhood Opposition to Group Homes,” Sociological Quarterly, 31(4), 513–529.
Graham, Mary. 1989. “One Toke over the Line,” New Republic, 200(16), 20–21.
Greene, Robert, Katrina Murphy, and Shelita Snyder. 2000. “Should Demographics Be Placed at the End or at the Beginning of Mailed Questionnaires? An Empirical Answer to a Persistent Methodological Question,” Social Work Research, 24(4), 237–241.
Grinnell, R. M., Jr. 1997. Social Work Research & Evaluation: Quantitative and Qualitative Approaches. Itasca, IL: Peacock.
Grob, Gerald N. 1973. Mental Institutions in America. New York: The Free Press.
Grotevant, H. D., and D. I. Carlson (eds.). 1989. Family Assessment: A Guide to Methods and Measures. New York: Guilford Press.
Groves, Robert M. 1990. “Theories and Methods of Telephone Surveys,” pp. 221–240 in W. Richard Scott and Judith Blake (eds.), Annual Review of Sociology (vol. 16). Palo Alto, CA: Annual Reviews.
Guba, E. G. 1981. “Criteria for Assessing the Trustworthiness of Naturalistic Inquiries,” Educational Resources Information Center Annual Review Paper, 29, 75–91.
Gubrium, Jaber F., and James A. Holstein. 1997. The New Language of Qualitative Method. New York: Oxford University Press.
Haase, Richard F., Donna M. Waechter, and Gary S. Solomon. 1982. “How Significant Is a Significant Difference? Average Effect Size of Research in Counseling Psychology,” Journal of Counseling Psychology, 29(2), 59–63.
Habermas, Jurgen. 1971. Knowledge and Human Interests. Boston: Beacon Press.
Hamblin, Robert L. 1971. “Mathematical Experimentation and Sociological Theory: A Critical Analysis,” Sociometry, 34, 4.
Harrington, R. G. (ed.). 1986. Testing Adolescents: A Reference Guide for Comprehensive Psychological Assessment Techniques. Elmsford, NY: Pergamon Press.
Healey, Joseph F. 1999. Statistics: A Tool for Social Research. Belmont, CA: Wadsworth.
Heath, Anthony W. 1997. “The Proposal in Qualitative Research,” Qualitative Report, 3(1) (March).
Heineman, M. B. 1981. “The Obsolete Scientific Imperative in Social Work Research,” Social Service Review, 55, 371–397.
Hempel, Carl G. 1952. “Fundamentals of Concept Formation in Empirical Science,” International Encyclopedia of Unified Science, vol. 2, no. 7. Chicago: University of Chicago Press.
Hepworth, D. H., R. Rooney, and J. A. Larsen. 2002. Direct Social Work Practice: Theory and Skills, 6th ed. Belmont, CA: Wadsworth.
Herman, Daniel B., Ezra S. Susser, Elmer L. Struening, and Bruce L. Link. 1997. “Adverse Childhood Experiences: Are They Risk Factors for Adult Homelessness?” American Journal of Public Health, 87, 249–255.
Hernandez, Mario, and Mareasa R. Isaacs (eds.). 1998. Promoting Cultural Competence in Children’s Mental Health Services. Baltimore: Paul H. Brookes.
Hersen, M., and A. S. Bellack (eds.). 1988. Dictionary of Behavioral Assessment Techniques. Elmsford, NY: Pergamon Press.
Higginbotham, A. Leon, Jr. 1978. In the Matter of Color: Race and the American Legal Process. New York: Oxford University Press.
Hill, R. R. 1978. “Social Work Research on Minorities: Impediments and Opportunities.” Paper presented at the National Conference on the Future of Social Work Research, San Antonio, Texas, October.
Hirschi, Travis, and Hanan Selvin. 1973. Principles of Survey Analysis. New York: The Free Press.
Hogarty, Gerard. 1979. “Aftercare Treatment of Schizophrenia: Current Status and Future Direction,” pp. 19–36 in H. M. Pragg (ed.), Management of Schizophrenia. Assen, Netherlands: Van Gorcum.
———. 1989. “Meta-analysis of the Effects of Practice with the Chronically Mentally Ill: A Critique and Reappraisal of the Literature,” Social Work, 34(4), 363–373.
Hohmann, Ann A., and Delores L. Parron. 1996. “How the NIH Guidelines on Inclusion of Women and Minorities Apply: Efficacy Trials, Effectiveness Trials, and Validity,” Journal of Consulting and Clinical Psychology, 64(5), 851–855.
Homans, George C. 1971. “Reply to Blain,” Sociological Inquiry, 41 (Winter), 23.
Hough, Richard L., Henry Tarke, Virginia Renker, Patricia Shields, and Jeff Glatstein. 1996. “Recruitment and Retention of Homeless Mentally Ill Participants in Research,” Journal of Consulting and Clinical Psychology, 64(5), 881–891.
Howell, Joseph T. 1973. Hard Living on Clay Street. Garden City, NY: Doubleday Anchor.
Hudson, W. W. 1982. The Clinical Measurement Package: A Field Manual. Homewood, IL: Dorsey Press.
———. 1992. The WALMYR Assessment Scales Scoring Manual. Tempe, AZ: WALMYR.
———. 1997. “Assessment Tools as Outcomes Measures in Social Work,” pp. 68–80 in Edward J. Mullen and Jennifer L. Magnabosco (eds.), Outcomes Measurement in the Human Services: Cross-Cutting Issues and Methods. Washington, DC: NASW Press.
Hughes, Michael. 1980. “The Fruits of Cultivation Analysis: A Reexamination of Some Effects of Television Watching,” Public Opinion Quarterly (Fall), 287–302.
Humphreys, Laud. 1970. Tearoom Trade: Impersonal Sex in Public Places. Chicago: Aldine.
Hutchby, Ian, and Robin Wooffitt. 1998. Conversation Analysis: Principles, Practices and Applications. Cambridge, England: Polity Press.
Hyun, Sung Lim, and Miriam McNown Johnson. 2001. “Korean Social Work Students’ Attitudes Toward Homosexuals,” Journal of Social Work Education, 37(3), 545–554.
Jackman, Mary R., and Mary Scheuer Senter. 1980. “Images of Social Groups: Categorical or Qualified?” Public Opinion Quarterly, 44, 340–361.
Jackson, Aurora P., and Andre Ivanoff. 1999. “Reduction of Low Response Rates in Interview Surveys of Poor African-American Families,” Journal of Social Service Research, 25(1–2), 41–60.
Jacob, T., and D. L. Tennebaum. 1988. Family Assessment: Rationale, Methods, and Future Directions. New York: Plenum Press.
Jayaratne, Srinika, and Rona L. Levy. 1979. Empirical Clinical Practice. New York: Columbia University Press.
Jayaratne, Srinika, Tony Tripodi, and Eugene Talsma. 1988. “The Comparative Analysis and Aggregation of Single Case Data,” Journal of Applied Behavioral Science, 24(1), 119–128.
Jensen, Arthur. 1969. “How Much Can We Boost IQ and Scholastic Achievement?” Harvard Educational Review, 39, 273–274.
Johnson, Jeffrey C. 1990. Selecting Ethnographic Informants. Newbury Park, CA: Sage.
Johnston, Hank. 1980. “The Marketed Social Movement: A Case Study of the Rapid Growth of TM,” Pacific Sociological Review (July), 333–354.
Jones, James H. 1981. Bad Blood: The Tuskegee Syphilis Experiment. New York: The Free Press.
Jones, Lovell. 1991. “The Impact of Cancer on the Health Status of Minorities in Texas.” Paper presented to the Texas Minority Health Strategic Planning Conference, July.
Kahane, Howard. 1992. Logic and Contemporary Rhetoric, 2nd ed. Belmont, CA: Wadsworth.
Kalton, Graham. 1983. Introduction to Survey Sampling. Newbury Park, CA: Sage.
Kaplan, Abraham. 1964. The Conduct of Inquiry. San Francisco: Chandler.
Kaplowitz, Michael D., Timothy D. Hadlock, and Ralph Levine. 2004. “A Comparison of Web and Mail Survey Response Rates,” Public Opinion Quarterly, 68(1), 94–101.
Keeter, Scott. 2006. “The Impact of Cell Phone Noncoverage Bias on Polling in the 2004 Presidential Election,” Public Opinion Quarterly, 70(1), 88–98.
Keeter, Scott, Michael Dimock, Leah Christian, and Courtney Kennedy. 2008. “The Impact of ‘Cell-Onlys’ on Public Opinion Polls: Ways of Coping with a Growing Population Segment,” Pew Research Center Publications; online at http://pewresearch.org/pubs/714/the-impact-of--cell-onlys-on-public-opinion-polls; posted January 31.
Keitel, Merle A., Mary Kopala, and Warren Stanley Adamson. 1996. “Ethical Issues in Multicultural Assessment,” pp. 29–50 in Lisa Suzuki, Paul J. Meller, and Joseph G. Ponterotto (eds.), Handbook of Multicultural Assessment. San Francisco: Jossey-Bass.
Kelly, A. 1978. “Feminism and Research,” Women’s Studies International Quarterly, 1, 226.
Kendall, Patricia L., and Paul F. Lazarsfeld. 1950. “Problems of Survey Analysis,” in Robert K. Merton and Paul F. Lazarsfeld (eds.), Continuities in Social Research: Studies in the Scope and Method of “The American Soldier.” New York: The Free Press.
Kestenbaum, C. J., and D. T. Williams (eds.). 1988. Handbook of Clinical Assessment of Children and Adolescents. Austin, TX: Pro-Ed.
Kinnell, Ann Marie, and Douglas W. Maynard. 1996. “The Delivery and Receipt of Safer Sex Advice in Pretest Counseling Sessions for HIV and AIDS,” Journal of Contemporary Ethnography, 24, 405–437.
Kirk, Stuart A., and William J. Reid. 2002. Science and Social Work. New York: Columbia University Press.
Kish, Leslie. 1965. Survey Sampling. New York: Wiley.
Knoff, H. M. 1986. The Assessment of Child and Adolescent Personality. New York: Guilford Press.
Krefting, Laura. 1991. “Rigor in Qualitative Research: The Assessment of Trustworthiness,” American Journal of Occupational Therapy, 45(3), 214–222.
Kronick, Jane C. 1989. “Toward a Formal Methodology of Document Analysis in the Interpretive Tradition.” Paper presented at the meeting of the Eastern Sociological Society, Baltimore, MD.
Kuhn, Thomas. 1970. The Structure of Scientific Revolutions, 2nd ed. Chicago: University of Chicago Press.
Kulis, Stephen, Maria Napoli, and Flavio Francisco Marsiglia. 2002. “Ethnic Pride, Biculturalism, and Drug Use Norms of Urban American Indian Adolescents,” Social Work Research, June.
Kvale, Steinar. 1996. InterViews: An Introduction to Qualitative Research Interviewing. Thousand Oaks, CA: Sage.
Ladd, Everett C., and G. Donald Ferree. 1981. “Were the Pollsters Really Wrong?” Public Opinion (December/January), 13–20.
LaGreca, A. M. 1990. Through the Eyes of the Child: Obtaining Self-Reports from Children and Adolescents. Boston: Allyn & Bacon.
Lake, D. G., M. B. Miles, and R. B. Earle, Jr. 1973. Measuring Human Behavior: Tools for the Assessment of Social Functioning. New York: Teachers College Press.
Lasch, Christopher. 1977. Haven in a Heartless World. New York: Basic Books.
Lazarsfeld, Paul. 1959. “Problems in Methodology,” in Robert K. Merton (ed.), Sociology Today. New York: Basic Books.
———, Ann Pasanella, and Morris Rosenberg (eds.). 1972. Continuities in the Language of Social Research. New York: The Free Press.
Lee, Raymond. 1993. Doing Research on Sensitive Topics. Newbury Park, CA: Sage.
Lewis-Beck, Michael. 1995. Data Analysis: An Introduction (vol. 103 in Quantitative Applications in the Social Sciences series). Thousand Oaks, CA: Sage.
Liebow, Elliot. 1967. Tally’s Corner. Boston: Little, Brown.
———. 1993. Tell Them Who I Am: The Lives of Homeless Women. New York: The Free Press.
Lincoln, Y. S., and E. G. Guba. 1985. Naturalistic Inquiry. Beverly Hills, CA: Sage.
Lipsey, Mark W. 1990. Design Sensitivity: Statistical Power for Experimental Research. Newbury Park, CA: Sage.
Literary Digest. 1936a. “Landon, 1,293,669: Roosevelt, 972,897,” October 31, 5–6.
———. 1936b. “What Went Wrong with the Polls?” November 14, 7–8.
Lofland, John. 1995. “Analytic Ethnography: Features, Failings, and Futures,” Journal of Contemporary Ethnography, 24(1), 30–67.
———, and Lyn H. Lofland. 1995. Analyzing Social Settings, 3rd ed. Belmont, CA: Wadsworth.
Longres, John F., and Edward Scanlon. 2001. “Social Justice and the Research Curriculum,” Journal of Social Work Education, 37(3), 447–463.
Luker, K. 1984. Abortion and the Politics of Motherhood. Berkeley: University of California Press.
Magura, S., and B. S. Moses. 1987. Outcome Measures for Child Welfare Services. Washington, DC: Child Welfare League of America.
Manning, Peter K., and Betsy Cullum-Swan. 1994. “Narrative, Content, and Semiotic Analysis,” pp. 463–477 in Norman K. Denzin and Yvonna S. Lincoln (eds.), Handbook of Qualitative Research. Thousand Oaks, CA: Sage.
Marsden, Gerald. 1971. “Content Analysis Studies of Psychotherapy: 1954 through 1968,” in Allen E. Bergin and Sol L. Garfield (eds.), Handbook of Psychotherapy and Behavior Change: An Empirical Analysis. New York: Wiley.
Marshall, Catherine, and Gretchen B. Rossman. 1995. Designing Qualitative Research. Thousand Oaks, CA: Sage.
Martin, R. P. 1988. Assessment of Personality and Behavior Problems: Infancy through Adolescence. New York: Guilford Press.
Maruish, M. E. (ed.). 2000. Handbook of Psychological Assessment in Primary Care Settings. Mahwah, NJ: Lawrence Erlbaum Associates.
———. 2002. Psychological Testing in the Age of Managed Behavioral Health Care. Mahwah, NJ: Lawrence Erlbaum Associates.
Marx, Karl. 1867. Capital. New York: International Publishers. (Reprinted 1967.)
———. 1880. Revue Socialiste (July 5). Reprinted in T. B. Bottomore and Maximilien Rubel (eds.), Karl Marx: Selected Writings in Sociology and Social Philosophy. New York: McGraw-Hill, 1956.
Mash, E. J., and L. G. Terdal. 1988. Behavioral Assessment of Childhood Disorders. New York: Guilford Press.
Matocha, Linda K. 1992. “Case Study Interviews: Caring for Persons with AIDS,” pp. 66–84 in Jane Gilgun, Kerry Daly, and Gerald Handel (eds.), Qualitative Methods in Family Research. Thousand Oaks, CA: Sage.
Maxwell, Joseph A. 1996. Qualitative Research Design: An Interactive Approach. Thousand Oaks, CA: Sage.
McAlister, Alfred, Cheryl Perry, Joel Killen, Lee Ann Slinkard, and Nathan Maccoby. 1980. “Pilot Study of Smoking, Alcohol, and Drug Abuse Prevention,” American Journal of Public Health (July), 719–721.
McCall, George J., and J. L. Simmons (eds.). 1969. Issues in Participant Observation. Reading, MA: Addison-Wesley.
McCubbin, H. I., and A. I. Thompson (eds.). 1987. Family Assessment Inventories for Research and Practice. Madison: University of Wisconsin–Madison.
McRoy, Ruth G. 1981. A Comparative Study of the Self-Concept of Transracially and Inracially Adopted Black Children. Dissertation, University of Texas at Austin.
———, Harold D. Grotevant, Susan Ayers Lopez, and Ann Furuta. 1990. “Adoption Revelation and Communication Issues: Implications for Practice,” Families in Society, 71(9), 550–557.
McWhirter, Norris. 1980. The Guinness Book of Records. New York: Bantam.
Menard, Scott. 1991. Longitudinal Research. Newbury Park, CA: Sage.
Mercer, Susan, and Rosalie A. Kane. 1979. “Helplessness and Hopelessness among the Institutionalized Aged,” Health and Social Work, 4(1), 91–116.
Messer, S. B. 2006. “What Qualifies as Evidence in Effective Practice? Patient Values and Preferences,” pp. 31–40 in J. C. Norcross, L. E. Beutler, and R. F. Levant (eds.), Evidence-Based Practices in Mental Health: Debate and Dialogue on the Fundamental Questions. Washington, DC: American Psychological Association.
Mies, M. 1983. “Toward a Methodology for Feminist Research,” pp. 117–139 in G. Bowles and R. Duelli-Klein (eds.), Theories of Women’s Studies. London: Routledge & Kegan Paul.
Miles, Matthew B., and A. Michael Huberman. 1994. Qualitative Data Analysis, 2nd ed. Thousand Oaks, CA: Sage.
Milgram, Stanley. 1963. “Behavioral Study of Obedience,” Journal of Abnormal and Social Psychology, 67, 371–378.
———. 1965. “Some Conditions of Obedience and Disobedience to Authority,” Human Relations, 18, 57–76.
Miller, Delbert C. 1983. Handbook of Research Design and Social Measurement, 4th ed. New York: Longman.
———. 1991. Handbook of Research Design and Social Measurement, 5th ed. New York: Longman.
Miller, William L., and Benjamin F. Crabtree. 1994. “Clinical Research,” pp. 340–352 in Norman K. Denzin and Yvonna S. Lincoln (eds.), Handbook of Qualitative Research. Thousand Oaks, CA: Sage.
Miranda, Jeanne. 1996. “Introduction to the Special Section on Recruiting and Retaining Minorities in Psychotherapy Research,” Journal of Consulting and Clinical Psychology, 64(5), 848–850.
Mitchell, J. V. 1983. Tests in Print III. Lincoln, NE: Buros Institute of Mental Measurements.
——— (ed.). 1985. The Ninth Mental Measurements Yearbook. Lincoln: University of Nebraska Press.
Mitchell, Richard G., Jr. 1991. “Secrecy and Disclosure in Field Work,” pp. 97–108 in William B. Shaffir and Robert A. Stebbins (eds.), Experiencing Fieldwork: An Inside View of Qualitative Research. Newbury Park, CA: Sage.
Mitofsky, Warren J. 1999. “Miscalls Likely in 2000,” Public Perspective, 10(5), 42–43. Monette, Duane R., Thomas J. Sullivan, and Cornell R. DeJong. 1994. Applied Social Research: Tool for the Human Services, 3rd ed. Fort Worth, T X: Harcourt Brace. ———. 2002. Applied Social Research: Tool for the Human Services, 5th ed. Fort Worth, T X: Harcourt Brace. Moore, David W. 2002. “Measuring New Types of QuestionOrder Effects: Additive and Subtractive,” Public Opinion Quarterly, 66, 80–91. Moreland, Kevin L. 1996. “Persistent Issues in Multicultural Assessment of Soc ial and Emotional Fu nctioning,” pp. 51–76 in Lisa Suzuki, Paul J. Meller, and Joseph G. Ponterotto (eds.), Handbook of Multicultural Assessment . San Francisco: Jossey-Bass. Morgan, David L. 1993. Successful Focus Groups: Advancing the State of the Art . Newbury Park, CA: Sage. Morgan, Lewis H. 1870. Systems of Consanguinity and Af�nity. Washington, DC: Smithsonian Institution. Morrison, Denton, and Ramon Henkel (eds.). 1970. The Significance Test Controversy: A Reader. Chicago: AldineAtherton. Morrissey, J., and H. Goldman. 1984. “Cycles of Reform in the Care of the Chronically Mentally Ill,” Hospital and Community Psychiatry, 35(8), 785–793. Morse, Janice M. 1994. “Designing Funded Qualitative Research,” in Norman K. Denzin and Yvonna S. Lincoln (eds.), Handbook of Qualitative Research. Thousand Oaks, CA: Sage. Moskowitz, Milt. 1981. “The Drugs That Doctors Order,” San Francisco Chronicle (May 23), 33. Moss, Kathryn E. 1988. “Writing Research Proposals,” pp. 429–445 in Richard M. Grinnell, Jr. (ed.), Social Work Research and Evaluation, 3rd ed. Itasca, IL: Peacock. Mowbray, Carol T., Lisa C. Jordan, Kurt M . Ribisl, A ngelina Kewalramani, Douglas Luke, Sandra Herman, and Deborah Bybee. 1999. “Analysis of Postdischarge Change in a Dual Diagnosis Population,” Health & Social Work, 4(2), 91–101. Moynihan, Daniel. 1965. The Negro Family: The Case for National Action. Washington, DC: U.S. 
Government Printing Office.
Mullen, Edward J. 2006. “Facilitating Practitioner Use of Evidence-Based Practice,” pp. 152–159 in Albert R. Roberts and Kenneth R. Yeager (eds.), Foundations of Evidence-Based Social Work Practice. New York: Oxford University Press.
———, and W. B. Bacon. 2004. “Implementation of Practice Guidelines and Evidence-Based Treatment: A Survey of Psychiatrists, Psychologists, and Social Workers,” pp. 210–218 in A. R. Roberts and K. Yeager (eds.), Evidence-Based Practice Manual: Research and Outcome Measures in Health and Human Services. New York: Oxford University Press.
———, and J. R. Dumpson (eds.). 1972. Evaluation of Social Intervention. San Francisco: Jossey-Bass.
———, and Jennifer L. Magnabosco (eds.). 1997. Outcomes Measurement in the Human Services: Cross-Cutting Issues and Methods. Washington, DC: NASW Press.
———, and D. L. Streiner. 2004. “The Evidence For and Against Evidence-Based Practice,” Brief Treatment and Crisis Intervention, 4, 111–121.
Murray, Charles. 1984. Losing Ground. New York: Basic Books.
———, and Richard J. Herrnstein. 1994. The Bell Curve. New York: The Free Press.
Myrdal, Gunnar. 1944. An American Dilemma. New York: Harper & Row.
BIBLIOGRAPHY
National Association of Social Workers. 1997. An Author’s Guide to Social Work Journals, 4th ed. Washington, DC: NASW Press.
NASW (National Association of Social Workers, Inc.). 1999. NASW Code of Ethics.
Neuman, W. Lawrence. 1994. Social Research Methods: Qualitative and Quantitative Approaches. Needham Heights, MA: Allyn & Bacon.
———. 2000. Social Research Methods: Qualitative and Quantitative Approaches, 4th ed. Boston: Allyn & Bacon.
Newton, Rae R., and Kjell Erik Rudestam. 1999. Your Statistical Consultant: Answers to Your Data Analysis Questions. Thousand Oaks, CA: Sage.
New York Times. 1984. “Method of Polls in Two States,” June 6, 12.
———. 1988. “Test of Journals Is Criticized as Unethical,” September 27, 21, 25.
———. 1989. “Charges Dropped on Bogus Work,” April 4, 21.
Nicholls, William L., II, Reginald P. Baker, and Jean Martin. 1996. “The Effect of New Data Collection Technology on Survey Data Quality,” in L. Lyberg, P. Biemer, M. Collins, C. Dippo, N. Schwartz, and D. Trewin (eds.), Survey Measurement and Process Quality. New York: Wiley.
Nichols, David S., Jesus Padilla, and Emilia Lucio Gomez-Maqueo. 2000. “Issues in the Cross-Cultural Adaptation and Use of the MMPI-2,” pp. 247–266 in Richard Dana (ed.), Handbook of Cross-Cultural Personality Assessment. Mahwah, NJ: Lawrence Erlbaum Associates.
Nie, Norman H., C. Hadlai Hull, Jean G. Jenkins, Karing Steinbrenner, and Dale H. Bent. 1975. Statistical Package for the Social Sciences. New York: McGraw-Hill.
Norton, Ilena M., and Spero M. Manson. 1996. “Research in American Indian and Alaska Native Communities: Navigating the Cultural Universe of Values and Process,” Journal of Consulting and Clinical Psychology, 64(5), 856–860.
Nugent, William R. 1991. “An Experimental and Qualitative Analysis of a Cognitive-Behavioral Intervention for Anger,” Social Work Research and Abstracts, 27(3), 3–8.
Nurius, P. S., and W. W. Hudson. 1988.
“Computer-Based Practice: Future Dream or Current Technology?” Social Work, 33(4), 357–362.
Oakley, A. 1981. “Interviewing Women: A Contradiction in Terms,” in H. Roberts (ed.), Doing Feminist Research. London: Routledge & Kegan Paul.
Ogles, B. M., and K. S. Masters. 1996. Assessing Outcome in Clinical Practice. Boston: Allyn & Bacon.
O’Hare, T. 2005. Evidence-Based Practices for Social Workers: An Interdisciplinary Approach. Chicago, IL: Lyceum Books.
Ollendick, T. H., and M. Hersen. 1992. Handbook of Child and Adolescent Assessment. Des Moines, IA: Allyn & Bacon.
Orme, John G., and Terri D. Combs-Orme. 1986. “Statistical Power and Type II Errors in Social Work Research,” Social Work Research & Abstracts, 22(3), 3–10.
Ortega, Debora M., and Cheryl A. Richey. 1998. “Methodological Issues in Social Work Research with Depressed Women of Color,” pp. 47–70 in Miriam Potocky and Antoinette Y. Rodgers-Farmer (eds.), Social Work Research with Minority and Oppressed Populations: Methodological Issues and Innovations. New York: Haworth Press.
Øyen, Else (ed.). 1990. Comparative Methodology: Theory and Practice in International Social Research. Newbury Park, CA: Sage.
Ozawa, Martha N. 1989. “Welfare Policies and Illegitimate Birth Rates among Adolescents: Analysis of State-by-State Data,” Social Work Research and Abstracts, 24(2), 5–11.
Padgett, Deborah K. 1998a. “Does the Glove Really Fit? Qualitative Research and Clinical Social Work Practice,” Social Work, 43(4), 373–381.
———. 1998b. Qualitative Methods in Social Work Research. Thousand Oaks, CA: Sage.
Padilla, Amado M., and Antonio Medina. 1996. “Cross-Cultural Sensitivity in Assessment,” pp. 3–28 in Lisa Suzuki, Paul J. Meller, and Joseph G. Ponterotto (eds.), Handbook of Multicultural Assessment. San Francisco: Jossey-Bass.
Parrish, Danielle E. 2008. Evaluation of the Impact of a Full-Day Continuing Education Training on How Practitioners Learn About, View, and Engage in Evidence-Based Practice. Dissertation Abstracts International, XX, XX.
Parsons, Talcott, and Edward A. Shils. 1951. Toward a General Theory of Action. Cambridge, MA: Harvard University Press.
Patton, Michael Quinn. 1990. Qualitative Evaluation and Research Methods, 2nd ed. Newbury Park, CA: Sage.
Payne, Charles M. 1995. I’ve Got the Light of Freedom: The Organizing Tradition and the Mississippi Freedom Struggle. Berkeley: University of California Press.
Perinelli, Phillip J. 1986. “No Unsuspecting Public in TV Call-In Polls,” New York Times, February 14, letter to the editor.
Perlman, David. 1982. “Fluoride, AIDS Experts Scoff at Nelder’s Idea,” San Francisco Chronicle, September 6, 1.
Petersen, Larry R., and Judy L. Maynard. 1981. “Income, Equity, and Wives’ Housekeeping Role Expectations,” Pacific Sociological Review (January), 87–105.
Polansky, Norman A. 1975. Social Work Research. Chicago: University of Chicago Press.
———, Ronald Lippitt, and Fritz Redl. 1950. “An Investigation of Behavioral Contagion in Groups,” Human Relations, 3, 319–348.
Polster, Richard A., and Mary A. Lynch. 1985. “Single-Subject Designs,” pp. 381–431 in Richard M. Grinnell (ed.), Social Work Research and Evaluation. Itasca, IL: Peacock.
Population Reference Bureau. 1980. “1980 World Population Data Sheet.” Poster prepared by Carl Haub and Douglas W. Heisler.
Washington, DC: Population Reference Bureau.
Porter, Stephen R., and Michael E. Whitcomb. 2003. “The Impact of Contact Type on Web Survey Response Rates,” Public Opinion Quarterly, 67, 579–588.
Posavac, Emil J., and Raymond G. Carey. 1985. Program Evaluation: Methods and Case Studies. Englewood Cliffs, NJ: Prentice-Hall.
Potocky, Miriam, and Antoinette Y. Rodgers-Farmer (eds.). 1998. Social Work Research with Minority and Oppressed Populations. New York: Haworth Press.
Presser, Stanley, and Johnny Blair. 1994. “Survey Pretesting: Do Different Methods Produce Different Results?” pp. 73–104 in Peter Marsden (ed.), Sociological Methodology. San Francisco: Jossey-Bass.
Public Opinion. 1984. “See How They Ran” (October–November), 38–40.
Quay, H. C., and J. S. Werry. 1972. Psychopathological Disorders of Childhood. New York: Wiley.
Quoss, Bernita, Margaret Cooney, and Terri Longhurst. 2000. “Academics and Advocates: Using Participatory Action Research to Influence Welfare Policy,” Journal of Consumer Affairs, 34(1), 47.
Rank, Mark. 1992. “The Blending of Qualitative and Quantitative Methods in Understanding Childbearing Among Welfare Recipients,” pp. 281–300 in Jane Gilgun, Kerry Daly, and Gerald Handel (eds.), Qualitative Methods in Family Research. Thousand Oaks, CA: Sage.
———, and T. A. Hirschl. 2002. “Welfare Use as a Life Course Event: Toward a New Understanding of the U.S. Safety Net,” Social Work, 47(3), 237–248.
Ransford, H. Edward. 1968. “Isolation, Powerlessness, and Violence: A Study of Attitudes and Participants in the Watts Riots,” American Journal of Sociology, 73, 581–591.
Rasinski, Kenneth A. 1989. “The Effect of Question Wording on Public Support for Government Spending,” Public Opinion Quarterly, 53, 388–394.
Ray, William, and Richard Ravizza. 1993. Methods Toward a Science of Behavior and Experience. Belmont, CA: Wadsworth.
Reed, G. M. 2006. “What Qualifies as Evidence of Effective Practice? Clinical Expertise,” pp. 13–23 in J. C. Norcross, L. E. Beutler, and R. F. Levant (eds.), Evidence-Based Practices in Mental Health: Debate and Dialogue on the Fundamental Questions. Washington, DC: American Psychological Association.
Reid, William J. 1997. “Evaluating the Dodo’s Verdict: Do All Evaluations Have Equivalent Outcomes?” Social Work Research, 21, 5–18.
———, and Laura Epstein. 1972. Task-Centered Casework. New York: Columbia University Press.
———, and Patricia Hanrahan. 1982. “Recent Evaluations of Social Work: Grounds for Optimism,” Social Work, 27(4), 328–340.
Reinharz, Shulamit. 1992. Feminist Methods in Social Research. New York: Oxford University Press.
Reissman, Catherine (ed.). 1994. Qualitative Studies in Social Work Research. Thousand Oaks, CA: Sage.
Reynolds, C. R., and R. W. Kamphaus (eds.). 1990. Handbook of Psychological and Educational Assessment of Children. New York: Guilford Press.
Richmond, Mary. 1917. Social Diagnosis. New York: Russell Sage Foundation.
Roberts, Albert R., and Gilbert J. Greene (eds.). 2002. Social Workers’ Desk Reference. New York: Oxford University Press.
Roberts, Albert R., and K. R. Yeager (eds.). 2004. Evidence-Based Practice Manual: Research and Outcome Measures in Health and Human Services. New York: Oxford University Press.
——— (eds.). 2006.
Foundations of Evidence-Based Social Work Practice. New York: Oxford University Press.
Roberts, Michael C., and Linda K. Hurley. 1997. Managing Managed Care. New York: Plenum Press.
Robinson, Robin A. 1994. “Private Pain and Public Behaviors: Sexual Abuse and Delinquent Girls,” pp. 73–94 in Catherine Reissman (ed.), Qualitative Studies in Social Work Research. Thousand Oaks, CA: Sage.
Rodwell, Mary K. 1987. “Naturalistic Inquiry: An Alternative Model for Social Work Assessment,” Social Service Review, 61(2), 232–246.
———. 1998. Social Work Constructivist Research. New York: Garland.
Roethlisberger, F. J., and W. J. Dickson. 1939. Management and the Worker. Cambridge, MA: Harvard University Press.
Roffman, R. A., L. Downey, B. Beadnell, J. R. Gordon, J. N. Craver, and R. S. Stephens. 1997. “Cognitive-Behavioral Group Counseling to Prevent HIV Transmission in Gay and Bisexual Men: Factors Contributing to Successful Risk Reduction,” Research on Social Work Practice, 7, 165–186.
Roffman, Roger A., Joseph Picciano, Lauren Wickizer, Marc Bolan, and Rosemary Ryan. 1998. “Anonymous Enrollment in AIDS Prevention Telephone Group Counseling: Facilitating the Participation of Gay and Bisexual Men in Intervention and Research,” pp. 5–22 in Miriam Potocky and
Antoinette Y. Rodgers-Farmer (eds.), Social Work Research with Minority and Oppressed Populations: Methodological Issues and Innovations. New York: Haworth Press.
Rogler, Lloyd H. 1989. “The Meaning of Culturally Sensitive Research in Mental Health,” American Journal of Psychiatry, 146(3), 296–303.
———, and A. B. Hollingshead. 1985. Trapped: Puerto Rican Families and Schizophrenia. Maplewood, NJ: Waterfront Press.
Rosenberg, Morris. 1965. Society and the Adolescent Self-Image. Princeton, NJ: Princeton University Press.
———. 1968. The Logic of Survey Analysis. New York: Basic Books.
Rosenhan, D. L. 1973. “On Being Sane in Insane Places,” Science, 179, 240–248.
Rosenthal, Richard N. 2006. “Overview of Evidence-Based Practice,” in A. R. Roberts and K. Yeager (eds.), Foundations of Evidence-Based Social Work Practice. New York: Oxford University Press, 67–80.
Rosenthal, Robert, and Donald Rubin. 1982. “A Simple, General Purpose Display of Magnitude of Experimental Effect,” Journal of Educational Psychology, 74(2), 166–169.
Rossi, Peter H., and Howard E. Freeman. 1982. Evaluation: A Systematic Approach. Beverly Hills, CA: Sage.
———. 1993. Evaluation: A Systematic Approach, 5th ed. Newbury Park, CA: Sage Publications.
Rothman, Ellen K. 1981. “The Written Record,” Journal of Family History (Spring), 47–56.
Rowntree, Derek. 1981. Statistics Without Tears: A Primer for Non-Mathematicians. New York: Charles Scribner’s Sons.
Royse, David. 1988. “Voter Support for Human Services,” ARETE, 13(2), 26–34.
———. 1991. Research Methods in Social Work. Chicago: Nelson-Hall.
Rubin, Allen. 1979. Community Mental Health in the Social Work Curriculum. New York: Council on Social Work Education.
———. 1981. “Reexamining the Impact of Sex on Salary: The Limits of Statistical Significance,” Social Work Research & Abstracts, 17(3), 19–24.
———. 1983. “Engaging Families as Support Resources in Nursing Home Care: Ambiguity in the Subdivision of Tasks,” Gerontologist, 23(6), 632–636.
———. 1985a. “Practice Effectiveness: More Grounds for Optimism,” Social Work, 30(6), 469–476.
———. 1985b. “Significance Testing with Population Data,” Social Service Review, 59(3), 518–520.
———. 1987. “Case Management,” Encyclopedia of Social Work, 18th ed., vol. 1, pp. 212–222. Silver Spring, MD: National Association of Social Work.
———. 1990. “Cable TV as a Resource in Preventive Mental Health Programming for Children: An Illustration of the Importance of Coupling Implementation and Outcome Evaluation,” ARETE, 15(2), 26–31.
———. 1991. “The Effectiveness of Outreach Counseling and Support Groups for Battered Women: A Preliminary Evaluation,” Research on Social Work Practice, 1(4), 332–357.
———. 1992. “Is Case Management Effective for People with Serious Mental Illness? A Research Review,” Health and Social Work, 17(2), 138–150.
———. 1997. “The Family Preservation Evaluation from Hell: Implications for Program Evaluation Fidelity,” Children and Youth Services Review, 19(1–2), 77–99.
———. 2002. “Is EMDR an Evidence-based Practice for Treating PTSD? Unanswered Questions,” Paper presented at the annual conference of the Society for Social Work and Research, San Diego, January 18.
———. 2007. Statistics for Evidence-Based Practice and Evaluation. Belmont, CA: Thomson Brooks/Cole.
———. 2008. Practitioner’s Guide to Using Research for Evidence-Based Practice. Hoboken, NJ: John Wiley & Sons, Inc.
———. 2010. “Research for providing the evidence-base for interventions in this volume,” Appendix A in A. Rubin and D. W. Springer (eds.), Treatment of Traumatized Adults and Children. Volume 1: The Clinician’s Guide to Evidence-Based Practice. Hoboken, NJ: John Wiley & Sons.
———, S. Bischofshausen, K. Conroy-Moore, B. Dennis, M. Hastie, L. Melnick, D. Reeves, and T. Smith. 2001. “The Effectiveness of EMDR in a Child Guidance Center,” Research on Social Work Practice, 11(4), 435–457.
———, Jose Cardenas, Keith Warren, Cathy King Pike, and Kathryn Wambach. 1998. “Outdated Practitioner Views about Family Culpability and Severe Mental Disorders,” Social Work, September, 412–422.
———, and Patricia G. Conway. 1985. “Standards for Determining the Magnitude of Relationships in Social Work Research,” Social Work Research & Abstracts, 21(1), 34–39.
———, Patricia G. Conway, Judith K. Patterson, and Richard T. Spence. 1983. “Sources of Variation in Rate of Decline to MSW Programs,” Journal of Education for Social Work, 19(3), 48–58.
———, and Peter J. Johnson. 1982. “Practitioner Orientations Toward the Chronically Disabled: Prospects for Policy Implementation,” Administration in Mental Health, 10, 3–12.
———, and Peter J. Johnson. 1984. “Direct Practice Interests of Entering MSW Students,” Journal of Education for Social Work, 20(2), 5–16.
———, and Danielle Parrish. 2007a. “Views of Evidence-Based Practice Among Faculty in MSW Programs: A National Survey,” Research on Social Work Practice, 17(1), 110–122.
———, and Danielle Parrish. 2007b. “Problematic Phrases in the Conclusions of Published Outcome Studies: Implications for Evidence-Based Practice,” Research on Social Work Practice, 17(3), 334–347.
———, and Danielle Parrish. 2009.
“Development and Validation of the EBP Process Assessment Scale: Preliminary Findings,” Research on Social Work Practice, 19, [in press].
———, and David W. Springer (eds.). 2010. The Clinician’s Guide to Evidence-Based Practice. Hoboken, NJ: John Wiley & Sons, Inc.
———, and Guy E. Shuttlesworth. 1982. “Assessing Role Expectations in Nursing Home Care,” ARETE, 7(2), 37–48.
———, and Irene Thorelli. 1984. “Egoistic Motives and Longevity of Participation by Social Service Volunteers,” Journal of Applied Behavioral Science, 20(3), 223–235.
Rubin, Herbert J., and Riene S. Rubin. 1995. Qualitative Interviewing: The Art of Hearing Data. Thousand Oaks, CA: Sage.
Ruckdeschel, Roy A., and B. E. Faris. 1981. “Assessing Practice: A Critical Look at the Single-Case Design,” Social Casework, 62, 413–419.
Rutter, M., H. H. Tuma, and I. S. Lann (eds.). 1988. Assessment and Diagnosis in Child Psychopathology. New York: Guilford Press.
Sackett, D. L., W. S. Richardson, W. Rosenberg, and R. B. Haynes. 1997. Evidence-Based Medicine: How to Practice and Teach EBM. New York: Churchill Livingstone.
———. 2000. Evidence-Based Medicine: How to Practice and Teach EBM, 2nd ed. New York: Churchill Livingstone.
Sacks, Jeffrey J., W. Mark Krushat, and Jeffrey Newman. 1980. “Reliability of the Health Hazard Appraisal,” American Journal of Public Health (July): 730–732.
Sales, Esther, Sara Lichtenwalter, and Antonio Fevola. 2006. “Secondary Analysis in Social Work Research Education:
Past, Present, and Future Promise,” Journal of Social Work Education, 42(3), 543–558.
Saletan, William, and Nancy Watzman. 1989. “Marcus Welby, J. D.” New Republic, 200(16), 22.
Sandelowski, M., D. H. Holditch-Davis, and B. G. Harris. 1989. “Artful Design: Writing the Proposal for Research in the Naturalistic Paradigm,” Research in Nursing and Health, 12, 77–84.
Sarnoff, S. K. 1999. “‘Sanctified Snake Oil’: Ideology, Junk Science, and Social Work Practice,” Families in Society, 80, 396–408.
Sattler, J. M. 1988. Assessment of Children, 3rd ed. Brandon, VT: Clinical Psychology Publishing.
Sawin, K. J., M. P. Harrigan, and P. Woog (eds.). 1995. Measures of Family Functioning for Research and Practice. New York: Springer.
Scholl, G., and R. Schnur. 1976. Measures of Psychological, Vocational, and Educational Functioning in the Blind and Visually Handicapped. New York: American Foundation for the Blind.
Schuerman, John. 1989. “Editorial,” Social Service Review, 63(1), 3.
Selltiz, Claire, Lawrence S. Wrightsman, and Stuart W. Cook. 1976. Research Methods in Social Relations. New York: Holt, Rinehart and Winston.
Shadish, William R., Thomas D. Cook, and Donald T. Campbell. 2001. Experimental and Quasi-experimental Designs for Generalized Causal Inference. New York: Houghton Mifflin.
Shadish, William R., Thomas D. Cook, and Laura C. Leviton. 1991. Foundations of Program Evaluation. Newbury Park, CA: Sage.
Shaffir, William B., and Robert A. Stebbins (eds.). 1991. Experiencing Fieldwork: An Inside View of Qualitative Research. Newbury Park, CA: Sage.
Shanks, J. Merrill, and Robert D. Tortora. 1985. “Beyond CATI: Generalized and Distributed Systems for Computer-Assisted Surveys.” Prepared for the Bureau of the Census, First Annual Research Conference, Reston, VA, March 20–23.
Shea, Christopher. 2000. “Don’t Talk to the Humans: The Crackdown on Social Science Research,” Lingua Franca, 10(6), 27–34.
Shlonsky, Aron, and L. Gibbs. 2004.
“Will the Real Evidence-Based Practice Please Stand Up? Teaching the Process of Evidence-Based Practice to the Helping Professions,” Brief Treatment and Crisis Intervention, 4(2), 137–153.
Silverman, David. 1993. Interpreting Qualitative Data: Methods for Analyzing Talk, Text, and Interaction. Newbury Park, CA: Sage.
———. 1999. Doing Qualitative Research: A Practical Handbook. Thousand Oaks, CA: Sage.
Simon, Cassandra E., John S. McNeil, Cynthia Franklin, and Abby Cooperman. 1991. “The Family and Schizophrenia: Toward a Psychoeducational Approach,” Families in Society, 72(6), 323–333.
Smith, Andrew E., and G. F. Bishop. 1992. “The Gallup Secret Ballot Experiments: 1944–1988.” Paper presented at the annual conference of the American Association for Public Opinion Research, St. Petersburg, FL, May.
Smith, Eric R. A. N., and Peverill Squire. 1990. “The Effects of Prestige Names in Question Wording,” Public Opinion Quarterly, 54, 97–116.
Smith, Joel. 1991. “A Methodology for Twenty-First Century Sociology,” Social Forces, 70(1), 117.
Smith, Mary Lee, and Gene V. Glass. 1977. “Meta-analysis of Psychotherapy Outcome Studies,” American Psychologist, 32(9), 752–760.
Smith, Tom W. 1988. “The First Straw? A Study of the Origins of Election Polls,” Public Opinion Quarterly, 54 (Spring), 21–36.
Smith, Tom W. 2001. “Are Representative Internet Surveys Possible?” Proceedings of Statistics Canada Symposium.
Snow, David A., and Leon Anderson. 1987. “Identity Work among the Homeless: The Verbal Construction and Avowal of Personal Identities,” Journal of Sociology, 92(6), 1336–1371.
Sodowsky, Gargi Roysircar, and James C. Impara (eds.). 1996. Multicultural Assessment in Counseling and Clinical Psychology. Lincoln, NE: Buros Institute of Mental Measurements.
Solomon, Phyllis, and Robert I. Paulson. 1995. “Issues in Designing and Conducting Randomized Human Service Trials.” Paper presented at the National Conference of the Society for Social Work and Research, Washington, DC.
Srole, Leo. 1956. “Social Integration and Certain Corollaries: An Exploratory Study,” American Sociological Review, 21, 709–716.
Stouffer, Samuel. 1962. Social Research to Test Ideas. New York: Free Press of Glencoe.
Strachan, Angus M. 1986. “Family Intervention for the Rehabilitation of Schizophrenia: Toward Protection and Coping,” Schizophrenia Bulletin, 12(4), 678–698.
Straus, M., and B. Brown. 1978. Family Measurement Techniques: Abstracts of Published Instruments. Minneapolis: University of Minnesota Press.
Strauss, Anselm, and Juliet Corbin. 1990. Basics of Qualitative Research: Grounded Theory Procedures and Techniques. Newbury Park, CA: Sage.
———. 1994. “Grounded Theory Methodology: An Overview,” pp. 273–285 in Norman K. Denzin and Yvonna S. Lincoln (eds.), Handbook of Qualitative Research. Thousand Oaks, CA: Sage.
Stuart, Paul. 1981. “Historical Research,” pp. 316–332 in Richard M. Grinnell, Social Work Research and Evaluation. Itasca, IL: Peacock.
Sudman, Seymour. 1983. “Applied Sampling,” pp. 145–194 in Peter H. Rossi, James D. Wright, and Andy B. Anderson (eds.), Handbook of Survey Research. New York: Academic Press.
Sue, Stanley. 1996.
“Measurement, Testing, and Ethnic Bias: Can Solutions be Found?” pp. 7–36 in Gargi Roysircar Sodowsky and James C. Impara (eds.), Multicultural Assessment in Counseling and Clinical Psychology. Lincoln, NE: Buros Institute of Mental Measurements.
Suzuki, Lisa A., Paul J. Meller, and Joseph G. Ponterotto (eds.). 1996. Handbook of Multicultural Assessment. San Francisco: Jossey-Bass.
Sweet, Stephen. 1999. “Using a Mock Institutional Review Board to Teach Ethics in Sociological Research,” Teaching Sociology, 27 (January): 55–59.
Taber, Sara M. 1981. “Cognitive-Behavior Modification Treatment of an Aggressive 11-Year-Old Boy,” Social Work Research & Abstracts, 17(2), 13–23.
Takeuchi, David. 1974. Grass in Hawaii: A Structural Constraints Approach. M.A. thesis, University of Hawaii.
Tan, Alexis S. 1980. “Mass Media Use, Issue Knowledge and Political Involvement,” Public Opinion Quarterly, 44, 241–248.
Tandon, Rajesh, and L. Dave Brown. 1981. “Organization-Building for Rural Development: An Experiment in India,” Journal of Applied Behavioral Science (April–June), 172–189.
Task Force on the Promotion and Dissemination of Psychological Procedures. 1995. “Training in and Dissemination of Empirically-Validated Psychosocial Treatments: Report and Recommendations,” Clinical Psychologist, 48, 3–23.
Taylor, Humphrey, and George Terhanian. 1999. “Heady Days Are Here Again: Online Polling Is Rapidly Coming of Age,” Public Perspective, 10(4), 20–23.
Taylor, James B. 1977. “Toward Alternative Forms of Social Work Research: The Case for Naturalistic Methods,” Journal of Social Welfare, 4, 119–126.
Thomas, W. I., and Florian Znaniecki. 1918. The Polish Peasant in Europe and America. Chicago: University of Chicago Press.
Thompson, Bruce. 1999a. “Improving Research Clarity and Usefulness with Effect Size Indices as Supplements to Statistical Significance Tests,” Exceptional Children, 65(3), 329–337.
———. 1999b. “Why Encouraging Effect Size Reporting Is Not Working: The Etiology of Researcher Resistance to Changing Practices,” Journal of Psychology, 133(2), 133–140.
Thompson, Estina E., Harold W. Neighbors, Cheryl Munday, and James S. Jackson. 1996. “Recruitment and Retention of African American Patients for Clinical Research: An Exploration of Response Rates in an Urban Psychiatric Hospital,” Journal of Consulting and Clinical Psychology, 64(5), 861–867.
Thomson, Bill. 1996. Letter on Push Polling. Letter posted May 29 to the AAPORnet listserv [Online]. Available:
[email protected].
Thyer, Bruce. 2001. “Evidence-Based Approaches to Community Practice,” pp. 54–65 in Harold E. Briggs and Kevin Corcoran (eds.), Social Work Practice: Treating Common Client Problems. Chicago: Lyceum Books.
———. 2002. “Principles of Evidence-Based Practice and Treatment Development,” pp. 738–742 in Albert R. Roberts and Gilbert J. Greene (eds.), Social Workers’ Desk Reference. New York: Oxford University Press.
Tobler, N. S. 1986. “Meta-analysis of 143 Adolescent Drug Prevention Programs: Quantitative Outcome Results of Program Participants Compared to a Control or Comparison Group,” Journal of Drug Issues, 4, 537–567.
Todd, Tracy. 1998. “Co-constructing Your Business Relationship,” pp. 323–347 in Cynthia Franklin and Paula Nurius, Constructivism in Practice: Methods and Challenges. Milwaukee, WI: Families International.
Touliatos, J., B. F. Perlmutter, and M. A. Straus (eds.). 1990. Handbook of Family Measurement Techniques. Newbury Park, CA: Sage.
Tuckel, Peter S., and Barry M. Feinberg. 1991. “The Answering Machine Poses Many Questions for Telephone Survey Researchers,” Public Opinion Quarterly, 55, 200–217.
Tuckel, Peter S., and Harry O’Neill. 2002. “The Vanishing Respondent in Telephone Surveys,” Journal of Advertising Research (September/October), 26–48.
Turk, Theresa Guminski. 1980. “Hospital Support: Urban Correlates of Allocation Based on Organizational Prestige,” Pacific Sociological Review (July), 315–332.
Turner, Jonathan. 1974. The Structure of Sociological Theory. Homewood, IL: Dorsey Press.
U.S. Bureau of the Census. 1979. Statistical Abstract of the United States. Washington, DC: U.S. Government Printing Office.
———. 1992. Statistical Abstract of the United States. Washington, DC: U.S. Government Printing Office.
———. 1995. Statistical Abstract of the United States. Washington, DC: U.S. Government Printing Office.
———. 1996. Statistical Abstract of the United States, 1996, National Data Book and Guide to Sources.
Washington, DC: U.S. Government Printing Office.
———. 2006. Statistical Abstract of the United States, 2006, National Data Book and Guide to Sources. Washington, DC: U.S. Government Printing Office.
U.S. Department of Health and Human Services. 1992. Survey Measurement of Drug Use. Washington, DC: U.S. Government Printing Office.
U.S. Department of Labor (Bureau of Labor Statistics). 1978. The Consumer Price Index: Concepts and Content Over the Years. Report 517. Washington, DC: U.S. Government Printing Office.
Videka-Sherman, Lynn. 1988. “Meta-analysis of Research on Social Work Practice in Mental Health,” Social Work, 33(4), 325–338.
Vonk, M. Elizabeth. 2001. “Cultural Competence for Transracial Adoptive Parents,” Social Work, 46(3), 246–255.
Votaw, Carmen Delgado. 1979. Women’s Rights in the United States. United States Commission of Civil Rights, Inter-American Commission on Women. Washington, DC: Clearinghouse Publications.
W. K. Kellogg Foundation. 2004. W. K. Kellogg Foundation Logic Model Development Guide. Battle Creek, MI: Author.
Wagner-Pacifici, Robin. 1995. Discourse and Destruction: The City of Philadelphia versus MOVE. Chicago: University of Chicago Press.
Walker, D. K. 1973. Socioemotional Measure for Preschool and Kindergarten Children. San Francisco: Jossey-Bass.
Walker, Janice R., and Todd Taylor. 1998. The Columbia Guide to Online Style. New York: Columbia University Press.
Walker, Jeffery T. 1994. “Fax Machines and Social Surveys: Teaching an Old Dog New Tricks,” Journal of Quantitative Criminology, 10(2), 181–188.
Walker Research. 1988. Industry Image Study, 8th ed. Indianapolis: Walker Research.
Wallace, Walter. 1971. The Logic of Science in Sociology. Chicago: Aldine-Atherton.
Walster, Elaine, Jane Piliavian, and G. William Walster. 1973. “The Hard-to-Get Woman,” Psychology Today (September), 80–83.
Webb, Eugene, Donald T. Campbell, Richard D. Schwartz, and Lee Sechrest. 1981. Nonreactive Research in the Social Sciences. Chicago: Rand McNally.
———. 2000. Unobtrusive Measures. Thousand Oaks, CA: Sage.
Weber, Max. 1925. “Science as a Vocation,” in Hans Gerth and C. Wright Mills (trans., eds.), 1946, From Max Weber: Essays in Sociology.
New York: Oxford University Press.
Weber, Robert Philip. 1990. Basic Content Analysis. Newbury Park, CA: Sage.
Weinbach, Robert, and Richard Grinnell. 1998. Statistics for Social Workers, 4th ed. New York: Longman.
Weisbrod, Burton A., Mary Ann Test, and Leonard I. Stein. 1980. “Alternative to Mental Hospital Treatment: II. Economic Benefit-Cost Analysis,” Archives of General Psychiatry, 37(4), 400–408.
Weiss, Carol H. 1972. Evaluation Research. Englewood Cliffs, NJ: Prentice Hall.
Westen, D. I. 2006. “Patients and treatments in clinical trials are not adequately representative of clinical practice,” pp. 161–171 in J. C. Norcross, L. E. Beutler, and R. F. Levant (eds.), Evidence-Based Practices in Mental Health: Debate and Dialogue on the Fundamental Questions. Washington, DC: American Psychological Association.
Wetzler, S. (ed.). 1989. Measuring Mental Illness: Psychometric Assessment for Clinicians. Washington, DC: American Psychiatric Press.
White, Karl R. 1988. “Cost Analyses in Family Support Programs,” pp. 429–443 in Heather B. Weiss and Francine H. Jacobs (eds.), Evaluating Family Programs. New York: Aldine de Gruyter.
White, Ralph. 1951. Value-Analysis: The Nature and Use of the Method. New York: Society for the Psychological Study of Social Issues.
Whiteman, Martin, David Fanshel, and John F. Grundy. 1987. “Cognitive-Behavioral Interventions Aimed at Anger of Parents at Risk of Child Abuse,” Social Work, 32(6), 469–474.
Whittaker, James K. 1987. “Group Care for Children,” Encyclopedia of Social Work, 18th ed., vol. 1, pp. 672–682. Silver Spring, MD: National Association of Social Workers.
Whyte, William Foote. 1943. Street Corner Society. Chicago: University of Chicago Press.
———, D. J. Greenwood, and P. Lazes. 1991. “Participatory Action Research: Through Practice to Science in Social Research,” pp. 19–55 in W. F. Whyte (ed.), Participatory Action Research. New York: Sage.
Williams, Janet, and Kathleen Ell. 1998. Advances in Mental Health Research: Implications for Practice. Washington, DC: NASW Press.
Wilson, Camilo. 1999. Private e-mail, September 8.
Wilson, Jerome. 1989. “Cancer Incidence and Mortality Differences of Black and White Americans: A Role for Biomarkers,” in Lovell Jones (ed.), Minorities and Cancer. New York: Springer-Verlag.
Wood, Katherine M. 1978. “Casework Effectiveness: A New Look at the Research Evidence,” Social Work, 23(6), 437–458.
Yin, Robert K. 1984. Case Study Research: Design and Methods. Beverly Hills, CA: Sage.
Yinger, J. Milton, Kiyoshi Ikeda, Frank Laycock, and Stephen J. Cutler. 1977. Middle Start: An Experiment in the Educational Enrichment of Young Adolescents. London: Cambridge University Press.
York, James, and Elmer Persigehl. 1981.
“Productivity Trends in the Ball and Roller Bearing Industry,” Monthly Labor Review (January), 40–43. Yoshihama, Mieko. 2002. “Breaking the Web of Abuse and Silence: Voices of Battered Women in Japan,” Social Work, 47 (4), 389– 400. Yu, Elena S. H., Zhang Ming-Yuan et al. 1987. “Translation of Instruments: Procedures, Issues, and Dilemmas,” pp. 75–83 in W. T. Liu (ed.), A Decade Review of Mental Health Research, Training, and Ser vices. Paci�c/Asian American Mental Health Research Center. Ziesel, Hans. 1957. Say It with Figures. New York: Harper & Row. Zimbalist, Sidney E. 1977. Historic Themes and Landmarks in Social Welfare Research. New York: Harper & Row. Zippay, Allison. 2002. “Dynamics of Income Packaging: A 10-Year Longitudinal Study,” Social Work, 47 , 291–300. Zlotnik, J. L., and C. Galambos. 2004. “Evidence-Based Practices in Health Care: Social Work Possibilities,” Health and Social Work, 29, 259–261.
Index
Note: Page numbers followed by f refer to figures. Page numbers followed by t refer to tables. AB design, 304–305 ABAB withdrawal/reversal design, 305–307 Absolute zero, 503 Abstract, of a research report, 591 Abstract variables, 170 Accidental sampling, 355–357 Acculturation, 108 Acquiescent response set, 189, 219, 224 Activities approach logic model, 341 Ad hominem attack, 20 Administration in Mental Health, 147 Administrative problems, 326–327 African Americans, 101–103, 108–109, 115, 118, 127, 155 Agency tracking, 117 Agreement reality, 3–4 Aid to Families with Dependent Children (AFDC), 416, 442 Alexander, Leslie B., 415 Alternative single-case designs, 304–305 Alternative treatment design with pretest, 257 Ambiguity, direction of causal influence, 250 Ambiguous results, interpretation of, 311–313 American Association for Public Opinion Research (AAPOR), 395 American Institute of Public Opinion, 353–354 American Journal of Mental Deficiency, 147 American Psychological Association, 590 American Psychological Association’s Division 12 Task Force on Promotion and Dissemination of Psychological Procedures, 40 American Sociological Association, 590 Aminzade, Ron, 429–430 Analysis of variance, 558–559 Analytic induction, 424 Analytic techniques, 429–431
Analyzing Social Settings (Lofland), 437, 448, 453, 461–462, 464, 472, 478, 483, 486 Anchor points, 117 Anecdotal case reports, 36 Aneshenshel, Carol S., 151 Annie E. Casey Foundation, 410 Anonymity, 82–83 Anonymous enrollment, 114–115 ANOVA (analysis of variance), 558–559 Appearance and demeanor, 391 Appendices, 594 Areán, Patricia, 113 Asch Experiment, 50 Assessment, measurement equivalence, 124 Atheoretical research studies, 56 Attributes, 57–60, 165 Attrition (experimental mortality), 265–266 Audience, 588 Auditing, 452 Authority-based practice, 14, 27, 63 An Author’s Guide to Social Work Journals, 588 Availability sampling, 355–357 Available records, 418–419 B designs, 313–314 Babbie, Earl, 579, 594 Back-translation, 121 Bad Blood: The Tuskegee Syphilis Experiment (Jones), 75 Bandwagon appeal, 20 Bartko, J. J., 560 Baseline, 292 Baseline phase, 302–304 Basics of Qualitative Research (Strauss and Corbin), 480 Baxter, Ellen, 439–440 Belcher, John, 443 The Bell Curve (Murray and Herrnstein), 101–102 Bellah, Robert, 462 Bell-shaped curve, 512 Bennett, William, 102 Benton, J. Edwin, 225 Berg, Bruce, 424 Beta weight, 559–560
Beutler, L. E., 127 Beveridge, W. I. B., 61 Bian, Yanjie, 218 Bias class, 363 cultural, 98–99, 121, 190–191 gender, 98–99, 377–378 journal ethics, 96–98 measurement, 261 measurement errors, 189–190 meta-analysis, 551 questionnaires, items and terms in, 219 recall, 282 in research studies, 7, 12–13 researcher, 451 respondent, 451 sampling, 359–360, 362–363, 377–378 selection, 250 social-class, 363 Bilingual interviewers, 114, 120 Billups, James, 424–425 Bivariate analysis, 508, 516 Bivariate parametric tests, 558–559 Bivariate tables, 517, 519 Black, B. M., 274 Black, Donald, 414 Blair, Johnny, 226 Blind ratings, 261 Bloom, Martin, 304, 445 Bogdan, Robert, 342–345 Bonferroni adjustment, 564 Bonney, Charles, 220–221, 246–247 Book research report, 589 Booth, Charles, 382 Boston Globe, 102 Bottom-up searches, 32–33 Brannen, Stephen, 258 Brown, L. Dave, 278 Buckingham, Robert, 87 Budget, 585 Burnette, Denise, 522–523 Bush, George, 352f, 354–355 Buxtun, Peter, 75 Cambridge Scientific Abstracts, 30 Campbell Collaboration, 33 Campbell, Donald T., 248, 265–267, 285, 377–378
Carey, Raymond, 249, 327–328 Carmines, Edward, 200 Carryover effects, 310 Case assignment protocol, 285 Case studies, 39–40, 345–346, 388–389, 443–445 Case-control design and studies, 282–283 A Case-Control Study of Adverse Childhood Experiences as Risk Factors for Homelessness (Herman), 283–284 Case-oriented analysis, 478–479 Caseworkers, 425, 427 Catalytic authenticity, 453 Cauce, A. M., 122 Causal inference, 243, 245–247 CBS News, 514 Central tendency, 509–512 “Certificates of Confidentiality”, 83–84 Chance, 528–529 Changing intensity design, 310 Chicago Manual of Style, 590 “Chicago School”, 438 Childcare for participants in studies, 113 Child’s Attitude toward Mother (CAM) scale, 206–208 Chi-square test, 557 Chow, Julia, 416 Christian Science Monitor, 511 Chronicle of Higher Education, 96 CIAO, 30 Class bias, 363 Classic experimental design, 254–255 Client logs, 445 Client preferences, 37, 41. See also Therapeutic alliance Client recruitment and retention, 285–286 The Clinical Measurement Package (Hudson), 203, 206–208 Clinical social workers, 425 Closed-ended questions, 216 Cluster sampling, 373, 421 Cochrane Collaboration, 33 Code categories, 422, 505–507 Code of Ethics of the National Association of Social Workers (NASW), 11, 88–89, 101 Code notes, 485–486 Code past, 494f Codebook, 507–508 Codes, creating, 483 Coding, 421–422, 482–485, 491f, 494f, 501, 504–505 Coefficient alpha, 198–199 Cognitive-behavioral model, 55 Cohen, Jacob, 545, 553–554, 556
Cohort studies, 149–152 Coleman, James, 101 Collapsing extreme categories, 514, 515t Collapsing response categories, 514–515 Common sense, 14 Community forum, 339 Community leaders, endorsement of, 111–112 Community Mental Health Journal, 147 Comparability, 278 Comparative analysis, 427 Comparative data, 428 Comparative research, 429–431 Comparative studies, 412 Comparison group, 272 Compassion, 8–11, 131 Compensation for study participation, 112 Compensatory equalization, 265 Compensatory rivalry, 265 Complete observer, 458, 461 Complete participant, 457–460 Components of Scientific Theory (Leming), 58 Composite measurement, 188 Comprehensive observation, 12 Computer programs, 424, 487–488, 504 Computer-assisted telephone interviewing (CATI), 395–396, 508 Comte, Auguste, 49 Concept mapping, 486–487 Concept(s), 57, 165, 478, 483 Conceptions, 171 Conceptual equivalence, 123–124, 126f Conceptual explication, 165 Conceptual framework, 580 Conceptual order, 174–175 Conceptualization, 131, 165, 172–175, 422 Conclusion, research report, 593–594 Concurrent validity, 200 Confidentiality, 82–83, 112, 117–118 Conflicts of interest, 551 Conflicts of interest statement, 552 Conrad, K., 341 Conscious sampling bias, 359–360 Consent form, 78, 79f–82f, 120 Consent procedures, 111 Constant comparative method, 479 Construct validity, 200–202, 204f, 206 Constructing measurement instruments, 135–136, 137f, 141f
Constructivism in Practice: Methods and Challenges (Franklin and Nurius), 322 Constructs, 171–173 Contact, establishing initial, 457 Contamination of the control condition, 285 Contemporary positivism, 49–52, 451 Content analysis, 418–427 Content validity, 200–201, 204f Contingency questions, 222–223 Contingency table, 223f, 517 Continuous variables, 513–514 Contracts, 576–577 Control conditions, 285 Control group, 253, 260–261 Control variable, 168 Convenience sampling, 355–357 Convergent validity, 202, 205f Conversation analysis (CA), 482 “Convert”, 461 Conway, Patricia, 545, 554 Cook, Thomas D., 248, 265–267, 285 Cooney, Margaret, 442 Corbin, Juliet, 478, 480, 483, 485 Coronado, N., 122 Correlation and Causality (Bonney), 246–247 Correlational studies, 35 Cost, 449 Cost-benefit analysis, 330–331, 332t–334t Cost-effectiveness analysis, 330–331 Coulton, Claudia, 416 Council on Social Work Education (CSWE), 325, 376 Couper, Mick, 397–398 Cournoyer, B., 38 Cover letter, 385, 386f Cover materials, 578 Cramer’s V, 540 Criterion-related validity, 200, 204f, 206 Critical incidents sampling, 447 Critical social science, 51–52 Critiques of social work, 8 Cross-case analysis, 479 Cross-sectional studies, 148–149, 152, 281–282 Crowder, Carla, 9–10 Cullum-Swam, Betsy, 480 Cultural bias, 98–99, 121–122, 176, 190–191 Cultural competence acculturation, 108 challenges to, 111–118 community members as research staff, 112 data analysis, 107–108
developing, 109–111 insensitivity, impact on research climate, 109–110 interpretation, 107–108 measurement in, 107, 118–127 minority participation in studies, 107–117 problematic issues, 127 research participants, 106–107 sensitivity in questionnaires, 220 Cultural context of social work research, 73 Curtin, Richard, 396–397 Curvilinear relationship, 167–168
Dallas Morning News, 94 Daly, John, 225 Daly, Kerry, 403–404, 522–523 Dannemiller, James E., 396 Data analysis, 107–108, 311, 423, 478, 528–529, 585 archives, 410 cleaning, 508 entry, 508 existing, 408, 416–419 gathering, 298–299 missing, 413 quantification process, 301–302 sources, 299 Databases, 30–31, 144, 411t Data-collection methods, 349, 584–585 Davis, D., 586 Davis, Fred, 461 Deductive method, 60–63 DeFleur, Lois, 414 Degrees of freedom, 557 DeJong, C., 190 Demographic Yearbook, 411 DePanfilis, Diane, 415 Dependent variable, 57, 60, 166 Depth, 421 Description, 134, 137f, 141f Descriptive statistics, 520–521 Design methods, 584–585 Detail vs. manageability, 514 Deviant case sampling, 357, 446–447 Dewey, Thomas E., 353–354 Diagramming the religious sources of Anti-Semitism, 562f Diffusion of treatments, 263–265 Dillman, Don, 388, 394 Dimension, 173, 178, 203, 232 Direct behavioral observation, 192–193, 300 Direct observables, 171–172, 178, 180–181 Direct practice, 425 Directors, women film, 490–493
Discrete variables, 513–514 Discriminant function analysis, 560–561 Discriminant validity, 202, 205f Discussion, research report, 593–594 Dismantling studies, 257–258 Dispersion, 512 Disproportionate stratified sampling, 372, 373f Distributions, 509 Dix, Dorothea, 319, 427–428 Documentation, 414 “Don’t knows”, 515–516 Don’t Talk to the Humans: The Crackdown on Social Science Research (Shea), 92 Dork Depression Scale, 204f–205f Duneier, Mitchell, 438 Duration, 301 EBP process, 38 Ecological fallacy, 155–157 Economic barriers for participants in studies, 113 Economic reductionism, 158 Edmond, Tonya, 264 Effect size, 540–542, 543t, 544, 555t Effective Casework Practice (Fischer), 419 Ego involvement, 19 Eichler, Margrit, 99 Einstein, Albert, 430–431 Element, 361 Elemental memo, 486 The Elements of Style (Strunk and White), 588 Emic perspective, 461–463 Empirical clinical practice model, 27 Empirical evidence, 12 Empirical support, 57 Empirically supported treatments (EST), 38–40 Empowerment standards, 453 Encounters, 437 Episodes, 437 Epistemological paradigm, 451 Epistemology, 4 Epstein, William, 96–98, 552 Equal probability of selection method (EPSEM), 360–361 Errors, 536 Estimated sampling error, 366t Ethics analysis, 84–85 anonymity, 82–83 benefits and costs, weighing, 85–86 bias, 98–99 Code of Ethics of the National Association of Social Workers (NASW), 88–89
confidentiality, 82–83 controversies, 92–98 deception, 83–84 informed consent, 76–78 Institutional Review Board (IRB), 75–76, 89–92 participants, no harm to, 78, 82 professional, 8–11 reporting, 84–85 research, 453 right to receive services vs. responsibility to evaluate service effectiveness, 86, 88 voluntary participation, 76–78 Ethnicity, 28–29 Ethnocentrism, 109 Ethnography, 438–440 Ethnomethodology, 209–210 Etic perspective, 461–463 Evaluating a Sexual Assault and Dating Violence Prevention Program (Weisz and Black), 274 Evaluations, 135, 137f, 141f Evidence, empirical, 12 Evidence-based medicine (EBM), 27 Evidence-Based Medicine: How to Practice and Teach EBM (Sackett), 27 Evidence-based practice (EBP) case example, 38–40 controversies and misconceptions, 40–42 evaluations, 38 feedback, 38 historical background, 26–27 intervention, 36–38, 133 introduction to, 26 nature of, 27–28 newer integrative model, 37t online surveys, 400–401 process vs. practice, 38 questions, formulating, 28–30 randomized clinical trials (RCTs), 35–38 research, 30–34 single-case designs, 295–296 social work research, purposes of, 137f study appraisal, 34–36 Ex post facto hypothesizing, 18–19 Examining available records, 193 Exhaustive response categories, 216 Existing data, 408, 416–419 Existing statistics, 410–411 Experiential reality, 3–4 Experimental demand characteristics, 262 Experimental designs, 253 Experimental group, 253 Experimental intervention, 260–261 Experimental mortality, 265–266
Experimenter expectancies, 262 Experiments, 34–35, 50, 284, 286–287 Explanation, 56, 135, 137f, 141f Exploration, 133–135, 137f, 141f External evaluators, 323–325 External validity, 247–248, 267–268 Extraneous variables, 168 Eye movement desensitization and reprocessing (EMDR), 39–40, 258, 551 Face validity, 198, 200, 204f, 230 Factorial validity, 202–203, 205f Family Support Act 1988, 442 Fanshel, David, 259 Feasibility, 139–142, 414–415 Feasibility of Providing Culturally Relevant, Brief Interpersonal Psychotherapy for Antenatal Depression in an Obstetrics Clinic . . . (Grote), 254 Federal Register, 576 Feinberg, B., 395 Feminist paradigm, 51–52 Fevola, Antonio, 409 Field tracking, 117 File drawer effect, 552 Filing, 483 Film directors, 490–493 FirstSearch, 30 Fischer, Joel, 304, 419, 445 Fisher’s exact test, 558 Fittingness, 452 Flexibility, 449 Focus groups, 110–111, 340–341, 468–471 Focus groups and feminist methods: the voices of battered women in Japan, an illustration . . . , 471 Follow-up mailings, 387–388 Forms, 89–90 Foundation Center, 576 Foundation Directory, 576 Fox News Sunday, 511 Franklin, Cynthia, 322 Fraser, Mark, 256 Freewill notions, 63 Frequency, 301 Frequency distribution, 509 Funding sources, 575–577 Gall, John, 590–591 Gallagher-Thompson, Dolores, 113 Gallup Organization, 218 Gallup, George, 353–354 Gambler’s fallacy, 19–20 Gender Advertisements (Goffman), 481 Gender bias, 98–99, 176, 377–378
General Social Survey (GSS), 189, 397–398, 410, 507 Generalizability, 77, 449–451 Generalization of effects, 309 Gilgun, Jane, 144, 403–404, 442, 522–523 Giuli, C., 206, 208 Glaser, Barney, 424, 438, 479–480, 483, 486 Glass, G., 545, 550 Goal attainment model, 329 Goffman, Erving, 481 Going native, 451 Gold, Raymond, 458 Goldman, H., 427–428 Google, 31–32, 144–147, 576 Google Scholar, 31–32, 145, 148 Grants, 576–577 Grantsmanship Center, 575 Greene, Robert, 225 Grote, Nancy, 254 Grotevant, Harold, 233, 234f–238f Grounded theory, 438, 440–443 Grounded theory method (GTM), 479–480 Grounded Theory Methodology on the Web (Morin), 480 Group interviewing, 469–470 Group work, 425 Groups, 153, 437 Groupthink, 470 Grundy, John F., 259 Guba, E., 452 Guide Research Proposals Used by the University of Texas at Austin’s Institutional Review Board, 90f Guttman scaling, 231 Haase, R. F., 545 Handel, Gerald, 403–404, 522–523 Hard-to-identify populations, 412 Harris, B. G., 586 Harris Interactive, 398 Harvard Educational Review, 101 Haven in a Heartless World (Lasch), 8 Herald Tribune, 514 Herman, Daniel, 283–284 Hermeneutics, 430–431 Herrnstein, Richard J., 101–102 Higginbotham, Leon Jr., 428 Historic Themes and Landmarks in Social Welfare Research (Zimbalist), 427 Historical analysis, 427 Historical data, 428 Historical research, 429–431 History intervention plans, change limitations associated with, 310
program effectiveness, 248 program evaluation, 319–320 survey research, 382 Hogarty, Gerard, 9, 551–552 Hohmann, A. A., 107 Hollingshead, A., 122 Homogeneous sample, 447 Hopper, Kim, 439–440 Hospice vs. hospital care, 69 Howell, Joseph, 449 Huberman, A. Michael, 478 Hudson, Walter, 203, 206–208, 322 Human obedience experiments, 93–94 Human subject concerns, 412 Humphreys, Laud, 94 Hurh, Won Moo, 125–126 Hush Little Baby: The Challenge of Child Care, 189–190 Hypothesis, 18–19, 57–63, 166–168, 180–181, 528–529 Ideal types, 431 Identification (ID) number, 385–386 Ideology, 46, 100–101 Idiographic model, 64–67 Illogical reasoning, 19–20 An Illustration: Living with the Dying—use of Participant Observation, 87 An Illustration of a Quasi-Experiment Evaluating a Family Preservation Program (Rubin), 279–280 Imitation of treatments, 263–265 Immigration experience, 108 Implicit stratification, 371–372 Independent variable, 57, 60, 166 Index tree, 493, 495f Indexes, 229–230 Indicators, 172–173 Indirect observables, 171–172 Individualistic fallacy, 157 Individuals, 152–153 Inductive method, 60–63 Inference, 245–247 Inferential statistics, 520–522, 550, 562–563, 566–569 Informants, 358–359 Information needs of agency, 139 Informed consent, 76–78, 111 In-house evaluators, 323–325 Inquiry, premature closure of, 20–21 Insensitivity, cultural, 108–109 Insider understanding, 461–463 Institutional Review Board (IRB), 75–76, 78, 82, 85–86, 89–91, 120, 412, 585–586 Instrumentation changes, 248–249 Integrating memo, 486 Intensity sampling, 447
Interdisciplinary triangulation, 452 Internal consistency reliability, 196–198, 206 Internal validity, 247–248, 273 Internet resources, 31–32, 34, 89, 144–148, 397–398, 411t, 575–576, 578 Interobserver reliability, 196 Interpretivism, 50–52 Interquartile range, 512 Interrater reliability, 196 Interrupted time-series with a nonequivalent comparison group time-series design, 278 Inter-University Consortium of Political and Social Research (ICPSR), 410 Interval measures, 229, 503 Interval recording, 301 Intervening variables, 169 Intervention fidelity, 284–285 Interventions, 6–8, 36–38, 292, 310 Interview guide, 215, 465–467 Interview schedule, 215 Interview surveys, 389–397, 399, 401–402 Interviewers, 113–114, 390–393 Interviewing. See also specific approaches Asians, 119–120 culturally competent, 118–119 group, 469–470 informal conversational, 464–465 measurement errors in, 192 qualitative methods, 463 recording observations, 470–474 Introduction, research report, 592 Inverse relationship, 167 Item analysis, 231 Jensen, Arthur, 101 Johnson, C., 254–255 Johnson, Lyndon, 429 Jones, James, 75 Jones, Lovell, 102–103 Journal articles, 562, 589 Journal bias, 96–98 Journal of Counseling Psychology, 545 Journal of Evidence-Based Social Work, 27 Journal of Social Work Education, 400–401 Judgmental sampling, 357 Julia, Maria, 424–425 Kane, R., 307 Kaplan, Abraham, 171 Keeter, S., 395 Kerry, John, 354–355 Key informants, 338–339, 457
KIDS COUNT, 410 Kim, Kwang Chung, 125–126 Kinnell, Ann Marie, 482 Knowledge, 3–4, 12–21 Knowledge Networks, 398 Known groups validity, 201, 204f, 208 Kolmogorov-Smirnov two-sample test, 558 Kronick, Jane, 211 Kuhn, Thomas, 47 Kvale, Steinar, 463 Lambda, 540 Landon, Alf, 352–353 Language problems, 120–121 Lasch, Christopher, 8 Laslett, Barbara, 429–430 Latent content, 421–422 Learning from Bad Examples (Bonney), 220–221 Leming, Michael R., 58 Leon, Joseph J., 152 Level of acculturation, 108 Level of significance, 532–533 Levels of measurement, 229–230, 501, 503–504 Leviticus, Book of, 483–485, 488 Library resources, 30–31, 144–145, 428–429 Lichtenwalter, Sara, 409 Liebow, Elliot, 521 The Life and Labour of the People of London, 382 Lien, Forrest, 10 Life history, 468 Life story, 468 Lifestyles, 438 Likert, Rensis, 215 Likert scale, 215–216, 231–232 Limited socialism, 382 Lincoln, Yvonna S., 452 Linguistic equivalence, 123–124, 126f Lipsey, Mark, 545 Literary Digest, 352–353, 394 Literature, relevant, 457 Literature review, 143–148, 179, 579–583, 592 Lofland, John and Lyn, 437, 448, 453, 461–462, 464, 472, 478, 483, 486 Logic model, 328, 341–342 Logistical problems, 326–327 Longhurst, Terri, 442 Losing Ground (Murray), 8 Lowe, Peggy, 9–10 Magnabosco, Jennifer, 322 Magnitude, 301
Mail distribution, 384–385 Mail return, 384–385 Mail tracking, 117 Manageability, 514 Managed care, 320–322 Manifest content, 421–422 Manning, Peter K., 480 Mann-Whitney U test, 558 Manson, S. M., 109, 111–112, 127 Marginals, 509 Marshall, C., 458 “Martian”, 461 Marx, Karl, 382 Marxist scholars, 431 Matching, 260 Matrix question format, 223f Maturation, 248, 251 Maximum variation sampling, 447 Maynard, Douglas, 482 McNemar nonparametric test, 558 McRoy, Ruth, 67, 233, 234f–238f, 425–426 Mean, 509–512 Measurement. See also Conceptualization; Operationalization bias, 261 constructing composite, 229–230 constructing qualitative, 232–233 cultural competence in, 107, 118–127 data gathering, 298–299 data sources, 299 errors, 188–189, 191, 193–194 instruments, 135–136, 137f, 141f, 179, 215 interval, 229, 503 issues, 296–298 levels of, 229–230, 501, 503–504 nominal, 501–502 ordinal, 229, 502–503 problem formulation and, 131 process, 188 proposals, writing, 580 ratio, 229, 503 in the real world, 175 reliability, 194–197 what to measure, 298 who should measure, 299 Measurement equivalence, 122–124, 126f Measures of association, 538–539 Media, 14–16 Median, 509–511 Median test, 558 Mediating variable, 169 Medium effect sizes, 544–545 Medline, 31, 145 Member checking, 452 Memoing, 485–486 Mental Retardation, 147
Mercer, S., 307 Meta-analysis, 32, 550–553 Methodological Problems in the Study of Korean Immigrants: Linguistic and Conceptual Problems (Hurh and Kim), 125–126 Methodology, 4, 7–8, 56, 119, 552–553. See also specific methods Methods, of a research report, 592 Metric equivalence, 123–124, 126f Miles, Matthew B., 478 Milgram, Stanley, 93–94 Milgram study, 93–94 Minnesota Multiphasic Personality Inventory (MMPI), 198, 200 Minority groups, 107–111. See also Cultural competence Miranda, 109–110, 114, 116 Missing data, 413 Mitchell, Richard G. Jr., 218 Mode, 509–511 Moderating variables, 169 Monette, D., 190 Monitoring survey returns, 385–387 Monitoring trends, 412 Morgan, David, 470 Morin, Gaelle T., 480 Morrissey, J., 427–428 Mullen, Edward, 41–42 Multiple measurement points, 292 Multiple pretests, 273–274, 275f Multiple regression analysis, 559 Multiple time-series designs, 278–279 Multiple-baseline designs, 307–309 Multiple-component designs, 309–311 Multistage cluster sampling, 373–375 Multistage designs, 373–374 Multivariate analysis, 508, 559–561 Multivariate statistical techniques, 412 Multivariate tables, 519–520 Murphy, Katrina, 225 Murray, Charles, 8, 101–102 Mutually exclusive answer categories, 216 My Lai tragedy, 93 Naive realism, 47 NASW News, 425 National Archive of Criminal Justice Data, 410 National Association of Social Workers, 97 National Data Archive on Child Abuse and Neglect, 410 National Institute of Drug Abuse (NIDA), 575
National Institutes of Health (NIH), 89, 107, 575–576 National Institutes of Health Guide for Grants and Contracts, 575 National Institute of Mental Health (NIMH), 325, 575 National Library of Medicine, 31 National Opinion Research Center (NORC), 398, 410 Naturalism, 438, 468 Nazi medical experimentation, 75 Needs assessment, 337–338 Negative case analysis, 452 Negative case testing, 424 Negative relationship, 167 New Republic, 102, 510–511 New York Times, 96, 514 Newmaker, Candace, 10 Nodes, 490, 493, 496f Nominal definition, 174–175 Nominal measures, 229, 501–502 Nomothetic model, 64–67 Nondirectional hypotheses, 533–534 Nonequivalent comparison groups design, 272–274 Nonparametric test, 557–558 Nonprobability sampling, 352, 355 Nonrational behavior, 50 Nonreactive research, 408 Nonresponse bias, 363–364 Nonsexist Research Methods (Eichler), 99 Normal curve, 512 Norton, I., 109, 111–112, 127 Notes, 472 Novelty and disruption effects, 263 NUD*IST (Nonnumeric Unstructured Data, Index Searching, and Theorizing), 488–493, 494f–496f Null hypothesis, 536–537, 556, 563 Numerical, end product of coding, 423 Numerical descriptions in qualitative research, 522 Nurius, Paula, 322 Objectivity, 12, 46, 68–70, 100–101 Observation, 16–18, 57, 62–63 Observation-based evidence, 12 Observer-as-participant, 458, 461 Obtrusive observation, 262, 300–301, 408 One-group pretest-posttest design, 251–252, 253f One-shot case study, 251, 253f One-tailed test of significance, 534–535 Online surveys, 397–399, 402 Open coding, 483 Open mindedness, 12
Open-ended questions, 216 Operational definition, 165, 170–171, 175–176, 180–181, 183–185, 297–298, 319 Operational notes, 485–486 Operationalization, 131, 165, 174–185 Opinions, clinical experts, 36 Oral history reviews, 468 Order effects, 310 Ordinal measures, 229, 502 Organizations, 437 Orme, John G., 304, 445 Ortega, D., 121–122 Outcomes approach logic model, 341, 342f Outcomes Measurement in the Human Services: Cross-Cutting Issues and Methods (Mullen and Magnabosco), 322 Overflow design, 280 Overgeneralization, 17, 18f OVID, 30 Ozawa, Martha, 416 Padgett, Deborah, 451 Pandey, Shanta, 416 Panel attrition, 151 Panel studies, 150–152 Paradigmatic flexibility, 52–53 Paradigms, 47. See also specific paradigms Parallel-forms reliability, 197–198 Parameter, 365, 557 Parametric test, 557–558 Parrish, Danielle, 400–401, 418–419 Parron, D. L., 107 Participant-as-observer, 458, 460–461 Participants in studies factors influencing, 114 intervention that disappoint or frustrate, avoiding, 266–267 minority and repressed populations, 108–111 observation of, 110 recruiting and retaining, 116 recruiting and retaining the participation of, 285–286 reimbursement, 266 sampling, 584 Participation, 459–460 Participatory action research (PAR) paradigm, 442–443 Passage of time, 248 Path analysis, 561 Path coefficients, 561 Patterns, 478 Patton, Michael Quinn, 447, 463, 466 Paulson, Robert I., 285–286
Pearson product-moment correlation (r), 540, 559 Peer debriefing and support, 452 Percentage down, 517 Percentaging a table, 516–517, 518f Personal interest, 139 Personal Responsibility and Work Opportunity Reconciliation Act 1996, 442 Philosophy, 45 Phone tracking, 117 Pilot studies, 250–251 Pitfalls of experiments, 284 Pittsburgh Survey, 382 Placebo control group design, 263 Placebo effects, 263 Plagiarism, 590–591 Planners, 320 Plausible, 478 Point-biserial correlation coefficient, 540, 544–545 Polansky, N., 138 Political influence, 243 Politics, 73, 323 Poll, 351t, 352f, 384 Popular media, 14–16 Population, 152–153, 361–365 Population Bulletin, 411 Porter, Stephen, 399 Proportionate reduction of error (PRE), 540 Posavac, Emil, 249, 327–328 Positive relationship, 167 Positivism, 49–50 Possible-code cleaning, 508 Postcard contents, 387 Postmodernism, 47–49, 52 Posttest-only control group design, 255 Posttest-only design with nonequivalent groups, 252–253 Post-traumatic stress disorder (PTSD), 29, 31–32, 64–66, 173, 229–230, 258, 314 Powers, G., 38 Practical pitfalls, qualitative techniques for avoiding or alleviating, 287 Practice evaluation, 243 Practice models, 55 Practices, 437 Prediction, 56, 136 Predictive validity, 200 Predictor variables, 560 Pre-experimental designs, 250–251, 254–256 Presser, Stanley, 226, 396–397 Pretest-posttest control group design, 255 Primary sources, 428–429 Probabilistic knowledge, 28, 63
Probability proportionate to size (PPS) sampling, 375–376 Probability sampling, 352, 354–355, 359, 367–368, 377 Probability theory, 365 Probes, 464 Problem formulation, 131–133, 142–143 Procedures, 89 Process evaluation, 336–337 Professional papers, 589 Program analysts, 320 Program evaluation administrative problems, 326–327 compliance with and utilization of, 328 goal attainment, problems and issues, 331, 335 historical background, 319–320 in-house vs. external evaluators, 323–325 logistical problems, 326–327 managed care, 320–322 monitoring program implementation, 335–336 needs assessment, 337–338 outcome and efficiency, 329–330 planning, 327–328 politics of, 323 process evaluation, 336–337 program planning, 337–338 qualitative approach, 342, 345 quantitative and qualitative approaches, 344 types, 329 utilization of findings, 325–326 validity of inferences, 243 Prolonged engagement, 451–452 Prolonged exposure therapy (PET), 39–40 Proportionate stratified sampling, 372 Proposal, 162, 575, 577–579 Pseudoscience, 21, 22f Psychoeducational approaches, 15 Psychological reductionism, 158 Psychometric equivalence, 123 Psychosocial model, 26, 55–56 PsycINFO, 30, 145 Public Health Service Act, 83 Public Opinion Quarterly, 397 PubMed, 145 Purpose of study, explain, 458 Purposes, 136, 137f Purposive sampling, 357, 448 Qualitative analysis, 478, 488 Qualitative data analysis, 424–426 Qualitative data, computer programs, 487–488 Qualitative data processing, 482
Qualitative inquiry, 564–566 Qualitative methods a comparison of quantitative and qualitative approaches to asking people questions, 238–239 contemporary positivism, 50 contemporary positivist standards, 451–452 defined, 437 descriptive studies, 134 empowerment standards, 453 evaluation research, an illustration of a qualitative approach to, 342 evaluation standards, 451 and evidence-based practice, 42, 137f feminist methods, 468 field, preparing for the, 457 grounded theory in studying homelessness, an illustration of, 443 informal conversational interviews, 464–465 interview guide approach, 465–466 interviewing, 463 naturalistic, ethnographic studies of homelessness, two illustrations of, 439–440 numerical descriptions, 522 observer, roles of the, 458 operationalization and its complementarity with a quantitative perspective, illustrations of the qualitative perspective on, 184–185 overview, 435 participants, relations to, 461–463 program evaluation, combining quantitative and qualitative methods in, 344 qualitative measures, constructing, 232–233 recording observations, 470, 472–474 reliability and validity in, 209–212 research ethics, 453 research paradigms, 438, 440–442 sampling, 445–448 scientific research inquiry, 66–68 selecting informants in, 358 single-case evaluation, role in, 315 social constructivist standards, 452–453 specific methods, 457 strengths and weaknesses of, 448–449 survey research methods, combining, 403–404 topics, 437–438
Qualitative Methods in Family Research (Gilgun, Daly, Handel), 403–404, 522–523 Qualitative Methods in Social Work Research (Padgett), 451 Qualitative reports, 594–595 Qualitative research proposal, 586–587 Qualitative Studies in Social Work Research (Riessman), 522–523 Qualitative Techniques for Experimental or Quasi-Experimental Research, 288 Quantitative analysis, 501 Quantitative data analysis, 424, 499 Quantitative methods, 50, 66–68, 134, 137f, 238–239, 342, 344–345, 437 Quantitative research proposal, 586–587 Quasi-experimental designs, 243, 272 Quasi-experiments, 35, 284 Questionnaire answer selection, 222 biased items and terms, 219 clarity of the, 216 composite illustration, 226 construction, 219–222 contingency questions, 222–223 cultural sensitivity, 220 data, handling missing, 230–231 double-barreled questions, 216–217 formats for respondents, 222 instructions, 225–226 interviewers, 390–392 matrix questions, 223–224 measures, constructing composite, 229–230 ordering questions in a, 224–225 pretesting, 226 questions and statements, 215–216 random errors, 191–192 relevancy, 218 respondent issues, 218 sample, 226f–229f scale construction, 230–231 voluntary participation and informed consent, 76–77 Questions in evidence-based practice (EBP), 28–29, 34, 35t guidelines, 215 matrix, 223–224 quantitative vs. qualitative approaches, 238–239 research, attributes of good, 139–140 research vs. hypotheses, 166–167 social work research, selection for, 136, 138
Quoss, Bernita, 442
Quota matrix, 260
Quota sampling, 353–354, 357–358, 446
Race, 101–103
Random error, 191, 193
Random sampling, 420
Random selection, 361
Random-digit dialing, 361, 394
Randomization, 258–259
Randomized clinical trials (RCTs), 34–35, 40, 42
Randomized experiments, 34–35
Range, 512
Range of variation, 176–177
Rank, Mark, 522–523
Rank order, 502
Rapport, establishing, 457–459
Rasinski, Kenneth, 189
Rates under treatment, 339
Ratio measures, 229, 503
Reactivity, 451
Reading and Evaluating Documents (Aminzade and Laslett), 429–430
Reality, conceptions and, 174
Reality, experiential, 3–4
Reality, nature of, 46
Reality agreement, 3–4
Reasoning, illogical, 19–20
Rebirthing therapy, 9–10
Recall bias, 282
Reconstructed baseline, 304
Record keeping, 423
Records, 178, 180–181, 193
Recruitment of study participants, 111–116, 285–286
Reductionism, 157–158
References, 594
Referral sources, 116
Reflexivity, 462
Reification, 174
Reimbursement, 266
Reismann, Catherine, 522–523
Relationship, 57, 437
Relationship magnitude, 539–540
Relevance, 139–140
Reliability, 179, 194–196, 205–212, 299–300, 413–414
Reminder calls, 116
Replication, 13, 17, 38, 306
Reporting, 492f, 562, 587–591
Representativeness, 360–361
Request for proposals (RFP), 575–576
Research. See also Cultural competence; Ethics; Social work research; specific methods; Survey research
  analysis, 84–85
  benefits and costs, weighing, 85–86
  bias, 451
  community members as research staff, 112
  compensation, 112
  confidentiality, 112
  contracts, 576–577
  cultural context of, 73
  cultural insensitivity, 108–109
  design, 245
  ethics, 453
  evidence-based practice, research hierarchy, 35t, 42
  Federal exemption categories, 91
  grants, 575–576
  mental health example, 9
  methods, 10–11
  misleading studies, 551–552
  note, 589
  participants, 75–78, 82–83, 110–111, 285–286
  politics of, 73, 99
  proposal, 575, 577–579
  publication of, 7
  quality, 7–8
  reactivity, 261–265
  reporting, 84–85, 587–591
  resources, 31–34, 116
  social, 101–103
  social work practice, utility in, 2, 10
  studies, 6–8, 56, 551–552
  topics and questions, 136, 138–142
  understanding and use of, 5–9
Research Committee of the Eye Movement Desensitization and Reprocessing International Association (EMDRIA), 576
Resentful demoralization, 265
Respondent bias, 451
Response rate, 388, 396–397
Results, 311–313, 593
Retention of participants in studies, 116, 285–286
Retrospective baseline, 304
Retrospective data, 282
Reviews
  Cochrane Collaboration, 33
  on the effectiveness of social work, 5–6
  Campbell Collaboration, 33
  Institutional Review Board (IRB), 89, 91–92
  literature, 143–148, 179, 579–583, 592
Richey, C. A., 121–122
Richmond, Mary, 26–27
Right to receive services vs. responsibility to evaluate service effectiveness, 86
Robinson, Robin, 468
Rocky Mountain News (Crowder and Lowe), 9–10
Rodwell, J., 453
Roffman, Roger A., 114–115
Rogler, L., 122
Roles, 437
Rosenhan, D. L., 77
Rossman, G., 458
Rothman, Ellen, 428
Rubin, Allen
  effect sizes, 545
  evidence-based practice (EBP), 400–401, 418–419
  An Illustration of a Quasi-Experiment Evaluating a Family Preservation, 279–280
  literature review, 579
  multiple regression analysis, 559
  program evaluation, 324
  qualitative interviewing, 463, 465
  references and appendices, 594
  sample size, 538
  A Social Work Dissertation that Evaluated the Effectiveness of EMDR, 264
  statistical power analysis, 554
  Statistics for Evidence-Based Practice and Evaluation, 311, 513
  support group intervention, 245
Sackett, Evidence-Based Medicine: How to Practice and Teach EBM, 27
Sales, Esther, 409–410, 412
Sample, 351
Sample size, 365, 367, 538, 555t, 556
SamplePower, 367
Sampling
  bias, 359–360, 362–363, 377–378
  error, 365–367, 373–374, 528–529
  frame, 353, 362–365
  interval, 369
  participants in studies, 584
  ratio, 369
  techniques, 115, 351–358, 420–421, 469
  unit, 361
Sampling social work students, illustration, 376–377
Sandelowski, M., 586
Scalar equivalence, 123
Scales, 179, 181–183, 206–208, 229–231
Schedule, 585
Schilling, Robert, 256
Schizophrenia Bulletin (Bartko, Carpenter, McGlashan), 560
Schuerman, John, 96–98
Science, 4
Scientific inquiry, 1, 4, 16. See also Research
Scientific method, 11–13, 38
Search engines, 30–33, 144–145
Secondary analysis, 408–417
Secondary sources, 428–429
Selection biases, 250, 252
Selective observation, 17–18
Self-administered questionnaires, 215, 384–385, 399–400
Self-esteem, 131
Self-mailing questionnaire, 384–385
Self-monitoring, 300
Self-reports, 178, 180–181, 191–192
Semantic differential, 232
Semiotics, 480–481
Settlements, 438
Shadish, William, 265–267, 285
Shaffir, William B., 462
Shea, Christopher, 92
Significance, power of test of, 555t
Significance levels, 532–533
Significance tests, 562
Signs, 480f, 481
Silverman, David, 482, 523
Simple random sampling (SRS), 367–368
Simple time-series designs, 275–278
SINET: A Quarterly Review of Social Reports and Research on Social Indicators, Social Trends, and the Quality of Life, 411
Singer, Eleanor, 396–397
Single-case designs, 294–296, 304–305
Single-case evaluation designs, 35, 243, 292–294, 315
Single-case research studies, 313
Single-subject designs, 294
Single-system designs, 294
Smith, M., 545, 550
Snowball sampling, 115, 358, 446
Snyder, Shelita, 225
Social
  adjustment, 131
  artifacts, 154
  groups, 153
  indicators, 339–340
  research, 101–103, 408–409
  status, 481
  worlds, 438
Social Casework, 425
Social constructivist standards, 452–453
Social desirability bias, 189–191
Social Diagnosis (Richmond), 26–27
Social science methodology, 4
Social Sciences Abstracts, 30
Social scientific theory, 53
Social Service Abstracts, 30
Social Service Review, 96–97, 569
Social work
  critiques of, 8
  education programs, 376–377
  operationalization in, 178–179
  practice models, 55
  research, 5–8, 73
  research reports, 587–590
  single-case designs, 294–296
  utility of theory in, 54–55
Social Work Abstracts, 144, 147
A Social Work Dissertation that Evaluated the Effectiveness of EMDR (Edmond, Rubin, and Wambach), 264
A Social Work Experiment Comparing the Effectiveness of Two Approaches to Court-mandated Spouse Abuse Treatment (Brannen), 258
A Social Work Experiment Evaluating Cognitive-Behavioral Interventions with Parents at Risk of Child Abuse (Whiteman, Fanshel, and Grundy), 259
A Social Work Experiment Evaluating the Effectiveness of a Program to Treat Children at Risk of Serious Conduct Problems (Fraser), 256
A Social Work Experiment Evaluating Motivational Interviewing (Schilling), 256
Social Work Research, 355
Social work research
  cross-sectional studies, 148–149, 152
  data analysis, 158–159
  data collection, 158
  data processing, 158
  interpretation, 159
  literature review, 143–148
  longitudinal studies, 149–151
  problem formulation and, 142–143, 158
  process, diagramming the, 159–162
  proposal, 162
  purposes, 133–136
  report, writing, 159
  research design, 158
  topics and questions, 136, 138–142
  units of analysis, 151–154
Social worker, 425
Social-class bias, 363
Sociological Abstracts, 30
Sociometrics Social Science Electronic Data Library, 410
Solomon, Phyllis, 285–286, 415, 545
Solomon four-group design, 257
Sorting memo, 486
Specifications, 393
Specificity, 421
Split-halves method, 197
Spot-check recording, 302
Spreadsheet for qualitative analysis, 488f
SPSS, 504–505, 507–508
Spurious relationship, 168–169
Stability, 196
Stakeholders, 327–328
Standard deviation, 512–513
Standardized Open-Ended Interview, 467–468
Standardized Open-Ended Interview Schedule, 234f–238f
Standardized regression coefficient, 559–560
Stanley, J., 248
Static-group comparison design, 252, 253f
Statistical Abstracts of the United States, 410
Statistical data, 417
Statistical Package for the Social Sciences (SPSS), 367
Statistical power analysis, 553, 556, 563–564, 584
Statistical Power Analysis for the Behavioral Sciences (Cohen), 553, 556
Statistical regression, 249–250
Statistical significance, 311, 529–530, 556–557
Statistics, 410–411, 520–522, 550, 562–563, 566–569
Statistics for Evidence-Based Practice and Evaluation (Rubin), 311, 513
Stebbins, Robert A., 462
Stimulus-response theory, 390
Stratification, 370, 374–375
Stratified sampling, 369–371, 421
Strauss, Anselm, 424, 438, 478–480, 483, 485
Straw person argument, 20
Street Corner Society (Whyte), 438
Strong effect sizes, 544–545
Structural equation modeling, 561
The Structure of Scientific Revolutions (Kuhn), 47
Strunk, William Jr., 588
Studies, unpublished, 553
Study population, 361
Style guides, 590
Subcultures, 438
Subjectivity, 68–70, 449–450
Substantive significance, 311, 545–546
Sue, Stanley, 123–124
Sullivan, T., 190
Summative evaluations, 319
Survey, 384
Survey Monkey, 399
Survey research
  communities or target groups, 340
  cover letter, 385, 386f
  follow-up mailings, 387–388
  historical background, 382–383
  identification (ID) numbers, 385–386
  interviewing guidelines, 391–392
  mail distribution, 384–385
  mail return, 384–385
  method comparisons, 399, 401–402
  monitoring returns, 385–387
  qualitative research methods, combining, 403–404
  response rate, 388
  responses, 391–392
  secondary analysis, 408–409
  strengths and weaknesses of, 403–404
  topics, 383–384
Switching replication, 274–275, 276f
Symbolic realism, 462
Systematic error, 188–189, 193
Systematic observation, 12
Systematic sampling, 368–369, 371–372, 420
Tactical authenticity, 453
Tally sheet, 423t
Tandon, Rajesh, 278
Tape recording, 470, 472
Target problems, 297–298
Taylor, Steven, 342–345
Tearoom Trade: Impersonal Sex in Public Places (Humphreys), 94
Technical support, 412–413
Telephone surveys, 394–395, 402
Testing, 248
Test-retest reliability, 196–197, 206
Texas Welfare Study, 94–96
Theoretical notes, 485–486
Theoretical sampling, 447–448
Theoretical sampling distributions, 530–535
Theory, 45, 53–57, 478
Theory-based logic model, 341, 343f
Therapeutic alliance, 55. See also Client preferences
Therapists, 425
Thinking topics, 437
Thorelli, I., 559
Thought Field Therapy (Johnson, et al.), 254–255
Threats to internal validity, 35
Thurstone scaling, 231
Time dimension, 147–148
The Time Dimension and Aging (Leon), 152
Title, of a research report, 591
Todd, Tracy, 322
Top-down searches, 32–33
Tracking methods, 117, 267
Tradition, 13–14
Training requirements, 89
Transferability, 452
Translation equivalence, 123
Translation validity, 120
Transparency, 552
Transportation for participants in studies, 113
Trauma-focused cognitive behavioral therapy (TFCBT), 258
Trend studies, 149, 152
Triangulation, 194, 298, 452, 584
Truman, Harry, 353–354
Trustworthiness, 451
Tuckel, P., 395
Tuskegee syphilis study, 75, 76
Two-tailed tests of significance, 533–534
Type I error, 536–537, 550, 553, 556, 564–565, 568
Type II error, 537–538, 550, 553–554, 556, 564–566, 568
Type III error, 564–566
Unconscious sampling bias, 359–360
Understanding, 136, 430–431, 448–449
Unidimensionality, 230
Uniform Crime Reports, 414
Units of analysis, 151–158
Univariate analysis, 508–509
University of California–San Diego Social Science Data on the Internet, 410
University of Michigan's Survey of Consumer Attitudes, 396–397
Unlikely coincidences, 292
Unobtrusive observation, 194, 262, 300–301, 408
Unpublished studies, 553
Utility, 514
Validity, 198–202, 204f, 205–212, 247–248, 261, 299–300, 413
Values, 53–54
Variable-oriented analysis, 478–479
Variables, 57–60, 165–167, 182f
Variance, 230
Variations, 176–177
Venn diagrams, 560
Videka-Sherman, Lynn, 551–552
Visual pattern, 311
Visual significance, 311
Voice Capture (Dannemiller), 396
Voluntary participation, 76–78
Waechter, D. M., 545
Walker Research, 395
Wallace, Walter, 62–63
Wallace model, 62f
Wambach, Kathryn, 264
Washington Post, 102
Watson, J., 122
Weak effect sizes, 544–545
Webb, Eugene J., 408
Weber, Robert Philip, 431
Websites. See Internet resources
Weiss, Carol, 278
Weisz, A. N., 274
Wheel of Science, 62f
Whitcomb, Michael, 399
White, E. B., 588
Whiteman, Martin, 259
Whyte, William Foote, 438
Wikipedia, 145
Wilcoxon sign test, 558
Wilson, Camilo, 397–398
Women film directors, 490–493
Working papers, 589
World Population Data Sheet, 411
Writing research proposals, 575, 577–579
Written self-reports, 191–192
Yahoo, 144–145, 576
Yoshihama, Mieko, 471
Yu, L., 121
Yule's Q, 540
Zeller, Richard, 200
Zerbib, Sandrine, 490–493
Zhang, J., 121
Zimbalist, Sidney, 100, 427
Zippay, Allison, 446
PRACTICE-RELATED ISSUES

PART I: AN INTRODUCTION TO SCIENTIFIC INQUIRY IN SOCIAL WORK

Chapter 1 ■ Why Study Research?
✸ Review of practice effectiveness research
✸ Utility of research to practitioners (examples)

Chapter 2 ■ Evidence-Based Practice
✸ Evidence-based practice
  a. Historical background
  b. Nature of
  c. Steps in
  d. Controversies and misconceptions about

Chapter 3 ■ Philosophy and Theory in Social Work Research
✸ Play therapy illustration of role of theory
✸ Contracting and client satisfaction, illustration of relationship
✸ Social work practice models
✸ Treatment of PTSD illustration of nomothetic and idiographic models of explanation

PART II: THE ETHICAL, POLITICAL, AND CULTURAL CONTEXT OF SOCIAL WORK RESEARCH

Chapter 4 ■ The Ethics and Politics of Social Work Research
✸ Right to receive services vs. need to evaluate them discussed in relation to social work practice evaluation
✸ Ethical controversies regarding a study of social work journal bias and a study on social welfare reform

Chapter 5 ■ Culturally Competent Research
✸ Mental health services with African Americans, Asian Americans, and Latinos; parenting interventions; HIV/AIDS prevention interventions; services for the homeless; and caregiver burden

PART III: PROBLEM FORMULATION AND MEASUREMENT

Chapter 6 ■ Problem Formulation
✸ Practitioner involvement in evidence-based practice example of research purposes
✸ Research process illustrated with example of social work in a residential treatment facility
✸ Welfare reform example of narrowing research topics
✸ Treatment of sexually abused girls, example of research question
✸ Case management example regarding literature review

Chapter 7 ■ Conceptualization and Operationalization
✸ Symptoms of PTSD to illustrate indicators and dimensions of constructs
✸ Welfare policy reform illustration regarding hypotheses variables
✸ Operational definitions illustrated with examples from child welfare, community organizing, family therapy, and social work interviewing skill
✸ Child welfare practice illustration regarding qualitative perspective on operational definitions

Chapter 8 ■ Measurement
✸ Measurement error, reliability and validity illustrated regarding assessing paranoia, child welfare interventions, parent-child relationships, self-esteem, depression, interviewing skills, treating battered women, practice orientations, trauma symptoms, treating sex offenders, marital satisfaction, and others

Chapter 9 ■ Constructing Measurement Instruments
✸ Qualitative interview schedule regarding openness in adoption

PART IV: DESIGNS FOR EVALUATING PROGRAMS AND PRACTICE

Chapter 10 ■ Causal Inference and Experimental Designs
✸ This chapter is filled with practice examples throughout, particularly in regard to the internal and external validity of evaluations of practice effectiveness and how experiments attempt to control for threats to internal validity.

Chapter 11 ■ Quasi-Experimental Designs
✸ This chapter is also filled with practice examples throughout, this time with an emphasis on quasi-experimental evaluations of practice effectiveness. Also included is coverage of practical pitfalls in conducting evaluations in social work practice settings and how to prevent or alleviate them.

Chapter 12 ■ Single-Case Evaluation Designs
✸ This entire chapter is devoted to practitioners' use of research to evaluate their own practice. Virtually every word in it deals directly with practice-related issues.