Problem Solving A STATISTICIAN'S GUIDE
OTHER STATISTICS TEXTS FROM CHAPMAN AND HALL
Applied Statistics – D. R. Cox and E. J. Snell
The Analysis of Time Series – C. Chatfield
Decision Analysis: A Bayesian Approach – J. Q. Smith
Statistics for Technology – C. Chatfield
Introduction to Multivariate Analysis – C. Chatfield and A. J. Collins
Introduction to Optimization Methods and their Applications in Statistics – B. S. Everitt
An Introduction to Statistical Modelling – A. J. Dobson
Multivariate Analysis of Variance and Repeated Measures – D. J. Hand and C. C. Taylor
Statistical Methods in Agriculture and Experimental Biology – R. Mead and R. N. Curnow
Elements of Simulation – B. J. T. Morgan
Essential Statistics – D. G. Rees
Intermediate Statistical Methods – G. B. Wetherill
Probability: Methods and Measurement – A. O'Hagan
Elementary Applications of Probability Theory – H. C. Tuckwell
Multivariate Statistics: A Practical Approach – B. Flury and H. Riedwyl
Point Process Models with Applications to Safety and Reliability – W. A. Thompson, Jr
Further information on the complete range of Chapman and Hall statistics books is available from the publishers.
Problem Solving A STATISTICIAN'S GUIDE
Christopher Chatfield Reader in Statistics, University of Bath, Bath, UK
Springer Science+Business Media, N.Y.
© 1988, Chatfield. Originally published by Chapman and Hall in 1988.
ISBN 9780412286704
ISBN 9781489930170 (eBook) DOI 10.1007/9781489930170
This title is available in both hardbound and paperback editions. The paperback edition is sold subject to the condition that it shall not, by way of trade or otherwise, be lent, resold, hired out, or otherwise circulated without the publisher's prior consent in any form of binding or cover other than that in which it is published and without a similar condition including this condition being imposed on the subsequent purchaser. All rights reserved. No part of this book may be reprinted, or reproduced or utilized in any form or by any electronic, mechanical or other means, now known or hereafter invented, including photocopying and recording, or in any information storage and retrieval system, without permission in writing from the publisher.
British Library Cataloguing in Publication Data
Chatfield, Christopher, 1941–
Problem solving: a statistician's guide
1. Statistical mathematics
I. Title
519.5
Library of Congress Cataloging in Publication Data
Chatfield, Christopher.
Problem solving.
Bibliography: p.
Includes index.
1. Statistics. 2. Mathematical statistics. 3. Problem solving.
I. Title.
QA276.12.C457 1988 519.5 88-11835
To Peace
'The first thing I've got to do' said Alice to herself 'is to grow to my right size again; and the second thing is to find my way into that lovely garden. I think that will be the best plan'. It sounded an excellent plan, no doubt; the only difficulty was that she had not the smallest idea how to set about it. Alice's Adventures in Wonderland
by Lewis Carroll
Contents

Preface
Prelude
How to tackle statistical problems

PART I THE GENERAL PRINCIPLES INVOLVED IN TACKLING STATISTICAL PROBLEMS
1 Introduction
2 The stages of a statistical investigation
3 Formulating the problem
4 Collecting the data
5 ANALYSING THE DATA I General strategy
6 ANALYSING THE DATA II The initial examination of data
7 ANALYSING THE DATA III The 'definitive' analysis
8 USING RESOURCES I The computer
9 USING RESOURCES II The library
10 COMMUNICATION I Consultation and collaboration in statistical projects
11 COMMUNICATION II Effective report writing
12 Numeracy
SUMMARY How to be an effective statistician

PART II EXERCISES
A Descriptive statistics
B Exploring data
C Correlation and regression
D Analysing complex large-scale data sets
E Analysing more structured data
F Time-series analysis
G Miscellaneous
H Collecting data

PART III APPENDICES
APPENDIX A A digest of statistical techniques
  A.1 Descriptive statistics
  A.2 Probability
  A.3 Probability distributions
  A.4 Estimation
  A.5 Significance tests
  A.6 Regression
  A.7 Analysis of variance (ANOVA)
  A.8 The general linear model
  A.9 The generalized linear model
  A.10 Sample surveys
  A.11 The design of experiments
  A.12 Clinical trials
  A.13 Multivariate analysis
  A.14 Time-series analysis
  A.15 Quality control and reliability
APPENDIX B MINITAB and GLIM
APPENDIX C Some useful addresses
APPENDIX D Statistical tables

References
Index
Preface
There are numerous books on statistical theory and on specific statistical techniques, but few, if any, on problem solving. This book is written for anyone who has studied a range of basic statistical topics but still feels unsure about tackling real-life problems. How can reliable data be collected to answer a specific question? What is to be done when confronted with a set of real data, perhaps rather 'messy' and perhaps with unclear guidelines? Problem solving is a complex process which is something of an acquired knack. This makes it tricky to teach. The situation is not helped by those textbooks which adopt a 'cookbook' approach and give the false impression that statistical techniques can be performed 'parrot-fashion'. Part I of this book aims to clarify the general principles involved in tackling statistical problems, while Part II presents a series of exercises to illustrate the practical problems of real data analysis. These exercises are problem-based rather than technique-oriented – an important distinction. The book aims to develop a range of skills including a 'feel' for data, the ability to communicate and to ask appropriate searching questions. It also demonstrates the exciting potential for simple ideas and techniques, particularly in the emphasis on the initial examination of data (or IDA). This is essentially a practical book emphasizing general ideas rather than the details of techniques, although Appendix A provides a brief, handy reference source. Nevertheless, I want to emphasize that the statistician needs to know sufficient background theory to ensure that procedures are based on a firm foundation. Fortunately many teachers already present a good balance of theory and practice. However, there is no room for complacency as theory can unfortunately be taught in a counterproductive way. For example, a student who first meets the t-test as a special case of the likelihood ratio test may be put off statistics for life!
Theory taught in the right way should strengthen practical judgement, but, even so, the essence of statistics is the collection and analysis of data, and so theory must be backed up with practical experience. This book is based on a course I have given for several years at Bath University to final-year statistics undergraduates. As well as general lectures, a selection of exercises are written up in a practical book or as special projects, and the course is examined by continuous assessment. Student reaction suggests the course is well worthwhile in improving motivation and providing valuable practical experience. In describing my approach to problem solving, I have tried to be truthful in saying what I would actually do based on my experience as a practising statistician. I hope I do not live to regret putting my 'confessions' into print! I recognize that the book 'will not please all the people all the time'! It is often said that ten statisticians will offer ten different ways of tackling any given problem. However, if the book stimulates debate on general principles, provides readers with useful practical guidance, and encourages other statisticians to write similar books based on their alternative, and perhaps very different, experience, then I will regard the book as successful. I am indebted to many people for helpful comments during the preparation of this manuscript. They include Andrew Ehrenberg, David Hinkley, Elizabeth Johnston, Roger Mead, Bernard Silverman and David Williams. Of course any errors, omissions or obscurities which remain are probably (?) my responsibility. The author will be glad to hear from any reader who wishes to make constructive comments. Finally it is a pleasure to thank Mrs Sue Collins for typing the manuscript with exceptional efficiency.

Chris Chatfield
December 1987
Prelude
What are statistical problems really like? One real-life example will suffice at this stage to illustrate that many textbook examples are over-simplified and rather artificial. Your telephone rings. A doctor at a local hospital has collected some observations in order to compare the effects of four anaesthetic drugs, and wants some help in analysing the data. A meeting is arranged and the statistician is assured that 'it won't take long' (of course it does! But statisticians must be willing to respond to genuine requests for help). When the data are revealed, they turn out to consist of as many as 31 variables measured for each of 80 patients undergoing surgery for a variety of conditions, such as appendicitis. A small portion of the data is shown in Table P.1. The doctor asks 'How should the data be analysed?'. How indeed! In order to try and answer this question, I put forward six general rules (or rather guidelines).

[Table P.1 'Part of the anaesthetic data': an extract whose columns are Patient no., Group, Sex, Age, Operation, Premed time, Vapour, T1, T2, T3, Antiemetic, Condition, etc., shown for patients 3, 5, 14, 27 and 42, with operations such as RIH, Mastectomy, TAH, Laparotomy and Appendix, and with '?' and '/' among the entries.]
Rule 1 Do not attempt to analyse the data until you understand what is being measured and why. Find out whether there is any prior information about likely effects.
You may have to ask lots of questions in order to clarify the objectives, the meaning of each variable, the units of measurement, the meaning of special symbols, whether similar experiments have been carried out before, and so on. Here I found the following:

1. The group (A, B, C or D) is used to denote the anaesthetic administered, and the main objective is to compare the effects of the four drugs.
2. A question mark denotes a missing observation.
3. A slash (/) denotes NO or NONE (e.g. no vapour was inhaled by patient 3).
4. T1 denotes the time (in minutes) from reversal of the anaesthetic till the eyes open, T2 the time to first breath, and T3 the time till discharge from the operating theatre.
5. Overall condition after the operation is rated from 1 (very good) to 4 (awful).
6. And so on ....
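Before any computer analysis, special symbols like these usually have to be recoded into a consistent form. A minimal sketch, in which the row of raw values is invented, loosely mimicking the layout of Table P.1:

```python
# Sketch of recoding the special symbols described above.
MISSING = None

def recode(value):
    """Map the doctor's recording symbols onto analysable values."""
    if value == "?":
        return MISSING      # question mark = missing observation
    if value == "/":
        return "none"       # slash = NO or NONE (e.g. no vapour)
    return value

raw_row = ["3", "A", "M", "38", "RIH", "/", "?"]
print([recode(v) for v in raw_row])
```

Doing this recoding explicitly, rather than ad hoc during the analysis, also forces the questions of Rule 1 to be asked and answered up front.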
The doctor was unaware of any previous experiments to compare the four anaesthetics and wanted the data analysed as a 'one-off' sample. It seems unlikely that the prior information was so meagre and this point may be worth pursuing if the results are inconclusive.

Rule 2 Find out how the data were collected.
In this case, were patients allocated randomly to the four groups? How important is the type of operation on after-effects? How reliable are the measurements? etc. If the experiment has not been properly randomized, then a simple descriptive analysis may be all that can be justified.

Rule 3 Look at the structure of the data.
Are there enough observations? Are there too many variables? How can the variables be categorized? Here there were 20 patients in each of the four groups, and this should be enough to make some sort of comparison. The number of variables is high and so they should not all be considered at once. Try to eliminate some in consultation with the doctor. Are they really all necessary? For example, T1 and T2 were usually identical and are essentially measuring the same thing. One of them should be excluded. It is helpful to distinguish the different types of variable being studied. Here the variables generally fall into three broad categories. There are demographic variables (e.g. age) which describe each patient. There are controlled variables (e.g. type of vapour) which are generally under the control of hospital staff. Finally there are response variables (e.g. condition) which measure how the patient responds to the operation. Variables can also be usefully classified by the type of measurement as continuous (e.g. time), discrete (e.g. number of children), qualitative (e.g. the type of operation), binary (e.g. male or female), etc. The ensuing analysis depends critically on the data structure.

Rule 4 The data then need to be carefully examined in an exploratory way, before attempting a more sophisticated analysis.
The given data are typical of many real-life data sets in that they are somewhat 'messy', with missing observations, outliers, non-normal distributions as well as a mixture of qualitative and quantitative variables. There are various queries about the quality of the data. For example, why are some observations missing, and are there any obvious errors? The initial examination of the data should then continue by calculating appropriate summary statistics and plotting the data in whatever way seems appropriate. Do this for each of the four groups separately. First look at the demographic variables. Are the groups reasonably comparable? Try to avoid significance tests here, because there are several variables and it is the overall comparison which is of interest. Second, look at the controlled variables. Were the same surgeon and anaesthetist involved? Is there evidence of different strategies in different groups? Third, look at the response variables. Examine them one at a time, at least to start with (e.g. Exercise B.9). Here a one-way ANOVA may be helpful to see if there are significant differences between the group means, but don't get too concerned about 'significance' and the size of P-values at this stage. With several variables to examine, different operations involved, and doubts about randomization, it is more important to see if any differences between groups are of practical importance and to see if this fits in with any prior knowledge. Only after this initial analysis will it be possible to see if any further, perhaps more complicated, analysis is indicated.

Rule 5 Use your common sense at all times.

Rule 6 Report the results in a clear, self-explanatory way.
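The per-group summaries and one-way ANOVA suggested under Rule 4 can be sketched as follows. The recovery times are invented for illustration and are not taken from the anaesthetic data; in practice the doubts about randomization noted above would temper any interpretation of the F statistic.

```python
import statistics

# Invented 'recovery time' observations for four treatment groups.
groups = {
    "A": [12, 15, 11, 18, 14],
    "B": [22, 19, 25, 21, 23],
    "C": [13, 16, 12, 15, 14],
    "D": [20, 24, 19, 22, 25],
}

# Per-group summaries: the natural first step of an IDA.
for name, xs in groups.items():
    print(name, "n =", len(xs),
          "mean =", round(statistics.mean(xs), 1),
          "sd =", round(statistics.stdev(xs), 1))

# One-way ANOVA F statistic computed from first principles.
all_obs = [x for xs in groups.values() for x in xs]
grand_mean = statistics.mean(all_obs)
k, n = len(groups), len(all_obs)
ss_between = sum(len(xs) * (statistics.mean(xs) - grand_mean) ** 2
                 for xs in groups.values())
ss_within = sum((x - statistics.mean(xs)) ** 2
                for xs in groups.values() for x in xs)
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print("F =", round(f_stat, 2), "on", k - 1, "and", n - k, "df")
```

Even here the summaries alone make the group differences visible before any F statistic is computed, which is exactly the spirit of Rule 4.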
These general principles will be amplified and extended in Part I of this book, while Part II presents a series of worked exercises to illustrate their use.
How to tackle statistical problems A BRIEF SUMMARY
Understanding the problem
What are the objectives? What background information is available? Can you formulate the problem in statistical terms? ASK QUESTIONS.
Collecting the data
Have the data already been collected? If so, how? If not, should an experimental design, sample survey, observational study, or what, be used? How will randomization be involved?
Analysing the data
Process the data. Check data quality. Carry out an initial examination of the data (an IDA). Are the conclusions then obvious? If not, has the formal method of analysis been specified beforehand? If so, does it still look sensible after seeing the data? If not, how do we select an appropriate method of analysis? Have you tackled a similar problem before? If not, do you know someone else who has? Or can you find a similar problem in a book? Can you restate the problem or solve part of the problem? Is there prior information (empirical or theoretical) about a sensible model, and, if so, do the new results agree with it? Would the analysis be easier if some variables were transformed, or if some non-standard feature were removed (and is this justifiable)? Try more than one analysis if unsure (e.g. with or without an outlier; parametric or non-parametric approach) and see if the results are qualitatively different. Ask for help if necessary. Check any model that you fit, not only with a residual analysis, but also by replicating the results where possible.
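The advice to try more than one analysis can be made concrete with a toy sample containing one suspect outlier. The numbers are invented; comparing a resistant summary (the median) with a non-resistant one (the mean), with and without the outlier, shows at once how much any conclusion depends on that single value.

```python
import statistics

# An invented sample with one suspect outlier (the 98.0).
sample = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 98.0]
cleaned = sample[:-1]   # the same data with the outlier set aside

for label, xs in [("with outlier", sample), ("without outlier", cleaned)]:
    print(label,
          "mean =", round(statistics.mean(xs), 2),
          "median =", round(statistics.median(xs), 2))
# The median barely moves while the mean changes dramatically: a
# qualitative difference that must be resolved (is 98.0 an error or
# a genuine observation?) before anything is reported.
```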
Presenting the results
Are the conclusions what you expected? If they are counter-intuitive, have you performed an appropriate analysis? Do you have to write up a report? Plan the structure of your report carefully. Revise it several times. Ask a friend to read it before you hand it over.
PART I
The general principles involved in tackling statistical problems
This part of the book gives general advice on tackling real-life problems which include statistical considerations. The text also covers some important topics, such as using a library and report writing, which are indispensable to the applied statistician but which may not be covered in conventional statistics courses. The reader is assumed to have a working knowledge of simple probability models and basic inference. Detailed techniques are not discussed here, although a brief digest is given in Appendix A. As the computer takes over the routine implementation of many techniques, it is arguable that it will become less important to remember all their details. Instead the analyst will be able to concentrate on general strategy such as selecting the most appropriate method of analysis, checking assumptions and interpreting the results. It is this general strategy which is the main substance of this book.
1 Introduction
Statistics is concerned with collecting, analysing and interpreting data in the best possible way. The importance of all three facets (rather than just the analysis phase) deserves wider appreciation. In more detail, a statistician needs to be able to:

1. formulate a real problem in statistical terms
2. give advice on efficient data collection
3. analyse data effectively and extract the maximum amount of information
4. interpret and report the results.
In order to do this, the statistician needs to know sufficient background theory and a range of statistical techniques (Appendix A). However, statistics is much more than a collection of techniques. An effective statistician also needs to understand the general principles involved in tackling statistical problems. At some stage, it is more important to study general principles rather than learn yet more techniques (which can always be looked up in a book). In any case some topics currently taught are unlikely ever to be used by most statisticians, whereas they will, for example, have to cope with a new set of messy data, with vague objectives, where it is unclear how to proceed. Of course general principles are difficult to specify exactly, and any advice should be seen as guidelines rather than as a set of rigid rules. Part II of this book contains a series of worked exercises in question-and-answer format to give the reader some experience in handling real data. The importance of practical experience in developing statistical maturity cannot be overstressed. Some might argue that limited teaching time is best spent on theory, and that the skills of a practising statistician are difficult to teach and only develop through real working experience in a statistics career. However this is rather defeatist. Some practical skills (e.g. report writing) can and should be taught formally, and there is increasing evidence that applied statistics can be taught effectively and that it provides a valuable complement to theory. Many colleges now offer courses involving a variety of practical work, ranging from simple technique exercises to substantial projects, so as to prepare students for the real world. So, what do experienced statisticians actually do in practice? Is it what is in the textbooks? The practising statistician needs to be versatile and resourceful in tackling problems, always being aware that the next case may not fit any previously known recipe. Sound judgement is needed to clarify objectives, collect trustworthy data, and analyse and interpret them in a sensible way. Clearly it is important to appreciate the different stages of a statistical investigation, from problem formulation through to the presentation of the results (Chapter 2). It is also clear that personal attributes, such as common sense, an inquiring mind and an ability to communicate, are at least as important as knowing lots of statistical formulae. Good statistical practice also requires a number of skills which may not be adequately covered in conventional statistics courses, such as the ability to write a good report or to use a computer effectively. As regards the analysis, it is important to consider the use of any prior information so as to avoid treating every sample as if it were the only one ever taken. It is also worth noting that a robust near-optimal solution may be preferred to an 'optimal' solution which depends on dubious assumptions. It is desirable to let the data 'speak for themselves', and to avoid the over-simple and the over-elaborate. Many people who use complex procedures have problems which should really be tackled in a simpler way, and expert advice on, say, factor analysis often convinces a client to adopt a simpler procedure. The use of unnecessarily complex methods means that attention is focused on the technical details rather than on potentially more important questions such as the quality of the data. At the other extreme, I have heard some practitioners say that 'we only need data collection and presentation', but this is being too simplistic. The emphasis in this book is towards the simple end of the spectrum, but hopefully avoiding the naive or simplistic.
There is emphasis on the initial examination of data (Chapter 6), which, while apparently simple, can often be harder than it looks. In particular the careful presentation of graphs and tables is vital but often sadly neglected. The book builds on this material to discuss the use of more advanced procedures: how to select and carry out a 'proper' analysis correctly. The book also discusses how to cope with non-standard data. Throughout the book, the reader is encouraged to take a balanced, integrated view of statistics. There are several different philosophical approaches to inference (e.g. frequentist or Bayesian), but it is argued that the statistician should adopt whichever approach is appropriate for a given situation rather than insist on using one particular approach every time. A balance between knowledge of theory and practice, and between the use of simple or complicated methods, is also important. I conclude this introduction by asking where statistics stands today and whether it is developing in the right direction. Statistical methods are widely used in all branches of science and methodological research continues to develop new and increasingly sophisticated techniques. Yet my experience suggests that all is not well. There is a disturbing tendency for statistical techniques to be used by people who do not fully understand them. Non-statisticians seem to be getting more confused and the standard of statistical argument in scientific journals can best be described as variable. Complicated methods are often applied in a cookbook way which may turn out to be inappropriate or wrong. I suggest that we should be concerned, not just with developing ever more complex methods, but also with clarifying the general principles needed to apply the techniques we already have. That provides the motivation for this book.
Further reading

An excellent alternative discussion of some topics in Part I is given by Cox and Snell (1981, Part I). Barnett (1982, Chapter 1) discusses possible answers to the question 'what is statistics?'. There is no simple answer with which everyone will agree. The abilities (both technical and personal) required by a statistician are listed by Anderson and Loynes (1987, section 2.2), together with suggestions as to how they might be taught.
2 The stages of a statistical investigation
Statisticians are often asked to analyse data which have already been collected, but the message is slowly being broadcast that it is generally more satisfactory for a statistician to be involved at all stages of an investigation, namely the planning, design and execution of the study, and the analysis, interpretation and presentation of the results. It is useful to bear these different stages in mind when considering statistical problems and we now discuss them in a little more detail. The main stages in an idealized statistical investigation may be listed as follows:

1. Make sure you understand the problem and then formulate it in statistical terms. Clarify the objectives of the investigation very carefully.
2. Plan the investigation and collect the data in an appropriate way (Chapter 4). It is important to achieve a fair balance between the effort needed to collect the data and to analyse them. The method of collection is crucial to the ensuing analysis. For example, data from a designed experiment are quite different in kind to those resulting from a pure observational study.
3. Assess the structure and quality of the data. Scrutinize the data for errors, outliers and missing values. Modify the data if necessary, for example by transforming one or more variables.
4. Carry out an initial examination of the data to obtain summary descriptive statistics and perhaps get ideas for a more formal analysis (Chapter 6). In particular, guidance on model formulation may be obtained.
5. Select and carry out an appropriate formal statistical procedure to analyse the data (Chapter 7). Such procedures often assume a particular model structure, and may involve estimating the model parameters and testing hypotheses about the model. The fitted model needs to be evaluated by looking at the residuals from the model to see if it needs to be modified or refined.
6. Compare the findings with any previous results and acquire further data if necessary.
7. Interpret and communicate the results. The findings may need to be understood by both statisticians and non-statisticians, and extra care is needed in the presentation of graphs, summary tables and computer output.
The list is not inviolate and few investigations follow such a straightforward pattern in practice. In particular there may be several cycles of model fitting as defects in some original model are recognized, further data are acquired, and the model is gradually improved. This circular iteration seems to be how scientific investigation often proceeds (see Box, 1980, for further discussion of this topic).
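The fit-then-check cycle of stage 5 can be sketched with the simplest possible model: a least-squares straight line followed by a look at its residuals. The (x, y) values below are invented for illustration.

```python
# Invented data that look roughly linear.
xs = [1, 2, 3, 4, 5, 6]
ys = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]

# Least-squares estimates for the model y = intercept + slope * x.
n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar

# The residuals are the raw material for model checking: any
# systematic pattern (e.g. curvature) suggests the model needs to
# be modified or refined, feeding the iterative cycle noted above.
residuals = [y - (intercept + slope * x) for x, y in zip(xs, ys)]
print("slope =", round(slope, 2), "intercept =", round(intercept, 2))
print("residuals:", [round(r, 2) for r in residuals])
```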
3 Formulating the problem

An approximate answer to the right question is worth a great deal more than a precise answer to the wrong question – the first golden rule of applied mathematics, sometimes attributed to John Tukey
The first step in any statistical investigation should be to get a clear understanding of the physical background to the situation, clarify the objectives and then formulate the problem in statistical terms. If the investigation is to be carried out in collaboration with other people, the statistician must be prepared to ask lots of questions. The important variables need to be enumerated and any distinctive features of the system (such as constraints) should be noted. Objectives can range widely from a desire to increase general understanding (perhaps via an exploratory study) to a much more specific study to test a particular hypothesis, assess a particular relationship, or choose a course of action from a predetermined list of possibilities. It is always a good idea to think how information developed from a study will actually be used. Note that the objectives may even be unclear to the person who has asked the statistician for help, and may indeed turn out to be completely different to those initially suggested. Giving the 'right' answer to the wrong question is a more commonplace error than might be expected and is sometimes called a Type III error. Finding the right question may be harder than finding the right answer. In fact there may not be one unique answer but rather a range of answers depending on a range of different assumptions. While mathematical problems often have neat analytic solutions, practical statistics problems often do not, and few statistics books give guidance on finding good, but not necessarily optimal, solutions to rather ill-defined problems. Near optimality over a range of conditions is more useful to the practitioner than full optimality under strict artificial assumptions. Problem formulation is often learnt the hard way (by making mistakes in practice) because teaching exercises tend to be artificially clear-cut. As Einstein has said, 'The formulation of a problem is often more essential than its solution which may be merely a matter of mathematical or experimental skill'. Other preliminary questions which need to be considered when formulating the problem are the possible use of prior information and consideration of the costs and likely benefits of different strategies. A literature search is often advisable for revealing known facts, giving previous results for comparison, and perhaps even making data collection unnecessary. I was once asked to cooperate in conducting a sample survey to investigate a particular social problem, but half an hour in the library produced a book reporting a survey of exactly the kind required and no further survey was necessary. Even if a new investigation is thought necessary, it is still advisable to compare new results with previous ones so that established results can be generalized or updated. More generally there are many areas of science and social science where well-established laws already exist, as for example Boyle's Law in physics or laws relating market penetration and quantity of different products bought in marketing research. When a new sample is taken, the main question is whether it agrees with previous results, and it is unfortunate that most statistics textbooks devote overwhelming attention to the analysis of data as if they were all brand-new one-off samples.

Further reading
Hamaker (1983) relates several instructive examples where he was asked to assist in designing an industrial experiment, but where an initial interrogation revealed that further background information was required first, or even that no experiment was actually needed.
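The chapter's closing point, that the main question for a new sample is often whether it agrees with previous results, can be sketched as a comparison against an established value. The numbers are invented, and a normal approximation is used for brevity where a t test would be more appropriate for so small a sample.

```python
import statistics
from statistics import NormalDist

# Suppose previous studies have established a mean of 50.0 for this
# quantity; the new sample below is invented for illustration.
established_mean = 50.0
new_sample = [51.2, 49.8, 52.1, 50.5, 51.7, 49.9, 52.3, 50.8]

# Compare the new sample mean with the established value, rather
# than treating the sample as a brand-new one-off.
n = len(new_sample)
mean = statistics.mean(new_sample)
se = statistics.stdev(new_sample) / n ** 0.5
z = (mean - established_mean) / se
p = 2 * (1 - NormalDist().cdf(abs(z)))
print("new mean =", round(mean, 2),
      "z =", round(z, 2), "p =", round(p, 3))
```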
4 Collecting the data
In order to draw valid conclusions, it is important to collect 'good' data. A distinction is often made between data collected in designed experiments (including clinical trials), sample surveys, pure observational studies, and more specialist investigations such as controlled prospective and retrospective studies. While this distinction is not always clear-cut, it can be helpful, although there are in fact many parallels between the basic concepts of experimental design and of (random) sampling (e.g. compare blocking and stratification). The general principles of experimental design and survey sampling will be familiar to many readers but are summarized in sections 10-12 of Appendix A. Some other collection methods are briefly considered below. Whatever method is used, the investigator needs to formulate objectives, specify which variables need to be measured and to what accuracy, and then specify the exact plan for data collection including such details as sample size and how the data are to be recorded. There are substantial differences in spirit between different methods of data collection. Experiments require active intervention by the investigator, for example to allocate treatments to experimental units, preferably by some sort of randomization procedure. A clear interpretation of any differences which arise should then be possible. In contrast, as the name suggests, the investigator is generally more passive in observational studies and simply observes what is going on. Sample surveys involve drawing a representative sample from a well-defined population, usually by some randomization procedure or by some form of quota sampling. While well-conducted surveys allow one to estimate population characteristics accurately, they are essentially observational and so suffer from the same drawback as observational studies, namely that it may be dangerous to try and interpret any interesting effects which appear to emerge, particularly as regards cause-and-effect.
As an example of this crucial point, I was given data for seven different treatments for child ear infections collected from all the 400+ children who attended a local hospital over a six-month period. Large differences were observed between reinfection rates for the different treatments. This is a useful indication that some treatments are better than others. However, this was an observational study in which doctors allocated the treatment they thought was best for the individual patient. Thus the results need to be
treated with caution. Some doctors have preferred treatments or may tend to give particular treatments to particular groups of patients (e.g. the most badly affected). Thus until treatments are allocated randomly to patients, one cannot be sure that the observed differences between groups are due to the treatments. Nevertheless, historical data like these are better than nothing, and observational studies can be a useful and cost-effective preliminary to a more costly full-scale experiment. Historical data are also used in controlled retrospective trials. Here a response variable is observed on different individuals and then the history of these individuals is extracted and examined in order to try and assess which variables are important in determining the condition of interest. It is generally safer, but takes much longer, to assess important explanatory variables with a controlled prospective trial where individuals are chosen by the investigator and then followed through time to see what happens. The general term 'longitudinal data' is used to describe data collected on the same units on several occasions over a period of time. Whatever data-collection method is chosen, the investigator must select an appropriate sample size. This is a rather neglected topic. When it is desired to test one particular hypothesis, the sample size can be chosen so as to achieve a given power for a particular alternative hypothesis, but in my experience this rarely occurs. In quality control and opinion sampling, sensible sample sizes are often well-established. In other situations one may take as large a sample as cost and time allow. With measured variables, a sample size of about 20 is usually a working minimum which can always be expanded if necessary. Non-statisticians often choose ridiculously small or ridiculously large sizes. In particular a large, but uncontrolled (and hence messy) sample may contain less information than a smaller, but carefully observed, sample.
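The power-based sample-size calculation mentioned above can be sketched as follows. This is my own illustration, not taken from the book: it uses the standard normal-approximation formula for comparing two means, with a two-sided 5% significance level and 80% power chosen purely as example values.

```python
from math import ceil
from statistics import NormalDist

def sample_size_two_means(delta, sigma, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided two-sample
    z-test to detect a difference in means of size delta, when each
    observation has standard deviation sigma.
    Uses n = 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    n = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
    return ceil(n)

# Detecting a half-standard-deviation difference needs about 63 per group.
print(sample_size_two_means(delta=0.5, sigma=1.0))   # → 63
```

Note how quickly the required size grows as the difference to be detected shrinks; halving delta roughly quadruples n.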
What are the pitfalls in collecting data? The trials and tribulations can only be appreciated by actually doing it oneself. Try selecting a 'random' live pig from a crowded pigsty and then weighing it on a balance! Or try getting a quota sample to answer questions on some sensitive political topic. Non-response or downright falsehoods become a real problem. Knowing the limitations of the data can be a big help in analysing them. To illustrate that a number may not be what it seems, I recall the case of a pregnant woman who went to hospital for a routine check, was kept waiting three hours, was found to have abnormally high blood pressure, and so was admitted to hospital. Subsequent tests showed that she was normal and that the stress of waiting three hours had caused the high reading! Two common failings in sample surveys are selecting an unrepresentative sample (perhaps through using a judgement or convenience sample) and asking silly questions. Questionnaire design is very important in terms of what questions to ask and how they should be worded. A pilot survey is vital to try out a questionnaire.
Perhaps the most common failing in experiments is a lack of randomization. The latter is needed to eliminate the effect of nuisance factors and its omission can have an effect in all sorts of unforeseen ways. It applies not only to the allocation of experimental units, but also to the order in which observations are taken. For example, in regression experiments it is tempting to record the observations sequentially through time in the same order as the (increasing) values of the explanatory variable. Unfortunately, if the response variable also tends to increase, the researcher cannot tell if this is related to time or to the increase in the explanatory variable, or to both. The two effects cannot be disentangled and are said to be confounded. It is clear that a statistician who is involved at the start of an investigation, advises on data collection, and who knows the background and objectives, will generally make a better job of the analysis than a statistician who is called in later on. Unfortunately many data are still collected in a rather haphazard way without the advice of a statistician. When this happens, the statistician who is asked to analyse the data should closely question the data-collection procedure to see if it is worth spending much effort on data analysis. The important overall message is that data collection is as much a part of statistics as data analysis. A final cautionary tale should provoke a healthy distrust of 'official' recorded data. A prominent British politician (Healey, 1980) recounts how he was appointed railway checker at a large railway station during the Second World War. He was expected to count the number of servicemen getting on and off every train, but as a convalescent with eight platforms to cover in blackout conditions, he made up all the figures! He salved his conscience by asking the ticket collector to provide numbers leaving each train, but later discovered that those numbers were invented as well.
Fortunately, the war effort did not suffer visibly as a result! The obvious moral of this story is that there must be a sensible balance between effort spent collecting data and analysing them, and that care is needed to decide what information is worth recording and how.
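The randomization advocated in this chapter is easy to mechanize. The sketch below is my own illustration (the function name and set-up are invented): it allocates treatments to experimental units in balanced groups and randomizes the run order in one step, so that time trends or observer drift are not confounded with treatment effects.

```python
import random

def randomize(units, treatments, seed=None):
    """Randomly allocate treatments to experimental units and
    randomize the run order. Shuffling the units first makes both
    the allocation and the observation order random; cycling through
    the treatments keeps the group sizes as equal as possible."""
    rng = random.Random(seed)
    units = list(units)
    rng.shuffle(units)                      # random run order
    plan = [(unit, treatments[i % len(treatments)])
            for i, unit in enumerate(units)]
    return plan

for unit, treatment in randomize(range(1, 9), ['A', 'B'], seed=1):
    print(unit, treatment)
```

Passing a seed makes the plan reproducible, which is useful for documenting exactly how the allocation was made.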
Further reading
Various general references are given in Chapters 10-12 of Appendix A. A useful reference regarding longitudinal data is Plewis (1985). The parallels between experimental design and sample surveys are discussed by Fienberg and Tanur (1987).
5 ANALYSING THE DATA I
General strategy
Having collected (or been given) a set of data, the reader may be bewildered by the wide variety of statistical methods which are available and so be unsure how to proceed. The details of standard techniques are well covered in other books and will not be repeated here, although a concise digest of selected topics is given in Appendix A. Rather, this book is concerned with general strategy, such as how to process the data, how to formulate a sensible model and how to choose an appropriate method of analysis.
5.1 The phases of an analysis
Students are often given the impression that a statistical analysis consists of 'doing a t-test' or 'fitting a regression curve'. However, life is rarely as simple as that. I will distinguish five main stages to an analysis as follows:
1. Look at the data. Summarize them using simple descriptive techniques. Evaluate the quality of the data and modify as necessary.
2. Formulate a sensible model. Use the results of the descriptive analysis and any other background information. A well-established model may already exist. The model may be descriptive or algebraic.
3. Fit the model to the data. Ensure that the fitted model reflects the observed systematic variation. Furthermore the random component needs to be estimated so that the precision of the fitted model may be assessed.
4. Check the fit of the model. The assumptions implicit in the fitted model need to be tested. It is also a good idea to see if the fit is unduly sensitive to a small number of observations. Be prepared to modify the model if necessary.
5. Present the conclusions. A summary of the data, the fitted model and its implications need to be communicated. This should include a statement of the conditions under which the model is thought to be applicable and where systematic deviations can be expected.
It is sometimes helpful to regard these five stages as forming two main phases which are sometimes described as (a) the preliminary analysis, and (b) the definitive analysis. The preliminary analysis is essentially stage 1 given above and includes:
(i) Processing the data into a suitable form for analysis. This probably includes getting the data onto a computer.
(ii) Checking the quality of the data. It is important to recognize the strengths and limitations of the data. Are there any errors or wild observations? Have the data been recorded to sufficient accuracy?
(iii) Modifying the data if necessary, for example by transforming one or more variables or by correcting errors.
(iv) Obtaining simple descriptive summaries, using summary statistics, graphs and tables.
One objective of this book is to emphasize the importance of this phase of the analysis which does not always receive the attention it deserves. The definitive analysis is often based on a probability model and may involve parameter estimation and hypothesis testing. However, it is a common mistake to think that the main analysis consists only of model fitting. Rather it includes all three stages of model building, namely model formulation, fitting and checking. Thus the definitive analysis includes stages 2, 3 and 4 given above and may also be taken to include the presentation of the conclusions in a clear and concise way. Of course, if a well-established model already exists, based on many previous data sets, then the definitive analysis is concerned with assessing whether the new data conform to the existing model rather than with fitting a model from scratch. Whatever the situation, one important overall message is that the analyst should not be tempted to rush into using a standard statistical technique without first having a careful look at the data. This is always a danger now that easy-to-use computer software is so widely available. In practice, the two phases (and the five stages) generally overlap or there may be several cycles of model fitting as the model is gradually improved, especially if new data become available. Thus the distinction between the two phases, while useful, should not be taken too seriously.
In particular, the preliminary analysis may give such clear-cut results that no follow-up definitive analysis is required. For this reason, I prefer to use the neutral title 'The initial examination of data', rather than 'The preliminary analysis', although it is in the same spirit. The initial examination of data will be discussed more fully in Chapter 6 while the choice of definitive analysis is discussed in Chapter 7. The remainder of Chapter 5 is mainly concerned with general aspects of model building.
5.2 Other aspects of strategy
It is suggested by some statisticians that it is helpful to distinguish between an exploratory analysis and a confirmatory analysis. An exploratory analysis is concerned with a completely new set of data and there may be little or no prior knowledge about the problem. In contrast a confirmatory analysis is primarily intended to check the presence or absence of a phenomenon observed in a previous set of data or expected on theoretical grounds. In practice a pure confirmatory analysis seems to be reported very rarely in the statistical literature. The vast majority of 'significant' results reported in the scientific literature, particularly in medicine and the social sciences, are essentially of one-off exploratory data sets, even though it is known to be dangerous to spot an 'interesting' feature on a new set of data and then test this feature on the same data set. This point is amplified in section 7.2. There is thus too much emphasis on analysing single data sets in isolation, which has been called the 'cult of the isolated study' by Nelder (1986). In contrast there is not enough help in the literature on how to combine information from more than one experiment, although the rather horrid term 'meta-analysis' has recently been coined to describe the combination of information from two or more studies. Rather than distinguish between exploratory and confirmatory studies, it may be preferable to distinguish between one-off (exploratory) data sets and situations where there are a series of similar or related data sets. In the latter case essentially descriptive comparisons of different data sets allow one gradually to build up a comprehensive empirically-based model which can be used with some confidence. This approach, promoted by A. S. C. Ehrenberg, is amplified in section 7.4. Another useful distinction is between descriptive or data-analytic methods and probabilistic methods.
For example, simple graphical methods are usually descriptive while inferential methods based on a stochastic model are probabilistic. Of course, many statistical problems require a combination of both approaches and there is in any case some overlap. For example, by quoting the sample mean, rather than the sample median, in a data summary, you may be implicitly assuming that a sensible model for the underlying distribution is approximately symmetric.
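A tiny invented example makes the mean-versus-median point concrete: for a right-skewed sample the mean is dragged towards the long tail while the median is not, so quoting the mean implicitly appeals to symmetry.

```python
from statistics import mean, median

# An invented right-skewed sample (say, incomes in arbitrary units):
# most values are moderate but two are very large.
sample = [12, 14, 15, 15, 16, 18, 19, 21, 60, 110]

print(mean(sample))    # → 30.0  (pulled upwards by the two large values)
print(median(sample))  # → 17.0  (resistant to them)
```

Neither summary is 'wrong'; the point is that the choice between them is itself a modelling decision.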
5.3 Model building
'All models are wrong, but some are useful' - G. E. P. Box
A major aim of much statistical analysis is the construction of a useful model. A mathematical model is a mathematical representation of the given physical situation. This description may involve constants, called parameters, which may have to be estimated from data.
There are many different types of model. Statistical models usually contain one or more systematic components as well as a random (or stochastic) component. The random component, sometimes called the noise, arises for a variety of reasons and it is sometimes helpful to distinguish between measurement error and natural random variability between experimental units. The systematic component, sometimes called the signal, may or may not be deterministic. In engineering parlance, a statistical analysis can be regarded as extracting information about the signal in the presence of noise. However many useful scientific models are of a different type in that they are empirically-based and essentially descriptive. For example, Boyle's Law in Physics says that (pressure) × (volume) ≈ constant for a given quantity of gas, when external conditions are kept fixed, and no attempt may be made to model measurement errors. Whatever sort of model is fitted, it should be remembered that it is impossible to represent a real-world system exactly by a simple mathematical model. However, it is possible that a carefully constructed model can provide a good approximation, both to the systematic variation and to the scatter. The challenge for the model builder is to get the most out of the modelling process by choosing a model of the right form and complexity so as to describe those aspects of the system which are perceived as important. There are various objectives in model-building (see, for example, Daniel and Wood, 1980; Gilchrist, 1984):

1. To provide a parsimonious summary or description of one or more sets of data. By parsimonious, we mean that the model should be as simple as possible (and contain as few parameters as possible) as is consistent with describing the important features of the data.
2. To provide a basis for comparing several different sets of data.
3. To confirm or refute a theoretical relationship suggested a priori.
4. To describe the properties of the random or residual variation, often called the error component. This will enable the analyst to make inferences from a sample to the corresponding population, to assess the precision of parameter estimates, and to assess the uncertainty in any conclusions.
5. To provide predictions which act as a 'yardstick' or norm, even when the model is known not to hold for some reason.
6. To provide physical insight into the underlying physical process. In particular the model can be used to see how perturbing the model structure or the model parameters will affect the behaviour of the process. This is sometimes done analytically and sometimes using simulation.
You should notice that the above list does not include getting the best fit to the observed data (see the ballad of multiple regression in Appendix A.6). The term data mining is sometimes used to describe the dubious procedure of
trying lots of different models until a good-looking fit is obtained. However, the purpose of model building is not just to get the 'best' fit, but rather to construct a model which is consistent, not only with the data, but also with background knowledge and with any earlier data sets. In particular the choice between two models which fit data approximately equally well should be made on grounds external to the data. Occasionally it may be useful to make use of more than one model. For example, in forecasting it may be useful to construct a range of forecasts based on a variety of plausible assumptions about the 'true' model and about what the future holds. There are three main stages in model building, when starting more or less from scratch, namely:

1. model formulation (or model specification)
2. estimation (or model fitting)
3. model validation.
In introductory statistics courses there is usually emphasis on stage 2 and, to a lesser extent, on stage 3, while stage 1 is often largely ignored. This is unfortunate because model formulation is often the most important and difficult stage of the analysis, while estimation is relatively straightforward, with a well-developed theory, much easy-to-use computer software, and many reference books. This book therefore concentrates on stages 1 and 3. The three stages are discussed in turn, although in practice they may overlap or there may be several cycles of model fitting as the model is refined in response to diagnostic checks or to new data. Model building is an iterative, interactive process.

5.3.1 MODEL FORMULATION
The general principles of model formulation are covered in some books on scientific method but are rather neglected in statistics textbooks. The analyst should:

1. Consult, collaborate and discuss with appropriate experts on the given topic; ask lots of questions (and listen);
2. Incorporate background theory, not only to suggest which variables to include, and in what form, but also to indicate constraints on the variables and known limiting behaviour;
3. Look at the data and assess their more important features; see the remarks in Chapter 6 on the initial examination of data;
4. Incorporate information from other similar data sets (e.g. any previously fitted model);
5. Check that a model formulated on empirical and/or theoretical grounds is consistent with any qualitative knowledge of the system and is also capable of reproducing the main characteristics of the data;
6. Remember that all models are approximate and tentative, at least to start with; be prepared to modify a model during the analysis or as further data are collected and examined.
At all stages of model formulation it is helpful to distinguish between (a) what is known with near certainty, (b) what is reasonable to assume and (c) what is unclear. As regards items 2-4 above, it is worth noting that the extent to which model structure should be based on background theory and/or on observed data is the subject of some controversy. For example, in time-series analysis, an econometrician will tend to rely more on economic theory while a statistician will tend to rely more on the properties of the data. To some extent this reflects different objectives, but it also indicates that model building depends partly on the knowledge and prejudice of the analyst. While some analysts wrongly ignore theoretical reasoning, others place too much faith in it. In particular there are some areas (e.g. economics) where theories may conflict, and then it is essential to let the data speak. As always, a combination of theory and empiricism is fruitful. In constructing a model, it is helpful to distinguish between various aspects of the model. Firstly, many models contain separate systematic and random components. In specifying the random component, one may need to assess the form of the error distribution and say whether the error terms are independent. Of course the use of the word 'error' does not imply a mistake, and I note that some authors prefer an alternative term such as 'deviation'. Another important distinction is that between the primary assumptions of the model, which are judged central to the problem, and the secondary assumptions where drastic simplification can often be made. Assumptions regarding the random component are often secondary though this is not always the case. Finally, we note that the model parameters can often be partitioned into the parameters of direct interest and the nuisance parameters which perhaps relate to the secondary assumptions.
As an example consider the problem of fitting a regression curve to observations on a response variable, y, and a predictor variable, x. The form of the regression curve (e.g. linear, quadratic or whatever) constitutes the systematic component and is normally the primary effect of interest. There may be background theory (or previous data) which suggests a particular form of curve or at least puts some restrictions on the curve (e.g. if y is known to be non-negative). In addition, the analyst should look at a scatter plot of the data to assess the form of the relationship. To consider the random component, we look at the conditional distribution of y for a fixed value of x. Is the distribution normal and is the (conditional) variance constant or does it vary with x? Are the errors (i.e. the deviations of y from the corresponding fitted value) independent? The standard regression model assumes that the errors are independent normally distributed with zero mean and constant variance, σ². These are a lot of assumptions to make and they need to be checked.
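The regression set-up just described can be sketched in a few lines. This is an illustrative least-squares fit on invented data, not code from the book; the point is that the residuals it returns are the raw material for checking the assumptions listed above.

```python
from statistics import mean

def fit_line(x, y):
    """Ordinary least-squares fit of y = a + b*x; returns
    (intercept, slope, residuals). A pure-Python sketch."""
    xbar, ybar = mean(x), mean(y)
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = ybar - b * xbar
    residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    return a, b, residuals

x = [1, 2, 3, 4, 5, 6]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]   # invented data, roughly y = 2x
a, b, res = fit_line(x, y)
print(round(a, 2), round(b, 2))
# The residuals are what should be examined against x, against the
# fitted values, and in time order, to check the model assumptions.
```

A scatter plot of the residuals, rather than the fit itself, is where the checking happens.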
5.3.2 MODEL ESTIMATION
The estimation stage consists of finding point and interval estimates of the model parameters. Section 7.4 discusses the different philosophical approaches to inference while Appendix A.4 reviews technical details including terminology, some general point estimation methods, robust estimation and bootstrapping. The wide availability of computer packages makes it relatively easy to fit most standard models. It is worth finding out what estimation procedure is actually used by the package, provided this information is given in the documentation (as it should be). In addition to point estimates, a good program should also provide standard errors of estimates as well as various quantities, such as fitted values and residuals, which will in turn help in model validation.
5.3.3 MODEL VALIDATION
When a model has been fitted to a set of data, the underlying assumptions need to be checked. If necessary, the model may need to be modified. Answers are sought to such questions as:

1. Is the systematic part of the model satisfactory? If not, should the form of the model be altered, should some variables be transformed, or should more variables be included? Can the model be simplified in any way, for example by removing some of the variables?
2. What is the distribution of the errors? (Many models assume approximate normality.)
3. Is the error variance really more or less constant, as assumed by many models? If not, can the model be suitably adapted?
4. How much does the 'good fit' of the model depend on a few 'influential' observations?
Model validation is variously called model checking, model evaluation, diagnostic checking or even residual analysis as most procedures involve looking at the residuals. The residual is the difference between the observed value and the fitted value. This can be expressed in the important symbolic formula DATA = FIT + RESIDUAL. For some purposes, it is preferable to convert these raw residuals into what are called standardized residuals which are designed to have equal variance. This is readily performed by a computer. Some technical details on this and other matters are given in Appendix A.8. There are many ways of examining the residuals and the choice depends to some extent on the type of model. For example, a good computer package will plot the values of the
residuals against the values of other measured variables to see if there is any systematic trend or pattern. An example of such a graph is given in Exercise C.4. It is also advisable to plot the residuals in the order in which they were collected to see if there is any trend with time. The residuals should also be plotted against the fitted values and against any other variable of interest. If the residual plots reveal an unexpected pattern, then the model needs appropriate modification. The distribution of the residuals should also be examined, perhaps by simply plotting the histogram of the residuals and examining the shape, or by carrying out some form of probability plotting (Appendix A.1). However, note that the distribution of the residuals is not exactly the same as the underlying error distribution, particularly for small samples. Various tests for normality are available (e.g. Wetherill, 1986, Chapter 8) but I have rarely needed to use them in practice because the analyst is usually only concerned about gross departures from assumptions which are usually obvious 'by eye'. There is usually special interest in large residuals. A large residual may arise because (a) the corresponding observation is an error of some sort, (b) the wrong model has been fitted or an inappropriate form of analysis has been used, or (c) the error distribution is not normal, but rather skewed so that occasional large residuals are bound to arise. In practice it is often difficult to decide which is the correct explanation. In particular it is potentially dangerous to assume (a) above, and then omit the observation as an outlier, when (b) is actually true. Further data may resolve the problem. It is also useful to understand what is meant by an influential observation, namely an observation whose removal leads to substantial changes in the fitted model (Appendix A.6 and A.8).
A gross outlier is usually influential, but there are other possibilities and it is wise to find out why and how an observation is influential. It is also important to distinguish between gross violation of the model assumptions and minor departures which may be inconsequential. A statistical procedure which is not much affected by minor departures is said to be robust and it is fortunate that many procedures have this property. For example the t-test is robust to departures from normality. Finally we note that diagnostic checks on a single data set can be overdone, particularly as there are philosophical problems in constructing and validating a model on the same set of data (section 7.2). It is far more important in the long run to see if a fitted model generalizes to other data sets rather than question the fine detail of each fit. Thus I would like to see model validation expanded to include checking the model on further data sets where possible. For example, I have worked for a number of years on a model to describe the purchasing behaviour of consumers of various manufactured products, such as soap and toothpaste (e.g. Goodhardt, Ehrenberg and Chatfield, 1984). The model has been found to 'work' with
data collected at different times in different countries for different products. This sort of model is far more useful than, say, a regression model which typically happens to give the best fit to just one particular set of data. Methods for validating models on more than one data set deserve more attention in the statistical literature.
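The standardized residuals discussed in this section can be sketched as follows. This is my own rough illustration: a full treatment would adjust for leverage (Appendix A.8), dividing by the residual standard deviation is only a common approximation, and the threshold of 2 is just the usual rule of thumb.

```python
from statistics import stdev

def standardized_residuals(observed, fitted):
    """Raw residuals scaled to have roughly unit variance, so that
    unusually large ones stand out on a common scale."""
    raw = [o - f for o, f in zip(observed, fitted)]
    s = stdev(raw)
    return [r / s for r in raw]

# Invented data with one suspiciously large deviation (index 4).
observed = [10.2, 11.1, 9.8, 10.5, 16.0, 10.1]
fitted   = [10.0, 10.8, 10.0, 10.4, 10.6, 10.2]

std_res = standardized_residuals(observed, fitted)
# Conventionally, |standardized residual| > 2 deserves a closer look.
large = [i for i, r in enumerate(std_res) if abs(r) > 2]
print(large)   # → [4]
```

Flagging an observation is only the start: as the text stresses, one must then decide whether it is an error, a model failure, or a genuinely long-tailed error distribution.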
Further reading
Model formulation is discussed by Cox and Snell (1981, Chapter 4) and in much more detail by Gilchrist (1984).
6 ANALYSING THE DATA II
The initial examination of data
6.1 Introduction
As they gain more experience, statisticians usually find that it is best to begin an analysis with an informal exploratory look at a given set of data in order to get a 'feel' for them. This constitutes the first phase of the analysis (see section 5.1) and includes the following:

1. Process the data into a suitable form for analysis.
2. Check the quality of the data. Are there any errors, missing observations or other peculiarities? Do the data need to be modified in any way?
3. Calculate simple descriptive statistics. These include summary statistics, such as means and standard deviations, as well as appropriate graphs.

The general aim is to clarify the structure of the data, obtain a simple descriptive summary, and perhaps get ideas for a more sophisticated analysis. This important stage of a statistical analysis will be described as 'the initial examination of data' or 'initial data analysis', and will be abbreviated to IDA. I regard IDA as an essential part of nearly every analysis and one aim of this book is strongly to encourage its more systematic and thorough use. Although most textbooks cover simple descriptive statistics, students often receive little guidance on other aspects of IDA. This is a great pity. Although IDA is straightforward in theory, it can be difficult in practice and students need to gain experience, particularly in handling messy data and in using IDA as a signpost to inference. Of course, if you just literally 'look' at a large set of raw data, you won't see very much. IDA provides a reasonably systematic way of digesting and summarizing the data, although its exact form naturally varies widely from problem to problem. Its scope will also depend on the personal preferences of the statistician involved and so there is no point in attempting a precise definition of IDA. I generally take a broad view of its ingredients and objectives as becomes evident below. I have found that IDA will often highlight the more important features of a set of data without a 'formal' analysis, and with some problems IDA may turn out to be all that is required (for example, Exercise B.3). Alternatively, IDA may suggest reasonable
assumptions for a stochastic model, generate hypotheses to be tested, and generally give guidance on the choice of a more complicated inferential procedure. The important message from all this is that IDA should be carried out before attempting formal inference and should help the analyst to resist the temptation to use elaborate, but inappropriate, techniques without first carefully examining the data. I also note that the simple techniques of IDA are particularly relevant to analysts in the Third World who may not have ready access to a computer. The first part of IDA consists of assessing the structure and quality of the data and processing them into a suitable form for analysis. This may be referred to as data scrutiny.

6.2 Data structure
The analysis will depend crucially not only on the number of observations but also on the number and type of variables. Elementary textbooks naturally concentrate on the simple, instructive, but atypical case of a small number of observations on just one or two variables. As regards the number of observations, any model fitting is likely to be unreliable if the sample size is less than about ten, and it is normally best to treat such a small sample as an exploratory, pilot sample to get ideas. On the other hand with hundreds, or even thousands, of observations, the problems of data management become severe. Although model fitting is apparently more precise, it becomes harder to control the quality of large data sets, and one must still ask if the data are representative. As already noted in Chapter 4, it is often better to collect an intermediate-size sample of good quality rather than a large, but messy, data set. Effective supervision of data collection is crucial. The number of variables is also important. An analysis with just one or two variables is much more straightforward than one with a large number of variables. In the latter case, one should ask if all the variables are really necessary (they often aren't!), or consider whether the variables can be partitioned into unrelated groups. Alternatively, multivariate techniques may be used to reduce the dimensionality (section 6.6). It is potentially dangerous to allow the number of variables to exceed the number of observations because of non-uniqueness and singularity problems. Put simply, the unwary analyst may try to estimate more parameters than there are observations. Indeed it is generally wise to study data sets where there are considerably more observations than variables. In some areas (e.g. economics), a large number of variables may be observed and attention may need to be restricted to a suitable subset in any particular analysis.
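The structural checks described above (how many observations, how many variables, any missing values, more variables than observations?) can be sketched as a small data-scrutiny routine. This is my own illustration; the representation of the data as a list of dicts, and the function name, are invented for the example.

```python
def scrutinize(records):
    """Basic structural checks on a data set held as a list of dicts
    (one dict per observation). Reports the number of observations n,
    the number of variables p, missing values per variable, and warns
    when p exceeds n."""
    n = len(records)
    variables = sorted({key for rec in records for key in rec})
    p = len(variables)
    missing = {v: sum(1 for rec in records if rec.get(v) is None)
               for v in variables}
    if p > n:
        print('warning: more variables than observations')
    return n, p, missing

data = [{'height': 1.72, 'weight': 68.0},
        {'height': 1.80, 'weight': None},   # a missing value
        {'height': 1.65, 'weight': 55.5}]
print(scrutinize(data))   # → (3, 2, {'height': 0, 'weight': 1})
```

Even this crude a summary answers the first questions of data scrutiny before any modelling is attempted.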
The type of variable measured is also important (see also Cox and Snell, 1981, section 2.1). One important type of variable, called a quantitative
variable, takes numerical values and may be continuous or discrete. A continuous variable, such as the weight of a manufactured item, can (theoretically at least) take any value on a continuous scale. In practice the recorded values of a continuous variable will be rounded to a specified number of significant figures but will still be regarded as continuous (rather than discrete) unless the number of possible values is reduced to less than about seven or eight. A discrete variable can only take a value from a set or sequence of distinct values, such as the non-negative integers. An example is the number of road accidents experienced by a particular individual in a particular time period. In contrast to a quantitative variable, a categorical variable records which one of a list of possible categories or attributes is observed for a particular sampling unit. Thus categorical data are quite different in kind and consist of the counts or frequencies in particular categories (e.g. Exercise B.4). A categorical variable is called nominal when there is no particular ordering to the possible values (e.g. hair colour in Exercise B.4), but is called ordinal when there is a natural ordering (e.g. a person's rank in the army). A binary variable has just two possible outcomes (e.g. 'success' or 'failure') and may be regarded as a special type of categorical variable which is neither nominal nor ordinal (with only two possible values, ordering is not meaningful). A binary variable which takes the values zero or one could also be regarded as a discrete variable. Indeed there is a blurred borderline between ordinal and quantitative variables. For example in the social sciences, many observations are opinion ratings or some other subjective measure of 'quality'. Opinions could be rated in words as very good, good, average, poor or very poor. This is a categorical variable. If the data are then coded from say 1 to 5 (section 6.3), then the variable is converted to a discrete form.
However the 'distance' between 1 and 2 may not be the same as that between, say, 2 and 3, in which case the variable should not be treated as an ordinary discrete quantitative variable. As another example, the continuous variable 'yearly income' in Exercise B.4 is grouped into four categories, such as 'less than 1000 kroner', and this creates an ordinal categorical variable. To clarify the above remarks, it may be helpful to distinguish between data measured on different types of measuring scales. They include:

1. a nominal scale, for unordered categorical variables;
2. an ordinal scale, where there is ordering but no implication of distance between scale positions;
3. an interval scale, where there are equal differences between successive integers but where the zero point is arbitrary;
4. a ratio scale, the highest level of measurement, where one can compare differences in scores as well as the relative magnitude of scores.
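One practical consequence of these four scales is that they limit which summary statistics are meaningful. The sketch below encodes one such restriction; the mapping follows the definitions above, but the function and names are our own illustration:

```python
# Which summary statistics are meaningful on which measurement scale.
# The mapping is an illustrative encoding of the four scales defined above.
VALID_SUMMARIES = {
    "nominal":  {"mode"},
    "ordinal":  {"mode", "median"},
    "interval": {"mode", "median", "mean"},
    "ratio":    {"mode", "median", "mean", "coefficient_of_variation"},
}

def summary_is_meaningful(scale, statistic):
    """Return True if `statistic` is a meaningful summary on `scale`."""
    return statistic in VALID_SUMMARIES[scale]

print(summary_is_meaningful("ordinal", "mean"))    # False: distances are undefined
print(summary_is_meaningful("interval", "mean"))   # True
```
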
A categorical variable is measured on an ordinal or nominal scale rather than an interval or ratio scale. Subjective measures of 'quality' are sometimes
made over a continuous range, say from 0 to 1, so that the measured variable looks continuous but the measuring scale may well be ordinal. The exact meaning of a qualitative variable is unclear from the literature. Some authors use 'qualitative' to be synonymous with 'non-numerical' or with 'categorical', while others restrict its use to nominal (unordered categorical) variables. Others describe qualitative data as those which are not characterized by numerical quantities, or which cannot be averaged to produce a meaningful result. I follow majority opinion in restricting its use to nominal variables. The sort of analysis which is suitable for one type of variable or one type of measuring scale may be completely unsuitable for a different type. Problems arise in practice when there is a mixture of different types of variable. It is sometimes possible or necessary to form two or more separate groups of variables of a similar type. Another 'trick' is to turn a continuous variable into an ordinal variable, or even into a binary variable, by grouping values, although this inevitably leads to some loss of information. It is also important to ask if the variables arise 'on an equal footing' or if instead there is a mixture of response and explanatory variables. Some techniques (e.g. regression) are concerned with the latter situation in trying to explain the variation in one variable (the response) in terms of variation in other variables (the explanatory or predictor variables). Other techniques are concerned with the former situation. For example, given the different exam marks for different students, the analysis usually consists of some sort of averaging to produce an overall mark. There are also many other more sophisticated, multivariate techniques for examining the interrelationships between a set of comparable variables (e.g. principal component analysis, section 6.6).
Note that exam marks might alternatively be regarded as response variables if other (explanatory) information was available about individual students. Assessing the structure of the data must also take account of the prior knowledge of the system in regard to such matters as the design of the experiment, the known sources of systematic variation (e.g. any blocking factors or known groupings of the experimental units) and so on. An isolated, unstructured data set is quite different from the sort of data arising from a proper experimental design. For example the difference between hierarchical (nested) and crossed data discussed in Appendix A.11 is fundamental and one must match the model and the analysis to the given problem.
6.3 Processing the data
Data are often recorded manually on data sheets. Unless the numbers of observations and variables are small (e.g. less than 20 observations on one or
two variables), the data will probably be analysed on a computer. The data will then generally go through three stages:

1. Coding: the data are transferred, if necessary, to coding sheets, although it is often possible to record coded data in the first place; questionnaire forms are often designed to allow this to happen; it is good statistical practice to keep data copying to a minimum;
2. Typing: the data are typed and stored on tape or disk;
3. Editing: the data are checked for errors.
There is some help on these topics in the computing literature, but surprisingly little help in statistics textbooks. Some of the following remarks also apply to data which are collected directly onto a computer (e.g. some medical recordings) and this form of data recording seems likely to increase. When coding the data, the statistician may need to consider the following points:

(a) Choice of variables: screen the data to see if all the variables are worth including, or if some may be disregarded. Choose a sensible order for the variables.

(b) Choice of format: an appropriate format must be selected for each variable. For continuous variables, it is often sensible to allow one more digit per variable than is strictly necessary so that there are gaps between numbers when they are printed out. This makes it easier to spot errors.

(c) Missing values: if any observations are missing for any reason, they must be carefully coded to distinguish them from ordinary observations. In particular when data are collected from sample surveys, the coding should distinguish 'refused to reply', 'don't know' and 'not applicable'. Missing values can occasionally be filled in from other sources. Care should be exercised in using numerical values such as (-1) or zero or 999, as they might wrongly be analysed as ordinary observations, giving nonsensical results. However, some numerical coding may have to be used.

(d) Coding of non-numerical data: many data sets include some variables which do not come naturally in numerical form. Examples include categorical variables, such as 'religion', and the names of individual respondents. Such data need to be coded with extra care, if at all. Names can be coded by giving a different number to each respondent but a master list would need to be kept to allow identification of individuals. It is usually convenient to code ordinal data numerically.
For example, opinion ratings on a 5-point scale from 'agree strongly' to 'disagree strongly' can be coded from say 1 to 5. This coding puts equally spaced intervals between successive ratings. If the data are then treated as quantitative by, for example, taking averages, there is an implicit, and perhaps unjustified, assumption that an interval scale is being used. It is more tricky to handle nominal variables since any numerical coding
will be entirely arbitrary. For example, if individuals are recorded as 'male' or 'female', this binary variable could be coded as 1 for 'male' and 0 for 'female', but the reverse coding could equally well be used. Some data bases do allow alphanumeric codes (e.g. words!), but some sort of numerical coding is usually used as numerical data are generally easier to handle on a computer. After the data have been coded, they will be typed into a suitable machine. The typist should be encouraged to call attention to any obvious mistakes or omissions on the coding sheets. If possible, the data should be repunched to verify the data. Any differences between the two typed versions of the data may then be investigated. Comments on data editing will be made in the next section.
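The warning under point (c) about numerical missing-value codes is easily demonstrated. A sketch with invented ages, using 99 as a hypothetical 'refused to reply' code:

```python
# Why numeric missing-value codes are dangerous (point (c) above): ages with
# 'refused to reply' coded as 99 inflate the mean if analysed naively.
# The data and the code value 99 are invented for illustration.
ages = [23, 31, 45, 38, 99, 27, 99]

naive_mean = sum(ages) / len(ages)          # treats the 99s as real ages
valid = [a for a in ages if a != 99]
correct_mean = sum(valid) / len(valid)      # excludes the missing-value code

print(round(naive_mean, 1))    # 51.7 (nonsense)
print(round(correct_mean, 1))  # 32.8
```
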
6.4 Data quality
The quality of the data is of paramount importance and needs to be assessed carefully, particularly if a statistician was not consulted before they were collected. Are there any suspicious-looking values? Are there any missing observations, and if so why are they missing and what can be done about them? Have too many or too few significant digits been recorded? All sorts of problems can, and do, arise! If, for example, the data have been recorded by hand in illegible writing, with some missing values, then you are in trouble! The main message of this section is that the possible presence of problem data must be investigated and then appropriate action taken as required.

6.4.1 HOW WERE THE DATA COLLECTED?
The first task is to find out exactly how the data were collected, particularly if the statistician was not involved in planning the investigation. Regrettably, data are often collected with little or no statistical guidance. For example, a poor questionnaire may have been used in a sample survey, or an experimental design may not have been randomized. It is also advisable to find out how easy it is actually to record the data (try counting the number of seedlings in a box!) and to find out exactly what operations, if any, have been performed on the original recorded observations. Problems may also arise when data from several sources are merged. For example, the same variable may have been measured with different precision in different places. Fortunately, IDA is helpful in assessing what can be salvaged from a set of messy data (e.g. Exercise D.2) as well as for actually revealing data-collection inadequacies (e.g. sections 6.4.3 and 6.4.5 and Exercises A.1(d) and D.2). At the other extreme, a proper statistical design may have been
used, yielding a highly structured data set. Here the form of analysis may be largely determined a priori, and then IDA may be confined to some simple descriptive statistics and a few quality checks.

6.4.2 ERRORS AND OUTLIERS
The three main types of problem data are errors, outliers and missing observations. This subsection considers the distinction between errors and outliers more closely, while the next subsection deals with ways of detecting and correcting them. Missing observations are considered in section 6.4.4. An error is an observation which is wrongly recorded, perhaps because it was recorded incorrectly in the first place or because it has been copied or typed incorrectly at some stage. An outlier is a 'wild' or extreme observation which does not appear to be consistent with the rest of the data. Outliers arise for a variety of reasons and can create severe problems. A thorough review of the different types of outlier, and methods for detecting and dealing with them, is provided by Barnett and Lewis (1985). Errors and outliers are often confused. An error may or may not be an outlier, while an outlier may or may not be an error. Think about this! For example if a company's sales are recorded as half the usual figure, this may be because the value has been written down wrongly (an error and an outlier) or because there has been a collapse of demand or a labour dispute (giving a true value which is an outlier). These two situations are quite different, but unfortunately it is not always easy to tell the difference. In contrast, if true sales of 870 items, say, are wrongly recorded as 860 items, this error will not produce an outlier and may never be noticed. Thus an outlier may be caused by an error, but it is important to consider the alternative possibility that the observation is a genuine extreme result from the 'tail' of the distribution. This usually happens when the distribution is skewed and the outlier comes from the long 'tail'.
Types of error
There are several common types of error, which are illustrated in Exercises A.1, D.1 and D.2, including the following:

1. A recording error arises, for example, when an instrument is misread.
2. A typing error arises when an observation is typed incorrectly.
3. A transcription error arises when an observation is copied incorrectly, and so it is advisable to keep the amount of copying to a minimum.
4. An inversion arises when two successive digits are interchanged at some stage of the data processing, and this is something to be on the look-out for. If, for example, the observation 123.45 appears as 123.54, then the error is trivial, does not produce an outlier, and will probably never be noticed. However, if 123.45 is inverted to 213.45, then a gross outlier may result.
5. A repetition arises when a complete number is repeated in two successive rows or columns of a table, thereby resulting in another observation being omitted. More generally, it is disturbingly easy to get numbers into the wrong column of a large table.
6. A deliberate error arises when the results are recorded using deliberate falsification, as for example when a person lies about his political beliefs.

6.4.3 DEALING WITH ERRORS AND OUTLIERS

The search for errors and outliers is an important part of IDA. The term data editing is used to denote procedures for detecting and correcting errors. Some checks can be done 'by hand', but a computer can readily be programmed to make many other routine checks. The main checks are for credibility, consistency and completeness. The main credibility check is a range test on each variable, where an allowable range of possible values is specified. This will usually pick up gross outliers and impossible values. Bivariate and multivariate checks are also possible. A set of checks called 'if-then' checks is also possible to check credibility and consistency between variables. For example, one can check that age and date-of-birth are consistent for each individual. It is sometimes a good idea to record some redundant variables so as to help deal with outliers and missing observations. The machine editing of data from large sample surveys is discussed, for example, by Pullum, Harpham and Ozsever (1986). Another simple but useful check is to get a printout of the data and look at it by eye. Although it may be impractical to check every digit visually, the human eye is very efficient at picking out suspect values in a data array provided they are printed in strict column formation in a suitably rounded form. Suspect values can be encircled as in Table D.1 of Exercise D.1 (see also Chatfield and Collins, 1980, section 3.1). Suspect values may also be detected by eye from plots of the data at the later 'descriptive statistics' stage of IDA. Graphs such as histograms and scatter diagrams are particularly helpful. There are various other procedures (e.g. Barnett and Lewis, 1985) for detecting errors and outliers, including significance tests and more sophisticated graphical procedures. Some of these involve looking for large residuals after a model has been fitted and so do not form part of IDA.
This is particularly true for multivariate data where outliers may be difficult to spot during the IDA which normally looks only at one or two variables at a time. When a suspect value has been detected, the analyst must decide what to do about it. It may be possible to go back to the original data records and make any necessary corrections. Inversions, repetitions, values in the wrong column and other transcription errors can often be corrected in this way. In other cases correction may not be possible and an observation which is
known to be an error may have to be treated as a missing observation. Extreme observations which may, or may not, be errors are more difficult to handle. There are tests for deciding which outliers are 'significant', but I suggest that they are less important than advice from people 'in the field' as to which suspect values are obviously silly or impossible and which, while physically possible, should be viewed with caution. It may be sensible to treat an outlier as a missing observation, but this outright rejection of an observation is rather drastic, particularly if there is evidence of heavy-tailed distributions. An alternative approach is to use robust methods of estimation which automatically downweight extreme observations (see section 7.1). For example one possibility for univariate data is to use Winsorization, by which extreme observations are adjusted towards the overall mean, perhaps to the second or third most extreme value (either large or small as appropriate). However, many analysts prefer a diagnostic parametric approach which isolates unusual observations for further study. My recommended procedure for dealing with outlying observations, when there is no evidence that they are errors, is to repeat the analysis with and without the suspect values (Exercise B.1). If the conclusions are similar, then the suspect values 'don't matter'. If the conclusions differ substantially, then one should be wary of making judgements which depend so crucially on just one or two observations (called influential observations).

6.4.4 MISSING OBSERVATIONS
Missing observations arise for a variety of reasons. An animal may be killed accidentally, a respondent may refuse to answer all the questions, or a scientist may forget to record all the necessary variables. It is important to find out why an observation is missing. This is best done by asking people 'in the field'. In particular, there is a world of difference between observations lost through random events, and situations where damage or loss is more likely for high values or for certain types of condition, or where the data are censored (or truncated) at a particular value (e.g. Exercise B.5). Then the probability that an observation, y, is observed depends on the value of y. With multivariate data, it is sometimes possible to infer missing values from other variables, particularly if redundant variables are included (e.g. age can be inferred from date-of-birth). With univariate data from a proper experimental design, it is usually possible to analyse the data directly as an unbalanced design (e.g. with the GLIM package), but if there are several factors, it may be better to estimate the missing values by least squares so as to produce a 'fake' fully balanced design. This may help in the interpretation of the ANOVA and simplify calculations. In a one-way classification with a single missing value, the latter merely reduces the corresponding group size by one and no substitution is
necessary. In a two-way classification (e.g. a randomized block design), a single missing value is replaced by (tT + bB - S)/[(t - 1)(b - 1)], where t = number of treatments, b = number of blocks, T = sum of observations with same treatment as missing item, B = sum of observations in same block as missing item, and S = sum of all observations. Then a two-way ANOVA could be carried out in the usual way but with the residual degrees of freedom reduced by one. More generally, there are many such algorithms for replacing missing observations with 'guessed' values so as to allow a 'standard' analysis. However, the textbook example of a single missing univariate observation is quite different from the common problem of having many missing observations in multivariate data, and statisticians have often used a variety of ad hoc procedures, such as discarding incompletely recorded observations. Little and Rubin (1987) have described a more general approach based on the likelihood function derived from a model for the missing data. This may utilize the so-called EM algorithm (Appendix A.4). Finally, I reiterate the remarks from section 6.3 on the dangers of coding missing values with special numerical values. I once analysed what I thought was a complete set of data involving the driving records of female car drivers. I obtained peculiar-looking results until I realised that some ladies had refused to give their age and that unknown values were all coded as '99'!
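The replacement formula above is easy to implement. A sketch for a single missing value in a t x b two-way table; the example numbers are invented:

```python
def missing_value_estimate(table, i, j):
    """Estimate a single missing value at treatment row i, block column j
    using (tT + bB - S)/((t - 1)(b - 1)) as in the text.
    The missing cell is marked None."""
    t, b = len(table), len(table[0])
    T = sum(x for x in table[i] if x is not None)            # same-treatment sum
    B = sum(row[j] for row in table if row[j] is not None)   # same-block sum
    S = sum(x for row in table for x in row if x is not None)
    return (t * T + b * B - S) / ((t - 1) * (b - 1))

# t = 3 treatments, b = 4 blocks, one missing value (invented data)
yields = [[None, 10, 12, 14],
          [8,     9, 11, 13],
          [7,     8, 10, 12]]
print(missing_value_estimate(yields, 0, 0))   # 9.0
```
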
6.4.5 PRECISION
As part of the assessment of data quality, it is important to assess the precision of the data. It may appear 'obvious' that if the data are recorded to say five significant digits, then that is the given precision. However, this is often not the case. The true recording accuracy may only become evident on arranging the data in order of magnitude, or on looking at the distribution of the final recorded digit. For a continuous variable, one would normally expect the distribution of the final recorded digit to be roughly uniform (so that all ten digits are approximately equally likely), but this may not be the case. For example, it is common to find that 'too many' observations end in a zero, indicating some rounding. Preece (1981) gives some fascinating examples showing how common-sense detective work on the values of the final digits can reveal a variety of problems, such as difficulties in reading a scale (e.g. Exercise B.3), evidence that different people are measuring to different accuracy, or evidence that the given data have been transformed (e.g. by taking logarithms) or converted from different units (e.g. from inches to centimetres: Exercises A.1 and D.2). While on the subject of precision, the reader is warned not to be fooled by large numbers of apparently significant digits. In 1956, Japanese statistics showed that 160 180 cameras had been exported to the USA while the
corresponding American statistic was 819 374 imported cameras from Japan. Thus both countries claim six-digit accuracy but cannot even agree on the first digit!

6.4.6 CONCLUDING REMARKS
Data processing and data editing require careful attention to ensure that the quality of the data is as high as possible. However, it is important to realize that some errors may still get through, particularly with large data sets. Thus diagnostic procedures at the later model-building stage should be carried out to prevent a few data errors from substantially distorting the results. With 'dirty' data containing outliers and missing observations, limited but useful inference may still be possible, although it requires a critical outlook, a knowledge of the subject matter and general resourcefulness on the part of the statistician.
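The routine credibility (range) and 'if-then' consistency checks of section 6.4.3 are straightforward to automate. A minimal sketch; the field names, allowable limits and survey year are illustrative assumptions, not from the text:

```python
# Automated data editing: a range ('credibility') check per variable and an
# 'if-then' consistency check between age and date of birth.
# Field names and limits are invented for illustration.
RANGE_LIMITS = {"age": (0, 110), "weight_kg": (1.0, 300.0)}

def range_check(record):
    """Return the names of variables whose values fall outside their allowable range."""
    suspect = []
    for var, (lo, hi) in RANGE_LIMITS.items():
        value = record.get(var)
        if value is not None and not (lo <= value <= hi):
            suspect.append(var)
    return suspect

def if_then_check(record, survey_year):
    """Check that recorded age is consistent with year of birth (within a year)."""
    age, born = record.get("age"), record.get("birth_year")
    if age is None or born is None:
        return True           # cannot check; treat as missing, not inconsistent
    return abs((survey_year - born) - age) <= 1

records = [
    {"age": 34,  "birth_year": 1954, "weight_kg": 70.2},
    {"age": 999, "birth_year": 1960, "weight_kg": 65.0},   # missing value coded 999!
    {"age": 25,  "birth_year": 1940, "weight_kg": 80.1},   # age/date-of-birth clash
]

for i, rec in enumerate(records):
    bad_range = range_check(rec)
    consistent = if_then_check(rec, survey_year=1988)
    if bad_range or not consistent:
        print(f"record {i}: out of range {bad_range}, age consistent: {consistent}")
```
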
6.5 Descriptive statistics
After the data have been processed, the analysis continues with what is usually called descriptive statistics. Summary statistics are calculated and the data are plotted in whatever way seems appropriate. We assume some familiarity with this topic (Appendix A.1 and Exercises A.1 and A.2) and concentrate on comparative issues.

6.5.1 SUMMARY STATISTICS
Summary statistics should be calculated for the whole data set and for important subgroups. They usually include the mean and standard deviation for each variable, the correlation between each pair of variables and proportions for binary variables. A multi-way table of means (and standard deviations?) for one variable classified by several other variables can also be a revealing exploratory tool.
(a) Measures of location
The sample mean is the most widely used statistic, but it is important to be able to recognize when it is inappropriate. For example, it should not be calculated for censored data (Exercise B.5) and can be very misleading for skewed distributions, where the median (or the mode or the trimmed mean) is preferred. As a more extreme example, suppose a disease mainly affects young children and old people, giving a U-shaped distribution of affected ages. Then it is silly to calculate the average age of infected people, or indeed any single measure of location.
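How a skewed sample pulls the mean away from the median is quick to show numerically; the data below are made up:

```python
# Skewed sample (e.g. incomes, in thousands): the mean is dragged up by the
# long tail, so the median is the safer single summary.  Invented data.
incomes = [12, 14, 15, 15, 16, 17, 18, 19, 22, 250]

mean = sum(incomes) / len(incomes)
n = len(incomes)
middle = sorted(incomes)[n // 2 - 1 : n // 2 + 1]   # two central values (n even)
median = sum(middle) / 2

print(mean)    # 39.8, larger than 9 of the 10 observations
print(median)  # 16.5
```
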
The average is still widely misunderstood, as indicated by the apocryphal story of the politician who said that it was disgraceful for half the nation's children to be under average intelligence. An average by itself can easily be misinterpreted and normally needs to be supplemented by a measure of spread. Thus in comparing journey times, one may prefer a route with a slightly higher mean journey time but smaller variation. Similarly it is much wiser for a doctor to tell a patient the range of 'normal' blood pressure than to mention a single 'average' value. This leads us on to measures of spread.
(b) Measures of spread
The standard deviation is a widely used measure of variability. Like the mean, it is really designed for roughly symmetric, bell-shaped distributions. Skewed distributions are much harder to describe. I have rarely found measures of skewness and kurtosis to be enlightening. Skewed, bimodal and other 'funny' distributions may be better presented graphically or described in words. The range is sometimes preferred to the standard deviation as a descriptive measure for comparing variability in samples of roughly equal size (especially in quality control), partly because of its simplicity and partly because it is understood much better by non-statisticians. Its lack of robustness is seen as a desirable feature in quality control because outliers are shown up. Unfortunately, direct interpretation of the range is complicated by the tendency to increase with sample size, as described in Appendix A.1. An alternative robust measure of spread is the interquartile range. The variance is the square of the standard deviation and is therefore not in the same units of measurement as the data. It should therefore never be used as a descriptive statistic, although it does of course have many other uses. For example, it is the variance which has the property of additivity for independent random variables.
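The contrast between the standard deviation's sensitivity to outliers and the interquartile range's robustness shows up even on a tiny invented sample with one wild value:

```python
import statistics

def iqr(xs):
    """Interquartile range via statistics.quantiles (default 'exclusive' method)."""
    q1, _, q3 = statistics.quantiles(xs, n=4)
    return q3 - q1

clean = [9, 10, 10, 11, 11, 12, 12, 13]    # invented sample
dirty = clean[:-1] + [130]                 # largest value replaced by a gross outlier

print(statistics.stdev(clean), iqr(clean))
print(statistics.stdev(dirty), iqr(dirty))   # stdev explodes; the IQR is unchanged
```
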
(c) Correlations
Correlations can be useful, but remember that they measure linear association, and that people often have difficulty in assessing the magnitude of a correlation and in assessing the implications of a large (or small) value (Exercises C.1 and C.2 and Appendix A.6). Matrices of correlation coefficients, arising from multivariate data, occur frequently, particularly in the social sciences. They need to be examined carefully, particularly if a technique like principal component analysis is envisaged (see Appendix A.13).
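That correlation measures only linear association is easy to demonstrate: a perfect quadratic relation can have a Pearson correlation of exactly zero. A sketch with invented points:

```python
# Pearson r measures only *linear* association: y = x^2 on a symmetric
# range of x is perfectly dependent on x yet has correlation zero.
def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

xs = [-3, -2, -1, 0, 1, 2, 3]
ys = [x * x for x in xs]
print(pearson_r(xs, ys))   # 0.0 despite exact (nonlinear) dependence
```
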
(d) Rounding
The analyst should not give too many significant digits when presenting summary statistics. Ehrenberg's two-variable-digits rule (e.g. Ehrenberg, 1982, Chapter 15; Chatfield, 1983, Appendix D.3) says that data in general, and summary statistics in particular, should be rounded to two variable digits, where a variable (or effective) digit is defined as one which varies over the full range from 0 to 9 in the kind of data under consideration. Thus given the (fictitious) sample: 181.633, 182.796, 189.132, 186.239, 191.151, we note that the first digit, 1, is fixed, while the second digit is either 8 or 9. The remaining four digits are variable digits. Summary statistics should therefore be rounded to one decimal place, giving x̄ = 186.2 in this case. Note that extra working digits may need to be carried during the calculations in order to get summary statistics and other quantities to the required accuracy, so that the two-variable-digits rule does not apply to working calculations.

6.5.2 TABLES
It is often useful to present data or summary statistics in a table. The presentation of clear tables requires extra care and the following rules are useful, particularly for two-way tables where numbers are classified by row and by column (as in Exercises B.4 and B.6).

1. Numbers should be rounded to two variable digits (see earlier comments). It is very confusing to present too many significant digits, as usually happens in computer output. Thus computer tables usually need to be revised for presentation purposes.
2. Give row and column averages where meaningful, and perhaps the overall average. Obviously if different columns, say, relate to different variables then only column averages will be meaningful. Sometimes totals or medians, rather than averages, will be appropriate.
3. Consider reordering the rows and the columns so as to make the table clearer. If there is no other natural ordering, then order by size (e.g. order columns by the size of the column averages).
4. Consider transposing the table. It is easier to look down a column than across a row, so that one generally arranges the number of rows to exceed the number of columns.
5. Give attention to the spacing and layout of the table. Keep the columns reasonably close together so that they are easier to compare by eye. With lots of rows, put a gap every five rows or so. Typists often space out a table to fill up a whole page because they think it 'looks better', but a more compact table is usually preferable and needs to be requested before typing commences.
6. Give a clear self-explanatory title. The units of measurement should be stated.
7. Give a verbal summary of the major patterns in the table as well as the major exceptions.
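Rule 1's two-variable-digits rounding can be automated by finding the decimal place at which the sample values begin to vary. A sketch using the fictitious sample from section 6.5.1 (the helper is our own, and assumes the values are not all equal):

```python
import math

def two_variable_digit_round(values, stat):
    """Round `stat` per Ehrenberg's rule: keep two 'variable' digits, i.e. two
    digits starting at the highest decimal place where the values vary."""
    spread = max(values) - min(values)                # assumes spread > 0
    place = math.floor(math.log10(spread)) - 1        # place of 2nd variable digit
    return round(stat, -place)

sample = [181.633, 182.796, 189.132, 186.239, 191.151]
mean = sum(sample) / len(sample)
print(two_variable_digit_round(sample, mean))   # 186.2, as in the text
```
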
As an example consider Tables 6.1 and 6.2. Table 6.1 shows part of a table from a published report comparing the accuracy of several different forecasting methods on a large number of time series. The hideous E-format and the lack of ordering make it impossible to see which method is 'best'. Table 6.2 shows the same results in a suitably rounded form, with a clearer title and the methods reordered from 'best' to 'worst' at a forecasting horizon of one period.

Table 6.1 Average MSE: all data (111)

                   Forecasting horizon
Method                   1            4
NAIVE 1          .3049E+08    .4657E+09
Holt EXP         .2576E+08    .2193E+09
BROWN EXP        .2738E+08    .2373E+09
Regression       .3345E+08    .2294E+09
WINTERS          .2158E+08    .2128E+09
Autom. AEP       .6811E+08    .4011E+09
Bayesian F       .2641E+08    .1328E+09
Box-Jenkins      .5293E+08    .2491E+09
Parzen           .7054E+08    .1105E+09

Table 6.2 Mean square error (x 10^7) averaged over all 111 series (methods ordered by performance at lead time 1)

                   Forecasting horizon
Method                   1            4
WINTERS                2.2           21
Holt EXP               2.6           22
Bayesian F             2.6           13
Brown EXP              2.7           24
NAIVE 1                3.0           47
Regression             3.3           23
Box-Jenkins            5.3           25
Autom. AEP             6.8           40
Parzen                 7.1           11

The results are now much clearer. For example we can see that although Parzen's method is worst at lead time 1, it is actually best at lead time 4. Clear tables like this do not just 'happen'; they have to be worked for.

6.5.3 GRAPHS
'A picture is worth a thousand words.' This old saying emphasizes the importance of graphs at all stages of a statistical analysis. Most people prefer to look at a graph, rather than examine a table or read a page of writing, and a graphical description is often more easily assimilated than a numerical one. Graphs are ideal for displaying broad qualitative information such as the shape of a distribution (e.g. with a histogram), the general form of a bivariate relationship (e.g. with a scatter diagram) or data peculiarities. The four graphs in Exercise C.2 illustrate this well. However, tables are often more effective in communicating detailed information, particularly when it is possible that further analysis may be required. In addition, it is worth noting that the features of a graph which make it visually attractive (e.g. colour, design complexity) may actually detract from comprehension.

Descriptive statistics involves plotting a range of simple graphs, usually of one or two variables at a time. It is rather trite to say that one should plot the data in whatever way seems appropriate, but at the exploratory stage one can use the computer to plot a variety of graphs, only some of which turn out to be useful. The analyst must show resourcefulness and common sense in presenting the data in the best possible way. The most useful types of graph are histograms, box plots and scatter diagrams.

The (univariate) distribution of each variable should be examined by plotting a histogram or a stem-and-leaf plot to see (a) what distributional assumptions are reasonable for each variable, and (b) whether any outliers, groupings or other peculiarities are present. An example of a histogram is given in Exercise A.1. Stem-and-leaf plots provide a useful variant to histograms and examples are given in Exercises A.1 and A.2, together with comments on their construction. Stem-and-leaf plots contain more information, but the choice of class interval is a little more restricted.
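The construction of a stem-and-leaf plot can be sketched in a few lines of code (Python, not from the original text; it assumes two-digit whole-number data, with the tens digit as stem and the units digit as leaf):

```python
from collections import defaultdict

def stem_and_leaf(data):
    """Plain-text stem-and-leaf display for two-digit integers:
    the tens digit is the stem, the units digit the leaf."""
    stems = defaultdict(list)
    for x in sorted(data):
        stems[x // 10].append(x % 10)
    return ["%2d | %s" % (stem, "".join(str(leaf) for leaf in leaves))
            for stem, leaves in sorted(stems.items())]

for line in stem_and_leaf([12, 15, 21, 23, 23, 34]):
    print(line)
```

Unlike a histogram, the display retains every observation: each row shows the actual digits, so the raw data can be read straight back off the plot.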
They enable quantiles to be easily calculated and, like histograms, give a quick impression of the shape of the distribution.

A box plot (or box-and-whisker plot) displays a distribution by means of a rectangle, or box, between the upper and lower quartiles with projections, or whiskers, to the largest and smallest observations. The median is also marked. Box plots are particularly useful for comparing the location and variability in several groups of roughly equal size (e.g. Exercises B.2 and D.2). Non-statisticians often find them a revelation in contrast to a formal analysis of variance.

The dotplot, or one-dimensional scatter plot, simply plots each observation
as a dot on a univariate scale. It gives fine detail in the tails but does not show up the shape of the distribution very well. It is probably more useful for small samples, particularly for comparing two or more groups. An example is given in Exercise B.7. Another general type of graph for examining the shape of a distribution is the probability plot (Appendix A.1).

Scatter diagrams are used to plot observations on one variable against observations on a second variable (e.g. Exercise C.2). They help to demonstrate any obvious relationships between the variables (linear, quadratic or what?), to detect any outliers, to detect any clusters of observations, and to assess the form of the random variability (e.g. is the residual variance constant?). Time-series analysis usually starts by plotting the variable of interest against time, and this time plot can be regarded as a special type of scatter diagram in which one variable is time.

For multivariate data, it is often useful to plot scatter diagrams for all meaningful pairs of variables. However, with say seven variables there are already 21 (= 7 × 6/2) possible graphs to look at. The reader should also realize that it can be misleading to collapse higher-dimensional data onto two dimensions in the form of scatter diagrams or two-way tables when the interrelationships involve more than two variables (see Exercise G.4 for an extreme example). Thus traditional descriptive statistics may only tell part of the story in three or more dimensions. Nevertheless, Exercises D.1 and D.2 demonstrate that a 'simple-minded' approach can often work, even for large multivariate data sets. As a more sophisticated alternative, there are various procedures for plotting multivariate data which take account of all variables simultaneously.
In particular, Andrews curves involve plotting a p-variate observation as a function which is a mixture of sine and cosine waves at different frequencies which are scaled according to the values of particular variables. The more controversial Chernoff faces represent a p-variate observation as a cartoon face in which each facial feature (e.g. length of nose) corresponds to the value of a particular variable. Some graphs also arise directly from the use of multivariate methods such as principal component analysis (section 6.6).

Although not part of IDA, it is convenient to mention here the wide use of graphs in diagnostic checking (section 5.3.3). Having fitted a model, there are many ways of plotting the fitted values and the residuals. The main idea is to arrange the plots so that under certain assumptions either a straight line will result (to display a systematic component) or a 'random' plot will result (to display a random component).
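To make the Andrews construction concrete: a p-variate observation x = (x1, x2, x3, ...) is plotted as the function f(t) = x1/√2 + x2 sin t + x3 cos t + x4 sin 2t + x5 cos 2t + ... over −π ≤ t ≤ π, so that similar observations trace out curves which stay close together. A minimal sketch (Python, not from the original text):

```python
import math

def andrews_curve(x, t):
    """Value at t of the Andrews curve for observation x:
    x1/sqrt(2) + x2*sin(t) + x3*cos(t) + x4*sin(2t) + x5*cos(2t) + ..."""
    total = x[0] / math.sqrt(2)
    for i, xi in enumerate(x[1:]):
        k = i // 2 + 1  # frequencies 1, 1, 2, 2, 3, 3, ...
        term = math.sin(k * t) if i % 2 == 0 else math.cos(k * t)
        total += xi * term
    return total
```

Evaluating this function over a grid of t values for each observation, and overlaying the resulting curves, gives the plot described in the text.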
General remarks
What are the general rules for presenting a 'good' graph? A graph should communicate information with clarity and precision. It should avoid distorting what the data have to say. It should be capable of making large data sets coherent in a relatively small space. It should encourage the eye to compare different parts of the data. And so on. Some simple rules are as follows:
1. Graphs should have a clear, self-explanatory title. The units of measurement should be stated.
2. All axes should be carefully labelled.
3. The scale on each axis needs to be carefully chosen so as to avoid distorting or suppressing any information. In particular, where a systematic linear trend is expected, the scales should be chosen so that the slope of the line is between about 30° and 45°.
4. Use scale breaks for false origins (where the scale on the axis does not start at zero) to avoid being misled. A scale break is indicated by a 'wiggle' on the axis.
5. The mode of presentation needs to be chosen carefully. This includes the plotting symbol (e.g. asterisks or dots) and the method, if any, of connecting points (e.g. straight line, curve, dotted line, etc.).
6. A trial-and-error approach can be very helpful in improving a graph.
If you still think that plotting a graph is 'easy', then consider the two graphs shown in fig. 6.1. At first sight they appear to be two different time series, but in fact they are the same time series plotted in two different ways. The vertical and horizontal scales are different and a different plotting symbol is used. Which graph, if either, is 'correct'? It is disturbing that apparently minor changes in presentation can affect the qualitative assessment of the data so much.

Graphs are widely used (and misused) by the media. Apart from their mainstream statistical use, statisticians need to know how to spot a 'good' and a 'bad' graph in everyday life. Unfortunately, it is very easy to 'fiddle' graphs, perhaps by an unscrupulous choice of scales (Exercise A.4). Tufte (1983) defines what he calls a lie factor by

lie factor = (apparent size of effect shown in graph)/(actual size of effect in the data)
Obviously one would like the lie factor to be near one, but unfortunately values from near zero to over five are not uncommon!
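The definition is simple enough to compute directly. For instance (a hypothetical illustration, not from the text), a bar redrawn at twice its height shows a 100% visual increase; if the underlying quantity rose by only 50%, the lie factor is 2:

```python
def lie_factor(apparent_change, actual_change):
    """Tufte's lie factor: apparent size of the effect shown in the graph
    divided by the actual size of the effect in the data."""
    return apparent_change / actual_change

# bar drawn twice as tall (a 100% visual increase) for a 50% rise in the data
print(lie_factor(1.00, 0.50))
```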
6.5.4 CONCLUDING REMARKS
Descriptive statistics are useful not only for summarizing a set of data, but also to help check data quality, to start getting ideas for the 'definitive' analysis and, at the end of the analysis, to help in the presentation of the conclusions. It is a sad commentary on the state of the art that 'obvious' rules on 'good' presentation are ignored so often in both published work and other written reports and in visual display material. These obvious rules include giving graphs and tables a clear, self-explanatory title, stating the units of measurement, labelling the axes of graphs and rounding summary statistics in an appropriate way.

One reason why these obvious rules are disregarded is that many tables and graphs are produced by computer packages which find it difficult to show the good judgement of a human being in such matters as rounding and the choice of scales. For example, the horrible E-format in table 6.1 was produced by a computer and needs rounding as in table 6.2. The upper graph in fig. 6.1 was also produced by a computer, which divided the horizontal scale automatically into tens. The lower graph, expressed in years, exhibits the seasonal cycle more clearly.

[Figure 6.1 The same time series plotted in two different ways: the upper graph with the horizontal axis marked in months (20 to 140), the lower graph with the horizontal axis marked in years (2 to 11) and a vertical scale running from about 100 to 160.]

Clearly it is important for the computer output to be modified in an appropriate way before presentation, and it is regrettable that this is often not done.
Further reading There are several good books on graphical methods at a variety of levels including Chambers et al. (1983), Tufte (1983) and Cleveland (1985). Chapman (1986) and Ehrenberg (1982, chapters 16 and 17) deal with both graphs and tables. Much work needs to be done to give graph construction a proper scientific basis (e.g. Cleveland and McGill, 1987).
6.6 Multivariate data-analytic techniques
Multivariate data consist of observations on several variables. One observation is taken on each variable for each one of a set of individuals (or objects or time points or whatever). Traditional descriptive statistics can be applied to multivariate data by looking at one or two variables at a time, as in section 6.5 above. This is a valuable initial exercise which may prove sufficient (Exercises D.1 and D.2). However, it is possible to take a wider view of IDA by allowing the use, where necessary, of a group of more complicated, multivariate techniques which are data-analytic in character. The adjective 'data-analytic' could reasonably be applied to any statistical technique, but I follow modern usage in applying it to techniques which do not depend on a formal probability model except perhaps in a secondary way. Their role is to explore multivariate data, to provide information-rich summaries, to generate hypotheses (rather than test them) and to help generally in the search for structure, both between variables and between individuals. In particular they can be helpful in reducing dimensionality and in providing two-dimensional plots of the data. The techniques include principal component analysis, multidimensional scaling and many forms of cluster analysis (see Appendix A.13 for a brief description and references). They are generally much more sophisticated than earlier data-descriptive techniques and should not be undertaken lightly. However, they are occasionally very fruitful. They sometimes produce results which then become 'obvious' on looking at the raw data when one knows what to look for. They may also reveal features which would not be spotted any other way.

Principal component analysis  This rotates the p observed variables to p new, orthogonal variables, called principal components, which are linear combinations of the original variables and are chosen in turn to explain as
much of the variation as possible. It is sometimes possible to confine attention to the first two or three components, which reduces the effective dimensionality of the problem. In particular, a scatter diagram of the first two components is often helpful in detecting clusters of individuals or outliers.

Multidimensional scaling aims to produce a 'map', usually in two dimensions, of a set of individuals given some measure of similarity or dissimilarity between each pair of individuals. This measure could be as varied as Euclidean distance or the number of attributes two individuals have in common. The idea is to look at the map and perhaps spot clusters and/or outliers as in principal component analysis, but using a completely different type of data. Note that psychologists have recently tried to change the data-analytic character of the method by suggesting various probability models which could be applied. However, these models seem generally unrealistic and the resulting analysis does not sustain the informal, exploratory flavour which I think it should have.

Cluster analysis aims to partition a group of individuals into groups or clusters which are in some sense 'close together'. There is a wide variety of possible procedures. In my experience the clusters you get depend to a large extent on the method used (except where the clusters are really clear-cut) and cluster analysis is rather less fashionable than it was a few years ago. Many users are now aware of the drawbacks and the precautions which need to be taken to avoid irrelevant or misleading results. I often prefer to plot a two-dimensional map of the data, using multidimensional scaling or the first two components from principal component analysis, and then examine the graph visually for clusters.

Correspondence analysis is primarily a technique for displaying the rows and columns of a two-way contingency table as points in dual low-dimensional vector spaces.
It is the favourite tool of the French 'analyse des données' (or data-analysis) school, but is used and understood far less well by English-speaking statisticians. According to your viewpoint, it is either a unifying technique in exploratory multivariate data analysis applicable to many types of data, or a technique which enthusiasts try to force on to all data sets, however unsuitable. The 'truth' probably lies somewhere in between.
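For just two variables the principal components mentioned above can even be obtained in closed form, since they are the eigenvectors of the 2 × 2 sample covariance matrix, and the eigenvalues are the variances 'explained' by the two components. A pure-Python sketch of this bivariate special case (illustrative only, not the book's notation):

```python
import math

def principal_axes_2d(xs, ys):
    """Eigenvalues of the 2x2 sample covariance matrix of (xs, ys),
    largest first. These are the variances explained by the two
    principal components."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    # eigenvalues of [[sxx, sxy], [sxy, syy]] via trace and determinant
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    root = math.sqrt(max(tr * tr / 4 - det, 0.0))
    return tr / 2 + root, tr / 2 - root

# perfectly correlated data: the first component explains all the variance
l1, l2 = principal_axes_2d([1, 2, 3, 4], [2, 4, 6, 8])
```

With the perfectly correlated data shown, the second eigenvalue is zero: the data are genuinely one-dimensional, which is exactly the reduction in dimensionality the technique looks for.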
6.7 The informal use of inferential methods
Another way of taking a wider view of IDA is to allow the inclusion of methods which would normally be regarded as part of classical inference but which are here used in an 'informal' way. In my experience, statisticians often use techniques without 'believing' the results, but rather use them in an
informal, exploratory way to get further understanding of the data and fresh ideas for future progress. This type of activity demonstrates the inevitable blurred borderline between IDA and more formal follow-up analyses, and emphasizes the need to regard IDA and inference as complementary. The need to integrate IDA fully into statistics will become even clearer in sections 6.9 and 7.3.

The two main techniques which are used 'informally' are multiple regression and significance tests. The main aim of regression is usually stated as finding an equation to predict the response variable from given values of the explanatory (or predictor) variables. However, I have rarely used regression equations for prediction in a textbook way because of problems such as those caused by correlations between the explanatory variables, doubts about model assumptions and the difficulties in combining results from different data sets. However, I have occasionally found multiple regression useful for exploratory purposes in indicating which explanatory variables, if any, are potentially 'important'.

Significance tests are discussed more fully in section 7.2. Here we simply note that they can be used in an exploratory way even when the required assumptions are known to be dubious or invalid, provided that the analyst uses the results as a rough guide rather than as definitive. It is often possible to assess whether the observed P-value is likely to be an under- or overestimate and hence get informal guidance on the possible existence of an interesting effect, particularly when the result is clear one way or the other.

6.8 Modifying the data
The possibility of modifying the data should be borne in mind throughout the analysis, but particularly at the early stages. In section 6.4 we have already discussed two types of modification to improve data quality, namely:
1. adjusting extreme observations;
2. estimating missing observations.
We therefore concentrate here on two further types of modification, namely:
3. transforming one or more of the variables;
4. forming new variables from a combination of existing variables.
As regards 4, we have already considered, in section 6.6, the possibility of forming a general linear combination of the variables, such as that produced by principal component analysis. Here we have in mind much simpler combinations such as the ratio, sum or difference of two variables. For example, the total expenditure on a particular product will almost certainly
increase with inflation, and it may well be more informative to consider the derived variable:

(total expenditure) at constant prices = (total expenditure)/(some measure of price or the cost of living).
Alternatively, it may be better to look at sales in terms of the number of units sold rather than in terms of expenditure. This suggests looking at the derived variable:

number sold = (total expenditure)/(average price per unit)

if the number sold is not available directly. In time-series analysis, a rather different modification is often sensible, namely to take first differences of the given series to see if the change in one variable is related to the change in another variable. Other forms of differencing can also be useful.

As regards transformations, there are many forms in regular use. For example, in scaling exam marks it may be desirable to adjust the mean value and/or the spread by making a transformation of the form:

scaled mark = a + b (raw mark)

where a and b are suitably chosen constants. This is an example of the general linear transformation y = a + bx. Common non-linear transformations include the logarithmic transformation given by y = log x and power transformations such as y = √x and y = x². There are various reasons for making a transformation, which may also apply to deriving a new variable:
1. to get a more meaningful variable;
2. to stabilize variance;
3. to achieve normality;
4. to create additive effects (i.e. remove interaction effects).
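For the exam-mark example, the constants a and b follow directly from a chosen target mean m and standard deviation s: take b = s/sd(x) and a = m − b·mean(x). A sketch (Python; the target figures and marks are invented for illustration):

```python
from statistics import mean, stdev

def rescale(marks, target_mean=60.0, target_sd=10.0):
    """Linear transformation y = a + b*(raw mark) giving the marks
    a chosen mean value and spread."""
    b = target_sd / stdev(marks)
    a = target_mean - b * mean(marks)
    return [a + b * x for x in marks]

scaled = rescale([35, 45, 55, 65])  # raw marks with mean 50
```

Being linear, the transformation changes the location and spread of the marks but leaves their relative ordering and shape untouched.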
While it is often helpful to try out different transformations during an IDA, some of the above objectives are really more concerned with model formulation, but it is nevertheless convenient to consider transformations here. One major problem is that the various objectives may conflict and it may prove impossible to achieve them all at once. Another problem is that the general form of a model may not be invariant under non-linear transformations. One general class of transformations is the Box-Cox family of power transformations given by:
y = (x^λ − 1)/λ   (λ ≠ 0)
y = log x          (λ = 0)

This is essentially a power transformation, y = x^λ, which is 'fiddled' so that it
incorporates the logarithmic transformation as a special case. This follows from the result that (x^λ − 1)/λ → log x as λ → 0. The transformation depends on the value of λ, and it is possible to choose λ by trial-and-error so as to achieve some desired property (e.g. normality) or to estimate λ more formally so as to maximize a given criterion (e.g. likelihood). However, in order to get a meaningful variable, it is better to choose λ to be a 'nice' number such as λ = 1 (no transformation), λ = ½ (square root) or λ = 0 (logs). There may be severe interpretational problems if one ends up with, say, λ = 0.59. An example where a square root is meaningful is given by Weisberg (1985, p. 149), when examining the relationship between the perimeter and area of 25 Romanesque churches. A 95% confidence interval for the λ which maximizes the likelihood for a linear model relating perimeter and (area)^λ is the interval 0.45 ≤ λ ≤ 0.80, and then it is natural to choose λ = 0.5 (the square root) as √(area) is in the same units of measurement as perimeter.

Logarithms are often meaningful, particularly with economic data when proportional, rather than absolute, changes are of interest. Another application of the logarithmic transformation is given in Exercise B.9 to transform a severely skewed distribution to normality. Finally, it is worth stressing that the most meaningful variable is often the given observed variable, in which case a transformation should be avoided if possible. In fact the need for transformations has reduced somewhat since computer software now allows more complicated models to be fitted to the raw data. For example, GLIM allows one to fit a general linear model with a gamma 'error' distribution as well as a normal distribution. Transformations should be the exception rather than the rule.
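The Box-Cox family, and its continuity at λ = 0, is easy to verify numerically. A sketch (Python):

```python
import math

def box_cox(x, lam):
    """Box-Cox power transformation: (x**lam - 1)/lam for lam != 0,
    and log x at lam = 0 (its limiting value)."""
    if lam == 0:
        return math.log(x)
    return (x ** lam - 1) / lam

# lam = 1 leaves the data essentially unchanged (just shifted by 1),
# while very small lam approaches the logarithm, as the text notes
print(box_cox(5.0, 1.0), box_cox(5.0, 1e-8), math.log(5.0))
```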
6.9 The importance of IDA
Some readers may be surprised that this section on IDA is comparatively so long. Yet the importance of IDA is one of the main themes of this book, and interest in descriptive data-analytic methods has increased in recent years. This is partly because an earlier overemphasis on formal mathematical methods has led to some dissatisfaction amongst practitioners.

The initial examination of data is often called exploratory data analysis (abbreviated EDA) after the title of the book by Tukey (1977), which describes a variety of arithmetical and graphical techniques for exploring data. There is no doubt that Tukey's book has provided a major stimulus to data analysis and has publicized several useful additions to the analyst's toolkit, notably the stem-and-leaf plot and the box plot. Unfortunately, Tukey's book can also be criticized for introducing too much new statistical jargon, for suggesting too many new procedures which are rather elaborate for a preliminary exercise, for omitting some standard tools such as the arithmetic mean, for showing how different tools are used but not why, and for failing
to integrate EDA into mainstream statistics (see also Chatfield, 1985, section 5; Chatfield, 1986). There are several alternative books on EDA, such as McNeill (1977), Tukey and Mosteller (1977), Erickson and Nosanchuck (1977), Velleman and Hoaglin (1981) and Hoaglin, Mosteller and Tukey (1983), which have varying degrees of success in integrating EDA. I have used the alternative title of IDA to emphasize the differences with EDA and the need to integrate data-analytic techniques into statistics.

The two main objectives of IDA are to help in data description and to make a start in model formulation. The first objective is fairly obvious in that one must begin by scrutinizing, summarizing and exploring data. However, some aspects are not widely appreciated. In particular, IDA may be all that is required because:

1. The objectives are limited to finding descriptive statistics. This usually applies to the analysis of an entire population, as opposed to sample data, and may also apply to the analysis of large samples where the question is not whether differences are 'significant' (they nearly always are in large samples), but whether they are interesting. In addition, IDA is all that is possible when the data quality is too poor to justify inferential methods which perhaps depend on unfulfilled 'random error' assumptions. An IDA may also be sufficient when comparing new results with previously established results. Furthermore, IDA is appropriate when the observed data constitute the entire population and the study is never to be repeated. Thus, whereas inference is primarily useful for one-off random samples, IDA can be applied additionally to 'dirty' data and to the analysis of several related data sets. It should also be noted that the demands on statisticians can be very different in third-world countries. There the emphasis is generally not on extracting the fine detail of data using sensitive inferential techniques, but rather on finding a concise description of general trends and patterns. Then IDA may be perfectly adequate.

2. The results from the IDA indicate that an inferential procedure would be undesirable and/or unnecessary. This applies particularly when significance tests are envisaged but the results of the IDA turn out to be 'clear-cut' or to indicate problems with model assumptions.
The above remarks explain why I have avoided the alternative term 'preliminary data analysis', in that one important message is that an IDA may be sufficient by itself. However, it should also be said that it is not always easy to decide when a descriptive analysis alone is adequate (Cox and Snell, 1981, p. 24). For this reason, some statisticians prefer always to carry out a formal analysis to retain 'objectivity', but this has dangers and difficulties of its own. While ad-hoc descriptive analyses are sometimes very effective, the results
cannot easily be validated and so it is generally preferable to construct some sort of model. Fortunately the second main objective of IDA is to help in model formulation. This objective is not always explicitly recognized despite its importance. Put crudely, an IDA helps you to do a 'proper' analysis 'properly'.

Now many experienced statisticians have long recognized the importance of IDA. Unfortunately, the literature suggests that IDA is still undervalued, neglected or even regarded with disfavour in some circles. It is therefore worth looking briefly at some arguments which have been put forward against the discussion and use of IDA.

Some people might argue that IDA is all 'common sense' and is too straightforward and well understood to warrant serious discussion. However, I would argue that common sense is not common, particularly when IDA has the wider ingredients and objectives suggested here. Secondly, IDA is sometimes seen as being 'ad hoc' and not based on a sound theoretical foundation. However, I would stress that a lack of theory does not imply the topic is trivial. Rather, IDA can be more demanding than many classical procedures which have become very easy (perhaps too easy!) to perform with a computer. In fact, much of IDA is not ad hoc in that it can be tackled in a reasonably systematic way. However, some aspects of IDA (like some aspects of inference) are ad hoc, but this is not necessarily a bad thing. A good statistician must be prepared to make ad-hoc modifications to standard procedures in order to cope with particular situations. The term 'ad hoc' sounds vaguely suspect but is defined as being 'arranged for a special purpose', which is often very sensible. Finally, IDA may be seen as being dangerously empirical and of downplaying prior knowledge and statistical theory. However, I hope I have said enough to emphasize that while IDA rightly emphasizes the inspection of data, I have no intention of downplaying theory.
Rather, I wish to use the data to build on existing theory and fully integrate IDA into statistics. Of course, analyses based on no model at all do run the risk of giving invalid conclusions, and one must beware of simplistic analyses which overlook important points. However, a simple analysis need not mean a naive analysis, and an IDA should be helpful in deciding when a more complicated analysis is required and of what form. The other side of the coin is that analyses based on the wrong model are liable to be wrong, so that the use of a model does not automatically make things respectable. It is generally preferable to work within the framework of a probability model, and then IDA can be vital in selecting a sensible model. However, there are some occasions when it is fruitful to work without a model, particularly if the data quality is poor, and then IDA can be even more important. In summary, the suggested drawbacks to IDA are far outweighed by the important benefits.
Further reading
IDA is reviewed by Chatfield (1985) and the lively discussion which followed is also worth reading.
7 ANALYSING THE DATA III
The 'definitive' analysis

'I know how to do a t-test, but not when!'
Although IDA is important, and occasionally sufficient, it should normally be seen as a stepping stone to the main or primary analysis, which, for want of a better expression, we will call the 'definitive' analysis. This analysis will normally be based on a probability model of some kind and involve an appropriate inferential procedure. This may include the estimation of the model parameters and the testing of one or more hypotheses. This chapter makes some brief general remarks on different statistical procedures, and then attempts to give advice on how to choose the most appropriate one.
7.1 Different types of statistical procedure
Brief details of various statistical procedures are given in Appendix A. Each procedure is applicable for use with a particular type of data for a particular objective. With computer packages readily available, it is not important to remember the exact details of a procedure, but rather to understand the broad outline of what they are for. For example, the one-sample t-test assesses whether a given sample mean is 'a long way' from a suggested population mean. Some brief notes on some different classes of procedure are as follows:

1. Single-sample location problems. Given a single sample of (univariate) observations, what is the underlying population mean? Calculate a point estimate or (for preference) a confidence interval. Does the population mean equal a particular prespecified value? Carry out a significance test.
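As a reminder of the arithmetic involved: the one-sample t statistic is (x̄ − μ0)/(s/√n), and the matching confidence interval is x̄ ± t* s/√n, where t* is a tabulated percentage point. A sketch (Python; the critical value is supplied by the caller rather than computed, since the standard library has no t tables):

```python
from math import sqrt
from statistics import mean, stdev

def one_sample_t(data, mu0):
    """t statistic for testing H0: population mean = mu0."""
    n = len(data)
    return (mean(data) - mu0) / (stdev(data) / sqrt(n))

def conf_int(data, t_crit):
    """Two-sided confidence interval for the mean; t_crit is the
    tabulated t value for n - 1 degrees of freedom."""
    n = len(data)
    half_width = t_crit * stdev(data) / sqrt(n)
    return mean(data) - half_width, mean(data) + half_width

t = one_sample_t([4, 6, 8, 10], mu0=5)  # invented data
```

The interval is generally preferable to the bare test, as the text notes: it shows how far the data are from the hypothesized value, not just whether the discrepancy is 'significant'.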
2. Two-sample location problems. Given samples from two groups or populations, what is the difference between the two underlying
population means? Are the two sample means significantly different from one another?

3. Other significance tests. There is a wide variety of tests for different situations. In view of serious concern about their overuse and misuse, section 7.2 considers them in more detail. Estimation should generally be regarded as more important than hypothesis testing.
4. Regression problems. Given observations on a response variable, y, and several predictor variables x1, ..., xk, find a regression curve to predict y from the x's. You should avoid the temptation to include too many x's, which can give a spuriously good fit. Check the primary and secondary assumptions made in the regression model. The fitted equation is much less reliable when the predictor variables are uncontrolled than when they can be controlled in a proper experiment. Other problems are discussed in Appendix A.6.
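For a single predictor the least-squares estimates have the familiar closed form b = Sxy/Sxx and a = ȳ − b·x̄; checking them against an exact straight line is a useful sanity test. A pure-Python sketch (illustrative data):

```python
def least_squares(xs, ys):
    """Fit y = a + b*x by ordinary least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    return a, b

a, b = least_squares([0, 1, 2, 3], [1, 3, 5, 7])  # data lying exactly on y = 1 + 2x
```

Multiple regression generalizes this to several x's, which is where the dangers noted above (correlated predictors, spuriously good fits) arise.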
5. Factor analysis and principal component analysis. Given observations on several variables which arise 'on an equal footing', find new derived variables which may be more meaningful. Factor analysis is overused for a variety of reasons (Appendix A.13).
6. Analysis of variance (or ANOVA). Given observations from an experimental design, the idea is to separate the effect of interest (e.g. do treatments have different effects?) from other factors such as block effects and random variation. ANOVA partitions the total corrected sum of squares of the response variable into components due to different effects, provides an estimate of the residual variance and allows the testing of hypotheses about the systematic effects. Thus ANOVA leads to an analysis of means as well as an analysis of variability.
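The identity underlying the ANOVA table — total corrected SS = between-group SS + within-group SS — can be checked directly. A sketch (Python; the two groups are invented):

```python
def anova_one_way(groups):
    """Partition the total corrected sum of squares into
    between-group and within-group components."""
    all_obs = [x for g in groups for x in g]
    grand = sum(all_obs) / len(all_obs)
    ss_total = sum((x - grand) ** 2 for x in all_obs)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return ss_total, ss_between, ss_within

sst, ssb, ssw = anova_one_way([[1, 2, 3], [4, 5, 6]])
```

Dividing the between and within components by their degrees of freedom gives the mean squares whose ratio is the usual F statistic.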
7. Other procedures. There are many problems which do not fit into the above categories, such as, for example, the analysis of time-series data and of categorical data. Many of these problems are found to be 'standard' textbook situations, but problems may also occur where some modification has to be made to a 'standard' approach.
Rather than go into more detail on particular techniques, it is more important to realize that a 'good' method of analysis should:
1. use all relevant data, but recognize their strengths and limitations;
2. consider the possibility of transforming, or otherwise modifying, the given observed variables;
3. try to assess a suitable model structure;
4. investigate the model assumptions implicit in the method of analysis;
5. consider whether the fitted model is unduly sensitive to one or more 'influential' observations.
The 'definitive' analysis
It is also important to decide if you are going to use a classical parametric model-fitting approach, or use a nonparametric or a robust approach. A nonparametric (or distribution-free) approach (Appendix A.5) makes as few assumptions about the distribution of the data as possible. It is widely used for analysing social science data which are often not normally distributed, but rather may be severely skewed. Robust methods (Appendix A.4) may involve fitting a parametric model but employ procedures which do not depend critically on the assumptions implicit in the model. In particular, outlying observations are usually automatically downweighted. Robust methods may therefore be seen as lying somewhere in between classical and nonparametric methods. Now we know that a model is only an approximation to reality. In particular the fit can be spoilt by (a) occasional gross errors, (b) departures from the (secondary) distributional assumptions, for example because the data are not normal or are not independent, (c) departures from the primary assumptions. 'Traditional' statisticians usually get around (a) with diagnostic checks, where unusual observations are isolated or 'flagged' for further study. This can be regarded as a step towards robustness. Many statisticians prefer this approach except for the mechanical treatment of large data sets where human consideration of individual data values may not be feasible. Then it may well be wise to include some automatic 'robustification', as this gets around problem (a) above as well as (b) to some extent. Some statisticians prefer a robust approach to most problems on the grounds that little is lost when no outliers are present, but much is gained if there are. In any case it can be argued that the identification of outliers is safer when looking at the residuals from a robust fit. Outliers can have a disproportionate effect on a classical fit and may spoil the analysis completely.
Thus some robust procedures may become routine, but only when we have more experience of their use in practice and when they can be routinely implemented by computer packages. Nonparametric methods get around problem (b) above and perhaps (a) to some extent. Their attractions are that (by definition) they are valid under minimal assumptions and generally have satisfactory efficiency and robustness properties. Some of the methods are tedious computationally (try calculating ranks by hand) although this is not a problem with a computer available. However, nonparametric results are not always so readily interpretable as those from a parametric analysis. They should thus be reserved for special types of data, notably ordinal data or data from a severely skewed or otherwise non-normal distribution. IDA may help to indicate which general approach to adopt. However, if still unsure, it may be worth trying more than one method. If, for example, parametric and nonparametric tests both indicate that an effect is significant, then one can have some confidence in the result. If, however, the conclusions differ, then more attention must be paid to the truth of secondary assumptions.
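The downweighting of outliers can be seen in miniature by comparing a classical estimate of location (the mean) with a simple robust one (the median). The numbers are invented for illustration; a serious robust analysis would use something like an M-estimator rather than the bare median.

```python
def mean(xs):
    """Classical estimate of location."""
    return sum(xs) / len(xs)

def median(xs):
    """A simple robust estimate of location."""
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

clean = [9.8, 10.1, 10.0, 9.9, 10.2]
contaminated = clean + [100.0]   # one gross recording error added

# The mean is dragged far from the bulk of the data (from about 10
# to 25), while the median barely moves (from 10.0 to about 10.05).
m_classical = mean(contaminated)
m_robust = median(contaminated)
```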
7.2 Significance tests
A significance test is a procedure for examining whether data are consistent with a given null hypothesis. The general terminology is described in Appendix A.5 together with brief details of some specific tests. Significance tests have a valuable role, although this role is more limited than many people realize, and it is unfortunate that tests are widely overused and misused in many scientific areas, particularly in medicine, biology and psychology. This section gives some advice on practical pitfalls and discusses situations where tests are inappropriate. Two statements which I like to stress are:
1. A significant effect is not necessarily the same thing as an interesting effect;
2. A non-significant effect is not necessarily the same thing as no difference.
With regard to statement 1, with large samples, results are nearly always 'significant' even when the effects are quite small, and so it should be remembered that there is no point in testing an effect which is not substantive enough to be of interest. (Significance tests were originally devised for use with relatively small, but expensive, scientific samples.) As to statement 2, a large effect may not necessarily produce a significant result if the sample is small. Here an understanding of Type II errors and power is vital. The basic problem is that scientists may misinterpret a P-value to mean the probability that H0 is true, but this is quite wrong. A 'significant' result does not provide 'proof', but only evidence of a helpful, but rather formal, type. Any attempt to assess the probability that H0 is true will involve Bayes theorem and a prior probability for H0. Is H0 based on unproven theory, on a single sample, or on a wealth of prior empirical knowledge? In the latter case the analyst is looking for interesting discrepancies from H0 rather than to 'reject' H0. The overemphasis on 'significance' has many unfortunate consequences. In particular, it can be difficult to get non-significant results published. This is particularly sad in medical applications where it can be very important to know that a significant result in one study is not confirmed in later studies. More generally, a single experiment is usually only a small part of a continuing study, and yet the literature has overwhelming emphasis on the idea of a significant effect in a single experiment. It is often more desirable to see if 'interesting' results are repeatable or generalize to different conditions. In other words we should be interested in the search for significant sameness (see also sections 5.2, 7.4 and Nelder, 1986). A related general point is that the estimation of effects is generally more important than significance tests.
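The point about P(H0) can be made concrete with Bayes theorem for a simple two-hypothesis choice. The prior and the likelihoods below are invented purely for illustration.

```python
def posterior_prob_h0(prior_h0, lik_h0, lik_h1):
    """P(H0 | data) via Bayes theorem for a simple H0-versus-H1 choice."""
    prior_h1 = 1.0 - prior_h0
    joint_h0 = prior_h0 * lik_h0
    joint_h1 = prior_h1 * lik_h1
    return joint_h0 / (joint_h0 + joint_h1)

# Invented numbers: H0 starts with even odds, and the data are three
# times as likely under the alternative as under the null.
p_h0 = posterior_prob_h0(prior_h0=0.5, lik_h0=0.1, lik_h1=0.3)
# p_h0 comes out at 0.25: H0 remains quite plausible even though a
# test on the same data might well report a 'significant' P-value.
```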
Null hypotheses are sometimes rather silly (for example, it is highly unlikely that two treatments will have exactly the same effect), and it is more important to estimate effects. Scientists often finish their analysis by quoting a P-value, but this is not the right place to
stop. One still wants to know how large the effect is, and a confidence interval should be given where possible. I regard IDA as an important prelude to significance testing, both in generating sensible hypotheses in a first-time study, in checking or suggesting what secondary assumptions are reasonable, and more importantly for indicating that a test is unnecessary, inappropriate or otherwise undesirable for the following reasons:

1. The IDA indicates that the results are clearly significant or clearly not significant. For example, two large samples which do not overlap at all are 'clearly' significant. (In fact by a permutation argument, non-overlapping samples of sizes four or more are significantly different.) In contrast, I was once asked to test the difference between two sample means which happened to be identical! It is hard to think of a clearer non-significant result. The inexperienced analyst may have difficulty in deciding when a test result is 'obvious' and may find that intuition is strengthened by carrying out tests which are unnecessary for the more experienced user.

2. The IDA indicates that the observed effects are not large enough to be 'interesting', whether or not they are 'significant'. This may appear obvious but I have frequently been asked to test results which are of no possible consequence.

3. The IDA indicates that the data are unsuitable for formal testing because of data contamination, a lack of randomization, gross departures from necessary secondary assumptions, inadequate sample sizes, and so on. For example, a few gross outliers can 'ruin' the analysis, while a lack of randomization can lead to bias. Other potential problems include a skewed 'error' distribution and a non-constant 'error' variance. It is possible to overcome some problems by 'cleaning' the data to remove outliers, by transforming one or more variables or by using a nonparametric test.
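The permutation argument in reason 1 can be checked by complete enumeration. The two samples below are invented, and the sketch uses the difference in means as its test statistic.

```python
from itertools import combinations

def perm_test_pvalue(x, y):
    """Exact two-sided permutation P-value for a difference in means."""
    pooled = x + y
    n, nx = len(pooled), len(x)
    observed = abs(sum(x) / nx - sum(y) / len(y))
    extreme = total = 0
    # Enumerate every way of splitting the pooled data into groups
    # of the original sizes.
    for idx in combinations(range(n), nx):
        gx = [pooled[i] for i in idx]
        gy = [pooled[i] for i in range(n) if i not in idx]
        diff = abs(sum(gx) / nx - sum(gy) / (n - nx))
        if diff >= observed - 1e-12:  # tolerance for float comparison
            extreme += 1
        total += 1
    return extreme / total

# Two invented samples of size four that do not overlap at all:
x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 11.0, 12.0, 13.0]
p = perm_test_pvalue(x, y)
# Only the 2 most extreme of the C(8,4) = 70 assignments match the
# observed separation, so p = 2/70, comfortably below 0.05.
```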
A different type of problem arises through interim and multiple tests. If data are collected sequentially, it is tempting to carry out interim tests on part of the data, but this can be dangerous if overdone as there is increasing risk of rejecting a null hypothesis even when it is true. Similar remarks apply when a number of different tests are performed on the same data set. Suppose we perform k significance tests each at the α% level of significance. If all the null hypotheses are actually true, the probability that at least one will be rejected is larger than α, and as a crude approximation is equal to kα, even when the test statistics show moderate correlation (up to about 0.5). The Bonferroni correction suggests that for an overall level of significance equal to α, the level of significance for each individual test should be set to α/k.

Finally, we consider the general question as to whether it is sound to generate and test hypotheses on the same set of data. In principle a significance test should be used to assess a null hypothesis which is specified before looking at the data, perhaps by using background theory or previous sets of data. However, tests are often not performed in this 'proper' confirmatory way, and Cox and Snell (1981, section 3.7) discuss the extent to which the method of analysis should be fixed beforehand or allowed to be modified in the light of the data. While it is desirable to have some idea how to analyse data before you collect them, it is unrealistic to suppose that the analysis can always be completely decided beforehand, and it would be stupid to ignore unanticipated features noticed during the IDA. On the other hand, there is no doubt that if one picks out the most unusual feature of a set of data and then tests it on the same data, then the significance level needs adjustment as one has effectively carried out multiple testing. Thus it is desirable to confirm an effect on two or more data sets (see also sections 5.2 and 5.3.3), not only to get a valid test but also to get results which generalize to different conditions. However, when data are difficult or expensive to obtain, then some assessment of significance in the original data set can still be valuable. As this sort of thing is done all the time (rightly or wrongly!) more guidance is badly needed.
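The multiple-testing arithmetic above is easy to verify numerically for the idealized case of independent tests (the kα approximation also covers moderately correlated tests, which this sketch does not attempt):

```python
def familywise_error(k, alpha):
    """P(at least one rejection) for k independent tests when every
    null hypothesis is true."""
    return 1.0 - (1.0 - alpha) ** k

k, alpha = 10, 0.05
fwe = familywise_error(k, alpha)                # about 0.40, well above 0.05
crude = k * alpha                               # crude k*alpha approximation: 0.50
per_test = alpha / k                            # Bonferroni: test each at 0.005
fwe_bonferroni = familywise_error(k, per_test)  # about 0.049, back below alpha
```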
Further reading The role of the significance test is further discussed by many authors including Cox (1977), Cox and Snell (1981, section 4.7) and Morrison and Henkel (1970), the latter containing articles by different authors on a variety of problems. Carver (1978) goes as far as recommending the abandonment of all significance testing in favour of checking that results can be reproduced.

7.3 Choosing an appropriate procedure
Choosing the appropriate form of analysis can be difficult, especially for the novice statistician. Of course 'experience is the real teacher', but one aim of this book is to hasten this learning process. We assume that an IDA has already been carried out and that the conclusions are not yet 'obvious'. The first question is whether the form of the definitive analysis has been specified beforehand. If so, does it still look sensible after the first look at the data? Of course it is prudent to specify an outline of the analysis before collecting the data, but the details often need to be filled in after the IDA. In particular the data may exhibit unexpected features of obvious importance which cannot be ignored. Suppose instead that the exact form of the analysis has not been specified beforehand. How then do we proceed? The sort of questions to ask are:
1. What are the objectives? You should at least have some broad idea of what to look for.
2. What is the structure of the data? Are they univariate or multivariate? Are the variables continuous, discrete, categorical or a mixture?
3. What are the important results from the IDA?
4. What prior information is available? Have you tackled a similar problem before? If not, do you know someone else who has? Ask for help, either within your organization or perhaps at a neighbouring college or research centre.
5. Can you find a similar type of problem in a book? You need access to an adequate library.
6. Can you reformulate the problem in a way which makes it easier to solve? Can you split the problem into disjoint parts and solve at least some of them?
Some general comments are as follows:

(a) You should be prepared to try more than one type of analysis on the same data set. For example, if you are not sure whether to use a nonparametric or parametric approach, try both. If you get similar results, you will be much more inclined to believe them.

(b) It is a mistake to force an inappropriate method onto a set of data just because you want to use a method you are familiar with.

(c) You must be prepared to look at a problem in a completely different way to the one which is initially 'obvious'. In other words, a good statistician must be prepared to use what de Bono (1967) has called lateral thinking. For example, you may want to answer a different question to the one that is posed, or construct different variables to the ones that have been observed.

(d) You must be prepared to make ad-hoc modifications to a standard analysis in order to cope with the non-standard features of a particular problem. For example, a time-series/forecasting analysis may be helped by discarding the early part of the data if they have atypical properties.

(e) You can't know everything. Thus you are certain to come across situations where you have little idea how to proceed, even after an IDA. For example, you may not have come across censored survival data like that in Exercise B.5. There is nothing shameful about this! However, you must know where to look things up (books, journals, reference systems - see Chapter 9) and you must not be afraid to ask other statisticians for help.

(f) 'The analysis' should not be equated with 'fitting a model' (or with estimating the parameters of a given model). Rather 'the analysis' should be seen as a model-building exercise wherein inference has three main strands of model formulation, estimation and model checking. In a first-time study, the model builder's main problem is often not how to fit an assumed model - to which there is often a nice straightforward reply - but rather what sort of model to formulate in the first place. The general remarks in section 5.3 may be worth re-reading at this point. The importance of checking the fitted model also needs re-emphasizing. In particular, discrepancies may arise which question the original choice of model or analysis.

(g) Many models (and by implication the appropriate analysis) are formulated to some extent on the basis of an IDA. Following on from Chapter 6, we can now clarify this role for IDA with some examples. In many problems a general class of models is entertained beforehand using prior theoretical and empirical knowledge. Nevertheless, the IDA is still crucial in making sensible primary and secondary assumptions. For example, suppose one wants to fit a regression model. Then a scatter diagram should indicate the shape of the curve (linear, quadratic or whatever) as well as give guidance on secondary assumptions (normality? homogeneous variance? etc.). Thus IDA is vital in selecting a sensible model and inhibiting the sort of 'crime' where, for example, a straight line is fitted to data which are clearly non-linear. In a similar vein, a time-series analysis should start by plotting the observations against time to show up important features such as trend, seasonality, discontinuities and outliers. The time plot will normally be augmented by more technical tools such as correlograms and spectra, but is the first essential prerequisite to building a model.
As a third example, you may be interested in the inter-relationships between a set of variables which arise 'on an equal footing' and are thinking of performing a factor analysis or a principal component analysis. If, however, you find that most of the correlations are close to zero, then there is no structure to explain and little point in such an analysis. On the other hand, if all the correlations are close to one, then all the variables are essentially 'measuring the same thing'. Then the main derived variable will be something like a simple average of the variables. I have frequently come across both situations with psychological data and have had to point out that the results of a formal multivariate analysis are not likely to be informative.
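The 'all correlations close to one' case can be checked directly: the leading eigenvector of such a correlation matrix has nearly equal loadings, so the first principal component is essentially a simple average of the variables. The matrix below is invented, and power iteration stands in for a proper eigenvalue routine.

```python
def first_principal_component(corr, iters=200):
    """Leading eigenvector of a symmetric matrix by power iteration."""
    n = len(corr)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(corr[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# An invented correlation matrix with all correlations close to one:
corr = [[1.00, 0.90, 0.85],
        [0.90, 1.00, 0.88],
        [0.85, 0.88, 1.00]]
v = first_principal_component(corr)
# All three loadings come out nearly equal (about 0.58 each), so the
# first component is, up to scale, just the average of the variables.
```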
7.4 Different approaches to inference
We have hitherto adopted a rather pragmatic approach to the analysis of data, and have said little about the philosophical problems involved in assessing probabilities and making inferences. However, if you look at the statistical literature, you may find strong disagreement between different writers as to the 'best' way to approach inference. This section (which may be omitted at a first reading) makes some brief remarks on this question. Barnett (1982) distinguishes three main approaches to inference, namely the classical (or frequentist or sampling theory) approach, the Bayesian approach and decision theory. In the classical approach, the sample data are regarded as the main source of relevant information and the approach leans on a frequency-based view of probability in which the probability of an event is the proportion of times the event occurs 'in the long run'. Bayesian inference combines prior information with the sample data via Bayes theorem and leans on a subjective view of probability in which the probability of an event represents the person's degree of belief in the event. Decision theory aims to choose the 'best' decision from a prescribed list of possible actions. Each approach has its ardent supporters but I am pleased to note that many statisticians are adopting a flexible approach to statistical inference in which they refuse to 'label' themselves but rather see different approaches as relevant to different situations. The immense variety of statistical problems which can arise in practice, and the flexible interactive approach which is needed to solve them, can make the long-standing arguments between different schools of inference seem rather academic and irrelevant to practical data analysis. A single approach is not viable (e.g. Cox, 1986) and we need more than one mode of reasoning to cope with the wide variety of problems.
In particular my experience of real problems suggests that it is rarely feasible to incorporate prior knowledge via the Bayesian formalism, which requires one to know the priors for the unknown model parameters or to know the prior probability of a hypothesis being true. Even so, the rejection of a rigid Bayesian viewpoint can be combined with a recognition of the insight given by the approach, and the success of some empirical Bayes formulations. It is also worth recognizing the problems which can arise with the classical approach, as for example that the same sample outcome can lead to different conclusions depending on the sampling procedure employed. In fact these philosophical arguments, fascinating as they may be, have mainly concerned just one part of the statistical process, namely the fitting of an assumed model to a single data set, where 'the choice of model is usually a more critical issue than the differences between the results of various schools of formal inference' (Cox, 1981). This book has argued that we should see statistical inference in a wider context by recognizing three different stages of inference as model formulation, model fitting and model checking. Furthermore, in Chapter 2 we noted that in practice there may be several cycles of model fitting as defects in some original model are recognized, more data are collected, and the model is gradually improved. This circular iteration is another reason why the principles of model formulation and
checking are more important than controversies between different schools of inference, especially in their more sterile manifestations. The three main approaches to inference also fail to emphasize the distinction between looking at a brand-new set of data and looking at a series of similar data sets. Ehrenberg's (1982, 1984) approach to data analysis and model building emphasizes the desirability of establishing regular patterns across several samples of data, describing these patterns in a suitable way, and finding ways of highlighting departures from the model. An initial analysis needs to be compared with further studies so that knowledge is built up through empirical generalization leading to a model which Ehrenberg calls a law-like relationship. Of course no two studies can be made under identical conditions so that when a model is fitted to many data sets, it is much more important to assess the overall agreement than to question the fine details of a particular fit. Even so, systematic local deviations will be of interest. The point is that prior knowledge about empirical regularities should be used, when available, to prevent the analyst from 'reinventing the wheel' every time a new set of data is acquired. For example, in section 5.3.3 I described a general model of consumer purchasing behaviour which I have helped to construct with numerous data sets. Ehrenberg's approach has perhaps not received the attention it deserves. This may be partly because it does not lend itself to theoretical (i.e. mathematical) analysis and partly because many statisticians do spend time analysing more-or-less unique data sets. In my view statisticians need to be able to cope with all sorts of situations. Finally, we note that with messy data and unclear objectives, the problem is not how to get the optimal solution, but how to get any solution. Once again, philosophical problems seem rather irrelevant!
In order to set the above arguments in context, it may be helpful to close this section with a (very) brief review of the history of statistics. Before 1900, statistics was mainly restricted to what is now called descriptive statistics. Despite (or perhaps because of!) this limitation, statistics made useful contributions in many scientific areas. For example, Florence Nightingale's work as a nursing administrator is well known, but she may have had more lasting influence as the first female member of the Royal Statistical Society in setting up a crude but effective record of hospital admissions and fatalities. However, a steady increase in scientific experimentation generated a need for a more formal apparatus to analyse data sets which were often quite small in size. Thus statistical inference has developed throughout this century. One thinks particularly of the early biometrical school with Francis Galton and Karl Pearson and the new biometrical school of Sir Ronald Fisher. The latter worked at Rothamsted Agricultural Research Station in the 1920s and laid the foundations for much of modern statistics, particularly with his contributions to classical statistics and experimental design. His inferential ideas were developed by Jerzy Neyman and Egon Pearson amongst others. The Bayesian approach has developed mainly since the Second World War,
although the original papers by the Reverend Thomas Bayes on inverse probability were published over 200 years ago. Decision theory was stimulated by the pioneering work of Abraham Wald in the 1940s. Yet all this development was in a rather restricted direction so that, by the 1960s, most statistics textbooks were primarily concerned with the special inference problems involved in estimating and testing an assumed family of models for a single set of data. This narrow view of statistics unfortunately spread into teaching so that practical aspects of analysing data were often neglected. Within the last decade or so, practising statisticians have begun to question the relevance of some statistics courses and much published research. The latter has been described as 'theories looking for data rather than real problems needing theoretical treatment' (Moser, 1980). Partly as a result, there are signs that interest in data analysis has been rekindled, helped by improvements in computing facilities and by the publication of various books and papers (see the references listed in section 6.9). One aim of this book is to continue this revival, particularly through increased emphasis on IDA. Barnett (1982, p. 308) suggests that descriptive data-analytic methods can almost be regarded as yet another general approach to inference. However, IDA is useful in both data description and model formulation, and so I would prefer to see IDA as an essential ingredient of a broad composite form of inference, which does not have a narrow philosophical base, but which allows the analyst to adopt whichever combination of procedures is appropriate for a given problem.

Further reading A comprehensive study of comparative statistical inference is given by Barnett (1982). A broad 'ecumenical' approach to inference is also advocated by Box (1983). There is much historical information in the entertaining book by Peters (1987).
8 USING RESOURCES I
The computer
The vast majority of statistical analyses (outside the Third World) are now carried out using a computer. Thus the statistician needs to:

1. understand the important features of a computing system, which should be an integrated combination of hardware and software;
2. know at least one scientific programming language;
3. know how to use some of the most popular packages;
4. be able to construct and edit a data file.
The choice of computer and its accompanying software is clearly crucial, but there is surprisingly little guidance on statistical computing in statistical textbooks. This is partly because the scene is changing so rapidly. New computers and new software arrive continually and computing power is still growing rapidly. Today's desktop microcomputer is more powerful than the mainframe computer of a few years ago. Thus this chapter concentrates on general remarks which, it is hoped, will not become dated too quickly. A second reason for the dearth of helpful material is that advice on computing is sometimes regarded as being unsuitable for inclusion in statistics books and journals. This seems misconceived and I hope that more advice will become available. Some useful review articles at the time of writing are Nelder (1984) and Wetherill and Curram (1985), which also give further references. A computer enables much arithmetic to be carried out quickly and accurately. It also allows data to be looked at in several different ways and allows a wide range of graphical and diagnostic aids to be used. On the debit side, the wide availability of computer software has tempted some analysts to rush into using inappropriate techniques. Unfortunately most computer software is not yet intelligent enough to stop the user doing something stupid. The old adage 'GARBAGE IN → GARBAGE OUT' still holds good, and it must be realized that careful thought and close inspection of the data are vital preliminaries to complicated computer analyses. Many students still spend considerable time working through problems by hand or with a pocket calculator. While this can sometimes be helpful to understand a method fully, I suggest that students who have access to a computer need more help in learning how to interpret computer output. As
well as getting used to routine output, they need to be prepared for unexpected and incomprehensible messages. For example, some programs routinely print a statistic called the Durbin-Watson statistic without saying what it is or what a 'normal' value should be. Most users have no idea what it means! When choosing a computer, desirable features include versatility, easy data input and data storage facilities, high-quality graphics and a wide range of good software. However, most people have to use the in-house computer and this in turn specifies some aspects of the man/machine interface, such as the operating system. Thus statisticians may be limited to choosing or writing appropriate software and the remaining comments concentrate on this. Faced with a specific problem, the analyst must decide whether to use a computer package, to augment a published algorithm, or to write a special program which may be a one-off program, or a more general program which could be used with similar subsequent sets of data. If writing a program, include plenty of comment statements, especially if other people are going to use it, and test different combinations of input variables. It is helpful to write the program in modules which can be tested separately. The choice of language usually depends on a variety of in-house considerations such as the computer, compatibility with other programs, and portability. As regards algorithms, the reader should realize that good ones have been published to cover the vast majority of mathematical and statistical operations. It is bad practice to 'reinvent the wheel' by trying to write a set of instructions from scratch when a published algorithm could be used. For example, algorithms are printed in journals like Applied Statistics and the Computer Journal, and there is a comprehensive range of algorithms published in the USA by the Institute of Mathematical Statistics (IMSL routines) and in the UK by the Numerical Algorithms Group (NAG routines).
These cover such topics as interpolation, curve fitting, calculation of eigenvalues, matrix inversion, regression, ANOVA, and random number generation.
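The Durbin-Watson statistic mentioned above is a case in point: it is simple to compute from the residuals, yet output listing it rarely explains that values near 2 suggest no first-order serial correlation, while values near 0 or 4 suggest positive or negative serial correlation. A sketch with invented residual series:

```python
def durbin_watson(residuals):
    """d = sum of squared successive differences of the residuals
    divided by their sum of squares; d lies between 0 and 4."""
    num = sum((residuals[t] - residuals[t - 1]) ** 2
              for t in range(1, len(residuals)))
    den = sum(e * e for e in residuals)
    return num / den

# Invented residual series:
trending = [1.0, 0.8, 0.6, 0.4, -0.4, -0.6, -0.8, -1.0]  # positive serial correlation
alternating = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0]          # negative serial correlation
# durbin_watson(trending) comes out near 0 and
# durbin_watson(alternating) near 4; residuals with no serial
# correlation give values near the 'normal' value of 2.
```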
8.1 Choosing a package
Packages vary widely both in quality and in what they will do, and the choice between them may not be easy. Some have been written by expert statisticians, but others have not. Some print out plenty of warning, error and help messages, but others do not, and may indeed go 'happily' on producing meaningless results. Some are written for one small area of methodology while others are more general. Some are written for expert users, while others are intended for statistical novices. Unfortunately
packages are widely misused by non-statisticians, and we need packages which incline towards expert systems (see below) where the package will be able to say, for example, that a given set of data is unsuitable for fitting with such-and-such a model. A few packages may still be run in batch mode, where the analyst has to decide beforehand exactly what analyses are to be carried out, but most packages are now run in interactive mode, where the user can react to interim results. The command structure of an interactive package may be what is called 'command-driven' or 'menu-driven'. For the latter, a range of options is given to the user at each stage from which one is selected. This type of system is more suitable for the inexperienced user. Some packages allow completely automatic analyses of data, where the analyst abdicates all responsibility to the computer, but interactive analyses are usually preferable. Software needs to be appraised on various criteria which include statistical, computational and commercial considerations. Desirable features include:

1. flexible data entry and editing facilities;
2. good facilities for exploring data via summary statistics and graphs;
3. the procedures for fitting models should be statistically sound and include diagnostic checking;
4. the programs should be computationally efficient;
5. all output should be clear and self-explanatory; unambiguous estimates of standard errors should be given and excessive numbers of significant digits should be avoided;
6. the package should be easy to learn and easy to use;
7. the documentation and support should be adequate.
Other criteria include the cost of the package, the required equipment, the needs of the target user and the possibility of extending the package. Regarding computational aspects, I note that many packages were written when computing power was a major constraint. Nowadays the accuracy of algorithms is probably more important than speed and efficiency. It is difficult to assess numerical accuracy and efficiency except by running specially selected, and perhaps unusual, data sets. Although computing power is now enormous, it must be realized that numerical 'bugs' are still alive and 'kicking', and so users must beware, particularly when trying a new package. A set of test (or benchmark) data, where the answers are known, should always be run to see if a package can be trusted. As to documentation, manuals should be easy to read, include examples, and allow easy access for specific queries. Unfortunately, this is often not the case, and separate handbooks have been written for some packages to clarify the manuals. Established packages are often better supported, maintained and updated than some newer packages.
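The benchmark advice above can be made concrete. The sketch below is purely illustrative: the data set and its 'known' answers are invented, and the routines stand in for whatever package procedure is being tested.

```python
# Hypothetical benchmark check: run a routine on data whose summary
# statistics are known exactly, and compare against the known answers.

def mean(xs):
    return sum(xs) / len(xs)

def sample_sd(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

# Invented benchmark data with known mean 5 and known sample SD sqrt(32/7)
benchmark = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
known_mean = 5.0
known_sd = (32 / 7) ** 0.5

assert abs(mean(benchmark) - known_mean) < 1e-9
assert abs(sample_sd(benchmark) - known_sd) < 1e-9
print("benchmark passed")
```

The same idea applies to any procedure a package offers: if it cannot reproduce known answers on simple data, it cannot be trusted on real data.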
The needs of the target user may vary widely, and one can distinguish the following categories:

1. the expert statistician; he may want to use established methodology in consulting or collaborative work, or carry out research into new methodology
2. teachers and students of statistics
3. the statistical novice who understands little statistics but knows what he wants to do (even though it may be quite inappropriate!)
It is hard to design software to meet the needs of all these potential customers at the same time. As to statistical validity, it is important that professional statisticians fight to retain control of the development of statistical computing. In some packages the fight does not appear to have been won.

8.1.1 EXAMPLES OF USEFUL PACKAGES
New and revised packages for computers and microcomputers are being released continually and it is impossible to provide an up-to-date review of them all here. This section concentrates on well-established packages, written originally for mainframe computers, which at the time of writing are also becoming available on microcomputers. However, note that there are dangers in converting proven, but possibly batch-oriented, programs to interactive use on a microcomputer. Also note that the comments made below may become outdated as packages are improved. The reader should keep up-to-date by referring to reviews of computer software in various journals such as the American Statistician and selected computing journals. Your computer unit should also be able to provide further information, including manuals for any software which is already available locally.
(a) MINITAB
This is an interactive, command-driven package which covers such topics as exploratory data analysis, significance tests, regression and time-series analysis. It is very easy to use and is widely employed by both commercial and academic institutions. At my own university, we use it for teaching both introductory and intermediate courses. The expert will find it too restrictive for some purposes. (A brief summary is given in Appendix B.1.) The book by Ryan, Joiner and Ryan (1985) may be preferred to the package manual.
(b) GENSTAT This is a statistical programming language which allows the user to write
programs for a wide variety of purposes. It is particularly useful for the analysis of designed experiments, the fitting of linear models and regression. It also covers most multivariate techniques, time-series analysis and optimization. It allows much flexibility in inputting and manipulating data. Programs can be constructed from macros, which are blocks of statements for a specific task. A library of macros is supplied. The package is designed for the expert statistician and is not user-friendly. It can take several days to learn and so needs to be used regularly to make it worthwhile.
(c) BMDP77
This suite of programs covers most statistical analyses from simple data display to multivariate analysis. BMD stands for biomedical, and the programs were written by a group at UCLA (University of California at Los Angeles). Snell (1987) describes a series of examples using the package. This comprehensive package is good for the professional statistician and has good user support, but can be rather difficult to learn how to use.

(d) SPSS
SPSS denotes 'statistical package for the social sciences'. This package is probably used more than any other package, but mainly by non-expert users. Some statisticians view the package with some reserve. It produces 'answers' of a sort even when the 'question' is silly, and has a tendency to produce large quantities of output which may give the non-expert user little help in understanding the data. The package can be tricky to use, particularly at first, and there is concern that it is widely misused.
(e) GLIM
GLIM denotes 'generalized linear interactive modelling'. It is an interactive, command-driven package which is primarily concerned with fitting generalized linear models. This means that it covers regression, ANOVA, probit and logit analysis and log-linear models. The user must specify the error distribution, the form of the linear predictor and the link function. The package is powerful, but requires considerable statistical expertise, is not user-friendly, and can produce output which is difficult to interpret. Nevertheless, I do sometimes use it to fit generalized linear models as it can do things which are difficult to perform using other packages (except GENSTAT). It is relatively inexpensive. A brief summary is given in Appendix B.2.

(f) S
S is an interactive language designed for the expert statistician which covers a
wide range of procedures (see Becker and Chambers, 1984). It is particularly good for graphics.

(g) SAS
SAS denotes 'statistical analysis system'. This programming language is widely used in the USA and UK, particularly in pharmaceutical applications. I have heard good reports of the package but have no personal experience of it. It is rather expensive compared with other packages, and is not the sort of package which can be learnt in half a day. There are numerous more specialized packages available, such as TAS for time-series analysis, CLUSTAN for cluster analysis, and FORECASTMASTER for forecasting. The buyer needs to shop around.

8.1.2 EXPERT SYSTEMS
Many packages are unfriendly, or even dangerous, for the non-expert user, and so there is much current interest in the development of expert system packages which aim to mimic the interaction between the user and a statistical consultant. Such packages will attempt to incorporate the sort of questions and advice that would come from a statistician, in regard to clarifying objectives, exploring data, formulating a model, and choosing a method of analysis. Expert systems have been introduced in other areas such as medical diagnosis, but much research is still needed to implement them in statistics. In the meantime, we can hope that some expert system features will be routinely included in more packages. For example, the user of an interactive package could easily be asked questions such as 'Are you sure the following assumptions are reasonable?' followed by a listing of assumptions implicit in a given method.
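Such a feature is easy to sketch. The fragment below is a hypothetical illustration (not taken from any real package) of the flavour of an expert-system check: before a comparison of means, the program computes crude diagnostics and lists any assumptions that look doubtful.

```python
# Hypothetical expert-system-style check before a t-test: flag a small
# sample and gross skewness, the sort of warnings a consultant would give.

def check_t_test_assumptions(sample):
    warnings = []
    n = len(sample)
    if n < 10:
        warnings.append("sample size %d is small; normality matters more" % n)
    mean = sum(sample) / n
    sd = (sum((x - mean) ** 2 for x in sample) / (n - 1)) ** 0.5
    if sd > 0:
        # Crude moment estimate of skewness
        skew = sum(((x - mean) / sd) ** 3 for x in sample) / n
        if abs(skew) > 1:
            warnings.append("data look markedly skewed (skewness %.2f)" % skew)
    return warnings

for w in check_t_test_assumptions([1, 1, 2, 2, 3, 50]):
    print("WARNING:", w)
```

Even a check as crude as this would stop some of the worst misuses of packages by statistical novices.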
9 USING RESOURCES II
The library
Statisticians, like other professionals, cannot be expected to 'know everything'. However, they must know how to locate appropriate reference material, when necessary, and be able to understand it when found. A library is the most important source of knowledge, and, used wisely, can be a valuable aid in tackling statistical problems. Libraries contain a richer variety of material than is sometimes realized. As well as books, they usually contain a range of statistical journals, various abstract and index journals, as well as tables of official statistics (see Exercise G.6).
9.1 Books
Obviously libraries contain a range of textbooks and reference books. Some of the more important ones are listed in the references at the end of this book, and they can give valuable help when tackling difficult or unfamiliar problems. In order to find a book on a given topic, the book index may be used to look up a specific author, a specific title, or a keyword such as 'Forecasting' which forms part of the title of a book. In addition, it may be worth searching all books with a given code number, such as those coded under 'Time-series Analysis'.
9.2 Journals

Recent research in statistics includes developments in both the theory and practice of statistical methods as well as new reported applications in specific problem areas. This research is usually reported in one of the many statistical journals before finding its way into book form at a later date. It is worth having a 'browse' through these journals to see what sort of material is available. The most important journals are as follows:
The (British) Royal Statistical Society publishes three journals, namely
1. series A - concerned with statistics in society; also has good book reviews
2. series B - concerned with the theory of statistics
3. Applied Statistics (series C) - self-explanatory
The American Statistical Association publishes three journals, namely
4. Journal of the American Statistical Association - a mixture of theory, applications and book reviews
5. American Statistician - a readable quarterly journal including tutorial articles and reviews of computing software
6. Technometrics - published jointly with the American Society for Quality Control and concerned with the development and use of statistics in science and engineering
The International Statistical Institute publishes
7. International Statistical Review - mainly review articles
The Institute of Mathematical Statistics publishes
8. Annals of Statistics
9. Annals of Probability - both of which make contributions to theory
10. Statistical Science - readable review papers

Other journals include
11. Biometrics - with applications in the biological sciences, published by the Biometric Society
12. Biometrika - mainly theory

There are also numerous more specialized journals such as
13. Statistics in Medicine
14. Journal of Marketing Research
15. International Journal of Forecasting

Statistical applications (and occasionally new methodology) also appear in many non-statistical journals, but the general quality can be disturbingly low. With thousands of articles published each year, it is impossible to keep up with all statistical developments. The statistician must be judicious in choosing which journals to scan, and even more selective in deciding what is actually worth reading. As an academic, I look at all the above journals but many readers may wish to confine attention to, say, numbers 1, 3, 4 and 6 above. Abstract and index journals can augment this choice. These specialized journals do not contain written articles, but rather contain lists or brief summaries of articles in other journals. For example Statistical Theory and Methods Abstracts contains brief (e.g. half a page) summaries of papers in statistical journals. However, I have found the index journals much more useful in finding papers on particular topics. The Science Citation Index
contains a list, alphabetically by author, of all papers published in science journals (which includes many statistical journals). It actually consists of three journals, called the Citation Index, the Source Index and the Permuterm Index. Suppose you want to see if anyone has followed up the work of Dr X. Y. Z. Smith in 1970. You look up X. Y. Z. Smith in the Citation Index, which lists all recent papers which refer to the earlier paper. Full details of these recent papers may be found in the Source Index. The Permuterm Index enables you to look up keywords taken from the titles of recent papers. There is a separate Social Science Citation Index. There is also a Current Index of Statistics, which is solely concerned with statistical journals and enables the user to find, for example, all papers whose title includes the word 'Forecasting'.
9.3 Other sources of statistical information
Most countries have a national statistical office which issues official statistics in various publications at various intervals of time. Such statistics include demographic statistics (births, deaths, etc.), economic statistics (e.g. cost of living, number of unemployed), social statistics (leisure, etc.), as well as statistics on crime, housing, education, etc. In the UK, for example, the Central Statistical Office publishes a monthly digest and annual abstract of the more important statistics, the annual Social Trends, aimed at a general audience, as well as a variety of more specialized publications such as Economic Trends. They also publish a Guide to Official Statistics. In the USA, the US Bureau of the Census publishes a wide range of statistics. Various regional, foreign and international statistics are also usually available, and the Demographic Yearbook and Statistical Yearbook published by the United Nations are particularly helpful. The range of publications is so wide that there are specialized books giving guidance on sources of statistics.
10 COMMUNICATION I
Consultation and collaboration in statistical projects
Many statistical projects arise from requests for help from specialists in other disciplines. Such specialists will have varying degrees of expertise in statistics, and the ability to communicate effectively with them is most important. Such work is often called consulting, although Cox (1981) has questioned the overtones of this word and advocated greater emphasis on collaboration. It is certainly true that full participation in a collaborative study is more rewarding than giving a quick answer to a cookbook type question. However, in my experience the statistician must be prepared to give advice at a variety of levels, and the following remarks are perhaps concerned more with consulting. There are many reasons why consulting (or collaboration) may 'break down', and both the statistician and the client may be at fault. Rather than seek impartial statistical advice, the client may simply want the statistician to do his work (or 'number crunching') for him. Alternatively, the client may simply want a publishable P-value, the confirmation of conclusions which have already been drawn, or a 'miracle' if he realizes that the data are 'poor'. Sometimes the client gets what he asks for (even if it is not really what he needs) and sometimes he gets more than he asks for (if the statistician's probing questions lead to a reformulation of the problem). Unfortunately, the client often gets frustrated because the statistician does not understand the problem, or insists on solving a different one, or presents unintelligible conclusions, or takes the data away and is never seen again. As regards the last point, it is often difficult to be thorough and yet report on time. For this reason it is important to impose strict deadlines for student projects so that they learn to use time profitably and efficiently. Much of the following advice is 'obvious' common sense but is regrettably often ignored in practice. The statistician should:

1. Have a genuine desire to solve real problems and be interested in the field of application.
2. Know the 'customer' and meet his needs. Try to understand the client's problem and express the statistical conclusions in a language he can understand. Try to avoid statistical jargon.
3. Be prepared to ask probing questions. Do not take things for granted. Ascertain what prior information is available. There is a fine distinction between getting enough background material to understand the problem, and getting bogged down in unnecessary detail. Simple questions about apparently minor details may sometimes produce startling revelations or bring misunderstandings to light. Be prepared to interrupt your client (politely, but firmly) if you do not understand him, particularly if he starts using his own jargon.
4. Get the client to be precise about the objectives of the study. Sometimes the real objectives turn out to be quite different from the ones first stated. Instead of asking 'What is the problem?', it is better to say 'Tell me the full story', so as to find out what the problem really is.
5. Try to get involved at the planning stage so that the project becomes a collaborative study rather than a potentially unsatisfactory consultation. It is much harder (and perhaps impossible) to help if data have been collected without following basic statistical principles such as randomization. Regrettably, statisticians are often consulted too late. If data have already been collected, find out exactly how this was done.
6. Bear in mind that resource constraints play a large role in determining the practical solution.
7. Keep the design and analysis as simple as is consistent with getting the job done. Be willing to settle for a 'reasonably correct' approximate solution. (A partial solution to a problem is better than no answer at all!)
8. Be prepared to admit that you cannot answer some problems straight away and may need time to think about the problem and consult a library or other statisticians.
It is arguable that the overall 'success' of many statisticians is largely determined by their effectiveness as statistical consultants. Unfortunately, this is difficult to teach except by experience, although more colleges are providing guidance on consulting in applied statistics courses. One possibility is to set problems which are deliberately incomplete or misleading (e.g. Exercises B.3 and F.1) and wait for the students to raise the necessary queries. It is also instructive to point out illustrative statistical errors in the literature, so that students realize that a critical attitude is judicious. In some studies, particularly sample surveys and clinical trials, the statistician should also be prepared for the ethical problems which may arise. For example, when is it ethical to prescribe a new medical treatment, what
questions is it reasonable to ask, and to whom does the resulting data set 'belong'? A general code of ethics has been published by the International Statistical Institute (1986), which sets out the obligations to society, to funders and employers, to colleagues, and to human subjects. Recently there has been increased interest in expert systems (see Chapter 8) which attempt to incorporate the functions of a statistician into computer packages so as to obviate the need for a statistical consultant. Some progress along these lines is obviously desirable so as to avoid the misuse of packages by amateur statisticians, but it seems doubtful whether expert systems ever will (or should) take over the statistician's work completely.
Further reading
Further advice on consulting is given by Sprent (1970), Jones (1980), Joiner (1982a; 1982b) and Rustagi and Wolfe (1982). Hand and Everitt (1987) give a series of examples showing the statistical consultant in action (see especially Tony Greenfield's entertaining reminiscences).
11 COMMUNICATION II
Effective report writing
A good statistician must be able to communicate his work effectively both verbally and by means of a written report. There is little point in 'getting the right answer' unless it can be understood by the intended recipients. An oral presentation allows discussion and feedback and is suitable for interim presentations, particularly when supported by appropriate visual aids. However, we concentrate here on giving general guidelines for writing a clear, self-contained report, which is the normal method of communication. This can be a difficult and sometimes tedious job, and it may appear more inviting to get on with the next job. However, written documentation of your work is vital and should be done before memory fades. The three main stages in writing a report may be described as preparation, writing and revision, and they are considered in turn.
11.1 Preparation
Before you start writing a report, you should collect together all the facts and ideas about the given topic which you want to include in the report. Sketch a brief outline of all the different points which need to be included, and plan the structure of the report. This involves getting the material into the right order and dividing it into sections (and possibly subsections) which should be numbered consecutively. Give each section a suitable heading; common titles include 'Introduction', 'Description of the experiment', 'Discussion of results' and 'Conclusions'.
11.2 Writing the report
Statisticians have widely different abilities to express themselves in writing. Yet, by following simple general principles and acquiring good technique, the reader should be able to produce a report of a reasonable standard. Before you start to write, consider carefully who is going to read the report, what their level of knowledge is likely to be, and what action, if any, you want the
report to precipitate. The following general guidelines should be helpful:

1. Use simple, clear English. In particular, short words should be preferred to long words with the same meaning. Try to avoid sentences which are longer than about 20-25 words, by splitting long sentences in two if necessary.
2. If you cannot think of exactly the right word, a reference book of synonyms or a thesaurus may be helpful.
3. Use a dictionary to check spelling.
4. Add sufficient punctuation, particularly commas, to make the structure of each sentence clear.
5. Important words or phrases may be underlined or written in italics or in CAPITALS to make them stand out.
6. The hardest step is often to 'get started' at all. The first word is usually the most difficult to write. A thorough preparation (see above) is very helpful, as there may be little distinction between jotting down preliminary ideas and the first draft. Try writing the first draft as if telling a friend about your work in your own words. You can then polish the style later. It is much easier to revise a draft (however bad) than to write the draft in the first place.
7. It is often easier to write the middle sections of the report first. The introduction and conclusions can then be written later.
8. The introduction should provide a broad, general view of the topic. It should include a clear statement of objectives and indicate how far they have been carried out. Some background information should be given but avoid details which belong later in the report.
9. The conclusions should summarize the main findings and perhaps recommend appropriate action.
10. A brief summary or abstract at the beginning of the report is often useful.
11. The summary, introduction and conclusions need to stand on their own and be particularly clear, as some readers may only look at these sections.
12. Graphs and tables form an important part of many reports. They require careful preparation but sadly are often poorly produced. Section 6.5 gives general advice on presentation. In brief, they should have a clear, self-explanatory title, so that they are understandable when viewed alone, and they should be numbered so that they can be referred to in the text. The units of measurement should be stated. The axes of graphs should be labelled. Tables should not contain too many (or too few) significant digits. Tables of computer output are often unsuitable for direct inclusion because of poor formatting. It is sometimes easier to revise a table 'by hand' rather than rewrite the program to get clearer output. Even better, a large table can often be summarized into a much smaller table.
13. Appendices are useful for detailed material which would break up the flow of the main argument if included in the main text. This includes detailed mathematics and large tables of computer output (if they are really necessary at all). The latter should be summarized in the main text.
14. The technical details in the main sections should be clear, concise and mathematically sound. Define any notation which is introduced. Give sufficient background theory but do not try to write a book!
15. A bibliography is often desirable, and references to books and papers should be given in full so that they can easily be found. A reference should include the author's name(s) and initials, date of publication, title, publisher (for a book) or name of journal, volume and page numbers (for a paper), as illustrated in the references at the end of this book. Get into the habit of recording the full details of a reference at the time you use it.
16. The report should be dated, given a title, and say who has written it.
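The advice on significant digits is easy to automate before figures go into a report. The helper below is a hypothetical illustration, not part of any particular package; the numbers are invented.

```python
# Hypothetical helper: round every figure in a summary to a fixed number
# of significant digits before it appears in a report table.
from math import floor, log10

def round_sig(x, sig=3):
    """Round x to `sig` significant digits."""
    if x == 0:
        return 0
    return round(x, sig - 1 - floor(log10(abs(x))))

raw = {"mean": 40.273846, "sd": 2.381907, "n": 23}
tidy = {k: round_sig(v) for k, v in raw.items()}
print(tidy)
```

Three significant digits is usually ample for a report; raw computer output with eight or more digits conveys spurious precision.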
11.3 Revision
When you have finished the first draft of the report, you will find that substantial revision is usually necessary. First, ask yourself if the arrangement of sections (the structure) is satisfactory. Second, examine the text in detail. Does it read easily and smoothly? Is it clear? Is there unnecessary repetition? Have any important points been omitted? Are the graphs and tables clear? The report can be improved substantially by generally 'tightening' the style. You need to be as brief as possible while still including all important details. You should also double-check any numbers given in the text, particularly if they have been copied several times. Do not underestimate the time required to revise a report, which often exceeds the time taken to write the first draft. When you think you have finished, put the report aside for at least 24 hours, and then read it through in one sitting. Try to imagine yourself as a reader who is seeing it for the first time. You may even find a sentence which you yourself cannot understand! It is amazing how many obvious errors are left in reports because they have not been properly checked. You will be
judged on what you have written and not on what you meant to write! Before getting a report typed, try and get someone else (your supervisor?) to read it. Do not be surprised, angry or discouraged if you are advised to make extensive changes, as this is the fate of most draft reports (including the first version of this book!). Most secretaries know little statistics, and need guidance with formulae containing unusual symbols, such as Greek letters. Obviously fewer typing errors will occur if you write legibly. After typing, check the typescript carefully and in particular check formulae symbol by symbol. Remember that if a mistake gets through, then it is your fault and not the secretary's!
Further reading
There are many specialized books on report writing. It is worth looking at the classic text by Sir Ernest Gowers (1977) entitled The Complete Plain Words. Other useful references are Cooper (1976), Wainwright (1984) and Ehrenberg (1982, Chapter 18).
12 Numeracy
As well as learning statistical techniques, the aspiring statistician needs to develop a sense of numeracy. This is rather hard to define and cultivate, but involves general numerical 'common sense', some knowledge of probability and statistics, sound judgement, and the ability to 'make sense' of a set of numbers. This section develops two other aspects of numeracy, namely the need to maintain a healthy scepticism about other people's statistics, and the importance of being able to handle the sort of misconceptions which typically arise in the minds of the general public.
12.1 The need for scepticism
The sensible statistician should be wary of other people's statistics. In particular, it is unwise to believe all official statistics, such as government statements that the probability of a nuclear meltdown is only one in 10 000 years (remember Chernobyl!). As another example, the number of unemployed is a politically explosive weapon and it might be thought that an index like this is above suspicion. In fact it is quite difficult to define exactly what is meant by an unemployed person. Should we include persons employed part-time, people who are unfit to work, married women who do not want to go out to 'work', and so on? In the United Kingdom, the method of calculating the number of unemployed has been changed several times in recent years. In each case the change has miraculously reduced the number of unemployed! A healthy scepticism also means that you should look out for obvious mistakes in books and journals. Misprints and errors do get published. For example, on reading that a group of 23 medical patients have average age 40.27 years with range 24.62 years and standard deviation 2.38 years, you should see straight away that the latter two statistics are almost certainly incompatible, despite the apparent two-decimal-place accuracy. Could the 'standard deviation' be the standard error of the mean? Unfortunately it is easy to quote many other alarming mistakes from published work. In fact we sometimes learn more from mistakes (our own and other
people's) than we do from formal textbooks and lecture courses. Here is a random selection of howlers to look out for.

1. The silly statistic: if, for example, a mean is calculated which is outside the range of the given data, there is obviously an arithmetical error.
2. The silly graph: a graph with no title or unlabelled axes.
3. The paired comparison test: with paired difference data, one of the commonest errors in the literature is to carry out a two-sample test on means rather than the appropriate paired difference test. The treatment effect is then likely to be swamped by the differences between pairs.
4. The silly regression: fitting a straight line to data which are clearly non-linear.
5. Another silly regression: with some computer packages, regression through the origin gives a higher coefficient of determination (R²) than fitting a model with a constant term, even though the latter contains one more parameter. The user may therefore be tempted to use regression through the origin all the time! This problem arises when sums of squares are calculated about the origin rather than about the mean.
6. The silly χ² test: a frequent howler is to carry out the χ² goodness-of-fit test on a table of percentages, or proportions, rather than on the table of count frequencies.
7. The silly experimental design: examples are too painful to give! Finding out how the data were collected is just as important as looking at the data.
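Howler 5 is easily demonstrated numerically. The sketch below uses invented data and fits a line with and without an intercept by ordinary least squares; the through-origin 'R²', computed about the origin, comes out far larger than the honest R² even though the fit is worse.

```python
# Invented data: y is essentially flat but far from the origin, so a line
# through the origin fits badly, yet its "R-squared" looks impressive.
x = [1, 2, 3, 4, 5]
y = [10, 9, 11, 10, 10]
n = len(x)

# Least squares with an intercept: y = a + b*x
xbar, ybar = sum(x) / n, sum(y) / n
b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
     / sum((xi - xbar) ** 2 for xi in x))
a = ybar - b * xbar
sse = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))
r2_mean = 1 - sse / sum((yi - ybar) ** 2 for yi in y)    # honest R-squared

# Least squares through the origin: y = c*x, with "R-squared"
# computed about the origin rather than about the mean
c = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi ** 2 for xi in x)
sse0 = sum((yi - c * xi) ** 2 for xi, yi in zip(x, y))
r2_origin = 1 - sse0 / sum(yi ** 2 for yi in y)

print(round(r2_mean, 3), round(r2_origin, 3))   # origin "R2" is far larger
```

The two quantities are simply not comparable: one measures improvement over the mean, the other improvement over zero.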
This list could easily be extended. The reader must be wary of statistical analyses carried out by other people and not assume they are correct. It is encouraging that some journals are attempting to improve the presentation of statistical results. In particular, the British Medical Journal has laid down guidelines (Altman et al., 1983) on what to include when writing papers. These guidelines could usefully be imitated by other journals.
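The medical example earlier in this section (range 24.62 years against a 'standard deviation' of 2.38 years) can be checked with a crude rule of thumb: for roughly normal data of moderate size, the range is usually about three to five standard deviations. The limits used below are an informal heuristic of my own choosing, not an exact test.

```python
# Heuristic sanity check: flag a quoted range/SD pair whose ratio falls
# outside a plausible band (2 to 6 here, a crude rule of thumb).

def range_sd_suspect(n, data_range, sd, lo=2.0, hi=6.0):
    ratio = data_range / sd
    return not (lo <= ratio <= hi), ratio

suspect, ratio = range_sd_suspect(n=23, data_range=24.62, sd=2.38)
print(suspect, round(ratio, 1))   # a ratio above 10 is almost certainly wrong
```

Re-running the check with the 'standard deviation' reinterpreted as a standard error (so SD is about 2.38 multiplied by the square root of 23, roughly 11.4) gives a ratio near 2.2, which is at least plausible.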
12.2 Dealing with popular misconceptions
Statistical reasoning can affect nearly all aspects of our lives. Thus, apart from dealing with 'professional' statistics, it is important for statisticians to be sufficiently numerate to cope with the flood of statistics and pseudo-statistics from the media and the sort of misconceptions embraced by the general public. Given so many statistics of such variable quality, it is unfortunate that
many people either believe everything they hear or (perhaps even worse) come to believe in nothing statistical. In order to understand the sort of misconceptions which can and do arise, and learn how to spot 'daft' statistics, it is a good idea to read the light-hearted books by Huff (1959, 1973). Some other good books of a similar style include Hooke (1983) and Hollander and Proschan (1984). A few brief examples must suffice here.

One popular misconception is that anyone who handles numbers must be a statistician. This is like saying that anyone who handles money must be a banker. In fact most statistical 'lies' are produced by non-statisticians.

Another popular misconception is that random variability is somehow abnormal. In fact one of the first lessons in statistics is that random variability is perfectly normal and needs to be measured and understood so as to allow the estimation of other more interesting effects. The general public needs to learn that the average can be overworked and must usually be supplemented by some measure of spread. For example, people are different! It is much better for a doctor to tell a patient that recovery will take between, say, three and seven days, than to say that it will take five days on average. Anyone who takes longer than five days will get worried!

A third misconception is that any data are good data. In fact the statistician knows that data collection is just as important as data analysis, and that stopping the first 50 people to emerge from the local supermarket on a Saturday morning will not give a random sample of consumers.

There are many popular misconceptions regarding probability. For example, television commentators often use the word 'certain' when they mean 'probable' (e.g. the footballer was certain to score until he missed!). The law of averages is also widely misquoted.
To take just one example, a newspaper reported a team captain as having lost the toss four times in a row before important matches and therefore 'by the law of averages' was more likel y to win the next toss! The final misconception we mention here is that all official statistics are 'true'. This has already been covered in seetion 12.1 We dose with three examples of the sort of potentially misleading or plain daft statistics wh ich is foisted on the general public. 1.
2. 3.
'More accidents happen in the horne than anywhere else.' This is an example of 'sowhat' statistics. Most people spend most of their time at horne and so the result is hardly surprising. We can't all move away from horne to avoid accidents! 'Sales are up 10%.' This is an exanwle of the unmentioned base. If sales were poor before, then they are not much better now. '67% of consumers prefer brand x.' This is an example of the unmentioned sampie size. If a sampie size three has been taken, the result is hardly convincing.
No doubt the reader can easily add to this list (e.g. Exercise A.3). The main lesson is to be vigilant at all times.
12.3 Tailpiece
Having emphasized the importance of numeracy, I close paradoxically by pointing out that many of the most important things in life are difficult or impossible to measure; for example beauty, joy, love, peace, and so on. Thus cost-benefit analysis, which was very fashionable a few years ago, has been described as 'a procedure by which the priceless is given a price ... an elaborate method of moving from preconceived notions to foregone conclusions' (Schumacher, 1974). The good statistician must be prepared to use his subjective judgement where necessary to modify the results of a formal statistical analysis.
SUMMARY
How to be an effective statistician
We conclude Part I with a brief summary of the qualities needed by an effective statistician and of the general principles involved in tackling statistical problems. The attributes needed by a statistician are a mixture of technical skills and beneficial personal qualities. A 'good' statistician needs to be well-trained in both the theory and practice of statistics and to keep up with the statistical literature. The theory should include an understanding of probability models and inference, and some mathematical ability is useful. The practice should include plenty of experience with real data in a variety of disciplines so as to develop a 'feel' for the realities of statistical life. In particular, the statistician should appreciate the importance of all the different stages of a statistical investigation, and understand the general principles involved in tackling statistical problems in a sensible way. The latter may be summarized as follows:

1. When a study is proposed, formulate the problem in a statistical framework. Clarify the objectives carefully. Ask questions to get sufficient background information on the particular field of application. Search the literature if necessary.

2. The method of data collection needs to be carefully scrutinized (especially if it has not been designed by a statistician). It is important to realize that real data are often far from perfect.

3. Look at the data. An initial data analysis (IDA) is vital for both data description and model formulation. Be aware that errors are inevitable when processing a large-scale set of data, and steps must be taken to deal with them.

4. Choose and implement an appropriate method of analysis at an appropriate level of sophistication. Rather than ask 'What technique shall I use here?' it is better to ask 'How can I summarize these data and understand them?'. Rather than think of the analysis as just 'fitting a model', there may be several cycles of model formulation, estimation and checking. If an effect is 'clear', then the exact choice of analysis procedure may not be crucial. A simple approach is often to be preferred to a complicated approach, as the former is easier for a 'client' to understand and is less likely to lead to serious blunders.

5. Be ready to adapt quickly to new problems and be prepared to make ad-hoc modifications to existing procedures. Be prepared, if necessary, to extend existing statistical methodology. Be prepared to use lateral thinking, as the 'best' solution to a problem may involve looking at it in a way which is not immediately apparent.

6. If the problem is too difficult for you to solve, do not be afraid to consult a colleague or an appropriate expert.

7. Finally, the statistician must be able to write a convincing report, which should be carefully planned, clearly written and thoroughly checked.
In addition to technical knowledge and the ability to make technical judgements, the ideal statistician would also have the following more personal qualities:

(a) be an effective problem solver
(b) be thorough and yet report on time
(c) be open-minded and display a healthy scepticism
(d) be able to collaborate with other people, and be able to communicate both orally and in writing
(e) be versatile, adaptable, resourceful, self-reliant and have sound common sense (a tall order!)
(f) be able to use a computer and a library effectively
(g) be numerate, particularly in being able to 'make sense' of a set of messy data, yet also understand that some important facets of life cannot be expressed numerically and hence understand what statistics can and cannot do.
The paragon of virtue depicted here is very rare, but should still provide us with a target to aim at.
PART II
Exercises
This part of the book presents a varied collection of exercises, ranging widely from fairly small-scale problems through to substantial projects involving real-life, complex, large-scale data sets, as well as exercises in collecting data, using a library and report writing. The exercises are designed to illustrate the general principles of Part I, rather than the detailed use of techniques as in most other textbooks. With a few exceptions, the exercises are generally posed in the form of a real-life problem rather than as a statistical or mathematical exercise. While some of the analyses turn out to be 'standard', the method is usually not specified in the question and so part of the problem is to translate the given information into a statistical formulation. In particular, the exercises illustrate the importance of (a) clarifying objectives and getting background information, (b) finding out how the data were collected, (c) carrying out an initial data analysis (IDA), (d) formulating an appropriate model, (e) modifying standard procedures to fit particular problems, and (f) presenting the results clearly.

I hope these exercises give the reader experience in thinking about and then tackling real practical problems, and thus help to develop the practical skills needed by a statistician. I do not claim that the examples are a 'random sample' of real statistical problems, but they are more representative of 'real life' in my experience than those in many other statistics books. Each exercise is designed to make at least one important point. Except where otherwise indicated, the data are real and peculiarities have generally been left in. Such peculiarities illustrate the importance of finding out how data were collected and the fact that early consultation with a statistician might have led to a better designed study and hence to 'better' data. Some simplification has occasionally been made to get exercises of a reasonable length, suitable for student use, but this has been kept to a minimum. (The dangers of oversimplification and of giving insufficient background information in illustrative examples are noted by Preece, 1986.) I also note that there are some non-standard exercises (e.g. Exercises A.1-A.4, C.1, C.2) which may appear elementary, but which are designed to be entertaining and yet make the reader think and ask questions.
Instructions

The exercises have been divided into sections with self-explanatory titles and use a question-and-answer format. Within each section, the solutions (or helpful comments) are presented in order at the end of the section so that the reader cannot see both problem and solution at the same time. This is quite deliberate. You should read the problem, think about it, and hopefully tackle it before reading the solution. DO NOT CHEAT!!!

The reader is advised to examine carefully each data set before embarking on a sophisticated analysis. This should always be done anyway. In some cases the outcome of the IDA is so clear that a definitive analysis is unnecessary (or even undesirable). In other cases, the outcome of the IDA may not be clear or the data structure may be so complicated that a more sophisticated analysis is necessary. Even so the IDA should be helpful in choosing the method of analysis.
The solutions

These are not meant to be definitive but are often left open-ended to allow the reader to try out different ideas. Some readers may query some of my 'comments' and alternative solutions can doubtless be found. (We all like to think we can analyse data a little better than everyone else.) I have perhaps over-emphasized IDA and under-emphasized the use of standard statistical techniques, but I regard this as a desirable reaction to the current overuse of over-complicated methods. Knowing lots of theory may make it easier for a statistician to go off in the wrong direction, and I certainly hope the exercises demonstrate the vast potential for simple ideas and techniques.
Relation with other practical work

It is of interest to compare the objectives of these exercises with other types of practical work used in statistics teaching. In introductory statistics courses, most exercises are what I call technique-oriented or drill exercises, in that the student is told more or less exactly what to do. Such exercises are an essential step in learning techniques, but do not prepare students for the possibility that data may not arrive on a statistician's desk with precise instructions as to the appropriate form of analysis. Thus most of the exercises in this book are what I call problem-oriented exercises, where the reader has a problem to solve and has to decide for himself how to tackle it. Instead of being asked 'Here are some data. Apply technique X', the reader may be given a real problem and simply asked to 'analyse the data'. Sometimes the problem and background information are not given explicitly and have to be 'wheedled
out' by the statistician. The selection of an appropriate form of analysis for a given problem can be difficult, and so many of the problems have dual titles describing both the subject matter and a hint about the method of analysis.

Alternative types of practical work are reviewed by Anderson and Loynes (1987, Chapter 3). They include experiments where students collect their own data (section H), case studies and projects. In case studies, the students are taken step-by-step through a real problem, and are also told what went wrong. As regards projects (e.g. Kanji, 1979), many teaching establishments now require students to undertake a fairly substantial piece of work to make the student work by himself, find out how to use a library effectively, carry through an investigation from start to finish and write up a clear report. I would argue that a series of smaller-scale projects, as in Part II here, will give a student a wider variety of experience than a single large-scale project. However, I also suggest that at least one of the larger exercises needs to be tackled individually, analysed thoroughly and written up properly. Further exercises of a related kind may be found, for example, in Cox and Snell (1981) and Anderson and Loynes (1987).
A Descriptive statistics
Descriptive statistics is an important part of the initial examination of data (IDA). It consists primarily of calculating summary statistics and constructing appropriate graphs and tables. It may well be regarded by the reader as the most familiar and easiest part of statistics. Thus the exercises in this section are intended to demonstrate that descriptive statistics is not always as easy as might be expected, particularly when data exhibit skewness and/or outliers. It is not always clear what summary statistics are worth calculating, and graphs and tables are often poorly presented. Further relevant exercises include Exercises B.1, B.2 and B.3, which demonstrate the power of a good graph, and Exercises B.4 and B.6, which give further hints on presenting a clear table.
Exercise A.1
Descriptive statistics - I
The 'simplest' type of statistics problem is to summarize a set of univariate data. Summarize the following sets of data in whatever way you think is appropriate.

(a) The marks (out of 100 and ordered by size) of 20 students in a mathematics exam:
30, 35, 37, 40, 40, 49, 51, 54, 54, 55
57, 58, 60, 60, 62, 62, 65, 67, 74, 89

(b) The number of days work missed by 20 workers in one year (ordered by size):
0, 0, 0, 0, 0, 0, 0, 1, 1, 1
2, 2, 3, 3, 4, 5, 5, 5, 8, 45

(c) The number of issues of a particular monthly magazine read by 20 people in a year:
0, 1, 11, 0, 0, 0, 2, 12, 0, 12
1, 0, 0, 0, 0, 12, 0, 11, 0, 0
(d) The height (in metres) of 20 women who are being investigated for a certain medical condition:
1.52, 1.60, 1.57, 1.52, 1.60, 1.75, 1.73, 1.63, 1.55, 1.63
1.65, 1.55, 1.65, 1.60, 1.68, 2.50, 1.52, 1.65, 1.60, 1.65

Exercise A.2
Descriptive statistics - II
The following data are the failure times in hours of 45 transmissions from caterpillar tractors belonging to a particular American company:
4381  3953  2603  2320  1161  3286  2376  7498  3923
9460  4525  2168  6922   218  1309  1875  1023  1697
4732  3330  4159  2537  3814  2157  6052  2420  5556
 309  1295  3266  6914  1288  1038  7683  6679  4007
5085  3699  5539  1711  3168  2217  6142  4839  5931
Display a sensible stem-and-leaf plot of the data and from it calculate the median and interquartile range. Without calculating the mean, say whether it is greater than or smaller than the median. Construct the box plot of the data. Do you think it is preferable to display these data using (a) a histogram, (b) a stem-and-leaf plot or (c) a box plot? Describe the shape of the distribution of failure times and indicate any observations which you think may be outliers. Find a transformation such that on the transformed scale the data have an approximately symmetric distribution, and comment again on possible outliers.

Exercise A.3
Interpreting 'official' statistics
One class of 'descriptive statistic' is formed by the wide variety of national and international statistics. They are apparently easy to interpret, or are they?

(a) Discuss the following statements:
(i) In recent years, Sweden has had one of the highest recorded suicide rates. This indicates problems in the Swedish way of life.
(ii) UK official statistics show that women giving birth at home are more at risk than women giving birth in hospital. This indicates that all babies should be delivered in hospital.
(iii) On comparing death rates from tuberculosis in different states of the USA, it is found that Arizona has the worst record in recent years. This indicates that Arizona is an unhealthy place to live.

(b) The railways of Great Britain have always set high standards of safety. A spate of serious accidents in 1984 suggested that safety standards may have deteriorated. Is there any evidence from the data given in Table A.1 that standards are declining?
Table A.1  Railway accidents on British Rail, 1970-83

                          Collisions                      Derailments
        No. of     Between      Between                              No. of
        train      passenger    passenger and                        train miles
Year    accidents  trains       freight trains   Passenger  Freight  (millions)

1970      1493       3             7                20        331       281
1971      1330       6             8                17        235       276
1972      1297       4            11                24        241       268
1973      1274       7            12                15        235       269
1974      1334       6             6                31        207       281
1975      1310       2             8                30        185       271
1976      1122       2            11                33        152       265
1977      1056       4            11                18        158       264
1978      1044       1             6                21        152       267
1979      1035       7            10                25        150       265
1980       930       3             9                21        107       267
1981      1014       5            12                25        109       260
1982       879       6             9                23        106       231
1983      1069       -            16                25        107       249

- Value not available.
Source: Chief Inspecting Officer of Railways Annual Reports, Department of Transport. Figures given may be slightly influenced by changes in statistical treatment, but they have been allowed for where possible and are thought not to affect significantly the conclusions.
Exercise A.4
Lies, damned lies, ...
Capital punishment for murder in the United Kingdom was provisionally abolished in 1965. Permanent abolition followed in 1969, but only after a lively debate. During this debate a national newspaper published the graph
Figure A.6  A graph from a newspaper, plotting the number of murders (scale 150-200) and the number of violent crimes (scale 15 000-30 000) between 1963 and 1968.
shown in fig. A.6 to support the case for retaining the death penalty. Comment on the graph.
NOTES ON EXERCISE A.1
Data sets may be summarized graphically, numerically, and/or verbally. The choice of appropriate summary statistics depends in part on the shape of the underlying distribution. Assessing shape is therefore an important first step in data description. This can be achieved by drawing a bar chart (for a discrete variable), a histogram (for a continuous variable), or a stem-and-leaf plot.

(a) The exam marks data are the only straightforward data set. The histogram in fig. A.1 reveals a reasonably symmetric bell-shaped distribution. As the data are approximately normally distributed, suitable summary statistics are the sample mean, x̄ = 55 marks, as a measure of location, and the sample standard deviation, s = 14 marks, as a measure of spread. The histogram, mean and standard deviation together provide a satisfactory summary of the data. Even in this simple case, there are pitfalls. Did you give too many significant
Figure A.1  Histogram of exam mark data (frequency against exam mark, 20-100).
3 | 0
3 | 5 7
4 | 0 0
4 | 9
5 | 1 4 4
5 | 5 7 8
6 | 0 0 2 2
6 | 5 7
7 | 4
7 |
8 |
8 | 9

Stem units = 10 marks; leaf units = 1 mark.

Figure A.2  A stem-and-leaf plot of exam mark data.
figures in the summary statistics? When the data are recorded as integers there is no point in giving the summary statistics to more than one decimal place. As to the graph, you may have chosen a different width for the class intervals or given a different plot. For comparison, fig. A.2 shows a stem-and-leaf plot with the smaller class interval width of five marks. This graph is like a histogram on its side, with the first significant figure of each observation in the stem (on the left) and the second significant figure in the leaves (on the right). The digits within each class interval (or leaf) have been ordered by size. The reader can decide for himself which graph is the clearer.

(b) The bar chart in fig. A.3 shows that the frequency distribution of 'days work missed' is severely skewed to the right. Note the break in the horizontal axis between 8 and 45 days. The sample mean, x̄ = 4.2 days, is highly influenced by the largest observation, namely 45 days. The latter is an outlier, but there is no reason to think it is an error. By their nature, skewed distributions give outliers in the long 'tail'. The median (1.5 days) or mode (0 days) or the 5% trimmed mean (2.2 days) may be a better measure of location. The standard deviation is of little help as a descriptive measure of spread with such skewed data. The range is 45 days, but this is also unhelpful when most of the observations lie between 0 and 5. The interquartile range, namely 4.7 days, may be preferred. Even so, summary statistics have limited value and the bar chart is probably the best way of summarizing the data.
Figure A.3  Bar chart of absence data (frequency against number of days work missed).
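The summary statistics quoted for the absence data can be checked in a few lines of Python. This is only a sketch using the standard library: the 5% trimmed mean here simply drops one observation from each end of the ordered sample, and the sample variance is included with an eye on the note below about fitting a distribution.

```python
# Days of work missed by 20 workers (data set (b) of Exercise A.1).
days = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1,
        2, 2, 3, 3, 4, 5, 5, 5, 8, 45]

n = len(days)
ordered = sorted(days)

mean = sum(days) / n                                  # pulled up by the outlier, 45
median = (ordered[n // 2 - 1] + ordered[n // 2]) / 2  # middle pair, n even

k = int(0.05 * n)                # 5% trimmed mean: drop k = 1 observation
trimmed = ordered[k:n - k]       # from each end of the ordered sample
trimmed_mean = sum(trimmed) / len(trimmed)

# Sample variance, for comparison with the mean later on.
variance = sum((x - mean) ** 2 for x in days) / (n - 1)

print(mean, median, round(trimmed_mean, 1), round(variance, 1))
```

This reproduces the figures above: a mean of about 4.2 days, a median of 1.5 days and a trimmed mean of about 2.2 days, with a variance (about 97) far in excess of the mean.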
(Note that the standard deviation may be of some use, not as a descriptive measure, but in finding a probability distribution to fit the data should this be desired. As the variance (s² = 97.2) is so much larger than the mean, the Poisson distribution is (surprisingly?) not appropriate. Instead a distribution called the negative binomial may be appropriate.)

(c) These data have not been ordered and are impossible to read 'by eye'. The first
task is to construct the frequency distribution of 'number of issues read', which is plotted in fig. A.4. There are two modes, at zero and twelve. The bimodal U-shape is even more difficult to summarize than a skewed distribution. Most people do not read the magazine at all, but a substantial minority read nearly every issue. The sample mean and standard deviation are potentially very misleading. The proportion of 'regular' readers (5/20 or 25%) is a useful statistic, but it may be sensible to describe the data in words rather than with summary statistics.
Figure A.4  Bar chart of magazine data (frequency against number of issues seen, 0-12).
(d) There was a misprint in these data which I have deliberately included in the problem. Did you spot it and deal with it? The observation 2.50 is not only an outlier, but almost certainly an error. It is probably meant to be 1.50, but you may prefer to omit it completely. As the remaining data are reasonably symmetric, they may be described by the sample mean and standard deviation. Another feature of the data is that, although they appear to be measured to two decimal places, inspection of the final digits suggests that some numbers, such as 1.65, keep recurring. A little detective work suggests that the observations have been measured to the nearest inch and converted to metres. Did you spot this?

Moral

Descriptive statistics is not always straightforward. In particular the calculation of summary statistics depends on the shape of the distribution and on a sensible treatment of errors and outliers.

NOTES ON EXERCISE A.2
This exercise concentrates on investigating the shape of the underlying distribution of a given set of data. As highlighted by Exercise A.1, this is an important aspect of descriptive statistics, and a useful preliminary to the calculation of summary statistics. The data in this example range from 218 hours to 9460 hours. A suitable width for each class interval in a stem-and-leaf plot is 1000 hours. This will give ten class
intervals, which is about 'right' for a sample of size 45. The plot is easy to construct by hand, or using a package, and is shown in fig. A.5. The leaves contain the second significant digit of each observation, ordered within each class interval. The length of each leaf is proportional to the frequency (cf. the corresponding histogram). The median can easily be found from fig. A.5 as the twenty-third observation, namely 3300 (or 3286 to be exact from the raw data). The interquartile range is from
0 | 2 3
1 | 0 0 1 3 3 3 7 7 9
2 | 2 2 2 3 4 4 5 6
3 | 2 3 3 3 7 8 9
4 | 0 0 2 4 5 7 8
5 | 1 5 6 9
6 | 0 1 7 9 9
7 | 5 7
8 |
9 | 5

Stem units = 1000 hours; leaf units = 100 hours.

Figure A.5  A stem-and-leaf plot, with one-digit leaves, of transmission failure times.
2200 (the twelfth observation) to 5100 (the thirty-fourth observation). As the distribution is skewed to the right, the mean exceeds the median.

The box plot has a 'box' from 2200 to 5100, with whiskers to the two extreme observations, namely 218 and 9460. The median is marked on the box at 3300. The box plot loses much information and is generally unsuitable for a single sample (whereas a set of box plots can be helpful for comparing several groups of observations, as in Exercise B.2).

The shape of the distribution is 'clearly' non-normal, but rather skewed to the right with a mode around 2000 hours. It is hard to assess outliers in the long 'tail' of a skewed distribution. Do you think the largest observation is an outlier? Although it is 2000 hours longer than the second highest observation, the long tail means there is no reason to think it is an error.

Using a computer, it is easy to try different transformations. In this case logarithms over-transform the data (making them negatively skewed), while square roots are 'about right' for giving approximate symmetry. In the histogram of the transformed data, the largest observation no longer looks 'outlying', whereas you may be surprised to find instead that the two smallest observations now look somewhat separated from the lower tail. In the absence of any external explanation why a square root transformation should be meaningful, the two lowest observations are still likely to be genuine, and I would say that there are no obvious outliers here. It is to be hoped that we now have a good understanding of the distribution of failure times, namely that it is skewed to the right with median 3300 hours and an interquartile range of 2900 hours.
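With a computer, the comparison of transformations just described takes only a few lines. The sketch below uses the moment coefficient of skewness as a rough symmetry measure; that choice is a convenience for illustration, not the only possible one.

```python
import math

# Transmission failure times in hours (Exercise A.2).
times = [4381, 3953, 2603, 2320, 1161, 3286, 2376, 7498, 3923,
         9460, 4525, 2168, 6922, 218, 1309, 1875, 1023, 1697,
         4732, 3330, 4159, 2537, 3814, 2157, 6052, 2420, 5556,
         309, 1295, 3266, 6914, 1288, 1038, 7683, 6679, 4007,
         5085, 3699, 5539, 1711, 3168, 2217, 6142, 4839, 5931]

def skewness(xs):
    """Moment coefficient of skewness: near 0 for a symmetric sample."""
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / n)
    return sum((x - m) ** 3 for x in xs) / (n * s ** 3)

raw = skewness(times)
via_sqrt = skewness([math.sqrt(x) for x in times])
via_log = skewness([math.log(x) for x in times])
print(round(raw, 2), round(via_sqrt, 2), round(via_log, 2))
```

The raw data give a clearly positive coefficient, the logarithms a negative one (over-transformed), and the square roots a value close to zero, matching the judgement above.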
NOTES ON EXERCISE A.3
(a) These three statements demonstrate how easy it is to misinterpret official statistics.

In (i) the word 'recorded' is important, and this popular 'fact' (or myth?) about Sweden arises partly because the Swedes are more honest at recording suicide than many other nationalities. One or two countries actually have a zero suicide rate because suicide is not allowed as a legal cause of death! Any attempt to make inferences about the Swedish way of life would be very dangerous.

In (ii) it is important to realize that there are two categories of home births, the planned and unplanned. The former have low risk because of careful selection. The latter have high risk because they include premature and precipitate deliveries, etc., which will continue to occur at home whatever the official policy may be. Aside from ethical considerations, such as freedom of choice, statistics are needed on planned home deliveries in order to assess this question.

As to (iii), Arizona is actually a healthy place for people who already have chest complaints to go to. Such people go to Arizona in sufficient numbers to boost artificially the death rate.

(b) These data are typical of many data sets in that a general sense of numeracy is more important for their interpretation than formal statistical training. As they are nationwide figures, there are no sampling problems, but there is still natural variability from year to year. The time series are too short to justify using formal time-series techniques. The wise analyst will begin by spending a few minutes just looking at the table. We see that the number of train accidents has reduced, but then so have train miles, albeit by a smaller percentage. Collisions are few in number (thankfully!) and appear fairly random (are they approximately Poisson?). Freight derailments are substantially down, but passenger derailments, although much smaller in number, are not. If there is a decline in safety standards, it is not immediately obvious.
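The aside about the collision counts being 'approximately Poisson' can be given a quick numerical check: for a Poisson distribution the mean equals the variance, so a variance-to-mean ratio near 1 is consistent with that model. A sketch, using the legible yearly counts of collisions between passenger trains from Table A.1:

```python
# Yearly counts of collisions between passenger trains, 1970-83
# (Table A.1; one year's figure is not available and is omitted).
collisions = [3, 6, 4, 7, 6, 2, 2, 4, 1, 7, 3, 5, 6]

n = len(collisions)
mean = sum(collisions) / n
variance = sum((x - mean) ** 2 for x in collisions) / (n - 1)

# Ratio near 1: consistent with Poisson variation;
# a much larger ratio would suggest clustering of accidents.
ratio = variance / mean
print(round(mean, 2), round(variance, 2), round(ratio, 2))
```

The ratio comes out close to 1, which supports the informal 'fairly random' reading of the counts, though with only thirteen values this is indicative rather than a formal test.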
Is any further analysis indicated? There is no point in attempting any form of inference, but it may be possible to clarify the table somewhat. Using the guidelines of section 6.5.2 of Part I, together with common sense, suggests (a) some rounding, (b) calculating column averages (but not row averages!), (c) reordering the columns, and (d) grouping years to give fewer rows. The revised table is shown below. The general trends are now clearer. However, it is probably just as important to notice what data are not given and to ask questions to get further background information. For example, no information is given here on accident severity, although the total number of passenger fatalities is certainly of interest and must be recorded somewhere. No doubt further queries will occur to you.
Moral

When examining official statistics, find out exactly what has been recorded and what has not been recorded. Take care in summarizing the data. The preparation of clear tables needs particular care.
Table A.1 (revised)  Statistics related to accidents on British Rail, 1970-83

                                     Collisions                      Derailments
          No. of     No. of       Between      Between
          train      train miles  passenger    passenger and
Years     accidents  (millions)   trains       freight trains   Passenger  Freight

1970-74     1350        275         5.2           8.8              21        250
1975-79     1110        265         3.2           9.2              25        160
1980-83      970        250         3.7          11.5              23        107

Average     1160        265         4.0           9.7              23        177
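The grouped averages above can be reproduced with a short script. This sketch handles just the train-accident and train-mile columns, rounding to the nearest 10 and the nearest 5 respectively to match the revised table.

```python
# Two columns of Table A.1: train accidents and train miles, 1970-83.
accidents = [1493, 1330, 1297, 1274, 1334, 1310, 1122,
             1056, 1044, 1035, 930, 1014, 879, 1069]
miles = [281, 276, 268, 269, 281, 271, 265,
         264, 267, 265, 267, 260, 231, 249]

# Row groups of the revised table: 1970-74, 1975-79, 1980-83.
GROUPS = ((0, 5), (5, 10), (10, 14))

def grouped_means(values):
    return [sum(values[a:b]) / (b - a) for a, b in GROUPS]

def to_nearest(x, unit):
    return unit * round(x / unit)

acc = [to_nearest(m, 10) for m in grouped_means(accidents)]
mil = [to_nearest(m, 5) for m in grouped_means(miles)]
print(acc, mil)  # [1350, 1110, 970] [275, 265, 250]
```

Note that the last group covers only four years, so the group averages must divide by the actual group length rather than a fixed five.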
NOTES ON EXERCISE A.4
The general principles of good graph design are given in section 6.5.3 of Part I. Figure A.6 is an appalling example of a graph deliberately designed to mislead the reader. The two scales have been chosen with false origins so that the line for murders appears to have a much steeper slope than that for violent crimes. In fact the percentage increase in murders is less than that for violent crimes. Other queries will no doubt occur to the reader. Why have the years 1963 and 1968 been chosen? Why draw a straight line between the two end years when intermediate figures are available? How is murder defined?

Criminal statistics can be difficult to interpret because of changes in the law and in the attitude of the courts. Murders are particularly hard to define. Many people who are indicted for murder are subsequently acquitted or found guilty of a lesser offence, such as manslaughter. Are these figures for murder convictions or what? The total number of persons indicted for murder in England and Wales over the years 1957-76 are officially recorded as follows:

104, 104, 115, 124, 152, 135, 154, 158, 186, 247
223, 278, 249, 298, 309, 319, 361, 368, 394, 433

The overall increase in murder is, of course, very worrying, but it is difficult to detect evidence that the removal of capital punishment at about the middle of this period has had an appreciable effect.
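One simple way of looking for a change is to compare the average annual growth rate in the two halves of the period. The sketch below fits a least-squares line to the logarithms of the counts in each decade; the split at 1966/67 and the log-linear fit are illustrative choices, not the only possible analysis.

```python
import math

# Persons indicted for murder in England and Wales, 1957-76.
indicted = [104, 104, 115, 124, 152, 135, 154, 158, 186, 247,
            223, 278, 249, 298, 309, 319, 361, 368, 394, 433]

def annual_growth(counts):
    """Least-squares slope of log(count) against year number,
    converted to a proportional increase per year."""
    n = len(counts)
    ys = [math.log(c) for c in counts]
    xbar = (n - 1) / 2
    ybar = sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in enumerate(ys))
    sxx = sum((x - xbar) ** 2 for x in range(n))
    return math.expm1(sxy / sxx)

before = annual_growth(indicted[:10])   # 1957-66
after = annual_growth(indicted[10:])    # 1967-76
print(f"{100 * before:.1f}% vs {100 * after:.1f}% per year")
```

Both decades show a steady increase of very roughly 7-9% a year, with no obvious jump in the second half, which is consistent with the comment above.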
Moral

As the media bombard the general public with statistics, tables and graphs, it is vital for numerate people to be on the lookout for the misuse of statistics and expose any 'horrors' which occur.
B Exploring data
Given a new set of data, it is usually wise to begin by 'exploring' them. How many variables are there, and of what type? How many observations are there? Are there treatments or groups to compare? What are the objectives? The initial examination of data (IDA), of which descriptive statistics is a part, is important not only to describe the data, but also to help formulate a sensible model. It is wise to explore data whether or not you think you know what analysis technique should be used. It is particularly important to check the assumptions made in carrying out a significance test, before actually doing it. The examples in this section demonstrate how to cope with a varied selection of data sets, where the method of analysis may or may not be apparent. There are so many different types of data which can arise that it is quite unrealistic to suppose that you can always assess the appropriate method of analysis before closely examining the data.
Exercise B.1
Broad bean plants - a two-sample t-test?
The data given below show the (scaled) concentration of a certain chemical in 10 cut shoots of broad bean plants and in 10 rooted plants.

Cut shoots:     53  58  48  18  55  42  50  47  51  45
Rooted plants:  36  33  40  43  25  38  41  46  34  29
Summarize the data in whatever way you think is appropriate. From a visual inspection of the data, do you think there is a significant difference between the two sample means? Carry out a formal test of significance to see if the observed difference in sample means is significantly different from zero at the 1% level. Do any other questions occur to you?
Exercise B.2
Comparing teaching methods/ANOVA?
In an experiment to compare different methods of teaching arithmetic, 45 students were divided randomly into five equal-sized groups. Two groups
were taught by the currently used method (the control method), and the other three groups by one of three new methods. At the end of the experiment, all students took a standard test and the results (marks out of 30) are given in Table B.1 (taken from Wetherill, 1982, p. 263). What conclusions can be drawn about differences between teaching methods?

Table B.1  Test results for 45 students

Group A (control):   17  14  24  20  24  23  16  15  24
Group B (control):   21  23  13  19  13  19  20  21  16
Group C (praised):   28  30  29  24  27  30  28  28  23
Group D (reproved):  19  28  26  26  19  24  24  23  22
Group E (ignored):   21  14  13  19  15  15  10  18  20

Exercise B.3
Germination of seeds/ANOVA?
Suppose that a biologist comes to you for help in analysing the results of an experiment on the effect of water concentration (or moisture content) on the germination of seeds. The moisture content was varied on a non-linear scale from 1 to 11. At each moisture level, eight identical boxes were sown with 100 seeds. Four of the boxes were covered to slow evaporation. The numbers of seeds germinating after two weeks were noted and are shown in table B.2.

Table B.2  Numbers of seeds germinating

                          Moisture content
                  1     3     5     7     9     11
Boxes uncovered  22    41    66    82    79     0
                 25    46    72    73    68     0
                 27    59    51    73    74     0
                 23    38    78    84    70     0
Boxes covered    45    65    81    55    31     0
                 41    80    73    51    36     0
                 42    79    74    40    45     0
                 43    77    76    62     *     0

* Denotes missing observation.

The biologist wants your help in analysing the data, and in particular wants to carry out an analysis of variance (ANOVA) in order to test whether the number of seeds germinating in a box is affected by the moisture content and/or by whether or not the box is covered. Analyse the data by whatever method you think is sensible. Briefly summarize your conclusions.
Exercise B.4
Hair and eye colour/family income and size/two-way tables
(a) The data in table B.3 show the observed frequencies of different combinations of hair and eye colour for a group of 592 people (Snee, 1974). Summarize the data and comment on any association between hair and eye colour.

Table B.3  Observed frequencies of people with a particular combination of hair and eye colour

             Hair colour
Eye colour   Black  Brunette  Red  Blond
Brown          68      119     26      7
Blue           20       84     17     94
Hazel          15       54     14     10
Green           5       29     14     16
(b) The data in table B.4 show the observed frequencies of different combinations of yearly income and number of children for 25 263 Swedish families (Cramer, 1946). Summarize the data and comment on any association between family income and family size.

Table B.4  Observed frequencies of Swedish families with a particular yearly income and family size

                     Yearly income (units of 1000 kroner)
Number of children    0-1      1-2     2-3     3+     Total
0                    2161     3577    2184    1636     9558
1                    2755     5081    2222    1052    11110
2                     936     1753     640     306     3635
3                     225      419      96      38      778
≥4                     39       98      31      14      182
Total                6116    10928    5173    3046    25263

Exercise B.5
Cancer in rats/truncated survival times
An experiment was carried out on rats to assess three drugs (see data set 3 of Cox and Snell, 1981, p. 170). Drug D is thought to promote cancer, drug X is thought to inhibit cancer, while P is thought to accelerate cancer. Eighty
rats were divided at random into four groups of 20 rats and then treated as follows:

Group   Drugs received
I       D
II      D, X
III     D, P
IV      D, X, P
The survival time for each rat was noted and the results are given in table B.8. A post mortem was carried out on each rat, either when the rat died, or at 192 days when the experiment was truncated. The letter N after a survival time means that the rat was found not to have cancer. Summarize the data and try to assess whether the three drugs really do have the effects as suggested.

Table B.8  Survival times in days for four groups of rats

Group I; D
 18N   57   63N  67N   69   73   80   87   87N   94
106   108  133  159   166  171  188  192  192   192

Group II; DX
  2N    2N   2N   2N    5N   55N  78   78   96   152
192N  192N 192N 192N  192N 192N 192N 192  192   192

Group III; DP
 37    38   42   43N   43   43   43   43   48    49
 51    51   55   57    59   62   66   69   86   177

Group IV; DXP
 18N   19N  40N  56    64   78  106  106  106   127
127   134  148  186   192N 192N 192N 192N 192N  192N

Exercise B.6
Cancer deaths/two-way tables of proportions
The data shown in table B.10 are taken from a cohort study into the effect of radiation on the mortality of survivors of the Hiroshima atom bomb (see data set 13 of Cox and Snell, 1981, p. 177). This exercise requires you to carry out an initial examination of the data and report any obvious effects regarding the incidence of death from leukaemia and death from 'all other cancers'. You are not expected to carry out a 'proper' inferential analysis even if you think this is desirable. In practice, an analysis like this should be carried out in collaboration with appropriate medical experts, but here you are simply expected to use your common sense. You should mention any queries which come to mind and state any questions which you would like to ask the medical experts, as well as presenting your analysis of the data.
Table B.10  Number of deaths from leukaemia and from other cancers during the period 1950-59 for the given sample size alive in 1950

                                       Radiation dose in rads
Age in 1950                  Total      0     1-9   10-49  50-99  100-199  200+

5-14   Leukaemia                14      3       1      0      1       3      6
       All other cancers         2      1       0      0      0       0      1
       Alive 1950            15286   6675    4084   2998    700     423    406

15-24  Leukaemia                15      0       2      3      1       3      6
       All other cancers        13      6       4      2      0       0      1
       Alive 1950            17109   7099    4716   2668    835     898    893

25-34  Leukaemia                10      2       2      0      0       1      5
       All other cancers        27      9       9      4      2       1      2
       Alive 1950            10424   4425    2646   1828    573     459    493

35-44  Leukaemia                 8      0       0      1      1       1      5
       All other cancers       114     55      30     17      2       2      8
       Alive 1950            11571   5122    2806   2205    594     430    414

45-54  Leukaemia                20      9       3      2      0       1      5
       All other cancers       328    127      81     73     21      11     15
       Alive 1950            12472   5499    3004   2392    664     496    417

55-64  Leukaemia                10      2       0      2      2       1      3
       All other cancers       371    187      80     57     22      17      8
       Alive 1950             8012   3578    2011   1494    434     283    212

65+    Leukaemia                 3      1       1      0      0       0      1
       All other cancers       256    119      59     48     13      10      7
       Alive 1950             4862   2245    1235    935    232     123     92
Exercise B.7
Vaccinating lambs/twosample ttest?
The growth of lambs can be seriously affected by parasitic diseases, which may depend in part on the presence of worms in the animals' intestines. Various vaccines have been proposed to reduce worm infestation. An experiment was carried out to investigate these vaccines, full details of which are given by Dineen, Gregg and Lascelles (1978). For financial reasons, each vaccine was investigated by a fairly small experiment in which unvaccinated lambs acted as controls and vaccinated lambs formed the treatment group. Each lamb was injected with worms and then samples were taken some weeks later. For one vaccine, the data were as shown in table B.12. Is there any evidence that vaccination has reduced the number of worms present?
Table B.12  Worms present in samples taken from vaccinated and unvaccinated lambs

                 Sample size   Numbers of worms (×10³)
Control group         4        22, 21.5, 30, 23
Treatment group       8        21.5, 0.75, 3.8, 29, 2, 27, 11, 23.5

Exercise B.8
Ankylosing spondylitis/paired differences
Ankylosing spondylitis (AS) is a chronic form of arthritis which limits the motion of the spine and muscles. A study was carried out at the Royal National Hospital for Rheumatic Diseases in Bath to see if daily stretching of tissues around the hip joints would help patients with AS to get more movement in their hip joints. Thirty-nine consecutively admitted patients with 'typical' AS were allocated randomly to a control group receiving the standard treatment or to the treatment group receiving additional stretching exercises, in such a way that patients were twice as likely to be allocated to the 'stretched' group. The patients were assessed on admission and then three weeks later. For each patient several measurements were made on each hip, such as the extent of flexion, extension, abduction and rotation. This study is concerned just with flexion and lateral rotation, where all measurements are in degrees and an increase represents an improvement. The data are presented in table B.13. No more details of the data-collection method will be given here. The statistician would normally analyse the data in collaboration with a physiotherapist, but here you are expected to use your common sense. The question as posed by the hospital researchers was: 'Has the stretched group improved significantly more than the control group?' Your report should attempt to answer this question as well as to describe any other analysis which you think is appropriate and to discuss any queries which come to mind. This is a fairly substantial data set which will take some time to analyse. As a rough guide, I suggest:

• thinking and looking time: 1 hour
• preliminary examination of the data: 1-2 hours
• further analyses as appropriate: 2 hours
• writing up the report: 3 hours
Exercise B.9
Effect of anaesthetics/one-way ANOVA
A study was carried out at a major London hospital to compare the effects of four different types of anaesthetic as used in major operations. Eighty
Table B.13  Measurements of flexion and rotation in degrees before and after treatment for 39 patients (or 78 hips). An odd-numbered row shows observations for the right hip of a particular patient and the next even-numbered row shows observations for the patient's corresponding left hip
Flexion Before
(a) Control group 1 100 2 105 3 114 4 115 5 123 6 126 7 105 8 105 9 120 10 123 11 95 12 112 13 108 14 111 15 108 16 81 17 114 18 112 19 103 20 105 21 113 22 112 23 116 24 113 (b) Treatment group 1 125 2 120 3 135 4 135 5 100 110 6 7 122 8 122 9 124 124 10 11 113 12 122 13 130 14 105 15 123 16 125
Rotation
After
Before
After
100 103 115 116 126 121 110 102 123 118 96 120 113 109 111 111 121 120 110 111 118 115 120 121
23 18 21 28 25 26 35 33 25 22 20 26 27 15 26 14 22 26 36 33 32 27 36 4
17 12 24 27 29
36 30 30 14 25 13 24 26 41 36 35 31 30 2
126 127 135 135 113 115 123 125 126 135 120 120 138 130 127 129
25 35 28 24 26 24 22 24 29 28 22 12 30 30 33 34
36 37 40 34 30 26 42 37 29 31 38 34 35 27 42 40
27
33 24 30 27
Table B.13 continued Flexion Hip No.
17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44
45 46 47 48 49 50 51 52 53 54
Rotation
Before
After
Before
After
123 126 115 125 120 105 120 110 127 125 128 124 115 116 106 111 113 105 100 99 113 126 102 94 116 112 122 122 118 115 115 118 78 77 129 127 127 132
128 128 120 120 135 127 130 120 135 130 138 136 124 124 110 116 114 115 105 88 119 126
18 26 20 22 7 10 25 25 35 25 30 33 26 40 20 16 10 11 14 2 48 35 22 13 22 20 30 33 22 25 32 12 35 30 36 33 10 39
27 32
110
115 111 114 128 128 127 124 125 127 121 126 129 132 127 139
40 40
20 28 28 26 39 29 43 45 34 42 21 17 12 14 23 10 50 35 30 24 25 25 35 35 30 27 39 25 34 32 44
25 15 36
patients undergoing a variety of operations were randomly assigned to one of the four anaesthetics and a variety of observations were taken on each patient both before and after the operation (see the Prelude). This exercise concentrates on just one of the response variables, namely the time, in minutes, from the reversal of the anaesthetic until the patient opened his or her eyes. The data are shown in table B.15. Is there any evidence of differences between the effects of the four anaesthetics?

Table B.15  Time, in minutes, from reversal of anaesthetic till the eyes open for each of 20 patients treated by one of four anaesthetics (A-D)
B
C
3 2 1 4 3 2 10 12 12 3 19 1 4 5 1
6 4 1 1 6 2 1 10 1 1 1 2
3 5 2 4 2
t 7
5 1
12
10
2 2 2 2 1 3 7
D
4 8 2 3 2 1 3 6 6 13 2 1 3 4 8 4 5 1 10 1 2 1 0 8 10 1 2 2 3 4 9 1 0
NOTES ON EXERCISE B.1

This problem would have been more typical of both real life and statistics textbook questions if the reader were simply asked to carry out a test of significance. This would make the problem much easier in one way (the mechanics of a two-sample test are straightforward) but much harder in other ways. Let us see what happens if we dive straight in to carry out the 'easy' significance test. There are two groups of observations to compare, but no natural pairing, and so a two-sample t-test would appear to be appropriate. Let μC, μR denote the population mean concentrations for cut shoots and for rooted plants, and let x̄C, x̄R denote the corresponding sample means. Then to test

H0: μC = μR
against

H1: μC ≠ μR

we calculate

t_obs = (x̄C − x̄R) / (s √(1/10 + 1/10))

which is distributed as t18 if H0 is true, where s² denotes the combined estimate of within-group variance, which is assumed to be the same in both groups. We find x̄C = 46.7, x̄R = 36.5, s = 9.1 and t_obs = 2.51, which is not quite significant at the 1% level for a two-tailed test. Thus we fail to reject H0. The above analysis treats the data mechanically. Despite its unsatisfactory nature, it is unfortunately what many people have been taught to do. Let us now return to the question and start, as suggested, by summarizing the data. It is helpful to order the two groups by size and calculate the means (or medians?) and standard deviations (or ranges?) of each group. Even the most cursory examination of the data reveals an obvious outlier in the first group, namely 18. This can be highlighted by drawing a pair of box plots or simply tabulating the group frequencies as follows:
Concentration    15-19  20-24  25-29  30-34  35-39  40-44  45-49  50-54  55-59
Cut shoots         1      -      -      -      -      1      3      3      2
Rooted plants      -      -      2      2      2      3      1      -      -
The entire analysis depends crucially on what, if anything, we decide to do about the outlier. If possible, we should check back to see if it is an error. If the outlier is ignored, then a visual inspection suggests there is a significant difference, and this is confirmed by a revised t-value of 5.1 on 17 degrees of freedom. This is significant at the 1% level, giving strong evidence to reject H0. Alternatively, the outlier may indicate a different shaped distribution for cut shoots, in which case a t-test is not strictly appropriate anyway. I leave it to the reader to judge the best way to analyse and present the results. My own view is that there is evidence of a difference between means with or without the outlier. Do any other questions occur to you? I suggest there are fundamental queries about the whole problem which are potentially more important than the analysis questions considered above. First, no objective is stated in the problem. Presumably we want to compare the chemical concentration in cut shoots with that in rooted plants. Is there any other background information? Have similar tests been carried out before? Why should the population means be exactly equal? Is it more important to estimate the difference in group means? Are the samples random, and, if so, from what populations? These questions should really be answered before we do anything! (So this exercise is a little unfair!)
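For readers who want to check the arithmetic, the pooled two-sample t statistic can be reproduced in a few lines of Python. This is my own sketch, not part of the original text, and the variable names are mine:

```python
from math import sqrt
from statistics import mean, variance

cut = [53, 58, 48, 18, 55, 42, 50, 47, 51, 45]      # cut shoots
rooted = [36, 33, 40, 43, 25, 38, 41, 46, 34, 29]   # rooted plants

def pooled_t(x, y):
    """Two-sample t statistic using the pooled within-group variance."""
    nx, ny = len(x), len(y)
    # Pooled estimate of the common within-group variance
    s2 = ((nx - 1) * variance(x) + (ny - 1) * variance(y)) / (nx + ny - 2)
    return (mean(x) - mean(y)) / sqrt(s2 * (1 / nx + 1 / ny))

print(round(pooled_t(cut, rooted), 2))  # 2.51, on 18 degrees of freedom
# Dropping the outlier 18 raises the statistic to about 5 on 17 df
print(round(pooled_t([c for c in cut if c != 18], rooted), 2))
```

The second call shows at a glance how much a single outlier can move the test statistic.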
Moral  Get background information before starting a statistical analysis. Note that even a single outlier can have a crucial effect on the results of the analysis.

NOTES ON EXERCISE B.2
There are five groups of observations to compare and so the 'standard' method of analysis is a one-way ANOVA. Any computer package should give an F-ratio of 15.3 on 4 and 40 degrees of freedom. This is significant at the 1% level, giving strong evidence that real differences exist between the group means. This could be followed by a least-significant-difference analysis or by a more sophisticated multiple comparisons procedure to show that group C achieves the best results and group E the worst. Note that these comparison procedures use the estimate of residual variance provided by the ANOVA, so that an ANOVA is not just used for testing. However, you will understand the data better if the ANOVA is preceded by an IDA. The latter is essential anyway to check the assumptions on which the ANOVA is based. First calculate summary statistics for each group. With small equal-sized groups, the range may be used to measure spread.

Summary statistics

Group   Mean   Range
A       19.7    10
B       18.3    10
C       27.4     7
D       23.4     9
E       16.1    11
The roughly constant within-group variation supports the homogeneous variance assumption of the ANOVA. For small samples, the standard error of a sample mean, namely s/√n, is approximately equal to range/n, which is about one in this case. This makes the differences in group means look relatively high. The differences become clearer in the set of box plots in fig. B.1. There is no overlap between groups C and E, while B and C only just 'touch'. It is arguable that no formal analysis is required to demonstrate that there really are differences between teaching methods. If you still wish to calculate a P-value to confirm your subjective judgement, then there is nothing technically wrong with that here (though in general unnecessary statistical analyses should be avoided), but, whatever analysis is used, it is probably unwise to try and draw any general conclusions from such small artificial samples. Rather than concentrate on 'significance', it is more important to estimate the differences between group means (together with standard errors). If the differences are thought to be important from an educational point of view, then more data should be collected to ensure that the results generalize to other situations.
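The quoted F-ratio is easy to verify by hand or by machine. The following minimal one-way ANOVA sketch (mine, not the book's) uses the data of table B.1:

```python
from statistics import mean

# Test results from table B.1, one list per teaching group
groups = {
    'A': [17, 14, 24, 20, 24, 23, 16, 15, 24],
    'B': [21, 23, 13, 19, 13, 19, 20, 21, 16],
    'C': [28, 30, 29, 24, 27, 30, 28, 28, 23],
    'D': [19, 28, 26, 26, 19, 24, 24, 23, 22],
    'E': [21, 14, 13, 19, 15, 15, 10, 18, 20],
}

def one_way_anova(groups):
    """Return (F, df_between, df_within) for a one-way layout."""
    data = [x for g in groups.values() for x in g]
    grand = mean(data)
    # Between-groups and within-groups sums of squares
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups.values())
    ss_within = sum((x - mean(g)) ** 2 for g in groups.values() for x in g)
    df_b = len(groups) - 1
    df_w = len(data) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

F, df_b, df_w = one_way_anova(groups)
print(round(F, 1), df_b, df_w)  # 15.3 4 40
```

This reproduces the F-ratio of 15.3 on 4 and 40 degrees of freedom cited above.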
Figure B.1  Box plots of test results for five groups of students.
Moral  Summarize the data before performing significance tests. Checking repeatability is more important than significance testing anyway.

SOLUTION TO EXERCISE B.3
You should not rush into carrying out an ANOVA. As usual, your analysis should start with an IDA. Calculate the group means and plot them, as for example in fig. B.2, where the lines connect group means and the 'whiskers' show the range of each sample of size four. Is your graph as clear? Did you remember to label the scales? Have you shown the within-group variation as well as the group means? Looking at fig. B.2, we see that numbers germinating increase to a maximum around level 7 for uncovered boxes, while for covered boxes the maximum is clearly at a lower level between 3 and 5. Judged against the relatively small within-group scatter, the differences between group means look 'large'. It is clear not only that water does have an effect, but also that covering a box has an effect. Only at level 5 do the results from covered and uncovered boxes overlap. The results at level 11 are different in that the seeds are swamped and the values are all zero in both groups. Is there any point in confirming this assessment with formal tests of significance via an ANOVA? In this case I suggest the answer is no, even though that is what the biologist wants. This emphasizes that a statistician should not always answer the question which is posed but rather tackle the problem in the way thought to be most appropriate. There are several reasons why hypothesis testing is inappropriate here. First, there is much prior information that water does affect plant growth, and it would be silly to ignore this and set up a null hypothesis that water has no effect. Second, the results of the significance tests are obvious beforehand from fig. B.2, although the ANOVA will not tell us how the hypotheses are rejected. The analyst still has to draw something like fig. B.2 to see where the maxima occur. Third, an ANOVA
Figure B.2  Numbers of seeds germinating plotted against moisture content, showing the sample range for covered and for uncovered boxes.
will be difficult to carry out anyway. Any reader who ploughed straight into an ANOVA will have spent unnecessary energy worrying about the missing observation, even though it obviously makes no qualitative difference to the conclusions. In addition, fig. B.2 is needed to see what secondary assumptions are reasonable. A standard ANOVA based on normal errors is clearly inappropriate since an assumption of constant variance cannot be made (compare the residual variation at levels 7 and 11). To a first approximation, the data may be regarded as binomial counts and it is possible to carry out an analysis of proportions, though it will not be fruitful. If the biologist insists on an ANOVA, perhaps because the results are to be published, then it may be argued that there is nothing 'wrong' in calculating P-values to confirm one's subjective judgement. However, this does not alter the fact that the null hypotheses would be silly and obviously rejected. Some people may think it useful to fit a model, such as a pair of regression curves. However, in my view the presentation of fig. B.2 is both necessary and sufficient and thus obviates the need for a more elaborate analysis.
Moral  Don't always do what you are asked to do!
COMMENTS ON EXERCISE B.4

The data in both tables consist of observed frequencies as opposed to measured variables. Such data are often called count data or categorical data, and the two-way tables of frequencies are often called contingency tables. You should begin by looking at the tables. Various rules for improving the presentation of a table are given in section 6.5.2 of Part I. The only modification which needs to be made here is that row and column totals should be added to table B.3 (as they already have been in table B.4). This gives table B.5.

Table B.5  Observed frequencies of people with a particular hair and eye colour

             Hair colour
Eye colour   Black  Brunette  Red  Blond  Total
Brown          68      119     26      7    220
Blue           20       84     17     94    215
Hazel          15       54     14     10     93
Green           5       29     14     16     64
Total         108      286     71    127    592
Looking at the columns of table B.5, we see that brown eyes are in the majority except for the blond hair column, where there are far more blue eyes. Indeed, the pattern in the blond column looks quite different to the rest. Alternatively, looking at the rows, we see that brown and blue eyes have roughly equal row totals, but give substantially different values in the black and blond columns. Clearly there is some association between hair and eye colour. Applying a similar approach to table B.4, we look at the columns and see that the highest frequency is one child for lower income families but zero children for higher income families. Perhaps poorer families tend to be larger. A crucial difference between tables B.3 and B.4 is that whereas both variables are nominal in the former (since colours have no particular order), both variables in the latter are ordinal. This means that there may be alternative ways of summarizing the data. For example, the mean of the frequency distribution in each column of table B.4 may be calculated as in table B.6 by counting '≥4' as '4'. We can now see more precisely how family size relates to income.
Table B.6  Average number of children per family in different income groups (counting '≥4' as '4')

Yearly income (1000 kr)         0-1    1-2    2-3    3+
Average number of children     0.89   0.94   0.76   0.60
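The averages in table B.6 follow directly from the frequency columns of table B.4. A short Python check (my own sketch), counting '≥4' as exactly 4:

```python
# Columns of table B.4: frequencies of 0, 1, 2, 3, >=4 children per income band
income_cols = {
    '0-1': [2161, 2755, 936, 225, 39],
    '1-2': [3577, 5081, 1753, 419, 98],
    '2-3': [2184, 2222, 640, 96, 31],
    '3+':  [1636, 1052, 306, 38, 14],
}

for label, freqs in income_cols.items():
    # Weighted mean number of children, treating '>=4' as exactly 4
    avg = sum(k * f for k, f in enumerate(freqs)) / sum(freqs)
    print(label, round(avg, 2))
```

Running this reproduces the values 0.89, 0.94, 0.76 and 0.60 shown in table B.6.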
Are any inferential methods appropriate? Most students will have learnt to analyse contingency tables by means of a χ² goodness-of-fit test. This tests the null hypothesis that 'rows and columns are independent', or, to be more precise, that the probability of an observation falling in any particular column does not depend on which row that observation is in (and vice versa). Expected frequencies under this hypothesis may be calculated using the formula (row total) × (column total)/(grand total). Then the χ² test statistic is given by

χ² = Σ [(observed − expected)² / expected]

summed over all cells. The corresponding degrees of freedom (DF) are (number of rows − 1) × (number of columns − 1). For table B.3, we find χ² = 138.3 on 9 DF, while for table B.4 we find χ² = 568.6 on 12 DF. Both these values are highly significant, leading to rejection of the independence hypothesis. But we really knew this already. With such large sample sizes, the χ² test is nearly always significant anyway, and it is more important to ask how the independence hypothesis is rejected and whether the deviations are of practical importance. The main benefit of the χ² test is to provide expected frequencies which can be compared by eye with the observed frequencies. Thus table B.7 shows, for example, the excess of subjects with blue eyes and blond hair, and the shortfall of people with brown eyes and blond hair. A similar table derived from table B.4 would not be particularly fruitful, and table B.6 above is more useful with these ordinal variables.

Table B.7  Observed and expected frequencies of subjects with particular hair and eye colours

                                 Hair colour
              Black       Brunette      Red         Blond
Eye colour  Obs.  Exp.   Obs.  Exp.  Obs.  Exp.  Obs.  Exp.  Total
Brown        68    40    119   106    26    26     7    47    220
Blue         20    39     84   104    17    26    94    46    215
Hazel        15    17     54    45    14    11    10    20     93
Green         5    12     29    31    14     8    16    14     64
Total       108          286          71          127         592
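The χ² arithmetic for table B.3 is easy to script. The following sketch (illustrative only, not from the book) mirrors the layout of table B.5:

```python
obs = {  # table B.3: rows = eye colour, columns = black, brunette, red, blond
    'Brown': [68, 119, 26, 7],
    'Blue':  [20, 84, 17, 94],
    'Hazel': [15, 54, 14, 10],
    'Green': [5, 29, 14, 16],
}

rows = list(obs.values())
row_tot = [sum(r) for r in rows]
col_tot = [sum(c) for c in zip(*rows)]
grand = sum(row_tot)

# Expected frequency under independence: (row total)(column total)/(grand total)
chi2 = sum((o - rt * ct / grand) ** 2 / (rt * ct / grand)
           for r, rt in zip(rows, row_tot)
           for o, ct in zip(r, col_tot))
df = (len(rows) - 1) * (len(col_tot) - 1)
print(round(chi2, 1), df)  # 138.3 9
```

The result agrees with the χ² = 138.3 on 9 DF quoted in the text.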
A simple analysis along the above lines gives a good idea of the data using only a pocket calculator. More complicated analyses, such as those using log-linear modelling and correspondence analysis (e.g. Diaconis and Efron, 1985; Snee, 1974), may occasionally prove fruitful for the expert but are hardly necessary in the vast majority of practical cases. It may be more important to consult an appropriate subject specialist on any speculation raised by this analysis to see if the findings are confirmed by other studies or by alternative theory. A biologist may advise on
hair/eye colour, while a demographer may be able to shed light on Swedish families in the 1930s. Another important question is whether the given data are likely to be representative of some wider population, so that genuine inference can be made. The sample in table B.3 was collected as part of a class project by a group of university students and should not perhaps be taken too seriously. However, the much larger sample in table B.4 was taken from the 1936 Swedish census and is likely to be more representative.
Moral  In a two-way table of counts, it is often helpful to compare the observed frequencies by eye with the expected frequencies calculated assuming independence between rows and columns.

SOLUTION TO EXERCISE B.5
These data are unusual in two respects. First, the observations are truncated at 192 days. Data like these are called censored data, and arise widely in reliability studies and in clinical trials. Second, some rats have died from causes other than cancer and this makes comparisons between the groups more difficult. You may not have seen data like these before. But don't despair! Let us see how far we can get using common sense. With censored data, it is inappropriate to calculate some types of summary statistic, such as the mean and standard deviation, because the value 192 days (still alive) is quite different to 191 days (dead). If more than half the sample have died, it is possible to calculate median lifetime, but an alternative way to start looking at the data is to treat them as binary (e.g. cancer/no cancer) and find the numbers of rats in each group which (a) develop cancer, (b) survive until 192 days, and (c) die prematurely from causes other than cancer. These are given in table B.9 with the columns reordered to demonstrate the effects more clearly. First we note the high number of cancers. Clearly drug D does promote cancer, although a control group, receiving no drugs, might have been desirable to confirm this (the extent of prior knowledge is unclear). Comparing groups I and III with II

Table B.9  Numbers of rats developing cancer, surviving and dying prematurely of other causes

                               Group II  Group IV  Group I  Group III
                                  DX       DXP        D        DP
No. dying of cancer                4        11       13        19
No. surviving with cancer          3         0        3         0
No. surviving without cancer       7         6        0         0
No. of 'other' deaths              6         3        4         1
No. of rats in group              20        20       20        20
and IV, we see clear evidence that X does inhibit cancer, in that the proportion of rats developing cancer goes down substantially (from 87% to 45%) while the proportion surviving until 192 days goes up substantially (from 7% to 40%). However, we note the worrying fact that five rats in group II died of other causes within the first five days of the trial. Could X have lethal side-effects? See also the three early non-cancerous deaths in group IV. Comparing group II (DX) with group IV (DXP) and group I (D) with group III (DP), we see that drug P does seem to have an accelerating effect, while comparing group IV with group I suggests that X has a stronger effect than P. We now have a 'feel' for the data. It is also worth noting that the data look somewhat suspect in that too many survival times are repeated. For example, there are four 2s in group II, five 43s in group III, and three 106s in group IV. If this effect cannot be explained by chance, then some external effects must be playing a role or observations may not be taken every day (ask the experimenter!). Is any further analysis indicated? You may want to know if the results are significant and this is not obvious just by looking at the data. It may also be helpful to fit a model to estimate the main effects and interaction of drugs X and P. Snell (1987, p. 153) shows how to fit a proportional hazards model (Appendix A.15 and the last paragraph of Example E.4) which shows that the main effects are significant and allows survival functions to be estimated. With appropriate expertise available, such an analysis can be recommended. However, a descriptive analysis, as given above, may be adequate, and perhaps even superior for some purposes, when there are doubts about the data or when the client has limited statistical expertise (although the statistician may want to do a 'proper' analysis for himself). The descriptive analysis concentrates on understanding the data.
'Significance' is mainly important if little background knowledge is available and it is expensive to replicate the experiment, but for the given data it seems silly to set up unrealistic null hypotheses which ignore the prior information about the effects of the drugs. There is further difficulty in analysing these data in that there are competing risks of death from cancer and from other causes. It is tempting, for example, to compare the proportions developing cancer in groups I and II, namely 16/20 and 7/20, by means of a two-sample test of proportions, which gives a significant result. However, there were several early deaths in group II which did not give time for cancer to develop. So should the proportion be 7/15 rather than 7/20, or what? Clearly there are traps for the unwary! Snell (1987) treats deaths from other causes as giving a censored value for which the post-mortem classification is ignored. We sum up by saying that the three drugs do have the effects as suggested, that X inhibits cancer more than P accelerates it, but that X may have nasty side-effects.
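The binary classification behind table B.9 can be scripted. The sketch below is my own illustration (the tuple encoding of the raw times is an assumption, not the book's notation); it classifies the group II survival times, where True marks the N flag meaning no cancer was found at post mortem:

```python
# Group II (DX) survival times from table B.8; True marks the N flag
group_ii = ([(2, True)] * 4
            + [(5, True), (55, True),
               (78, False), (78, False), (96, False), (152, False)]
            + [(192, True)] * 7 + [(192, False)] * 3)

def classify(times, truncation=192):
    """Count rats in the four categories of table B.9."""
    counts = {'died of cancer': 0, 'surviving with cancer': 0,
              'surviving without cancer': 0, 'other deaths': 0}
    for t, no_cancer in times:
        if t < truncation:
            # Died before the experiment was truncated
            counts['other deaths' if no_cancer else 'died of cancer'] += 1
        else:
            # Still alive at 192 days; post mortem decides the category
            counts['surviving without cancer' if no_cancer
                   else 'surviving with cancer'] += 1
    return counts

print(classify(group_ii))  # matches the group II column of table B.9: 4, 3, 7, 6
```

The same function applied to the other groups reproduces the remaining columns of table B.9.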
Moral When faced with an unfamiliar data set, use your statistical judgement to decide which summary statistics to calculate, but be prepared to consult an 'expert' in the particular area of statistics and/or area of application.
NOTES ON EXERCISE B.6 Some obvious queries which co me to mind concerning the da ta are: (a) How was the sam pIe selected, and do they provide a fair representation of Hiroshima survivors? (b) How was the radiation dose assessed? Was it based solelyon the person's position at the time of the explosion? (c) Is diagnosis ofleukaemia and of other cancers perfect? Taking the data at face value, it is a twoway table in which each 'cell' contains three integer counts. You will probably not have seen data exactly like these before, but you should not give up but rather 'look' at the table using your common sense. We are interested in seeing if deaths are related to age and/or radiation dose. In other words we are interested in the (marginal) effects of age and of radiation dose as well as in possible interactions between the two. Even a cursory gl an ce at table B.10 suggests that while deaths from all other cancers increase with age, deaths from leukaemia do not. Thus we need to treat leukaemia and 'all other cancers' separately. We also need to look at proportions rather than the number of deaths in order to get fair comparisons. When proportions are sm all (as here), it is better to compute rates of death rather than proportions to avoid lots of decimal places. The number of deaths per 1000 survivors is a suitable rate which is l0 3 X corresponding proportion. The resulting twoway table of rates for leukaemia is shown in table B.l1. A similar type of table may be obtained for 'other cancers'. TableB.l1
Table B.11 Number of deaths from leukaemia per 1000 survivors tabulated against age and radiation dose

[Two-way table: rows are ages in 1950 (5-14, 15-24, 25-34, 35-44, 45-54, 55-64, 65+) plus an Overall row; columns are radiation doses in rads (0, 1-9, 10-49, 50-99, 100-199, 200+) plus an Overall column.]
Constructing a nice clear two-way table is not trivial (see section 6.5.2 of Part I). Were your tables as clear? There are several points to note. First, the overall death rates are obtained as a ratio of the total frequencies, and not as the average of the rates in the table. Second, the overall row rates are given on the right of the table as is more usual. Third, overall column rates should also be given. Fourth, zero rates are indicated by a dash. Fifth, different accuracy is appropriate in different parts of a table as the sample sizes vary.
It is now clear that leukaemia death rates are strongly affected by radiation dose but little affected by age. In contrast, 'other cancers' are affected by age but not by radiation dose. There is little sign of interaction between age and radiation dose or of any outliers when one bears in mind that the number of deaths for a particular age/dose combination is likely to be a Poisson variable and that one death may produce a relatively large change in death rate. For example, the apparently large leukaemia death rate for age 55-64 and dose 50-99 rads is based on only two deaths. The above findings are so clear that there seems little point in carrying out any significance tests. What could be useful is to fit some sort of curve to describe the relationship between, say, leukaemia death rate and radiation dose. Alternatively, we could carry out a more formal inferential analysis which could here involve fitting some sort of log-linear or logistic response model (see Exercise G.5 for an example of a logistic model). Such a model might provide a proper probabilistic foundation for the analysis and be well worthwhile for the expert analyst. However, the analysis presented here is much simpler, will be understood by most analysts and clients, and may well be judged adequate for many purposes, particularly if there are doubts about the reliability of the data.
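The rate calculation used above (deaths per 1000 survivors = 10³ × the corresponding proportion) can be sketched in a few lines. Note that the cell counts below are hypothetical stand-ins, not the real table B.10 values:

```python
# Convert a two-way table of death counts into rates per 1000 survivors.
# Cell keys are (age group, dose group); counts are hypothetical examples.
deaths = {("5-14", "0 rads"): 2, ("5-14", "200+ rads"): 3}
survivors = {("5-14", "0 rads"): 4400, ("5-14", "200+ rads"): 200}

# rate per 1000 = 1000 * deaths / survivors, i.e. 10^3 x the proportion
rates = {cell: 1000 * deaths[cell] / survivors[cell] for cell in deaths}
```

As the text notes, the appropriate rounding differs from cell to cell because the numbers of survivors vary widely.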
Moral Constructing a clear two-way table is not trivial, particularly when the 'cell' values are ratios of two frequencies.

NOTES ON EXERCISE B.7
The statistician has to cope with samples ranging from thousands to single figures. Here the sample sizes are very small and the power of any test will be low. Thus we cannot expect the treatment effect to be significant unless the effect is very large. Rather we can treat the data as a pilot study to see if the vaccine is worth testing on a larger sample in a variety of conditions. Thus, rather than ask 'Is the treatment effect significant?' - though it will be a bonus if it is - it is more sensible to ask 'Is there something interesting here?'. If we were to perform a two-sample test on the two group means, we would find that the result is nowhere near significant even though the two sample means, 24.1 and 14.8, differ substantially. Yet a glance at table B.12 suggests that some observations in the treatment group are much lower than those in the control group and that there is an effect of potential practical importance here. Let us look at the data in more detail using an IDA. There are too few observations to plot histograms, and so we plot the individual values as a pair of dot plots in fig. B.3. In order to distinguish between the two samples, one dot plot is plotted using crosses. It is now clear that the shape of the distribution of worm counts for the treatment group is quite different to that of the control group. Four lambs appear unaffected by the vaccine, while four others have given large reductions. The group variances are apparently unequal and so one assumption for the usual t-test is invalid. If we carry
Figure B.3 Worm counts for 4 control lambs and 8 vaccinated lambs. [Pair of dot plots; horizontal axis: no. of worms (× 10³), 0 to 30; one group plotted as dots, the other as crosses.]
out a two-sample Welch test, which allows for unequal variances, the result is much closer to being significant, but this or any other parametric approach is heavily dependent on the assumptions made. A non-parametric approach could be affected by the fact that there may be a change in shape as well as in the mean. Yet the IDA, in the form of fig. B.3, suggests that there is something interesting here, namely that the vaccine works on some lambs but not on others. This possibility is supported by knowledge about the behaviour of other vaccines which can have different effects on different people. Once again it is vital to use background information. Unless other experimental vaccines do better in these preliminary trials, it appears worth taking larger samples for this treatment. There seems little point in any further analysis of this small data set.
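The Welch statistic mentioned above needs only the standard library. The worm counts below are hypothetical values chosen merely to reproduce the quoted group means (24.1 and 14.8), not the real table B.12 data:

```python
import math
import statistics

def welch_t(x, y):
    """Two-sample Welch t statistic and Welch-Satterthwaite degrees of
    freedom; unlike the pooled t-test, no equal-variance assumption."""
    n1, n2 = len(x), len(y)
    v1, v2 = statistics.variance(x), statistics.variance(y)  # sample variances
    se2 = v1 / n1 + v2 / n2
    t = (statistics.mean(x) - statistics.mean(y)) / math.sqrt(se2)
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

# Hypothetical worm counts (thousands), chosen only to match the group
# means quoted in the text; the real table B.12 values differ.
control = [22.0, 23.0, 25.0, 26.4]                        # 4 control lambs
treated = [20.0, 22.0, 24.0, 26.0, 2.0, 3.0, 9.0, 12.4]   # 8 vaccinated lambs
t, df = welch_t(control, treated)
```

The fractional degrees of freedom are then referred to the t distribution; with such small and unequal samples the Welch correction can matter appreciably.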
Moral When samples are very small, look at the data to see if the systematic effects are of potential practical importance, rather than rush into using a technique which depends on dubious assumptions.

DISCUSSION OF EXERCISE B.8
I have no definitive analysis for these data, but various sensible suggestions can be made. The first thing many students will try is to compute the change for each hip and then carry out a two-sample t-test to compare the changes in the treatment group with the changes in the control. When this is done, the rotation in the treatment group is found to improve significantly more than in the control group (t = 3.8 is significant at the 1% level) but the results for flexion (t = 1.7) are only just significant at the 5% level if a one-tailed test is thought appropriate. But is this analysis appropriate? This book should have taught you not to rush into significance testing. The first problem is choosing a sensible response variable. Should (absolute) changes be analysed or perhaps percentage change? It could be argued that a given absolute change in score is more beneficial for a severely handicapped patient than for a more mobile patient. However, there is an upper bound to the scores (180°?) which will affect percentage change more than absolute change. In any case it is possible for changes to go negative, and one hip decreases from 4° to 2° giving a
misleading reduction of 50%. So perhaps we have to stick to absolute changes even though they are not ideal. Before doing any analysis, we should first look at the data by means of an IDA. Do the data look reliable? Are there any outliers? Are the treatment and control groups comparable? If we look at the distribution of the final recorded digits we see that there are 'too many' zeros and fives. Measuring how much a hip will bend is clearly difficult to perform accurately and some rounding is perhaps inevitable; but this should not affect the conclusions too much. If stem-and-leaf plots of starting values and of changes are formed, a number of potential outliers can be seen (e.g. the improvement in flexion for treatment hips 49 and 50). There is no obvious reason to exclude any observation but these outliers do negate a normality assumption. Summary statistics should be calculated and the mean values are given in table B.14. (Query: should medians be preferred here?) Note that the treatment group produces higher changes, particularly for rotation. Box plots of improvements for each of the different groups are also helpful in displaying the apparent benefit of the treatment.

Table B.14 Mean values for the hip data

                 Before    After    Change
(a) Flexion
  Control         110.0    113.7       3.7
  Treatment       116.5    124.0       7.5
(b) Rotation
  Control          25.0     26.0       1.0
  Treatment        24.8     31.4       6.6
There is some evidence that the control group has a lower initial mean for flexion which raises the question as to whether a random allocation procedure really was used. Then a relevant question is whether change is related to the corresponding initial score. In fact these two variables are negatively correlated (down to -0.6 for the control flexion group) indicating that severely handicapped patients tend to improve more than others, although these correlations are inflated by outliers. The correlation suggests that an analysis of covariance may be appropriate. If we ignore the initial information, then we are being 'kind' to the control group and this may help to explain why the flexion results are not significant in the above-mentioned t-test. Another problem with these data is that there are measurements on two hips for each patient. Analysis shows that observations on the two hips of a patient are positively correlated (up to over 0.7 in one group). Thus observations are not independent, and our effective total sample size is between 39 and 78, rather than 78 as assumed in the earlier t-test. There is also a small correlation between improvements in flexion and rotation for the two hips of a particular patient. We could average each pair of observations, giving 39 independent observations, but this loses information about between-hip variation. An alternative is to analyse right
and left hips separately. The effect of ignoring the correlations in an overall t-test of all individual hips is to overestimate the extent of significance. It may be possible to allow for 'patient' effects in a more complicated analysis, but it is not clear that this is worth the effort in an exploratory study like this. A completely different approach is to use non-parametric methods. The simplest approach is to see how many hips improved in the control and in the treatment groups. The results are shown below. Note that the treatment group gives better results, particularly for rotation.

Proportion improving

              Flexion   Rotation
  Control       0.75      0.54
  Treatment     0.83      0.89
The difference in proportions is significant for rotation but not for flexion. Alternatively, we can carry out a more powerful, non-parametric test on the ranks, called a two-sample Mann-Whitney test. This suggests that the rotation results are significant at the 1% level (as for the t-test) and that the flexion results are significant at the 5% level. This test still ignores the problems due to correlated pairs and to unequal initial flexion scores. However, these two effects will tend to cancel each other out. The test does not rely on a normality assumption and, in view of the outliers, seems a better bet than the t-test. Hitherto we have considered flexion and rotation separately. A more thorough and sophisticated analysis would consider them together. For example, does a patient whose rotation responds well to treatment also have a good flexion response? Is it possible to form a single variable from the measured responses which provides an overall measure of patient response? We will not pursue this here. The overall conclusion is that the new treatment does seem to work, especially for improving rotation. However, we should also note a number of queries such as: (a) Were patients really allocated randomly? (b) What is a 'typical' AS patient? (c) How expensive is the treatment and will its effect last?
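The Mann-Whitney statistic is simple enough to compute directly: count, over all pairs formed from the two samples, how often one group's value exceeds the other's, scoring ties as 1/2. A minimal sketch, applied to tiny hypothetical improvement scores (not the real hip data):

```python
def mann_whitney_u(x, y):
    """U statistic for samples x and y: the number of (x_i, y_j) pairs
    with x_i > y_j, counting ties as 1/2.  For small samples U is referred
    to exact tables; for larger samples a normal approximation is used."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# Hypothetical improvement scores (degrees) - illustrative only.
control = [0, 2, 5, 10]
treatment = [4, 8, 9, 15]
u_treat = mann_whitney_u(treatment, control)  # a large U favours the treatment
```

A useful check on any implementation is the identity U₁ + U₂ = n₁n₂ (in the absence of ties), which holds for the two samples above.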
Moral The problems encountered here, such as correlations between observations and non-normality, are 'typical' of real data, and so the analyst must always be on the lookout for departures from 'standard' assumptions.

NOTES ON EXERCISE B.9
By concentrating on one variable, and ignoring other information, the following analysis should be regarded as exploratory. It will at least give some idea as to
whether there are 'large' differences between the effects of the anaesthetics. It is usually a good idea to start by looking at a large set of variables one or two at a time, before moving on to look at the multivariate picture. It is easy to reorder the four groups of observations by size and then construct box plots, as in fig. B.4. In Exercise B.2 we were also interested in comparing observations in several groups and there a set of box plots revealed obvious differences between groups. Here the set of box plots shows substantial overlap between groups and suggests that there are no significant differences between groups. The plots also reveal that the distributions are severely skewed, which suggests that medians are better measures of location than means. Some descriptive statistics are shown in table B.16.

Figure B.4 Box plots of recovery times (minutes, 0 to 20) for groups of 20 patients treated by one of four anaesthetics (A, B, C or D).
Table B.16 Some summary statistics on groups A-D of fig. B.4

           A      B      C      D
Mean      5.4    3.2    3.0    4.3
Median    3.5    2.0    2.0    3.0
Range    0-13   0-10   1-19   1-10

The differences between group means (or medians) are small compared with the variability within groups. The obvious inferential technique for comparing several groups is the one-way ANOVA. However, this technique assumes that observations are normally distributed. This is not true here. We could ignore the problem and rely on the robustness of ANOVA, or use a non-parametric approach, or transform the data to normality. We try the latter. If a Box-Cox transformation is applied (see section 6.8), the power parameter λ could be estimated by maximum likelihood but it is easier in practice to use trial-and-error as only a few special transformations
make sense. Trying square roots, cube roots and logarithms, it is found that logarithms give a roughly symmetric distribution in each group. As there are some zero recovery times, and log 0 is infinite, the analyst needs to use log(x + k) rather than log x. By trial-and-error, k = 1 is found to be a reasonable value. It is this sort of 'trick' which is so necessary in practice but which is often not covered in textbooks. A one-way ANOVA of the transformed data produces the results shown in table B.17. Note that the F-value only needs to be given to one decimal place accuracy.

Table B.17 One-way ANOVA

Source           SS     DF     MS      F
Anaesthetics    2.28     3    0.76    1.8
Residual       32.50    76    0.43
Total          34.78    79
The F-ratio is nowhere near significant at the 5% level and so we can accept the null hypothesis that there is no difference between the four anaesthetics in regard to recovery time. It is interesting to note that if an ANOVA is carried out on the raw data, then, by a fluke, the F-ratio happens to take exactly the same value. This is an indication of the robustness of ANOVA. Is any follow-up analysis indicated? The estimated residual mean square, namely 0.43, could be used to calculate confidence intervals for the means of the transformed variables, and it is worth noting that recovery times for anaesthetic A do seem to be a little longer. It would be interesting to see if this effect recurs in further cases.
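The log(x + 1) trick and the one-way ANOVA can be sketched together as follows. The recovery times below are hypothetical (the real data set has 20 patients per group), and only the F ratio is computed:

```python
import math

def one_way_anova_f(groups):
    """One-way ANOVA F ratio: between-group mean square over
    within-group mean square."""
    all_obs = [x for g in groups for x in g]
    n = len(all_obs)
    grand = sum(all_obs) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_b, df_w = len(groups) - 1, n - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)

# Hypothetical recovery times (minutes) for four anaesthetics; some are
# zero, so we analyse log(x + 1) as in the text.
raw = {
    'A': [0, 2, 3, 4, 5, 7, 13],
    'B': [0, 1, 2, 2, 3, 5, 10],
    'C': [1, 1, 2, 2, 3, 6, 19],
    'D': [1, 2, 3, 3, 4, 6, 10],
}
transformed = [[math.log(x + 1) for x in g] for g in raw.values()]
f_ratio = one_way_anova_f(transformed)
```

With the real data the degrees of freedom would be 3 and 76, as in table B.17.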
Moral Be prepared to transform or otherwise modify your data before carrying out a formal analysis, but note that ANOVA is robust to moderate departures from normality.
C Correlation and regression
When observations are taken simultaneously on two or more variables, there are several ways of examining the relationship, if any, between the variables. For example, principal component analysis (or even factor analysis?) may be appropriate if there are several variables which arise 'on an equal footing'. In this section we consider two rather simpler approaches. A regression relationship may be appropriate when there is a response variable and one or more explanatory variables. A correlation coefficient provides a measure of the linear association between two variables. These techniques are straightforward in principle. However, the examples in this section demonstrate the difficulties which may arise in interpreting measures of correlation and in fitting regression relationships to non-experimental data.
Exercise C.1
Correlation and regression - I
This is a simple exercise, but with an important message. Figure C.1 shows some observations on two chemical variables arising in a chemical experiment. The vertical axis is a partition coefficient while the horizontal axis is the volume fraction of one of two chemicals in a mixed solvent, but the full chemical details are unnecessary to what follows. The task is simply to make an educated guess as to the size of the correlation coefficient without carrying out any calculations. Note that the straight line joining the 'first' and 'last' points is not the least squares line but was inserted by the chemist who produced the graph.
Exercise C.2
Correlation and regression - II
This is a 'fun' tutorial exercise on artificial data which nevertheless makes some valuable points. Figure C.2 shows four sets of data. In each case, comment on the data, guesstimate the value of the correlation coefficient, and say whether you think the correlation is meaningful or helpful. Also comment on the possibility of fitting a regression relationship to predict y from x.
Figure C.1 Observations on two variables for the chemical hexane-benzonitrile-diMe-sulfox. [Scatter plot; vertical axis: partition coefficient, 0 to 16; horizontal axis: volume fraction, 0 to 1; a straight line joins the first and last points.]

Figure C.2 Four bivariate sets of data. [Four scatter plots of y against x, labelled A, B, C and D.]
Exercise C.3

Sales data/multiple regression?
Table C.1 shows the sales, average price per ton, and advertising support for a certain commodity over six years. Find the regression relationship between sales, price and advertising and comment on the resulting equation. Does the equation describe the effects of different price and advertising levels on sales? Can you think of an alternative way of summarizing the data?

Table C.1 Sales, price and advertising data

                             1979   1980   1981   1982   1983   1984   Average
Sales (in £ million), S       250    340    300    200    290    360       290
Price (in £), P                25     48     44     20     38     60        39
Advertising (in £'000), A      35     32     38     30     34     46        36

Exercise C.4

Petrol consumption/multiple regression
Table C.2 shows the petrol consumption, number of cylinders, horse power, weight and transmission type (automatic or manual) for 15 American car models made in 1974. This is part of a data set formed by Dr R. R. Hocking which was reanalysed by Henderson and Velleman (1981). The original data set contained more variables and a greater variety of car models, but this data set is quite large enough to get experience with. (Though normally the larger the sample the better!) The object of this exercise is to find a regression equation to predict petrol consumption (measured as miles per US gallon under specified conditions) in terms of the other given variables.

Table C.2 The petrol consumption (in miles per US gallon), number of cylinders, horse power, weight (in '000 lb) and transmission type for 15 1974-model American cars

Automobile             MPG   No. of cylinders    HP   Weight   Transmission
Mazda RX4             21.0           6          110     2.62        M
Datsun 710            22.8           4           93     2.32        M
Hornet Sportabout     18.7           8          175     3.44        A
Valiant               18.1           6          105     3.46        A
Duster 360            14.3           8          245     3.57        A
Mercedes 240D         24.4           4           62     3.19        A
Mercedes 450SLC       15.2           8          180     3.78        A
Cadillac Fleetwood    10.4           8          205     5.25        A
Lincoln Continental   10.4           8          215     5.42        A
Fiat 128              32.4           4           66     2.20        M
Toyota Corolla        33.9           4           65     1.84        M
Pontiac Firebird      19.2           8          175     3.84        A
Porsche 914-2         26.0           4           91     2.14        M
Ferrari Dino 1973     19.7           6          175     2.77        M
Volvo 142E            21.4           4          109     2.78        M
SOLUTION AND DISCUSSION TO EXERCISE C.1
If, like me, you guessed a value of about -0.8 to -0.9, then you are wide of the mark. The actual value is -0.985. Showing this graph to a large number of staff and students, the guesses ranged widely from -0.2 to -0.92 (excluding those who forgot the minus sign!), so that the range of guesses, though wide, did not include the true value. Experienced statisticians know that correlations can be difficult to guesstimate (Cleveland, Diaconis and McGill, 1982). Our judgement can be affected by the choice of scales, the way the points are plotted and so on. One reason why everyone underestimated the true value in this case could be that the straight line exhibited in the graph is not a good fit. A second reason is that the eye can see that the departures from linearity are of a systematic rather than random nature. However, the main reason arises from the (unexpected?) nature of the relationship between correlation and the residual standard deviation, which is

s²(y|x) = residual variance of y = s²(y)(1 - r²)

where s(y) = (unconditional) standard deviation of y, and r = correlation. When r = -0.985, we have s(y|x) = 0.17 s(y), so that, despite the high correlation, the residual standard deviation for a linear model would still be 17% of the unconditional standard deviation. The measure 1 - s(y|x)/s(y) = 1 - √(1 - r²) comes closer to most people's perception of correlation. You are invited to guess and then evaluate r when (a) s(y|x) = 0.5 s(y) and (b) s(y|x) = 0.7 s(y). The data were given to me by a colleague in the chemistry department, where there is controversy regarding this type of relationship. One school of thought says: 'The very high correlation shows the relationship is linear'. A second school of thought says: 'Despite the very high correlation, the departures from linearity are systematic so that the relationship is not linear'. I hope the reader can see that the argument used by the first school of thought is false. I supported the alternative view, having seen several similar non-linear graphs and having heard that the residual standard deviation is known to be much smaller than the residual standard deviation for a linear model. It is of course potentially misleading to calculate correlations for data exhibiting obvious non-linearity, and if you refused to do so in this case, then you are probably wise! Indeed your first reaction to this exercise may (should?) have been to query whether it is sensible to try and guess a preliminary estimate for the correlation or even to be interested in the correlation at all. Nevertheless, the statistician has to be prepared to understand and interpret correlations calculated by other scientists. Another striking example, demonstrating that an apparently high correlation
(0.992!) can be misleading, is given by Green and Chatfield (1977, p. 205). In their example, the correlation is grossly inflated by an extreme outlier.
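The relation s(y|x) = s(y)√(1 - r²) quoted above is easy to check numerically, so you can verify your guesses for cases (a) and (b) once you have made them:

```python
import math

# From s(y|x) = s(y) * sqrt(1 - r^2), the correlation magnitude implied
# by a given ratio s(y|x)/s(y) is r = sqrt(1 - ratio^2).
def r_from_ratio(ratio):
    return math.sqrt(1.0 - ratio ** 2)

r_a = r_from_ratio(0.5)                    # case (a)
r_b = r_from_ratio(0.7)                    # case (b)
ratio_985 = math.sqrt(1.0 - 0.985 ** 2)    # reproduces the 17% quoted above
```

The answers, r ≈ 0.87 and r ≈ 0.71, are strikingly high given how much residual scatter they still permit, which is exactly the point of the exercise.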
Moral Correlations are difficult to assess and interpret.
SOLUTION TO EXERCISE C.2
These data were constructed by Anscombe (1973) so as to have an identical correlation coefficient, namely 0.82, even though the data sets are very different in character. (These data are also discussed by Weisberg, 1985, Example 5.1.) Data set A looks roughly bivariate normal and the correlation is meaningful. Set B is curvilinear and, as correlation is a measure of linear association, the correlation is potentially misleading. If, say, a quadratic model is fitted, then the coefficient of determination is likely to be much higher than 0.82². Data set C lies almost on an exact straight line except for one observation which looks like an outlier. The correlation only tells part of the story. Set D looks very unusual. The x-values are all identical except for one. The latter is a very influential observation. With only two x-values represented, there is no way of knowing if the relationship is linear, non-linear, or what. No statistics calculated from these data will be reliable. As to a possible regression relationship, the construction of the data is even more cunning in that corresponding first and second moments are all identical for each data set. Thus the fitted linear regressions will be identical, as well as the correlations. However, although the fitted lines are identical, an inspection of the residuals should make it apparent that the linear model for data set A is the only one that can be justified for the given data.
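Anscombe's four data sets are short enough to reproduce in full; the values below are his published ones, and a direct computation confirms that all four correlations agree to two decimal places:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Anscombe's (1973) data: sets 1-3 share the same x-values.
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
x4 = [8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]
y3 = [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]
y4 = [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]

rs = [pearson_r(x123, y1), pearson_r(x123, y2),
      pearson_r(x123, y3), pearson_r(x4, y4)]
```

A scatter diagram of each pair immediately reveals the four very different structures hiding behind the common value of r ≈ 0.82.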
Moral Look at the scatter diagram before calculating regression lines and correlations.
DISCUSSION OF EXERCISE C.3
The first part of the question is an 'easy' piece of technique. Using MINITAB or some similar package, a multiple regression equation can easily be fitted to the data giving
S = 4.26P - 1.48A + 176 + error

with a coefficient of determination given by R² = 0.955. Is this equation useful and does it describe the effects of different price and advertising levels on sales? The short answer is no. First you should query whether it is wise to fit a three-parameter model to only six
observations. In particular, the standard errors of the fitted coefficients are relatively large. Far more observations are desirable to get an empirically-based model which describes behaviour over a range of conditions. Indeed it could be argued that this is too small a data set to be worth talking about, but I have included this exercise because, rightly or wrongly, people do try to fit equations to data sets like this. Thus the statistician needs to understand the severe limitations of the resulting model. This data set is also ideal for demonstrating the problems caused by correlated explanatory variables. A second obvious query about the fitted model is that the coefficient of A is negative. However, if sales are regressed on advertising alone, then the coefficient of A does turn out to be positive as one would expect. Thus the introduction of the second variable, P, not only alters the coefficient of A but actually changes its sign. This emphasizes that the analyst should not try to interpret individual coefficients in a multiple regression equation except when the explanatory variables are orthogonal. Rather than start with a technique (regression), it would be better as usual to start with an IDA and also ask questions to get background information. In particular you should have plotted the data, not only S against P and against A, but also A against P. From the latter graph the positive correlation of the explanatory variables is evident. Advertising was kept relatively constant over the years 1979-83 and then increased sharply in 1984 when price also increased sharply. We cannot therefore expect to be able to separate satisfactorily the effect of advertising from that of price for the given set of data. The fitted equation is simply the best fit for the given set of data and does not describe the effects of different price and advertising levels on sales.
In particular if price were to be held constant and advertising increased, then we would expect sales to increase even though the fitted model 'predicts' a decrease. It is possible to find an alternative model with a positive coefficient for advertising for which the residual sum of squares is not much larger than that for the least-squares regression equation. This would be intuitively more acceptable but cannot be regarded as satisfactory until we know more about marketing policy. An alternative possibility is to omit either price or advertising from the fitted model. A linear regression on price alone gives S = 140 + 3.84P with R² = 0.946, whereas the regression on advertising alone gives a much lower R², namely 0.437. Thus it seems preferable to 'drop' advertising rather than price to get a model with one explanatory variable. The regression on price alone is one alternative way of summarizing the data. Is there another? The perceptive reader may suggest looking at the data in an entirely different way by considering a new response variable, namely volume = sales/price. The problem with the variable 'sales' when expressed in monetary terms is that it already effectively involves the explanatory variable price. The response variable 'volume' is arguably more meaningful than sales and leads to a locally linear relationship with price where volume decreases as price increases. This is to be expected intuitively. The linear regression equation for V = S/P is given by
V = 12.2 - 0.11P

with R² = 0.932. Thus the 'fit' is not quite so good as for the regression of S on P, but does provide an alternative, potentially useful way of summarizing the data. All the above regression equations give a 'good' or 'best' fit to the particular given set of data. However, as noted earlier, it is open to discussion whether it is wise or safe to derive a regression equation based only on six observations. There is no reason why the models should apply in the future to other sets of data, particularly if marketing policy is changed. In particular a model which omits the effect of advertising may rightly be judged incomplete. The mechanical deletion of variables in multiple regression has many dangers. Here it seems desirable to get more and better data, to ask questions and exploit prior information to construct a model which does include other variables such as advertising. For example, advertising budgets are often determined as a fixed percentage of expected sales, but could alternatively be increased because of a feared drop in sales. Knowledge of marketing policy is much more important than 'getting a good fit'.
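The equations quoted in this discussion can all be reproduced from the six observations in table C.1. The sketch below fits the two-predictor model by solving the normal equations directly, so no statistical package is needed:

```python
def fit_two_predictors(y, x1, x2):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 via the normal
    equations in centred form; returns (b0, b1, b2, R^2)."""
    n = len(y)
    m_y, m1, m2 = sum(y) / n, sum(x1) / n, sum(x2) / n
    s11 = sum((a - m1) ** 2 for a in x1)
    s22 = sum((a - m2) ** 2 for a in x2)
    s12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1y = sum((a - m1) * (b - m_y) for a, b in zip(x1, y))
    s2y = sum((a - m2) * (b - m_y) for a, b in zip(x2, y))
    det = s11 * s22 - s12 ** 2
    b1 = (s1y * s22 - s12 * s2y) / det
    b2 = (s11 * s2y - s12 * s1y) / det
    b0 = m_y - b1 * m1 - b2 * m2
    syy = sum((b - m_y) ** 2 for b in y)
    return b0, b1, b2, (b1 * s1y + b2 * s2y) / syy

def fit_one_predictor(y, x):
    """Simple linear regression; returns (intercept, slope, R^2)."""
    n = len(y)
    m_y, m_x = sum(y) / n, sum(x) / n
    sxy = sum((a - m_x) * (b - m_y) for a, b in zip(x, y))
    sxx = sum((a - m_x) ** 2 for a in x)
    syy = sum((b - m_y) ** 2 for b in y)
    b1 = sxy / sxx
    return m_y - b1 * m_x, b1, b1 * sxy / syy

S = [250, 340, 300, 200, 290, 360]   # table C.1 sales
P = [25, 48, 44, 20, 38, 60]         # price
A = [35, 32, 38, 30, 34, 46]         # advertising
b0, bP, bA, r2 = fit_two_predictors(S, P, A)   # S = 176 + 4.26P - 1.48A
b0p, bp, r2p = fit_one_predictor(S, P)         # S = 140 + 3.84P
```

Note how the negative sign of bA emerges purely from the collinearity between P and A; refitting with A alone gives a positive coefficient, as discussed above.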
Moral Multiple regression equations fitted to non-orthogonal, non-experimental data sets have severe limitations and are potentially misleading. Background knowledge should be incorporated.
NOTES ON EXERCISE C.4
Here miles per gallon is the response variable and the other four variables are predictor variables. This is a typical multiple regression problem in that the analyst soon finds that it is not obvious how to choose the form of the regression equation. Here the particular problems include correlations between the predictor variables and the fact that there is a mixture of discrete and continuous variables. This is the sort of data set where different statisticians may, quite reasonably, end up with different models. Although such models may appear dissimilar, they may well give a rather similar level of fit and forecasting ability. This data set has already given rise to different views in the literature and I shall not attempt a definitive solution. Some general questions are as follows:

1. What is the correct form for a suitable model? Should non-linear or interaction terms be included, and/or should any of the variables be transformed either to achieve linearity or to reduce skewness? Which of the predictor variables can be omitted because they do not have a significant effect on the response variable? Is there background information on the form of the model - for example that some predictor variables must be included?

2. What error assumptions are reasonable? What, for example, can be done about non-normality or non-constant variance if such effects occur?

3. Are there any errors or outliers which need to be isolated for further study or even removed from the data set? Which observations are influential?

4. What are the limitations of the fitted model? Under what conditions can it be used for prediction?
Cox and Snell (1981, Example G) discuss these problems in more detail and illustrate in particular the use of the logarithmic transformation and the fitting of interaction terms.

Returning to the given data set, start by plotting histograms (or stem-and-leaf plots) of the five (marginal) univariate distributions to assess shape and possible outliers. Then look at the scatter diagrams of the response variable with each of the four predictor variables to assess the form of the relationship (if any) and to see if a transformation is needed to achieve linearity. It is also a good idea to look at the scatter diagrams of each pair of predictor variables to see if they are themselves correlated. If some of the correlations are high then some of the predictor variables are probably redundant. This means looking at 10 (= 5C2) graphs but is well worth the effort. Then a multiple regression model can be fitted to the data using an appropriate variable selection technique. You may want to try both forward and backward selection to see if you get the same subset of predictor variables. Having fitted a model, plot the residuals in whatever way seems appropriate to check the adequacy of the model, and modify it if necessary.

The interactive approach recommended here contrasts with the automatic approach adopted by many analysts when they simply entrust their data to a multiple regression program. This may or may not give sensible results. An automatic approach is sometimes regarded as being more 'objective', but bear in mind that the analyst has made the major subjective decision to abdicate all responsibility to a computer. One regression program fitted an equation which included horsepower and weight but excluded the number of cylinders and transmission type. As an example of an interesting residual plot, fig. C.3 shows the resulting residuals plotted against one of the discarded predictor variables.

[Figure C.3 An example of a residual plot: residuals plotted against transmission type (automatic or manual).]

There are two values which might be regarded as outlying, which both give high values of miles per gallon for a manual transmission: they are the Fiat 128 and the Toyota Corolla. On looking back at table C.2 we note that these two models have much higher values for miles per gallon than other models, and the residual plot helps us to pick them out. There is no reason to think they are errors; rather they are 'exceptional' data points which do not conform to the same pattern as the other cars. It is mainly the responsibility of the automobile engineer to decide on the limitations of the fitted model and to assess whether or not it can be used usefully to predict petrol consumption for a new model. It would be desirable to have a wider variety of cars to fit the model to, and in any case the deviations from the model are just as interesting as the predicted values.
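The variable-selection step described above can be sketched in code. The following is a minimal illustration of forward selection for multiple regression: predictors are added one at a time, each time choosing the candidate that most reduces the residual sum of squares. All names and the toy data are invented for illustration; they are not the car data of table C.2.

```python
# A minimal sketch of forward selection for multiple regression, using
# ordinary least squares solved via the normal equations. The toy data and
# all names here are invented for illustration; they are not the car data.

def solve(A, b):
    """Solve the square system A x = b by Gaussian elimination with
    partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def rss(X, y):
    """Residual sum of squares after fitting y = X beta by least squares."""
    p = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(p)] for i in range(p)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(p)]
    beta = solve(XtX, Xty)
    return sum((yi - sum(b * xi for b, xi in zip(beta, row))) ** 2
               for row, yi in zip(X, y))

def forward_select(predictors, y, tol=1e-6):
    """Add, one at a time, the predictor giving the largest drop in RSS;
    stop when no candidate improves the fit by more than tol."""
    design = [[1.0] for _ in y]          # intercept-only design matrix
    chosen, best = [], rss(design, y)
    while len(chosen) < len(predictors):
        trials = {
            k: rss([row + [predictors[k][i]] for i, row in enumerate(design)], y)
            for k in range(len(predictors)) if k not in chosen
        }
        k = min(trials, key=trials.get)
        if best - trials[k] <= tol:
            break
        chosen.append(k)
        design = [row + [predictors[k][i]] for i, row in enumerate(design)]
        best = trials[k]
    return chosen

# y depends (exactly, here) on x0 and x1 but not on x2, so forward
# selection should pick predictors 0 and 1 and then stop.
x0 = [0, 1, 2, 3, 4, 5]
x1 = [1, 0, 2, 1, 3, 2]
x2 = [5, 3, 4, 1, 2, 2]
y = [2 * a - b for a, b in zip(x0, x1)]
print(forward_select([x0, x1, x2], y))   # -> [0, 1]
```

In practice one would also run backward selection (starting from the full model and deleting terms) and check, as the text suggests, whether the two procedures agree on the subset.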
Moral Fitting a multiple regression model requires skilled interaction with the data and with an appropriate subject specialist.
D Analysing complex large-scale data sets
The exercises in this chapter are concerned with more substantial multivariate data sets. They should give the reader valuable experience in handling a large-scale 'messy' set of data, and emphasize the point that a simple 'common-sense' approach to data analysis is often preferable to the use of more sophisticated statistical techniques.

With a large data set, it is easy to feel rather overwhelmed. Then it is tempting to feed the data into a computer, without checking it at all, and then apply some sophisticated analysis which may, or may not, be appropriate. The ostensible reason for this is that data scrutiny and IDA are too time consuming for large data sets. In fact data scrutiny and data summarization are perhaps even more important for large data sets, where there are likely to be more errors in the data and more chance of going horribly wrong. The time involved will still be relatively small compared with the time spent collecting the data and getting it onto a computer.

Data scrutiny can be done partly 'by eye' and partly 'by computer'. As the size of the data set increases, it soon becomes impractical to look at all the tabulated raw data by eye, although I still recommend 'eyeballing' a small portion of the data. Nevertheless, the vast majority of the data scrutiny will be done with the aid of a computer, albeit backed up by an experienced eye. For example, the histogram of each variable can be quickly plotted by computer and then checked by eye for gross outliers.

There is increasing use of multivariate techniques such as principal component analysis and multidimensional scaling (see section 6.6). Examples demonstrating the use of such techniques may be found in many books, such as Gnanadesikan (1977), Chatfield and Collins (1980) and Everitt and Dunn (1983), and will not be duplicated here. These techniques can sometimes be very useful for reducing dimensionality and for giving two-dimensional plots.
However, traditional descriptive statistics is still a valuable prerequisite, even when the data sets are larger than those considered here, and it proves sufficient in the two examples that follow.
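The kind of computer-aided scrutiny described above is easy to automate. The sketch below (function names and thresholds are my own illustrative choices) bins one variable for a quick histogram check and screens for gross outliers using the median and the median absolute deviation:

```python
# A minimal sketch of automated data scrutiny: histogram counts for a quick
# shape check, and a crude screen for gross outliers based on the median
# absolute deviation (MAD). Names and thresholds are illustrative choices.

def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

def histogram_counts(values, bins=5):
    """Count how many values fall in each of `bins` equal-width intervals."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0        # guard against all-equal data
    counts = [0] * bins
    for v in values:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    return counts

def gross_outliers(values, k=5.0):
    """Return values lying more than k MADs from the median."""
    m = median(values)
    mad = median([abs(v - m) for v in values])
    if mad == 0:
        return []
    return [v for v in values if abs(v - m) > k * mad]

print(histogram_counts([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], bins=2))  # -> [5, 5]
print(gross_outliers([10, 11, 9, 10, 12, 11, 95]))                # -> [95]
```

For a large data set one would loop over the variables, print the histogram counts of each, and list whatever the outlier screen flags for inspection by an experienced eye.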
Exercise D.1
Chemical structure of plants
The data given in table D.1 are the weights in grams of fodder rape seeds and plants at 13 growth stages, together with the concentrations (μg/g) of 14 volatile chemicals extracted from them.

[Table D.1 Code numbers, weights and concentrations of 14 chemicals for fodder rape at various stages of growth. The table runs to several pages and its entries are not reproduced here.]

As the data are presented, each line represents a plant, the first value being weight. There are 10 plants of each age, and the ages are: stored seed, 1, 2, 3, 4 days, 2, 3, 4, 5, 6, 7, 8 weeks, fresh seed. A value of 0.5 for a concentration means that the chemical in question was not detected, while a value of -1.0 indicates a missing value. (Note that table D.1 is exactly how the data were presented to me, except for the two lines near the bottom of the table and the encircled observations at week 7 which are referred to in the notes overleaf.) The purpose of the exercise as presented to me was to 'use an appropriate form of multivariate analysis to distinguish between the ages of the plants in terms of the amounts of different chemicals which are present'. In other words we want a (preferably simple) description of the chemical structure of the plants and seeds at each stage, so that the different stages can easily be distinguished and so that the ageing process can be described in chemical terms. In practice the exercise would best be carried out jointly by a biochemist and a statistician, and the results related to other studies on the attraction of insects to host plants. However, in this exercise the reader should simply try to summarize the data in whatever way seems appropriate.
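On reading the data into a program, the coding conventions just described can be handled at once. A minimal sketch follows; the function name and the record layout assumption (weight first, then the 14 concentrations) are mine, not the book's:

```python
# A minimal sketch of decoding one line of table D.1: the first field is the
# plant's weight, the remaining fields are chemical concentrations, where
# 0.5 codes "not detected" (recoded to 0.0) and -1.0 codes "missing"
# (recoded to None). The function name is invented for illustration.

def decode_plant_record(line):
    fields = [float(tok) for tok in line.split()]
    weight, concentrations = fields[0], fields[1:]
    cleaned = [None if c == -1.0 else (0.0 if c == 0.5 else c)
               for c in concentrations]
    return weight, cleaned

print(decode_plant_record("0.0034 0.5 -1.0 202.3"))
# -> (0.0034, [0.0, None, 202.3])
```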
Exercise D.2
Backache in pregnancy
Backache is a frequent complaint among pregnant women. An investigation (Mantle, Greenwood and Currey, 1977) was carried out on all the 180 women giving birth in the labour wards of the London Hospital during the period May to August, 1973. Each woman received the questionnaire within 24 hours of delivery and had the assistance of a physiotherapist in interpreting and answering the questions. There was a 100% response rate and 33 items of information were collected from each patient. The data are shown in table D.3 together with a detailed list of the items recorded and the format. As well as a (subjective) measure of back pain severity, each patient's record contains various personal attributes, such as height and weight, as well as whether or not a list of factors relieved or aggravated the backache.

The object of the analysis is to summarize the data and in particular to see if there is any association between the severity of backache and other variables, such as number of previous children and age. As part of this exercise you should scrutinize the data in order to assess their quality and reliability. In practice this analysis should be carried out with an appropriate medical specialist, but here you are simply expected to use your common sense. Generally speaking, only simple exploratory techniques should be used to highlight the more obvious features of the data.

(Notes for lecturers: This exercise is intended to be carried out on a computer by students who have access to the data on file. Working 'by hand' would be too time consuming, and punching the data would also take too long. Teachers may obtain a copy of the data on a floppy disc from the author at a nominal charge, or (within the UK) by transference between computers where possible. This is well worth doing as the data are ideal for analysis by students at all levels. The above project to 'summarize the data' is really for third-level students. First-level students will find the data suitable for trying out descriptive statistics. I have also used the data for second-level students with much more specific objectives such as:

1. Comment briefly on the data-collection method.
2. Comment briefly on the quality of the data, and (a) pick out one observation which you think is an error, (b) pick out one observation which you think is an outlier.
3. Construct (a) a histogram, and (b) a stem-and-leaf plot of the gain in weight of the patient during pregnancy. Which of the graphs do you prefer?
4. Construct an appropriate graph or table to show the relationship, if any, between backache severity (item 2) and (a) age (item 4), (b) number of previous children (item 9).
5. Briefly say which factors, if any, you think may be related to back pain severity.
6. Construct a table showing the frequencies with which items 11-18 relieve backache.
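A two-way frequency table of the kind asked for in point 4 takes only a few lines; here is a sketch with invented toy values, not the real questionnaire responses:

```python
# A minimal sketch of a two-way frequency table (severity against number of
# previous children). The values below are invented toy data, not the
# questionnaire responses of table D.3.
from collections import Counter

def crosstab(row_var, col_var):
    """Count joint occurrences of (row value, column value) pairs."""
    return Counter(zip(row_var, col_var))

severity = [0, 2, 1, 3, 2, 0, 1, 2]   # back pain severity, item 2
children = [0, 1, 0, 2, 1, 0, 1, 0]   # previous children, item 9
table = crosstab(severity, children)
print(table[(2, 1)])   # -> 2 (two women with severity 2 and one child)
```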
Indeed, it is easy to think of a large number of different exercises, so I have found the data ideal for use in different projects with different students.)

NOTES ON EXERCISE D.1
Have you read the comments at the beginning of this section? Then you know that the first step is to scrutinize the data, assess their structure, pick out any obvious errors and outliers, and also notice any obvious properties of the data. You may think the data look a 'bit of a mess' and you would be right. You have to get used to the look of real data, although anyone with a little experience could easily clean them up. For example, the code numbers could (and should) be given as simple integers without all the zeros. Also notice the large number of undetected values recorded as 0.5 which could (and should) be replaced by zeros or blanks. With the data stored in a computer, it is relatively easy to print revised data to replace the original, rather 'horrible' data format. The observations could be rounded in an appropriate way, usually to two-significant-digit accuracy, the missing values could be replaced by asterisks or removed completely, and undetected values removed as noted above. The values for fresh seed should be reordered at the beginning of the data, as they need to be compared with the stored-seed values rather than with the 8-week data. Inspection of the data can also be made easier by drawing lines on the data table to separate the different growth stages (as at the bottom of table D.1), and by labelling the 14 variables from A to N. The data should then be inspected for suspect values, which could be encircled. There are several cases where the same figure is repeated in two or more columns (e.g. the values at 7 weeks encircled in table D.1), where digits are possibly transposed, or where there are possible outliers. With no access to the original data, it is a matter of judgement as to which observations should be adjusted or disregarded.

[Table D.3 Data on backache in pregnancy compiled from 180 questionnaires. The 180 records themselves, and the Fortran format column of the key, are not reproduced here.]

Key:
Item 1: Patient's number.
Item 2: Back pain severity: 'nil' = 0; 'nothing worth troubling about' = 1; 'troublesome but not severe' = 2; 'severe' = 3.
Item 3: Month of pregnancy pain started.
Item 4: Age of patient in years.
Item 5: Height of patient in metres.
Item 6: Weight of patient at start of pregnancy in kilogrammes.
Item 7: Weight at end of pregnancy.
Item 8: Weight of baby in kilogrammes.
Item 9: Number of children from previous pregnancies.
Item 10: Did patient have backache with previous pregnancy: not applicable = 1; no = 2; yes, mild = 3; yes, severe = 4.
Items 11-18: Factors relieving backache (no = 0; yes = 1): 1. tablets, e.g. aspirin; 2. hot water bottle; 3. hot bath; 4. cushion behind back in chair; 5. standing; 6. sitting; 7. lying; 8. walking.
Items 19-33: Factors aggravating pain (no = 0; yes = 1): 1. fatigue; 2. bending; 3. lifting; 4. making beds; 5. washing up; 6. ironing; 7. a bowel action; 8. intercourse; 9. coughing; 10. sneezing; 11. turning in bed; 12. standing; 13. sitting; 14. lying; 15. walking.
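The two-significant-digit rounding suggested in the notes above can be done with a few lines; this is a sketch, and the function name is mine:

```python
# A minimal sketch of rounding to two significant digits, as suggested for
# printing a cleaned copy of the data. The function name is invented.
from math import floor, log10

def round_sig(x, digits=2):
    """Round x to the given number of significant digits."""
    if x == 0:
        return 0.0
    return round(x, digits - 1 - floor(log10(abs(x))))

print(round_sig(135.3))    # -> 140.0
print(round_sig(0.0179))   # -> 0.018
```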
SUMMARIZING THE DATA
It may seem natural to begin by calculating the sample mean for each variable at each growth stage. However, several variables appear to have a skewed distribution, so that medians may be more appropriate than means. Alternatively, some sort of transformation may be desirable. Instead of analysing concentrations, as in the given data, perhaps we should multiply by the appropriate weight and analyse amounts.

A completely different approach, which turned out to be suitable in this case, is to treat the data as binary (the chemical is observed/not observed). If insects are attracted by certain chemicals and repelled by others, then the presence or absence of certain chemicals may be more important than concentrations or amounts. Table D.2 shows the presence/absence table as well as the mean weights at different growth stages. The order of the variables has been changed to emphasize the block structure of the table. (Did you think of doing that?)

Table D.2 gives clear guidance on distinguishing between the different growth stages. For example, chemical C is only present in stored seed and 1-day-old plants. Week 2 is characterized by the presence of M and N, and so on. The table also suggests that it is sensible to query the appearance of chemical D in week 3, and reveals that the period of greatest change is between day 4 and 2 weeks. The absence of data in this period is therefore unfortunate.
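Deriving the entries of a presence/absence table like table D.2 is mechanical once the data are in a program. A sketch follows (the function name is invented; recall that a recorded concentration of 0.5 codes 'not detected'):

```python
# A minimal sketch of deriving one entry of a presence/absence table such as
# table D.2: '+' if the chemical is detected in every sample at a growth
# stage, '?' if in only some, and '' (blank) if in none. Recall that a
# recorded concentration of 0.5 means "not detected".

def presence_mark(concentrations, not_detected=0.5):
    detected = [c > not_detected for c in concentrations]
    if all(detected):
        return '+'
    if any(detected):
        return '?'
    return ''

print(presence_mark([12.1, 30.2, 5.5]))   # -> '+'
print(presence_mark([0.5, 8.0]))          # -> '?'
```

Applying this to the 10 plants at each of the 13 stages, for each of the 14 chemicals, yields the whole table in one pass.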
DISCUSSION
The presence/absence table gives a clear guide to differences between growth stages. It is not obvious that anything further can usefully be done with these data in the absence of biochemical advice. Certainly there seems little point in carrying out any multivariate analysis as suggested in the statement of the problem. Indeed principal component analysis gives no further understanding of the data (see Chatfield, 1982, Exercise 5). Although no explicit statistical model has been fitted, we now have a good idea of the main properties of the data, and it is not uncommon for binary information to represent most of the available information in quantitative data (e.g. Exercise E.4).
Table D.2  Presence/absence table for the volatile substances (columns reordered as B, D, C, A, L, K, F, J, E, M, N, H, G), together with the mean weight at each growth stage

Growth stage    Mean weight (g)
Fresh seed         0.0030
Stored seed        0.0034
Day 1              0.0071
Day 2              0.012
Day 3              0.025
Day 4              0.017
Week 2             0.80
Week 3             2.4
Week 4            11.2
Week 5            29.0
Week 6            43.0
Week 7            72.0
Week 8           172.0

Key: +  The substance is detected in all samples.
     ?  The substance is detected in some samples.

[The individual presence/absence entries are too garbled in this copy to reconstruct.]
Moral  Simple descriptive statistics is sometimes adequate for apparently complicated data sets.

BRIEF COMMENTS ON EXERCISE D.2

It would take too much space to comment fully on the data. These brief remarks concentrate on important or unusual aspects and leave the rest to the reader (see also Chatfield, 1985, Example 3). You should first have queried the data-collection method, which is not difficult to criticize. The sample is not random and the results will not necessarily generalize to women in other parts of the country or women who were pregnant at a different time of year. It is also debatable whether women should be questioned within 24 hours of delivery, although the 'captive' audience did at least produce a 100% response rate! The questionnaire is open to criticism in places and the assessment of pain necessarily involves a difficult subjective judgement. However, the collection methods could only have been improved at much greater cost, which would be unwarranted in an exploratory survey of this nature. While any model-based inference is probably unjustifiable, it would be overreacting to reject the data completely. As a compromise, they should be treated in a descriptive way to assess which variables are potentially important.

The data should be scrutinized and summarized using an appropriate computer package. I used MINITAB, which is very good for this sort of exercise. The data are fairly 'messy' and you should begin by plotting a histogram of each variable, with the particular aim of finding suspect values and other oddities. Four examples must suffice here. The histogram of patients' heights reveals several cells with zero frequencies which destroy the expected 'normal' shape. A little research reveals that heights must have been recorded in inches and then converted to centimetres in order to produce the given data. Did you spot this? The histogram of baby weights reveals two babies whose weights are 5.97 kg and 6.28 kg. These values are much higher than the remaining weights and I would judge them to be outliers. While they may have been misrecorded (or indicate twins?), they are medically feasible and should probably not be excluded. Patient weight gains during pregnancy are worth looking at, by subtracting 'weight at start' from 'weight at end'. One patient has zero weight gain, indicating a possible repetition error. However, the histogram of weight gains indicates that a zero value is not out of line with other values and so need not be regarded as suspicious. Finally, we note that 26 patients recorded a zero value when asked to assess backache in previous pregnancies. A zero value is impossible according to the listed code and so it is probably wise to ignore item 10.

After screening the data, a variety of summary statistics, graphs and tables should be calculated. The most important statistic is that nearly half the sample (48%) had either troublesome or severe backache, indicating that back pain really is a serious problem. In order to assess the relationship between the discrete variable 'severity of backache' and other variables, I note that scatter diagrams are inappropriate, but rather that a set of box plots (e.g. fig. D.1) or a two-way table (e.g. table D.4) should be formed, depending on whether the other variable is continuous (e.g. height) or discrete (e.g. number of previous children).
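The inch-to-centimetre artefact described above can also be checked directly: heights recorded in whole inches and then converted all sit on multiples of 2.54 cm, which is what empties cells of a 1 cm histogram. A small sketch (the heights below are invented for illustration, not the survey's values):

```python
# Check whether a set of heights in cm could have been converted from whole inches.
# Converted values are all (very nearly) integer multiples of 2.54 cm.

def looks_converted_from_inches(heights_cm, tol=0.01):
    """True if every height is within tol of a whole number of inches."""
    return all(abs(h / 2.54 - round(h / 2.54)) < tol for h in heights_cm)

# Invented example: 64, 66 and 68 inches converted to cm, and a genuine cm reading.
converted = [162.56, 167.64, 172.72]   # 64 * 2.54 etc.
genuine = [162.56, 167.64, 171.0]      # 171.0 cm is not a whole number of inches

print(looks_converted_from_inches(converted))  # True
print(looks_converted_from_inches(genuine))    # False
```

A check like this turns the visual impression from the histogram into a definite statement about how the data were recorded.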
When plotting box plots with unequal sample sizes, the latter should be clearly recorded, as in fig. D.1, because the range increases with sample size and is potentially misleading. Without doing any ANOVA, it should now be obvious that there is little or no association between backache and height, weight, weight gain or weight of baby. In fig. D.1 we see a slight tendency for older women to have more severe backache,
Figure D.1  The set of box plots showing back-pain severity (0, 1, 2, 3) against age (years), with the number of patients in each severity group (13, 80, 60 and 27 respectively) recorded alongside.
Table D.4  The number of women tabulated by degree of back-pain severity and number of previous children

                            Degree of back-pain severity
Number of
previous children      0          1           2           3          Total

0                   8 (8%)    56 (55%)    28 (28%)     9 (9%)     101 (100%)
1                   3 (6%)    16 (31%)    22 (42%)    11 (21%)     52 (100%)
2 or more           2 (7%)     8 (30%)    10 (37%)     7 (26%)     27 (100%)

Total              13         80          60          27          180
while table D.4 indicates a clearer association with the number of previous children. The latter relationship is easy to explain in that picking up a young child may injure the back. Of course age and number of previous children are correlated anyway (see Mantle et al., 1977, fig. 1). To analyse the factors relieving or aggravating backache, the first step is to produce count frequencies for each factor and tabulate them in numerical order, as for example in table D.5. A 'cushion behind the back' is best for relieving backache, while 'standing' is the most aggravating factor. The more important factors could, if desired, be tabulated against back-pain severity to see which are most associated with severe backache. While the above analysis looks fairly trivial at first sight, I have found that students often have difficulties or produce inferior results as compared with those produced by a more experienced analyst. For example, did you remember to look for errors and outliers? In table D.4, did you combine the data for two up to seven pregnancies (or some similar grouping)? If not, I hope you agree that this modification makes the table easier to 'read'. In fig. D.1, did you separate the age values into discrete groups and produce a two-way table rather than a set of box plots? In table D.5, did you present the factors in the same order as listed in table D.3, and, if so, do you agree that table D.5 is easier to read? Finally, are all your graphs and tables clearly labelled?

Table D.5  The number of women for whom a particular factor relieved backache

Factor                          Frequency
Cushion behind back in chair       57
Lying down                         38
Tablets, e.g. aspirin              23
Sitting                            20
Hot bath                           15
Standing                            9
Hot water bottle                    9
Walking                             5
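Row percentages like those in table D.4 are easily regenerated from the raw cell counts; the short Python sketch below (not part of the original text) reproduces the entries of table D.4 and its row totals.

```python
# Rebuild table D.4's row percentages from its raw cell counts.
# Columns are back-pain severity 0-3; rows are number of previous children.
counts = {
    '0':         [8, 56, 28, 9],
    '1':         [3, 16, 22, 11],
    '2 or more': [2, 8, 10, 7],
}

rows = {}
for children, row in counts.items():
    total = sum(row)
    rows[children] = [f'{n}({100 * n / total:.0f}%)' for n in row]
    print(f'{children:10s}', ' '.join(rows[children]), f'total {total}')
```

Computing the percentages within rows, as here, is what makes the table comparable across groups of very different sizes (101, 52 and 27 women).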
SUMMARY
Backache is troublesome or severe for nearly half the given sample of women. There seems to be little association with other variables except that it is more likely to occur in women with previous children. 'Standing' is the factor which aggravates backache most, while 'a cushion behind the back' is best for relief.
Moral Elementary 'descriptive' methods are not as easy to handle as is sometimes thought. Students need plenty of practical experience and a 'messy' set of data like the one given here is ideal for learning how to apply graphical and tabular techniques in a more methodical way.
E Analysing more structured data
Data which arise from experimental designs and more complex surveys often have a well-defined structure which needs to be taken into account during the analysis. For example, in a randomized block design we know that each observation on the response variable corresponds to a particular treatment and to a particular block. Comparisons between treatments should then be made after eliminating variation due to between-block differences. It will normally be useful to fit a proper stochastic model to structured data. Most such models can be regarded as special cases of the generalized linear model (Appendix A.9), but the simpler formulation of the general (as opposed to generalized) linear model should be used where possible as it is easier to explain to non-statisticians. Some complicated designs, such as fractional factorial experiments and Latin squares, give rise to highly structured data sets where the form of analysis is largely determined a priori. Then the scope for IDA may be rather limited, apart from data quality checks (e.g. Example E.2). However, in many other cases, IDA has an important role to play, not only for checking data quality and getting a 'feel' for the data, but also in formulating a sensible model based on empirically reasonable assumptions. The examples in this section concentrate on problems where IDA has an effective role to play.

Exercise E.1
Assessing row and column effects
A textbook reports that the following experimental results (table E.1) arose when comparing eight treatments (t1–t8) in three blocks (blocks I–III). (a) What sort of design do you think was used here? (b) From a visual inspection of the data: (i) Are the treatment effects significantly different from one another? (ii) Are the block effects significantly different from one another? (c) Do you think this is an artificial data set? (d) How would you analyse these data 'properly'? (Note: The units of measurement were not stated.)
Table E.1  Some experimental results from treatment comparisons

Block         t1     t2     t3     t4     t5     t6     t7     t8    Row mean

I            35.2   57.4   27.2   20.2   60.2   32.0   36.0   43.8    39.0
II           41.2   63.4   33.2   26.2   60.2   32.0   36.0   43.8    42.0
III          43.6   53.2   29.6   25.6   53.6   32.0   30.0   44.4    39.0

Column mean  40.0   58.0   30.0   24.0   58.0   32.0   34.0   44.0    40.0
Exercise E.2
Engine burners/Latin square design
The data* shown in table E.3 resulted from an experiment to compare three types of engine burner, denoted by B1, B2 and B3. Tests were made using three engines and were spread over three days, and so it was convenient to use a design called a Latin square design in which each burner was used once in each engine and once on each day. Analyse the data.
Table E.3  Measurements of efficiency of three burners

         Engine 1   Engine 2   Engine 3
Day 1    B1: 16     B2: 17     B3: 20
Day 2    B2: 16     B3: 21     B1: 15
Day 3    B3: 15     B1: 12     B2: 13

Exercise E.3

Comparing wheat varieties/unbalanced two-way ANOVA
Performance trials were carried out to compare six varieties of wheat at 10 testing centres in three regions of Scotland in 1977. A description of the experiment and the data is given by Patterson and Silvey (1980). It was not possible to test each variety at each centre. The average yields, in tons of grain per hectare, are given in table E.6 for each variety at each centre at which it was grown. (a) Comment on the type of design used here. (b) Analyse and summarize the data. In particular, if you think there are differences between the varieties, pick out the one which you think gives the highest yield on average. From other more extensive data, Patterson and Silvey (1980) suggest that the

*Fictitious data based on a real problem.
Table E.6  Yield (in tons of grain per hectare) of six varieties of winter wheat at 10 centres

            Centre
Variety      E1     E2     N3     N4     N5     N6     W7     E8     E9     N10

Huntsman    5.79   6.12   5.12   4.50   5.49   5.86   6.55   7.33   6.37   4.21
Atou        5.96   6.64   4.65   5.07   5.59   6.53   6.91   7.31   6.99   4.62
Armada      5.97   6.92   5.04   4.99   5.59   6.57   7.60   7.75   7.19
Mardler     6.56   7.55   5.13   4.60   5.83   6.14   7.91   8.93   8.33
Sentry                                                7.34   8.68   7.91   3.99
Stuart                                                7.17   8.72   8.04   4.70
standard error of the average yield for one variety at one centre is known to be around 0.2 (tons of grain per hectare), and you may use this information when assessing the data.
Exercise E.4
Failure times/censored data
The data in table E.9 show the results of an initial testing programme which was designed to investigate the effects of four variables on the failure times of different rigs. The four variables are load, promoter, precharging and charging potential. The practical details were clarified at the time the data arose, but do not need to be explained here. However, all tests were truncated at 20 hours and NF is used to indicate that no failure occurred in the first 20 hours. Identify the design and analyse the data. Which of the variables are important in determining the length of failure time and how can it be maximized?
Table E.9  Failure times (in hours) from a life-testing experiment

The table has four rows for each load (the combinations of precharging 0 or 5 hours with promoter 0 or 0.25) and eight columns of times (two replicate rigs at each charging potential 0.7, 0.9, 1.2 and 1.45 V). The block for load 400 reads:

Load 400                          Charging potential (V), rigs 1 and 2
Precharging (hours)  Promoter     0.7        0.9          1.2         1.45
0                    0           NF   NF    NF    NF     4.2  NF     NF   3.6
0                    0.25        NF   NF    NF    3.9    NF   NF     NF   3.6
5                    0           NF   NF    NF    2.0    2.4  1.9    2.2  2.2
5                    0.25        NF   NF    11.3  2.7    1.8  2.0    5.3  1.0

[The entries for loads 450, 475, 500, 600 and 750 are too garbled in this copy to reconstruct reliably; the surviving fragments show mostly short failure times, with the occasional NFs concentrated at the lower loads and charging potentials.]

NF denotes no failure by 20 hours.
NOTES ON EXERCISE E.1
The easiest question to answer is (c): there is no rule which says you have to answer the questions in the order presented! All the row and column sums are integers. This is so unlikely that it is clear that the data are fictitious. They are in fact adapted from an artificial exercise in a textbook on design. No background information was given and the units of measurement were not stated. Any analysis will be what Professor D. J. Finney has called an analysis of numbers rather than data, a useful distinction. Nevertheless, the numbers given in table E.1 provide a useful tutorial exercise in assessing row and column effects. The answer to (a) is clearly meant to be a randomized block design, although no mention of randomization is made in the question. In practice you would want to find out if randomization really had been used (as well as getting background information on units of measurement, objectives etc.!). When analysing the results of a randomized block experiment, the first step is to calculate the row and column means (which have already been added to table E.1), and then carefully examine them. In this case the differences between column means are relatively large, and those between row means relatively small. While it may be hard to assess the residual variation without removing the row and column effects, I hope that it is intuitively obvious to you in this case that the treatment means are
significantly different, but that block means are not. Of course the significance of block effects is not very important as blocks are used to control variation and there is usually relatively little interest in the size of block differences, except insofar as they affect the efficiency of the design (was it worth blocking in the first place?). The 'proper' analysis here would be to carry out a two-way ANOVA which partitions the total variation into components due to variation between treatments (columns), between blocks (rows) and the residual variation, giving the results shown in table E.2. The F-ratio for treatments is 49.6 (= 480/9.67) on 7 and 14 DF and is highly significant. In contrast the F-ratio for blocks is not significant.

Table E.2  The ANOVA table for the data of table E.1

Source        Sum of squares   Degrees of freedom   Mean square     F

Treatments       3360.0                7               480        49.6
Blocks             48.0                2                24         2.5
Residual          135.36              14                 9.67
Total            3543.36              23
As the results of these significance tests are really obvious in advance, the main use of the ANOVA here is to provide an unbiased estimate of the residual variance (or mean square), namely s² = 9.67. This may be useful in a variety of follow-up procedures. In particular, having found a significant difference between treatments, the ANOVA would normally be followed by an examination of the treatment (column) means to see which is 'best'. Unfortunately, we do not know if a high or low observation is 'good' in this case, and so we simply note that treatments 2 and 5 give the highest results and treatment 4 the lowest. The standard error of the difference between two treatment means is √[9.67(1/3 + 1/3)] = 2.5, and so the extreme treatment means clearly differ significantly from the next highest or lowest treatment mean as appropriate. Illustrative commands for carrying out the ANOVA using the MINITAB and GLIM3 packages are given in Appendix B. The output from MINITAB is much easier to understand.
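Appendix B gives MINITAB and GLIM commands; as a modern alternative (not from the book), the randomized-block ANOVA of table E.2 can be computed from first principles in a few lines of Python, using the data of table E.1.

```python
# Two-way ANOVA for the randomized block data of table E.1, from first principles.
# Rows are blocks I-III, columns are treatments t1-t8.
data = [
    [35.2, 57.4, 27.2, 20.2, 60.2, 32.0, 36.0, 43.8],  # block I
    [41.2, 63.4, 33.2, 26.2, 60.2, 32.0, 36.0, 43.8],  # block II
    [43.6, 53.2, 29.6, 25.6, 53.6, 32.0, 30.0, 44.4],  # block III
]
b, t = len(data), len(data[0])            # 3 blocks, 8 treatments
grand = sum(sum(row) for row in data) / (b * t)
row_means = [sum(row) / t for row in data]
col_means = [sum(data[i][j] for i in range(b)) / b for j in range(t)]

# Sums of squares: between treatments, between blocks, total, residual.
ss_treat = b * sum((m - grand) ** 2 for m in col_means)
ss_block = t * sum((m - grand) ** 2 for m in row_means)
ss_total = sum((y - grand) ** 2 for row in data for y in row)
ss_resid = ss_total - ss_treat - ss_block

ms_resid = ss_resid / ((b - 1) * (t - 1))
f_treat = (ss_treat / (t - 1)) / ms_resid
print(round(ss_treat, 2), round(ss_block, 2), round(ss_resid, 2))  # 3360.0 48.0 135.36
print(round(f_treat, 1))                                           # 49.6
```

The printed values reproduce table E.2 exactly, which is a useful check on both the table and one's own arithmetic.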
Moral  With a one-way or two-way ANOVA, the results of any significance tests may be obvious in advance, and the main purpose of the ANOVA may then be to provide an estimate of the residual variance which can be used to estimate confidence intervals for the differences between treatment means.
SOLUTION TO EXERCISE E.2
These data are taken from Chatfield (1983, p. 256) where they were presented as a technique-oriented exercise with the instruction 'Test the hypothesis that there is no
difference between burners'. Although the accompanying text in the above book makes clear that a full statement of the results of an analysis is required, rather than just saying if a result is significant or not, the brief solution provided in the answers section simply says that 'F = 19.8 is significant at the 5% level'. This tells us little or nothing in isolation. In the spirit of this book, the instructions for this example are quite different, namely to 'analyse the data'. You should therefore concentrate on understanding the data and aim for a more mature approach in which the ANOVA is relegated to a more subsidiary position. These data are more highly structured than those in Example E.1 in that each observation is indexed by a particular row (day), a particular column (engine) and a particular burner. It is the elegance and power of the Latin square design which enables us to separate the effects of these three factors and examine them all at the same time. Presumably the comparison of burners is of prime importance. This is difficult to do 'by eye' because of the Latin square design. It is therefore sensible to begin by calculating row, column and burner (treatment) means as shown in table E.4.
Table E.4  Mean values for burner efficiency data

Day 1 = 17.7    Engine 1 = 15.7    Burner 1 = 14.3
Day 2 = 17.3    Engine 2 = 16.7    Burner 2 = 15.3
Day 3 = 13.3    Engine 3 = 16.0    Burner 3 = 18.7

Overall mean = 16.1
Note that the day 3 mean is rather low and the burner 3 mean is rather high. It is still difficult to assess if these differences are significant. To assess the size of the residual variation we can remove the row, column and treatment effects to calculate the table of residuals. With an orthogonal design, this is easy to do by subtracting appropriate means. For example, the residual for the top left observation in table E.3 is given by

(16 − 16.1) − (17.7 − 16.1) − (15.7 − 16.1) − (14.3 − 16.1) = 0.5

that is, (residual from overall mean) − (day effect) − (engine effect) − (burner effect). The full list of residuals (reading by rows and calculated more accurately to two decimal places) is: 0.56, −0.44, −0.11, −0.11, 0.56, −0.44, −0.44, −0.11 and 0.56. (Check that they sum to zero.) These residuals look small compared with the differences between burner means and between day means. However, there are only two degrees of freedom left for the residuals, so that they are likely to appear to have smaller variance than the true 'errors'. It seems that there is no substitute for a proper ANOVA in this case to determine the significance of the burner effect. Using GLIM (see Appendix B), table E.5 shows the resulting ANOVA table. The F-ratio for
burners looks high, but because the degrees of freedom are small, the result is only just significant at the 5% level. We conclude that there is some evidence of a difference between burners, and in particular burner 3 looks best. It is helpful to use the residual MS to calculate the standard error of the difference between two burner means, namely √[0.78(1/3 + 1/3)] ≈ 0.7. We see that burner 3 is nearly five standard errors above the next burner.
Table E.5  ANOVA table for the data in table E.3

Source      SS     DF    MS      F

Burners    30.88    2   15.44   19.8
Days       34.89    2   17.44   22.3
Engines     1.56    2    0.78    1.0
Residual    1.56    2    0.78
Total      68.89    8
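As a check (not part of the original text, which used GLIM), the Latin square decomposition can be reproduced in a few lines of Python from the data of table E.3: each factor's sum of squares is three times the sum of squared deviations of its level means from the grand mean, and the results agree with table E.5 up to rounding.

```python
# Latin square decomposition for the burner data of table E.3.
# Each observation: (day, engine, burner, efficiency), levels coded 0-2.
obs = [
    (0, 0, 0, 16), (0, 1, 1, 17), (0, 2, 2, 20),
    (1, 0, 1, 16), (1, 1, 2, 21), (1, 2, 0, 15),
    (2, 0, 2, 15), (2, 1, 0, 12), (2, 2, 1, 13),
]
grand = sum(o[3] for o in obs) / len(obs)

def level_means(idx):
    """Mean efficiency at each of the three levels of the factor in position idx."""
    return [sum(o[3] for o in obs if o[idx] == k) / 3 for k in range(3)]

def ss(means):
    """Factor sum of squares: 3 times the squared deviations of its level means."""
    return 3 * sum((m - grand) ** 2 for m in means)

day_m, eng_m, burner_m = level_means(0), level_means(1), level_means(2)
ss_total = sum((o[3] - grand) ** 2 for o in obs)
ss_resid = ss_total - ss(day_m) - ss(eng_m) - ss(burner_m)

print([round(m, 1) for m in burner_m])             # [14.3, 15.3, 18.7]
print(round(ss(burner_m), 2), round(ss_resid, 2))  # 30.89 1.56
```

The residuals quoted in the text can be obtained the same way, by subtracting the day, engine and burner effects from each deviation.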
Of course this is a very small experiment and we should not read too much into it. Indeed you may think this a rather trivial data set. However, its small scale enables us to appreciate the problems involved, and I have deliberately chosen it for that reason. The reader could now go on to more complicated data sets such as the replicated (4 × 4) Latin square in data set 9 of Cox and Snell (1981). There it is necessary to take out a replicate (day) effect on one DF as well as row, column and treatment effects.
Moral  With a complicated design like a Latin square, an ANOVA may be indispensable in assessing the significance of the treatment effects and in providing an estimate of the residual variance for use in estimation.
NOTES ON EXERCISE E.3
(a) Regarding the wheat varieties as 'treatments' and the centres as 'blocks', and assuming that plots are allocated randomly to treatments at each centre, then we have a form of randomized block experiment. However, the experiment is incomplete as not all varieties are tested at each centre. Moreover, the experiment is unbalanced as the numbers of observations on each variety may differ. Unbalanced, non-orthogonal designs were generally avoided for many years as they lead to a more complicated analysis. However, the computing resources now available make this much less of a problem. Another drawback to unbalanced designs is that comparisons between different varieties may have different precisions. However, this will not be too much of a problem unless the design is 'highly unbalanced'. It is hard to say exactly what this
means, but if there are, say, four times as many observations on one treatment as another, or if different pairs of treatments occur several times or not at all in the same block, then there may be difficulties. Reasonably sensible unbalanced designs are now increasingly used when practical problems make balanced designs difficult or impossible to achieve. (b) Let us start as usual by looking at the data. We see that there are large differences between centres (e.g. compare E8 with N10). This is to be expected given that crops generally grow better in a warmer climate. There are also some systematic, though smaller, differences between varieties. In particular, we note that the Mardler variety gives the highest yield at seven of the 10 centres. It may help to calculate the row and column means, as given in table E.7, and add them to table E.6, although, with an unbalanced design, it should be borne in mind that they are potentially misleading. For example, the Stuart variety gets the highest row mean, but three of its four readings are at three of the 'best' centres and so its row mean is artificially high. It may also help to plot the data as in fig. E.1, where the centres have been reordered by region.
Table E.7  (Unadjusted) row (a) and column (b) means of wheat yields from table E.6

(a) Variety   Huntsman   Atou   Armada   Mardler   Sentry   Stuart
    Mean        5.73     6.03    6.40     6.78      6.98     7.16

(b) Centre     E1     E2     N3     N4     N5     N6     W7     E8     E9     N10
    Mean      6.07   6.81   4.99   4.79   5.62   6.27   7.25   8.12   7.47   4.38
Given the external estimate of precision, it is quite clear that there are major differences between centres, and smaller, though non-negligible, differences between varieties. Bearing in mind that the Sentry and Stuart varieties were only tested at centres 7–10, the choice of 'best' variety seems to lie between Mardler, Stuart and Sentry. Finally, we ask if there is any evidence of interaction, by which we mean do some varieties grow particularly well (or badly) at certain centres. There is some mild evidence of interaction (e.g. Huntsman does rather well at N3), but the effects are generally small and the lines in fig. E.1 seem reasonably parallel. There are certainly no obvious outliers. The above analysis may well be judged adequate for many purposes. However, it would be helpful to have a 'proper' comparison of varieties which takes the
Figure E.1  Yields of winter wheat at the 10 centres, reordered by region, with one line per variety (Huntsman, Atou, Armada, Mardler, Sentry, Stuart).
unbalanced nature of the design into account. We therefore present the results of an analysis using GLIM. As the effects of variety and centre are not orthogonal, we can get two ANOVA tables depending on whether the variety or centre effect is fitted first (table E.8). Although 'different' in the top two rows, the two ANOVAs proclaim the same general message. The centre effect gives a much larger F-ratio than the variety effect, but the latter is still significant at the 1% level whether it is fitted first or second.
Table E.8  ANOVA tables for wheat yield data

(a) Fitting varieties first

Source                     SS      DF     MS       F

Varieties                 10.67     5    2.13    13.6
Centres after varieties   60.85     9    6.76    43.1
Residual                   4.87    31    0.157
Total                     76.39    45

(b) Fitting centres first

Source                     SS      DF     MS       F

Centres                   67.14     9    7.46    47.5
Varieties after centres    4.38     5    0.88     5.6
Residual                   4.87    31    0.157
Total                     76.39    45
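The order dependence of the sums of squares in table E.8 is easy to reproduce. The sketch below (not from the book, which used GLIM) computes the sequential sum of squares for whichever factor is fitted first in closed form: it is the weighted spread of that factor's level means about the grand mean, Σ nₖ(ȳₖ − ȳ)². With the data of table E.6 it recovers the first rows of both ANOVA tables (up to rounding).

```python
# Sequential (first-fitted) sums of squares for the unbalanced two-way layout
# of table E.6: SS(factor fitted first) = sum over its levels of
# n_k * (level mean - grand mean)^2.
yields = {
    'Huntsman': {'E1': 5.79, 'E2': 6.12, 'N3': 5.12, 'N4': 4.50, 'N5': 5.49,
                 'N6': 5.86, 'W7': 6.55, 'E8': 7.33, 'E9': 6.37, 'N10': 4.21},
    'Atou':     {'E1': 5.96, 'E2': 6.64, 'N3': 4.65, 'N4': 5.07, 'N5': 5.59,
                 'N6': 6.53, 'W7': 6.91, 'E8': 7.31, 'E9': 6.99, 'N10': 4.62},
    'Armada':   {'E1': 5.97, 'E2': 6.92, 'N3': 5.04, 'N4': 4.99, 'N5': 5.59,
                 'N6': 6.57, 'W7': 7.60, 'E8': 7.75, 'E9': 7.19},
    'Mardler':  {'E1': 6.56, 'E2': 7.55, 'N3': 5.13, 'N4': 4.60, 'N5': 5.83,
                 'N6': 6.14, 'W7': 7.91, 'E8': 8.93, 'E9': 8.33},
    'Sentry':   {'W7': 7.34, 'E8': 8.68, 'E9': 7.91, 'N10': 3.99},
    'Stuart':   {'W7': 7.17, 'E8': 8.72, 'E9': 8.04, 'N10': 4.70},
}
data = [(v, c, y) for v, row in yields.items() for c, y in row.items()]
n = len(data)
grand = sum(y for _, _, y in data) / n

def ss_first(idx):
    """Sequential SS for the factor in position idx (0 = variety, 1 = centre)."""
    levels = {rec[idx] for rec in data}
    total = 0.0
    for lev in levels:
        ys = [rec[2] for rec in data if rec[idx] == lev]
        total += len(ys) * (sum(ys) / len(ys) - grand) ** 2
    return total

print(n)                      # 46 observations
print(round(ss_first(0), 2))  # varieties first: 10.68 (table E.8 gives 10.67)
print(round(ss_first(1), 2))  # centres first: 67.15 (table E.8 gives 67.14)
```

The second row of each table ('after' the other factor) needs a proper least-squares fit, which is where a package such as GLIM earns its keep.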
The GLIM analysis assumes normal errors with homogeneous variance. The latter assumption is not unreasonable given the ranges of values at the different centres. Other assumptions can be checked in the usual way by looking at the residuals. One problem with an unreplicated two-way design is that it is difficult or impossible to tell if a large residual indicates an outlier or is caused by an interaction term. With only one replicate, it is not possible to fit all interaction terms and also get an estimate of the residual variance. For the model fitted in table E.8, the residual standard deviation is √0.157 ≈ 0.4, which is larger than the external estimate of precision, namely 0.2. However, if it were possible to take interactions into account, the value of 0.4 would probably be reduced. The GLIM analysis also provides estimates of the row and column means adjusted to take account of the unbalanced design. For the varieties, if we leave the adjusted means for Huntsman and Atou unchanged, then Armada and Mardler decrease by 0.2 (because they are not included at the 'worst' N10 centre), while Sentry and Stuart decrease by 0.57. Thus the adjusted means for Mardler and Stuart are nearly identical and better than the rest. Given that Mardler is best at seven centres and has been tried more extensively, I would prefer Mardler, other things being equal.
For the centres, if we leave the adjusted means for the first six centres unchanged, then centres 7–9 decrease by about 0.12, while centre 10 decreases by 0.06. Thus centre 8 gives the best results and centre 10 the worst. An alternative, even more complicated, analysis is given by Patterson and Silvey (1980) and should certainly be considered by the specialist agricultural statistician. Other readers may find that analysis rather opaque and may well find the analysis given here not only easier to follow, but also easier to interpret and more suitable for providing a useful summary of the data.
Moral  Even with an unbalanced design, an exploratory examination of the data may still be successful in highlighting their main features.

DISCUSSION OF EXERCISE E.4
Two observations have been made at every possible combination of the different selected levels of the four variables, and this is called a twice-replicated (6 × 2 × 2 × 4) complete factorial experiment. The total number of observations is equal to 2 × 6 × 2 × 2 × 4 = 192. An important question not answered in the statement of the problem is whether the design has been randomized, both in the allocation of the rigs and in the order of the experiments. In fact the design was not properly randomized, although it was at least not fully systematic. However, we will proceed as if the data were reliable while bearing this drawback in mind. Before starting the analysis, it would be important in practice to get more background information and clarify the objectives precisely. Do we want a model describing behaviour over the whole range of experimental conditions or are we mainly interested in maximizing failure time? We concentrate on the latter. You may not have seen data like these before. Nearly a quarter of the observations are truncated at 20 hours. Data like these are called censored data (see also Example B.5). What do we do about them? It might be possible to analyse the quantitative information in the lower part of table E.9, which is almost unpolluted by censored values, but this is the least interesting part of the data given our interest in maximizing failure time. The physicist who brought me the data had arbitrarily inserted the value 22 hours for each NF value and then carried out a four-way ANOVA. This suggested that the main effects of load, promoter and charging potential are significant while precharging is not. However, this approach is obviously unsound, not only because censored values should not be handled in this way but also because inspection of the data suggests that the 'error' distribution is not normal. Rather it is skewed to the right, with many observations between 0 and 5, relatively few between 5 and 20 and rather more exceeding 20.
Looking at paired replicates it is also clear that the residual variance is not constant. A distribution such as the gamma or Weibull distribution is indicated.
Although the data are highly structured experimental data, you may have little idea how to proceed. Rather than give up, you should approach the data as in section B by exploring them using common sense. Indeed perhaps this example should be in section B. Why not start in a simple-minded way by treating the data as binary (fail or not-fail), even though this effectively throws away some information. The main effects of the four factors can be assessed by finding the frequencies of not-fails at each level of each of the factors. For example, the frequencies at the four levels of charging potential are given in table E.10, and it can be seen that the observed frequencies are obviously significantly different, a view which can (and should? see below) be confirmed by a χ² goodness-of-fit test. Similarly, we find that the main effects of load and promoter are significant while precharging is not. In fact these results are the same as produced by the dubious four-way ANOVA mentioned above, and it appears that the three significant main effects are so obvious that virtually any method will reveal them.

Table E.10  Frequencies of NFs at different levels of charging potential

Charging potential (V)     0.7    0.9    1.2    1.45
Frequency of not-fail       28      7      5      5
The appropriate χ² test for data such as that in table E.10 may not be obvious. It is necessary to construct the full two-way table, including the frequencies of failures, as in table E.11, since the significance of the differences between not-fails depends on the relative sample sizes. The null hypothesis (H0) is that the probability of a not-fail (or equivalently of a fail) is the same at each level of charging potential. There is a special test for testing equality of proportions, but the standard χ² test is equivalent and easier to remember. As the column totals are all equal, the expected frequencies in the four cells of the top row of table E.11 are all 11.25, while the expected frequencies in the bottom row are all (48 - 11.25) = 36.75. Then χ² = Σ[(obs. - exp.)²/exp.] = (28 - 11.25)²/11.25 + ... = 43.7. The distribution of the test statistic under H0 is χ² with DF = (no. of rows - 1) × (no. of columns - 1), as usual for a two-way table. Since the upper 1% point of χ² with 3 DF is 11.34, we have strong evidence to reject H0.

Table E.11  Frequencies of NFs at different levels of charging potential

Charging potential (V)    0.7   0.9   1.2   1.45
Frequency of not-fail      28     7     5      5
Frequency of fail          20    41    43     43

Indeed the contribution to the χ² statistic from the top left-hand cell is so large by itself that there is really no need to calculate the rest of the statistic. By inspection, we see that failure time can be increased by decreasing load and promoter and by increasing charging potential. Put more crudely, the values of the factors which maximize failure time are in the top left-hand corner of table E.9, and this seems obvious on re-examining the table. It seems unlikely that the lack of randomization will affect these conclusions. The above analysis may well be judged adequate for many purposes, although it makes no attempt to assess interaction effects. However, suppose you wanted to construct a formal model using the full information from the censored data. This is a much more sophisticated task and it is unlikely that you will know how to proceed. Neither did I when first shown the data! By asking various colleagues, I eventually discovered that a standard solution is available for this sort of data using a log-linear proportional hazards model and a censoring variable (e.g. Appendix A.14 and Aitkin and Clayton, 1980). The data can then be analysed in a relatively straightforward way using the GLIM computer package. The details are outside the scope of this book.
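The χ² arithmetic for table E.11 is easy to reproduce. The following sketch (pure Python; the variable and layout choices are mine, not the book's) computes the expected frequencies and the statistic:

```python
# Chi-squared test of homogeneity for table E.11 (a sketch; names are mine).
# Rows: not-fail and fail; columns: the four charging potentials.
observed = [
    [28, 7, 5, 5],      # frequency of not-fail
    [20, 41, 43, 43],   # frequency of fail
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

chi2 = 0.0
for row, row_total in zip(observed, row_totals):
    for obs, col_total in zip(row, col_totals):
        # Expected frequency under H0 = row total x column total / grand total
        expected = row_total * col_total / grand_total
        chi2 += (obs - expected) ** 2 / expected

df = (len(observed) - 1) * (len(col_totals) - 1)
print(round(chi2, 1), df)   # 43.7 on 3 DF, far beyond the 1% point of 11.34
```

As the text observes, the top left-hand cell alone contributes (28 - 11.25)²/11.25, about 24.9, which is already well beyond the 1% point.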
Moral When faced with data of a type not seen before, 'play around' with them in order to get an idea of their main properties. Do not be afraid to ask other statisticians for help if this proves necessary.
F Time-series analysis
A time series is a collection of observations made sequentially through time. One special feature of time-series data is that successive observations are usually known not to be independent, so that the analysis must take into account the order of the observations. As time-series analysis constitutes my particular research speciality, I cannot resist including a short section of time-series problems (see also Exercise G.2). These exercises should not be attempted unless you have studied a basic course in time-series analysis (except perhaps Exercise F.1). The first step as usual is to clarify the objectives of the analysis. Describing the variation, constructing a model and forecasting future values are three common aims. The second important step is to plot the observations against time to form what is called a time plot. The construction of a time plot is the main ingredient of IDA in time-series analysis and should show up obvious features such as trend, seasonality, outliers and discontinuities. This plot is vital for both description and model formulation and, when effects are 'obvious', may well render further analysis unnecessary. Apart from the time plot, the two main tools of time-series analysis are the autocorrelation function and the spectrum (Appendix A.14). Time-series modelling based primarily on the autocorrelation function is often called a time-domain approach. Alternatively, with engineering and physical science data in particular, a frequency-domain approach based on spectral analysis may be appropriate, but this will not be illustrated here.
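As a concrete illustration of the time-domain tools just mentioned, here is a minimal sketch of the sample autocorrelation function (the function name and layout are my own, not taken from any particular package):

```python
# Sample autocorrelation function r_k = c_k / c_0, where c_k is the lag-k
# autocovariance. A minimal sketch, not a substitute for a statistics package.
def acf(x, max_lag):
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n
    return [
        sum((x[t] - mean) * (x[t + k] - mean) for t in range(n - k)) / (n * c0)
        for k in range(1, max_lag + 1)
    ]

# A trending series gives large positive low-lag autocorrelations, whereas for
# a purely random series values beyond about +/- 2/sqrt(n) are rarely seen.
trend_acf = acf(list(range(50)), 3)
```

A steadily increasing series such as `range(50)` gives a first-order coefficient close to 1, while a strictly alternating series gives one close to -1; this is a useful sanity check on any implementation.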
Exercise F.1
Forecasting sales data
Figure F.1 shows the number of new insurance policies issued by a particular life office in successive months over seven years. You have been asked to produce forecasts for the next 12 months. How would you set about answering this question?
Exercise F.2
Forecasting TV licence numbers
The numbers of TV licences known to be currently valid in the UK, in
Table F.1  Numbers of TV licences (in '000s) issued in the UK in successive months

Year 1    4308  4407  4504  4581  4624  4676  4726  4786  4884  5078  5262  5400
Year 2    5539  5649  5740  5812  5863  5922  5980  6044  6140  6291  6433  6570
Year 3    6757  6863  6966  7050  7119  7170  7270  7331  7398  7524  7657  7761
Year 4    7898  7995  8090  8147  8201  8253  8295  8345  8424  8571  8731  8899
Year 5    9044  9153  9255  9347  9413  9495  9550  9628  9718  9844  9987  10114
Year 6    10220 10368 10470 10569 10647 10702 10753 10817 10880 10963 11028 11076
Year 7    11149 11187 11286 11321 11392 11441 11485 11522 11553 11602 11634 11657
Year 8    11693 11745 11834 11866 11930 11984 12040 12075 12110 12167 12224 12231
Year 9    12290 12375 12443 12484 12542 12570 12622 12661 12664 12731 12778 12789
Year 10   12830 12863 12885 12944 12967 13010 13026 13057 13061 13097 13146 13155
Year 11   13161 13182 13253 13296 13336 13358 13436 13428 13448 13455 13489 13516
Year 12   13502 13506 13567 13586 13606 13641 13674 13716 13755 13782 13888 13919
Year 13   13960 13908 14267 14392 14463 14510 14556 14687 14776 14862 14880 14910
Year 14   15016 15068 15093 15136 15202 15230 15315 15331 15324 15377 15399 15506
Year 15   15439 15488 15509 15528 15559 15595 15623 15630 15576 15698 15770 15809
Year 16   15831 15881 15899 16000 16024 16075 16100 16124 16183 16254 16292 16188
successive months over 16 years, are given in table F.1 and plotted in fig. F.2. How would you set about producing forecasts for the next 12 months?
Exercise F.3
Time-series modelling
Figure F.3 shows three time series which exhibit widely different properties. Discuss the sort of time-series model which might be applicable to each series.
Figure F.1  Numbers of new insurance policies issued by a particular life office.

Figure F.2  Numbers of TV licences issued in the UK.

Figure F.3  Quarterly time-series data: (a) sales of a particular type of shoe; (b) California savings rate; (c) California consumer price index.
DISCUSSION OF EXERCISE F.1
In view of the sudden changes in sales towards the end of the series, you should not be tempted simply to apply a standard projection technique. Indeed you may think that you have been asked a silly or impossible question, and you would be correct. Nevertheless, the author was asked this question, and so the aspiring statistician needs to learn how to deal with silly or impossible questions. I began by asking what had produced the sudden increases in sales figures. It turned out that there had been a sales drive towards the end of year 5 and another towards the end of year 6. The next obvious question is whether further sales drives are planned for the future and whether sales are expected to continue at a higher level anyway. Any forecasts are crucially dependent on the answer. While the statistician may still be able to offer some assistance, it is clear that informed guesswork is likely to be preferable to statistical projections and one should not pretend otherwise. This example illustrates the fact that the time plot is an indispensable stage of a time-series analysis. The abrupt changes visible in fig. F.1 tell us that we cannot expect any univariate model adequately to describe the whole series. The time plot is therefore invaluable both for describing the data, and also for model formulation, albeit in a negative way in this example.
Moral Always start a time-series analysis by plotting the data, and asking questions to get background information.
DISCUSSION OF EXERCISE F.2
As in Exercise F.1, the time plot of fig. F.2 is an invaluable part of the analysis, though in a completely different way. The series in fig. F.2 is much more regular than that in fig. F.1 and we expect to be able to produce reliable forecasts. There are three features to note in fig. F.2. First, note the upward trend as more TVs are bought. Second, note the small but noticeable seasonal variation in the first five years' data. This seasonality appears to vanish thereafter. Third, note the small but noticeable jump in mean level around year 12. We clearly want a forecasting method which can cope with a trend. It is much less obvious whether it is better to fit a seasonal model to all the data or to fit a non-seasonal model to the data from year 5 onwards. The method we choose must also be able to deal with the change in mean around year 12. There is no 'best' forecasting method. Rather you should choose a 'good' method which you understand and for which computer programs are available. My own favourite for this type of data is the Holt-Winters forecasting procedure (e.g. Chatfield, 1978), which is based on fitting a model containing a local level, a local trend, seasonal terms (either additive or multiplicative) and an error term. The level, trend and seasonal terms are all updated by exponential smoothing. When applied to these data, I found that much better forecasts of the last two years are obtained by fitting a non-seasonal model to the latter part of the data, rather than fitting a seasonal model to all the data. The possibility of ignoring the first part of the data (which horrifies some people!) arises directly from the time plot.
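The non-seasonal (level-plus-trend) core of the procedure is simple enough to sketch. The code below is an illustrative implementation only, not the exact algorithm or smoothing constants used for the licence data; alpha and beta are arbitrary choices of mine:

```python
# Holt's linear exponential smoothing (the non-seasonal core of Holt-Winters).
# alpha and beta are illustrative smoothing constants, not fitted values.
def holt_forecast(y, alpha=0.2, beta=0.1, horizon=12):
    level, trend = y[0], y[1] - y[0]          # crude initialisation
    for obs in y[1:]:
        previous_level = level
        level = alpha * obs + (1 - alpha) * (level + trend)           # update level
        trend = beta * (level - previous_level) + (1 - beta) * trend  # update trend
    return [level + h * trend for h in range(1, horizon + 1)]         # h-step forecasts
```

On a perfectly linear series the recursions reproduce the line exactly, which is a useful sanity check; in practice one would tune alpha and beta, for example by minimizing the one-step-ahead forecast errors over the fitting period.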
Moral There are sometimes good reasons for ignoring part of a data set, particularly if external conditions have changed or it is otherwise unreasonable to expect a single model to describe all the data.
NOTES ON EXERCISE F.3
No single class of models can deal satisfactorily with all the different types of time series which may arise. Some subjective judgement must be used. Series (a) is typical of many sales series in that there is a high seasonal component. In nearly every year sales are lowest in the first quarter of the year and highest in the final quarter. There is also some evidence of trend as sales increase towards the end of the series. A good time-series model should explicitly model this trend and seasonal variation, which forms a very high proportion of the total variation. Thus a trend-and-seasonal model is preferred to a Box-Jenkins ARIMA model, where model fitting would involve differencing away the trend and seasonality and devoting most effort to modelling the autocorrelation in the resulting differenced series. The latter is relatively unimportant for this series. In contrast, series (b) has no seasonal component and little trend, and yet there are short-term correlation effects which do need to be modelled. An ARIMA model might well be fruitful here, particularly if the first year's atypical observations are removed. Series (c) is quite different again. Exponential growth characterizes many economic indices. This exponential growth needs to be explicitly modelled, perhaps after taking logarithms. Neither a trend-and-seasonal model, nor an ARIMA model, will be appropriate here.
Moral Different classes of time-series model are appropriate for different types of series.
G Miscellaneous
This section comprises seven exercises which are rather varied in nature and cannot be readily classified elsewhere. They range from a probability problem (G.1) through a variety of statistics problems to the improvement of communication skills.
Exercise G.1
Probability and the law
In a celebrated criminal case in California (People versus Collins, 1968), a male Negro and a Caucasian woman were found guilty of robbery, partly on the basis of a probability argument. Eyewitnesses testified that the robbery had been committed by a couple consisting of a Negro man with a beard and a moustache, and a Caucasian girl with blonde hair in a ponytail. They were seen driving a car which was partly yellow. A couple, who matched these descriptions, were later arrested. In court they denied the offence and could not otherwise be positively identified. A mathematics lecturer gave evidence that the six main characteristics had probabilities as follows:

    Negro man with beard           1/10
    man with moustache             1/4
    girl with ponytail             1/10
    girl with blonde hair          1/3
    partly yellow car              1/10
    interracial couple in car      1/1000
The witness then testified that the product rule of probability theory could be used to multiply these probabilities together to give a probability of 1/12 000 000 that a couple chosen at random would have all these characteristics. The prosecutor asked the jury to infer that there was only one chance in twelve million of the defendants' innocence, and the couple were subsequently convicted. Comment on the above probability argument.
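The multiplication itself is easy to check with exact fractions (though, as will become clear, the arithmetic is the least of the argument's problems). A sketch:

```python
from fractions import Fraction

# The product-rule multiplication quoted in court. The six probabilities are
# the court's unsupported assumptions; only the arithmetic is verified here.
probabilities = [
    Fraction(1, 10),    # man with beard
    Fraction(1, 4),     # man with moustache
    Fraction(1, 10),    # girl with ponytail
    Fraction(1, 3),     # girl with blonde hair
    Fraction(1, 10),    # partly yellow car
    Fraction(1, 1000),  # interracial couple in car
]

product = Fraction(1, 1)
for p in probabilities:
    product *= p

print(product)   # 1/12000000
```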
Table G.1  Numbers of daily admissions to a psychiatric hospital in a one-year period, starting on Sunday, 1 January. The lines indicate the end of a week and weekly totals are also given. (The table lists the daily counts for January to December; 878 patients arrive in the first 52 weeks, only 45 of them on Sundays.)
Exercise G.2

Hospital admissions/communication skills
This exercise requires you to reply to the following letter seeking advice:

Dear Statistician,
The data in table G.1 show the numbers of new patients arriving at a psychiatric hospital during a recent one-year period. We are interested in finding out if there are any systematic variations in arrival rate, especially any that might be relevant in planning future operations, as we are currently reconsidering the running of this unit. We are particularly interested in any evidence of cyclic behaviour. Please analyse the data for us.
Yours sincerely,
The Assistant Planning Officer
Area X Regional Health Authority

The letter which you write in reply should not exceed three sides of paper, and should be written for a numerate layman rather than a statistician. It should be accompanied by tables and graphs as appropriate covering not more than four sides.
Exercise G.3
Testing random numbers
In a national lottery, 125 ticket numbers were drawn by computer in one prize draw. The final six digits of each number were supposed to be random. These values are tabulated in table G.3. Are the numbers random?
Table G.3  The final six digits of the 125 winning prize numbers in a national lottery

535850 842420 655257 469227 885878
603715 863855 754258 883261 571046
075779 048633 111337 346576 051352
724004 089507 552867 476843 025348
355865 001250 095391 934011 094093
771594 616635 992135 473416 021096
726862 768318 218966 928474 538201
593721 318619 908649 198296 122079
081625 046970 477814 516512 738317
461645 489085 619015 627585 222443
513613 937397 761133 117830 726151
677301 573766 432414 545267 428765
982835 079759 394619 082633 711609
149985 944572 002723 077330 328214
466322 573243 212831 886922 233534
676237 303859 796715 308467 264418
178050 058331 929288 287017 224397
410495 742978 544181 499739 445026
797412 628442 855302 634667 327110
729083 195838 801377 151508 760187
347732 087664 809011 513146 650496
317829 922459 196620 535252 038503
569158 266110 484856 952900 413744
940045 029951 240466 876305 621546
427035 775419 631855 099017 906895

Exercise G.4
Sentencing policy/two-way table?
A study was carried out on consecutive groups of men and women who were convicted of theft and shoplifting in an English city. A total of 100 females and 100 males were examined and their sentences were classified as lenient or severe according to the average sentence given for the person's particular crime. The results were as follows:

              Lenient sentence   Severe sentence   Total
    Male             40                 60          100
    Female           60                 40          100

Are females treated more leniently than males, and, if so, can you speculate as to why this should be so?
Exercise G.5
Drug after-effects/logit analysis

Patients with a particular medical condition are treated by a new drug which unfortunately has some undesirable after-effects. It is thought that the chance of getting after-effects may depend on the dose level, x, of the drug. At each dose level, the number of patients suffering after-effects is noted from a batch of recent case histories, giving the following data:

Dose level, x_i               0.9   1.1   1.8   2.3   3.0   3.3   4.0
No. of patients, n_i           46    72   118    96    84    58    56
No. with after-effects, r_i    17    22    52    53    43    38    30

Analyse the data in whatever way you think is appropriate. Estimate the ED50, which is the dose level giving after-effects in 50% of patients.
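One simple way in, short of a full maximum-likelihood logit analysis, is to fit a straight line to the empirical logits log[r/(n - r)] by ordinary least squares; the ED50 is then the dose at which the fitted logit is zero. This is only a rough sketch of one possible analysis, not a definitive solution:

```python
import math

# Rough empirical-logit fit for the dose-response data: logit(p) = a + b*x,
# fitted by unweighted least squares (a sketch only; a proper logit analysis
# would weight the points or maximize the binomial likelihood).
x = [0.9, 1.1, 1.8, 2.3, 3.0, 3.3, 4.0]
n = [46, 72, 118, 96, 84, 58, 56]
r = [17, 22, 52, 53, 43, 38, 30]

logits = [math.log(ri / (ni - ri)) for ri, ni in zip(r, n)]

x_bar = sum(x) / len(x)
y_bar = sum(logits) / len(logits)
b = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, logits)) / \
    sum((xi - x_bar) ** 2 for xi in x)
a = y_bar - b * x_bar

ed50 = -a / b   # dose at which the fitted proportion is 0.5; roughly 2.6 here
```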
Exercise G.6
Using a library
(a) Finding a particular statistic
A wide range of published official statistics are available in many libraries. A statistician should know how to find them if necessary. Find the most recent published value of one (or more) of the following quantities for the country in which you live:
(i) The total number of births.
(ii) The total number of deaths in road accidents.
(iii) The total number of university students (full-time undergraduates only, or what?).
(iv) The number of unemployed.
(v) The average number of hours per week spent watching TV.
You should specify where the statistic was found, say exactly what the statistic measures, and specify the appropriate year or part of year.

(b) Using statistical journals
A statistician should not be afraid of looking up recent research in statistical journals or using index journals. Listed below (in alphabetical order) are the bibliographic details of some general-interest papers published in the last 15 years or so. Select a title which interests you, find the relevant journal in your library, make a photocopy of the paper, read it carefully, and write a critical summary of it in not more than about five sides. Using the Science Citation Index, or any other aid, find some (not more than three) of the journal articles which refer to your particular paper, and write a brief summary of what you find. Your summaries should be written for the benefit of other statisticians who have not read the papers. Any mathematical symbols which you use should be defined, unless standard such as N(μ, σ²). Bear in mind that you are unlikely to understand every word of the papers, and this is in any case unnecessary for getting the general 'message' of the papers. You should also realize that papers do sometimes get published which contain errors and/or views which are controversial to say the least.

Andrews, D. F. (1972) Plots of high-dimensional data. Biometrics, 28, 125-36.
Broadbent, S. (1980) Simulating the ley hunter. J. R. Stat. Soc., Series A, 143, 109-40.
Burch, P. R. J. (1978) Smoking and lung cancer: the problem of inferring cause. J. R. Stat. Soc., Series A, 141, 437-77.
Chatfield, C. (1978) The Holt-Winters forecasting procedure. Appl. Stat., 27, 264-79.
Chernoff, H. (1973) The use of faces to represent points in k-dimensional space graphically. J. Am. Stat. Assoc., 68, 361-8.
Oldham, P. D. and Newell, D. J. (1977) Fluoridation of water supplies and cancer - a possible association? Appl. Stat., 26, 125-35.
Preece, D. A. (1981) Distributions of final digits in data. The Statistician, 30, 31-60.
(Note: Teachers can readily supplement the lists in both (a) and (b) so as to give all students in a class a different subject. See also Hawkes (1980) for some further suggestions for (b).)
Exercise G.7
Final miscellanea
(a) Quality control
A company which manufactures packets and boxes of rubber bands has called you in to discuss quality control, because of increasing complaints from purchasers. These complaints about product quality include 'too many bands are broken' and 'some packets are nearly empty'. Discuss the questions which need to be answered and the problems which are likely to arise in implementing a new quality control scheme.

(b) Understanding the χ² test
Given a two-way table of frequencies, the χ² test statistic, for testing that rows and columns are independent, is clearly non-negative. If all the observed frequencies are identical, then the expected frequencies will also be identical and the χ² statistic will be zero. Construct a (2 x 2) table of frequencies, which are all unequal, for which the χ² statistic is also exactly zero. If you really understand the χ² test, you will find this easy.

DISCUSSION OF EXERCISE G.1
The argument is completely nonsensical and you will be pleased to learn that the verdict was overturned by the California Supreme Court. Four reasons were given for overturning the verdict. (1) No empirical evidence was given to support the suggested probability values, such as prob. (partly yellow car) = 1/10. (2) The use of the product rule assumes independence. This is plainly false. For example, 'growing a beard' and 'growing a moustache' are positively correlated. (3) The probability calculations do not take account of the possibility of witnesses being mistaken or lying, or of the criminals wearing disguise. (4) Even if the probability calculations were correct, they give an (unconditional) probability that a 'random' couple will have all the characteristics. What is required is the (conditional) probability that a couple having all these characteristics is innocent. If there are say 24 million couples in the suspect population, then we expect to find two couples with all the characteristics, so that the conditional probability that either is innocent is 1/2, not 1/12 000 000. A similar catastrophe is discussed by Huff (1959, p. 113). Regrettably the product law is widely misused. In addition, there are many other examples which demonstrate the importance of computing a conditional probability rather than an unconditional probability, and moreover of conditioning on the right event. I recall discussing the case of a woman who had given birth to five girls and wanted to know the probability of having a boy next time. As the probability of having six girls in a row is only 1/2^6, it was falsely argued that the chance of having a boy must be (1 - 1/2^6) = 0.984!! The conditional probability, based on empirical evidence, turns out to be very close to 1/2, which is of course also the probability assuming equally likely outcomes. Conditional on what, I leave to the reader.

One of the earliest uses of probability in law was in the notorious Dreyfus case in France in 1899, when samples of handwriting had to be compared. When the case was reviewed several years later, it became apparent that the mathematical arguments which had been used were in fact false, but that none of the lawyers involved had understood them at the time. Nevertheless, the judges allowed themselves to be impressed by the 'scientific' nature of the evidence. Statistics and probability may well have an increasingly important role to play in courts of law. However, it is imperative that they be used both correctly and in an intelligible way, in order to prevent statistics from being discredited in the public eye. A recent interesting paper, which reviews the difficulties of introducing statistics into courts of law, is given by Meier (1986).

NOTES ON EXERCISE G.2
There are two aspects to this exercise. First, the analysis, which may be regarded as reasonably straightforward, even if not easy. Second, the ability to communicate the results to a non-statistician. Students typically get little help in the latter important skill.

We look at the analysis first. The request to 'analyse the data' is typically vague. The first thought which occurs to you may be that arrivals are completely random (i.e. form a Poisson process). However, the exact time of admission is unknown, although we know that most admissions are at planned times during the day. In any case a cursory examination of the data shows that admissions are not equally likely on different days of the week. In particular there are few arrivals on Sundays, as might be expected. The two main sources of variation in daily admission rates are likely to be day-of-the-week variation and time-of-the-year variation, and they each need to be assessed and described. For day-of-the-week variation, the frequency distribution of admissions on different days of the week should be constructed. Then a χ² goodness-of-fit test could be carried out to test the null hypothesis that arrivals are equally likely on different days of the week. In order to get equal expected frequencies and simplify the analysis, it is helpful to 'drop' the last two days of the year (it is a leap year) so that there are exactly 52 weeks. The device of omitting a small part of the data to simplify the analysis (or even improve it - see Exercise F.2) is widely used in practice but rather neglected in textbooks. The total number of arrivals through the 52 weeks is 878, but only 45 arrive on Sunday. The latter compares with an expected value of 878/7 = 125.4 and contributes (45 - 125.4)²/125.4 = 51.4 to the χ² statistic. Even by itself this contribution is so large, compared with the degrees of freedom (7 - 1 = 6), that the overall test statistic is highly significant, leading to rejection of the null hypothesis. In such a situation it is unnecessary to calculate all of the χ² statistic, and this is another widely used short cut which is rarely mentioned in textbooks. Admissions are clearly less likely on Sundays. Indeed the reduced number of arrivals on Sunday is so obvious that it may be more fruitful to apply the χ² test to the remaining six days.

Turning to time-of-the-year variation, the 'obvious' analysis is to find the numbers of admissions in different months. However, different months are of different lengths and contain different numbers of Sundays. It is much easier to split the year into 13 four-week periods, giving the frequency distribution shown in table G.2. Under the null hypothesis of a constant arrival rate, the χ² statistic of 9.4 is less than its degrees of freedom, namely (13 - 1) = 12, and so is clearly not significant. Thus there is no evidence of variation through the year. Finally we consider the question as to whether there is any cyclic behaviour other than the regular within-week variation. The latter makes it more difficult to spot longer-term cyclic behaviour in daily admission rates. We could apply correction factors to daily rates, but it is probably easier to carry out a time-series analysis of weekly totals. Using MINITAB or some similar package, the autocorrelation function of successive weekly totals may be calculated to see if successive values are correlated. It turns out that all the coefficients are 'small'. For example, the first-order autocorrelation coefficient is only 0.01.
With a series length of 52 (weeks), the modulus of an autocorrelation coefficient must exceed 2/√52 = 0.28 to be significant at the 5% level although, when testing a large number of coefficients, the level of significance needs considerable adjustment to avoid spurious significant results. In this case only the coefficient at lag 6, namely -0.26, is nearly significant. The lack of structure in the autocorrelation function means that there is no point in carrying out a spectral analysis, and so we conclude that there is no evidence of cyclic behaviour. The reader may like to fit a Poisson distribution to the frequency distribution of weekly totals and show that a very good fit arises (and I emphasize that these are real data).

The results of the analysis now need to be described in a clear common-sense way to someone who will probably not know what is meant by 'Poisson process', 'χ² test', and (especially) 'autocorrelation'. This is excellent practice and I suggest you read the general advice given in Chapter 11 of Part 1. There is little point in my attempting a definitive description as it is you that needs the practice! However, I offer a few tips. You should always make a special effort to make all graphs and tables clear and self-explanatory. Here you want to present the frequency distribution of admission numbers through the week and through the year. Whether they are best presented in a table (as in table G.2) or in a bar chart is debatable. Try both. As to cyclic behaviour, it is probably best simply to say that a time-series analysis revealed no evidence thereof, rather than to try to explain the mechanics involved. Summarize your conclusions briefly, particularly the form of the within-week variation. When you think you have finished, read through your letter one last time and try to imagine that you are the (non-statistical) recipient of the letter. As well as summarizing the data, you may also wish to raise additional queries. For example, how much notice do psychiatrists take of the number of empty beds when deciding to admit patients? The data set given here is somewhat similar to Example A of Cox and Snell (1981), which considers arrival times at an intensive care unit and provides a detailed analysis. There the exact time of arrival is recorded, so that time-of-day variation may also be investigated.

Two morals
1. When describing the variation in a set of data, the likely sources of variation should be listed and then assessed.
2. Being a good analyst is only part of the statistician's task. It is equally important to be able to communicate the results effectively.

Table G.2  Observed and expected numbers of arrivals at a psychiatric hospital in successive four-week periods

Weeks     Observed   Expected*
1-4          74        67.5
5-8          64        67.5
9-12         54        67.5
13-16        72        67.5
17-20        62        67.5
21-24        72        67.5
25-28        67        67.5
29-32        76        67.5
33-36        73        67.5
37-40        66        67.5
41-44        58        67.5
45-48        78        67.5
49-52        62        67.5

χ² test: χ² = Σ(obs. - exp.)²/exp. = 637.2/67.5 = 9.4, c.f. the 5% point of χ² with 12 DF, 21.03.

*Each expected frequency = 878/13.
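The goodness-of-fit calculation in table G.2 can be reproduced in a few lines (a pure-Python sketch; small rounding differences from the 9.4 quoted in the table are to be expected):

```python
# Chi-squared goodness-of-fit test for table G.2: constant arrival rate over
# the thirteen four-week periods (a sketch; the variable names are mine).
observed = [74, 64, 54, 72, 62, 72, 67, 76, 73, 66, 58, 78, 62]
expected = sum(observed) / len(observed)   # 878/13, about 67.5

chi2 = sum((obs - expected) ** 2 / expected for obs in observed)
df = len(observed) - 1                     # 12 degrees of freedom

# chi2 comes out close to the 9.4 quoted in the table, far below the 5%
# point of 21.03, so there is no evidence of variation through the year.
```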
NOTES ON EXERCISE G.3
What is meant by a sequence of 'random numbers' as applied to a sequence of digits which may take any integer value from zero to nine? The sequence is said to be random if all 10 possible integers are equally likely to occur, and if different outcomes are mutually independent. The problem of testing random numbers has a long history and has many applications, particularly now that random digits are generated by computers for a variety of purposes, particularly for simulation (e.g. Morgan, 1984). Simulation is used to solve problems which are difficult or impossible to solve analytically by copying the behaviour of the system under study by generating appropriate random variables. It must be appreciated that it is impossible to test all aspects of randomness with a single overall test. Rather there are numerous tests for examining different aspects of randomness (Morgan, 1984, Chapter 6). There is an added complication here in that the so-called random digits come in groups of size six. The first 'obvious' property to test is that all 10 possible digits are equally likely. Count the number of zeros, the number of ones, and so on, either in the whole data set or in part of it. Then compare the observed frequencies with the expected frequencies, namely n/10 where n is the number of digits examined. A χ² goodness-of-fit test on nine degrees of freedom is appropriate. We must also check that the order of digits is random. At its simplest, this means checking adjacent pairs of digits. Consider non-overlapping pairs of digits (to ensure independence between pairs) and compute the (10 x 10) matrix containing the observed frequencies with which digit j is followed by digit k. With m non-overlapping pairs, the expected frequencies under randomness are m/100 and a χ² test on 99 DF is appropriate. A difficulty here is that m needs to exceed 500 in order to get expected frequencies exceeding five so that the χ² approximation is valid. Otherwise cells will need to be grouped. As well as ordering, we may also be interested in other features of the data. In particular, we may wish to examine some special properties of the groups of six numbers.
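The two checks just described can be sketched as follows (the function names are my own; the resulting statistics would then be referred to χ² tables on 9 and 99 DF respectively):

```python
# Sketches of two randomness checks for a sequence of decimal digits.

def digit_frequency_chi2(digits):
    """Chi-squared statistic for 'all ten digits equally likely' (9 DF)."""
    expected = len(digits) / 10
    counts = [digits.count(d) for d in range(10)]
    return sum((c - expected) ** 2 / expected for c in counts)

def pair_chi2(digits):
    """Chi-squared statistic over non-overlapping pairs (99 DF); needs well
    over 500 pairs for the chi-squared approximation to be trustworthy."""
    pairs = [(digits[i], digits[i + 1]) for i in range(0, len(digits) - 1, 2)]
    expected = len(pairs) / 100
    counts = {}
    for p in pairs:
        counts[p] = counts.get(p, 0) + 1
    return sum((counts.get((j, k), 0) - expected) ** 2 / expected
               for j in range(10) for k in range(10))

# A perfectly balanced sequence gives a zero frequency statistic:
balanced = list(range(10)) * 10
assert digit_frequency_chi2(balanced) == 0.0
```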
For example, we may suspect a tendency for digits to be repeated, and can easily devise a suitable test. If the digits are really random, the probability that a six-digit number contains at least two consecutive identical digits is (1 − 0.9⁵) = 0.410, and this can be compared with the observed proportion of groups having this property. One problem with trying several different tests on the same data is that a significant result is likely to occur eventually even if the data really are random. It is therefore wise to adjust the required level of significance or repeat a test on a new set of numbers. For the data in table G.3, I see no reason to question the randomness hypothesis.

SOLUTION TO EXERCISE G.4
The reader may be tempted to treat the data as a (2 × 2) contingency table and carry out a χ² test to see if rows and columns are independent. The result is significant and so it does appear at first sight that women are treated more leniently than men. It is easy to speculate why this should be so. For example, the courts may be sympathetic
to women who have young children to look after or who have acted under male coercion. However, there is more in this exercise than meets the eye. Whenever two variables appear to be associated it is always advisable to ask if this association can be explained by a common association with a third variable. In this case, it is worth considering 'number of previous convictions' as a third variable, and this reveals the surprising result that women are actually treated less leniently than men. A three-way table of frequencies, categorized by sex, severity of sentence and number of previous convictions, was constructed and is shown in table G.4. Women generally have fewer previous convictions, but it can be seen that they are treated less leniently than men in each row of the table, even though they appear to fare better overall. This curious effect is related to the so-called Simpson's paradox, and you may have to look at the table for some time to convince yourself that you are not 'seeing things'. A related example is given by Hooke (1983, Chapter 13).

Table G.4 Three-way table of frequencies

                                Male                        Female
No. of previous        Sample     No. of severe    Sample     No. of severe
convictions            size       sentences        size       sentences
0                       10             0             70            20
1-2                     20             8             20            10
≥3                      70            52             10            10
Overall                100            60            100            40
Table G.4 demonstrates that collapsing data onto two dimensions can have a misleading effect. This is something to watch out for when considering two-way tables or scatter diagrams obtained from multivariate data. (Note: the data are partially fictitious, as they have been adapted from real data to make the point more forcibly.)
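The reversal in table G.4 can be verified mechanically; a Python sketch (a modern aside, with the frequencies entered as (sample size, severe sentences) pairs):

```python
# Table G.4 frequencies as (sample size, number of severe sentences).
male   = {'0': (10, 0), '1-2': (20, 8), '3 or more': (70, 52)}
female = {'0': (70, 20), '1-2': (20, 10), '3 or more': (10, 10)}

def overall_rate(table):
    """Proportion of severe sentences after collapsing over convictions."""
    n = sum(size for size, _ in table.values())
    severe = sum(s for _, s in table.values())
    return severe / n

# Collapsed over previous convictions, women appear to fare better ...
print(overall_rate(male), overall_rate(female))  # 0.6 0.4

# ... yet within every previous-conviction group the female rate is higher.
for group in male:
    m_n, m_s = male[group]
    f_n, f_s = female[group]
    print(group, f_s / f_n > m_s / m_n)  # True in each group
```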
Moral An association between two variables does not prove causation, and in particular may sometimes be explained by a common association with a third variable.

NOTES ON EXERCISE G.5
You should start by querying the form of the data. Why have the given dose levels been chosen and why are they not equally spaced? What determines the dose level for a particular patient, and in what units is it measured? Why are the sample sizes unequal? Will the use of unrandomized case histories bias the results? In the absence of clear answers, and accepting the data (with caution) as they stand,
the 'obvious' first step is to plot the proportions with after-effects (r/n) against dose level on ordinary graph paper. The proportion appears to rise with dose level. Fitting a smooth curve by eye, we would guess that the ED₅₀ is 'near' 2.0. The fitted curve is probably S-shaped (as proportions are constrained to lie between 0 and 1) and this is hard to fit, especially as the sample sizes vary so that the points are not equally important. If, instead, the data are plotted on normal probability paper, then we would expect the relationship to be roughly linear. This graph 'stretches' the proportion scale in a non-linear way. Fitting a straight line by eye, the ED₅₀ is between about 1.9 and 2.0.

This crude analysis may be adequate for some purposes, but if you have studied logit analysis (Appendix A.9) then the GLIM package can be used (Appendix B) to get a model relating the probability of getting after-effects, say p, to dose level, x. We cannot fit a linear model as p is constrained to lie between 0 and 1. Instead we fit logit(p) = log[p/(1 − p)] as a linear function of x, assuming that the number of patients with after-effects has a binomial distribution. This is then a generalized linear model. We find

logit(p) = −1.52 + 0.781x

A probit analysis could also be carried out and this gives

probit(p) = −0.931 + 0.477x

The two fitted models look quite different but actually give very similar results over a wide range of x-values. In particular both have a positive gradient with x and give ED₅₀ values equal to 1.95. The standardized residuals at each dose level may be examined to check that there is no evidence of model inadequacy. There are various benefits of model fitting here. In particular the results take account of the different sample sizes at different x-values, and it is possible to get a confidence interval for the ED₅₀.
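In modern software the fitted logit model can be explored directly. The following Python sketch uses the coefficients quoted above (the function name and the rounding are our own; the original analysis was carried out in GLIM):

```python
import math

# Fitted logit coefficients as quoted in the text.
alpha, beta = -1.52, 0.781

def p_after_effects(x):
    """Probability of after-effects at dose x under the fitted logit model."""
    return 1 / (1 + math.exp(-(alpha + beta * x)))

# The ED50 is the dose at which p = 0.5, i.e. where alpha + beta*x = 0.
ed50 = -alpha / beta
print(round(ed50, 2))                   # 1.95
print(round(p_after_effects(0.9), 2))   # 0.31, about 30% at the lowest dose
print(round(p_after_effects(4.0), 2))   # 0.83, over 80% at the highest dose
```

The two printed proportions agree with the summary given at the end of these notes.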
We may summarize the results by saying that the chance of after-effects increases from about 30% at the lowest dose level of 0.9 to over 80% at the highest dose level of 4.0, with an ED₅₀ at x = 1.95.
Moral This exercise provides a simple example of the benefits which may arise from fitting a generalized linear model. NOTES ON EXERCISE G.6
(a) Sources of statistics are briefly discussed in Chapter 9 of Part I. You should have relatively little trouble in finding most of these statistics. (b) Not all journals are taken by all libraries, so you may have trouble locating a particular journal. If so, try a different reference or ask for a photocopy using the inter-library loan system.
Bear in mind that the reader wants an overview of the paper and the subsequent references. Do not get bogged down in mathematical detail. On the other hand, you should carefully explain any important mathematical concepts in the given paper. Remember to say what you think of the paper. Is it clearly written? Do you agree with it? A critical summary should review and evaluate a paper so that a prospective reader can assess if it is worth reading in detail. The objectives of the paper should be stated together with an assessment of how well the author has succeeded. The summary might also explain the methods used and indicate any underlying assumptions and any limitations (see also Anderson and Loynes, 1987, section 4.6.3).

BRIEF NOTES ON EXERCISE G.7
(a) When this problem came to me, I was relatively inexperienced and diffident about giving advice. Nevertheless, I asked to be shown around the factory to see how the rubber bands were made, and then asked to inspect their quality control procedure. It turned out that they had no quality control whatsoever. Anything I suggested would be an improvement! Having found out how the factory operated, and what was feasible statistically (there was no statistical expertise on hand at all), I recommended a very simple plan. Samples of packets and boxes were weighed at regular intervals and the weights plotted on a simple chart. Warning and action lines were inserted. The contents of sample bags were also inspected at regular intervals, but less frequently. It takes a long time to count the number of complete rubber bands in a packet of size 500! (b) There is only one degree of freedom when testing a (2 × 2) table. This suggests that we can make three of the four frequencies anything we like. Let us assume the table is, say
     4    12
    10     ?

To make the χ² statistic zero, we simply take the missing frequency to be the same multiple of 10 as 12 is of 4, namely 12 × 10/4 = 30. The null hypothesis of row-and-column independence implies that ratios of expected frequencies are the same in each pair of rows (and each pair of columns). This is why linear models are fitted to the logarithms of expected frequencies in contingency tables, giving rise to what are called log-linear models (Appendix A.9).
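The arithmetic can be checked directly; a short Python sketch (our own illustration of the independence condition, not part of the original notes):

```python
# Row ratios must match under independence: 4/12 = 10/x gives x = 30.
a, b, c = 4, 12, 10
d = b * c / a
print(d)  # 30.0

# With this value every observed frequency equals its expected frequency
# (row total x column total / grand total), so chi-squared is exactly zero.
table = [[a, b], [c, d]]
total = a + b + c + d
chi2 = 0.0
for i in range(2):
    for j in range(2):
        expected = sum(table[i]) * (table[0][j] + table[1][j]) / total
        chi2 += (table[i][j] - expected) ** 2 / expected
print(chi2)  # 0.0
```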
H Collecting data
This chapter contains three exercises which cover some aspects of data collection. Some readers may wish to study this section before some of the earlier exercises, as statistical analysis depends on having reliable, representative data. However, some aspects of data collection are appreciated more after data analysis has been studied. Thus this section is placed at the end of Part II. (Which comes first, the chicken or the egg?) Some general remarks on data collection are made in Chapter 4 of Part I and in sections 10 and 11 of Appendix A. Because of the time-consuming nature of data-collection exercises, this section is shorter than might be expected from its relative importance. It is certainly true that the main way to appreciate the difficulties of data collection is by doing it, and it is to be hoped that students have carried out some experiments to collect data in earlier courses (e.g. Scott, 1976), even if they were only very simple experiments like coin tossing. Experiments help to stimulate student interest, and help develop statistical intuition.
Exercise H.1
Sample surveys/questionnaire design
Suppose you want to carry out a survey to investigate one of the following topics. In each case carefully identify the target population, design a suitable questionnaire, and discuss how you would get a representative sample from the given target population. (a) The financial problems of students at a particular college or university. (b) The popularity of the government of your country among adults. (c) The views of the 'general public' on capital punishment, abortion, tax evasion or some similar controversial ethical topic. How could you get the views of a more specialized population such as doctors or policemen? (d) Agricultural production of a particular crop in a particular region.
Exercise H.2
Designing a comparative experiment
This is an artificial exercise on constructing experiments to compare a set of treatments. Suppose there are t treatments and m experimental units and that
one (and only one) treatment can be applied to each unit. Further suppose that the experimental units can be divided into b reasonably homogeneous blocks of size k, such that m = bk. Construct a suitable design if:
(a) t = 4, m = 12, b = 3, k = 4.
(b) t = 4, m = 12, b = 4, k = 3.
(c) t = 4, m = 12, b = 6, k = 2.
(d) t = 5, m = 15, b = 5, k = 3.
For convenience denote the treatments by A, B, C, D, ... and the blocks by block I, block II, .... For design (a), describe how you could use a table of random digits to allocate the treatments randomly to the experimental units.
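As a modern aside (not part of the original exercise), a computer can both carry out the random allocation asked for in (a) and check the balance of a proposed incomplete block design; a Python sketch (the helper names are our own, and the four-treatment design shown is the classical unreduced design for blocks of size three):

```python
import random
from collections import Counter
from itertools import combinations

def randomized_block(treatments, n_blocks, seed=None):
    """Design (a): every treatment appears once per block, in random order."""
    rng = random.Random(seed)
    design = []
    for _ in range(n_blocks):
        order = list(treatments)
        rng.shuffle(order)  # random allocation within the block
        design.append(order)
    return design

def pair_counts(blocks):
    """How often each pair of treatments occurs together in a block; a
    design is balanced when all these counts are equal."""
    counts = Counter()
    for block in blocks:
        counts.update(combinations(sorted(block), 2))
    return counts

design_a = randomized_block('ABCD', 3, seed=1)
print(all(sorted(block) == ['A', 'B', 'C', 'D'] for block in design_a))  # True

# Omitting each of four treatments from one block of three gives a
# balanced incomplete block design: every pair occurs together twice.
design_b = ['ABC', 'ABD', 'ACD', 'BCD']
print(set(pair_counts(design_b).values()))  # {2}
```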
Exercise H.3
A study on longevity
In a study on longevity, a doctor decided to compare the age at death of marriage partners to see if the pairs of ages were correlated. He questioned 457 consecutive patients who were over 40 years old and had come to consult him for some medical reason. Each patient was asked if his/her parents were alive or dead, and, if the latter, how old they were when they died. After omitting patients who still had one or both parents alive, the doctor found a positive correlation between the ages of dead parents. Comment on this conclusion and on the way the data were collected.

NOTES ON EXERCISE H.1
These notes concentrate on problem (a). The obvious target population is 'all students at the particular college', but this is not as precise as it appears. It is probably sensible to exclude part-time students (who may have special problems and be difficult to contact) and postgraduate students. Students who are temporarily absent from college (e.g. on a spell of industrial placement) should also be excluded. Can you think of any other possible exclusions? To get a representative sample, we must not interview students at random on the campus. Rather a quota or random sample must be chosen. For a random sample, we need a list of students (the sampling frame), from which a random sample may be selected using random numbers. However, a simple random sample would be too difficult and expensive to take. It is much better to use a form of stratified, multi-stage random sampling. Divide the departments of the college (e.g. physics, history) into strata, which could be 'sciences', 'arts' and 'technologies'. Take a random sample (perhaps of size one) of departments from each stratum. Obtain lists of students for each year (or from randomly selected years) in the selected departments, and choose a random sample from each year. This will produce some clustering which may lead to correlated responses, but also means that sampling is inexpensive so that a larger sample may be taken for the same cost. Alternatively students may be
approached at random on the campus until certain quotas have been satisfied. For example, a representative proportion of men and women is needed, as well as a representative proportion of first-year students, science students and so on. Designing the questionnaire is not easy and will take longer than most students expect. It should start with questions on basic demographic details (name, department, year, sex, etc.). Then the student's income needs to be assessed as well as any fixed expenditure. It is probably unwise to ask questions such as 'How much do you spend on entertainment per week?' as this is non-essential expenditure which varies considerably from week to week and is difficult to estimate. Avoid open questions such as 'How would you describe your overall financial situation?', as the answers may be difficult to code and analyse unless a clearly specified list of alternatives is given, such as (a) more than adequate, (b) just about adequate, (c) rather inadequate, (d) grossly inadequate. Wherever possible, questions should have a clearly defined answer (e.g. What finance, if any, do you receive from the government?). Avoid leading questions such as 'Would you agree that students do not receive enough financial help from the government?' Allow for the possibility of 'Don't know' or 'Not applicable' in the suggested answers to some questions. A small pilot survey is essential to try out the questionnaire. You may be amazed at what can go wrong when this is not done. When the author tried out his first attempt at a questionnaire, he found that some students did not know the size of the grant they received, or what their parents contributed. Students who paid their rent monthly found difficulty in assessing the corresponding weekly rate. And so on. For problem (b), the target population is apparently all adults who are resident in the country who are eligible to vote. Can you think of any problems with this definition? Neutral questions are essential.
Then the country should be divided into reasonably homogeneous strata, so that smaller areas may be randomly selected from each stratum and then further subdivided if necessary. Eventually a random selection of adults can be taken from the electoral register. Alternatively, a quota sample could be taken by selecting quota samples in different representative areas of the country (not an easy task!). For problem (c), it is very difficult to compose neutral questions. You should also bear in mind that some respondents may be unwilling to cooperate or may not give their true opinions. For problem (d), a major difficulty is that of getting a sampling frame.
NOTES ON EXERCISE H.2
(a) When the block size is the same as the number of treatments, it is sensible to make one observation on each treatment in each block. If the treatments are applied randomly, then we have what is called a randomized block design. If block I contains four experimental units, denoted by g, h, i, j, then the four treatments A, B, C, D may be allocated to them, using random digits, in many different ways to ensure fair allocation. One crude method is to ignore digits 4 to 9,
and to allocate A to g, h, i or j according as to whether the first usable digit is 0, 1, 2 or 3. And so on. A more efficient method can easily be devised. (b) When the block size is less than the number of treatments, we have an incomplete block design. When b = t and k = t − 1, the 'obvious' way to proceed is to omit each treatment from just one of the blocks. This leads to a balanced incomplete block design, since each pair of treatments occurs within the same block the same number of times. Thus we could have: block I - ABC; block II - ABD; block III - ACD; block IV - BCD. Randomize the allocation within blocks. (c) A balanced incomplete block design is also possible here. (d) It is not possible to construct a balanced design. Use your common sense to construct an incomplete block design with as much symmetry as possible. Alternatively consult a textbook (e.g. Cochran and Cox, 1957) which gives partially balanced designs. It only took me a few minutes to construct the following design by trial-and-error, so that each treatment occurs three times and each pair of treatments occurs within the same block either once or twice. Advances in computing software mean that balance, although still desirable, is not as important as it used to be. Block I - ABE; block II - CDE; block III - ABD; block IV - BCE; block V - ACD.

NOTES ON EXERCISE H.3
It is important to realize that the sample collected here is not a random sample from the total population of all married couples. The patients are all over 40 years old, they are all ill (which is why they are seeing a doctor), and they all have both parents dead. Thus the sample is biased in three different ways. This means that the results should not be taken as representative of the population as a whole. Many results which are reported in the media or in the published literature are based on biased samples, although the bias is not always as obvious as it is here. Another common form of biased sample is that formed by people who write letters to newspapers, to radio programmes or to elected representatives. These people feel strongly about a particular topic and are self-selecting. Any survey which produces a low response rate may also give biased results because the people who respond may be different to those who do not respond.

Moral
Be on the lookout for biased samples.
PART III
Appendices
APPENDIX A
A digest of statistical techniques
This appendix is a concise reference with advice and warnings, containing brief notes on a variety of statistical topics. The topics are a personal selection and are not intended to be comprehensive. Important definitions and formulae are given together with some brief advice and warnings. Key references are given for further reading. This appendix is intended as an aide-memoire for students and researchers who have already studied the topics. It is not suitable for learning topics and is not intended for use as a 'cookbook' by statistical novices. It will need to be supplemented by other textbooks for further details.
Standard abbreviations

pdf     probability density function
cdf     cumulative distribution function
CI      confidence interval
SS      sum of squares
DF      degrees of freedom
MS      mean square
ANOVA   analysis of variance
General references There are many good, introductory textbooks on statistics, at varying degrees of mathematical difficulty, which cover basic statistical methods. They include: Box, G. E. P., Hunter, W. G. and Hunter, J. S. (1978) Statistics for Experimenters, Wiley, New York. Chatfield, C. (1983) Statistics for Technology, 3rd edn, Chapman and Hall, London. Snedecor, G. W. and Cochran, W. G. (1980) Statistical Methods, 7th edn, Iowa State University Press, Iowa. A good reference book on more theoretical topics is: Cox, D. R. and Hinkley, D. V. (1974) Theoretical Statistics, Chapman and Hall, London.
There are many more advanced books on a variety of topics, some of which are referred to in the appropriate sections below. In addition it may sometimes help to consult a dictionary or encyclopedia such as: Kendall, M. G. and Buckland, W. R. (1982) A Dictionary of Statistical Terms, 5th edn, Longman, London. Kotz, S. and Johnson, N. L. (eds) (1982) Encyclopedia of Statistical Sciences (in 8 volumes), Wiley, New York. Kruskal, W. H. and Tanur, J. M. (eds) (1978) International Encyclopedia of Statistics (in 2 volumes), Collier-Macmillan, New York.
A.1 Descriptive statistics
The calculation of summary statistics and the construction of graphs and tables are a vital part of the initial examination of data (IDA). These topics are discussed in section 6.5 of Part I and in the exercises of Part II (especially Chapter A). The important advice given there will not be repeated here. Given a sample of n observations, say x₁, ..., xₙ, the three main measures of location are the (arithmetic) mean, x̄ = Σxᵢ/n, the median (the middle value of the {xᵢ} when they are arranged in ascending order of magnitude, or the average of the middle two observations if n is even), and the mode (the value which occurs with the greatest frequency). An alternative measure of location is the trimmed mean, where some of the largest and smallest observations are removed (or trimmed) before calculating the mean. For example, the MINITAB package routinely calculates a 5% trimmed mean, where the smallest 5% and largest 5% of the values are removed and the remaining 90% are averaged. This gives a robust measure which is not affected by a few extreme outliers. The two common measures of spread, or variability, are:
1. standard deviation, s = √[Σ(xᵢ − x̄)²/(n − 1)]
2. range = largest(xᵢ) − smallest(xᵢ).

The range is easier to understand than the standard deviation but tends to increase with the sample size, n, in roughly the following way for normal data:

   s ≈ range/√n   for n < about 12
   s ≈ range/4    for n around 20
   s ≈ range/5    for n around 100
   s ≈ range/6    for n around 500
These rough guidelines can be helpful in using the range to check that a sample standard deviation has been calculated 'about right'. Despite the dependence on sample size, the range can also be useful for comparing variability in samples of roughly equal size. An alternative robust measure of spread is the interquartile range given by
(Q₃ − Q₁), where Q₁, Q₃ are the lower (first) and upper (third) quartiles respectively. Thus Q₁, for example, is the value below which there are a quarter of the observations. The median can be regarded as the second quartile. Some sort of interpolation formula may be needed to calculate Q₁ and Q₃. The quartiles are special types of percentile, a value which cuts off a specified percentage of the distribution. Tukey (1977) has introduced several descriptive terms which are growing in popularity. Hinges are very similar to the (upper and lower) quartiles, while the H-spread is similar to the interquartile range. A step is 1.5 times the H-spread. An inner fence is one step beyond the hinges, while an outer fence is two steps beyond the hinges. Observations outside the outer fences can be regarded as extreme outliers. Let mₖ = Σ(xᵢ − x̄)ᵏ/n denote the kth moment about the mean. A coefficient of skewness, which measures the lack of symmetry, is given by m₃/m₂^(3/2). A coefficient of kurtosis is given by [(m₄/m₂²) − 3]. This measures whether the observed distribution is too peaked or too heavy-tailed as compared with a normal distribution. Both these shape coefficients should be close to zero for normal data.

Various types of graph are discussed in section 6.5.3 of Part I. Most will be familiar to the reader, but it seems advisable to include further details here on probability plotting. The general idea is to rank the data in ascending order of magnitude and plot them in such a way as to show up the underlying distribution. Given a sample of size n, denote the ordered data (i.e. the order statistics) by x(1), x(2), ..., x(n). Traditional probability plots are obtained by plotting the ranked data against the sample (or empirical) cumulative distribution function (cdf) on special graph paper called probability paper. The proportion of observations less than or equal to x(i) is i/n, but this can provide a poor estimate of the underlying cdf.
(For example it implies that the chance of getting a value greater than x(n) is zero.) Three common estimates of the cdf at x(i) are i/(n + 1), (i − 1/2)/n and (i − 3/8)/(n + 1/4), and there is little to choose between them. If we simply plot, say, i/(n + 1) against x(i) on graph paper with linear scales, we typically get an S-shaped curve called the sample cdf. (If the two axes are transposed we have what is called a quantile plot. The pth quantile, Q(p) (or the 100pth percentile), of a distribution is the value below which a proportion p of observations from that distribution will lie.) This graph may be helpful but the information is easier to interpret if plotted on special graph paper constructed so that the sample cdf is approximately a straight line if the data come from the particular distribution for which the graph paper is constructed. For example, normal probability paper has one scale linear while the other scale is chosen in a non-linear way so as to transform the normal cdf to a straight line. Weibull paper is also widely used in reliability work. Note that percentiles can be easily estimated from plots of this type. Computers generally construct probability plots in a rather different way by plotting the ordered observations against the expected values of the order statistics for a random sample of the same size from the specified distribution of interest (often normal). Expected order statistics can be found for a particular distribution by an appropriate calculation or approximation. For the normal distribution, for example,
MINITAB calculates what it calls normal scores (sometimes called rankits) by calculating Φ⁻¹[(i − 3/8)/(n + 1/4)], where Φ denotes the cdf of the standard normal distribution. Thus computer probability plots are likely to have two linear scales. Such plots are sometimes called theoretical quantile-quantile plots as they essentially plot the observed quantiles (the ranked data) against theoretical quantiles. Departures from linearity in a probability plot suggest the specified distribution is inappropriate, but if the departures cannot easily be spotted by eye, then they are probably not worth worrying about. For small samples (e.g. n < about 20), linearity can be hard to assess and it can be helpful to simulate plots from the specified distribution to see what sort of variability to expect.
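A sketch of the normal-score calculation in Python, using the standard library's NormalDist in place of MINITAB (the helper name is our own):

```python
from statistics import NormalDist

def normal_scores(n):
    """Rankit-style scores: the inverse standard normal cdf evaluated at
    the plotting positions (i - 3/8)/(n + 1/4), i = 1, ..., n."""
    phi_inv = NormalDist().inv_cdf
    return [phi_inv((i - 0.375) / (n + 0.25)) for i in range(1, n + 1)]

scores = normal_scores(9)
print([round(s, 2) for s in scores])
# The middle score is exactly zero and the tails are symmetric about it.
print(abs(scores[4]) < 1e-9, abs(scores[0] + scores[-1]) < 1e-9)  # True True
```

Plotting the ordered data against these scores gives the computer-style probability plot described above; a straight-line pattern supports normality.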
A.2 Probability
A sound grasp of basic probability theory and simple probability models is needed both to understand random events and as a basis for statistical inference. Probability and statistics are complementary, but inverse, subjects in that statistics is concerned with inductive inference (DATA → MODEL), while probability is concerned with deductive inference (MODEL → behaviour of system). There are some philosophical problems in deciding what is meant by 'probability', and in establishing a set of sensible axioms to manipulate probabilities. Different types of probability include equally-likely probabilities, objective long-run frequentist probabilities, and subjective probabilities (see also section 7.4), but they will not be discussed here. Suppose you are interested in finding the probabilities of the different outcomes of one or more experiments or trials. Begin by finding the set of all possible outcomes of the experiment, called the sample space. An event is a subset of the sample space. If E₁ and E₂ denote two events, and P(E) denotes the probability of event E, then the two most important rules for manipulating probabilities are:

P(E₁ ∪ E₂) = P(E₁) + P(E₂) − P(E₁ ∩ E₂)        (A.2.1)

the general addition law, and

P(E₁ ∩ E₂) = P(E₁) P(E₂ | E₁)                  (A.2.2)

the general multiplication law. If the two events are mutually exclusive, then P(E₁ ∩ E₂) = 0 and (A.2.1) simplifies to

P(E₁ ∪ E₂) = P(E₁) + P(E₂)                     (A.2.3)

the addition law for mutually exclusive events. If E₁ and E₂ are independent, then the conditional probability P(E₂ | E₁) is the same as the unconditional probability P(E₂) and (A.2.2) reduces to

P(E₁ ∩ E₂) = P(E₁) P(E₂)                       (A.2.4)

the product law for independent events.
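These laws can be illustrated on a small finite sample space; a Python sketch for two fair dice (the helper prob is our own):

```python
from fractions import Fraction
from itertools import product

# Sample space for two fair dice; every outcome is equally likely.
outcomes = list(product(range(1, 7), repeat=2))

def prob(event):
    """Equally-likely probability of an event (a predicate on outcomes)."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

e1 = lambda o: o[0] == 6           # first die shows a six
e2 = lambda o: o[0] + o[1] >= 10   # total is at least ten

# General addition law (A.2.1):
lhs = prob(lambda o: e1(o) or e2(o))
rhs = prob(e1) + prob(e2) - prob(lambda o: e1(o) and e2(o))
print(lhs == rhs)  # True

# Product law (A.2.4): the two dice are independent, so the events
# 'first die is a six' and 'second die is a six' factorize.
f1 = lambda o: o[1] == 6
print(prob(lambda o: e1(o) and f1(o)) == prob(e1) * prob(f1))  # True
```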
A clear introduction to probability is given by Chung (1979) and by many other authors. The classic text by Feller (1968) is for the more advanced reader.
A.3 Probability distributions
A random variable, X, takes numerical values according to the outcome of an experiment. A random variable may be discrete or continuous according to the set of possible values it can take. A discrete distribution is usually defined by a point probability function, P(X = r) or P(r), while a continuous distribution may be described by a probability density function (abbreviated pdf), f(x), or equivalently by a cumulative distribution function (abbreviated cdf), F(x), such that

F(x) = Prob(X ≤ x) = ∫ f(u) du, the integral being taken over u ≤ x.

The inverse relationship is f(x) = dF(x)/dx. The mean (or expected value or expectation) of a random variable, X, is given by

E(X) = Σ r P(r)       in the discrete case
E(X) = ∫ x f(x) dx    in the continuous case.

The expectation operator can be defined more generally by

E[g(X)] = Σ g(r) P(r)       in the discrete case
E[g(X)] = ∫ g(x) f(x) dx    in the continuous case
where g denotes a function. In particular the variance of a probability distribution is given by E[(X − μ)²] where μ = E(X). There are numerous rules for manipulating expectations, such as E(X₁ + X₂) = E(X₁) + E(X₂), which applies to any two random variables, X₁ and X₂ (with finite means). The properties of some common distributions are listed in Table A.1. Note that the probability generating function of a (non-negative) discrete distribution is defined by Σ P(X = r) sʳ, summed over r = 0, 1, 2, .... If the random variable X has a normal distribution with mean μ and variance σ², then we write X ~ N(μ, σ²), where ~ means 'is distributed as'. The standard normal distribution arises when μ = 0, σ = 1. Some useful results are as follows:

1. If X ~ N(0, 1), then Y = X² is said to have a chi-squared (χ²) distribution with one degree of freedom (DF). If X₁, ..., Xₙ are independent N(0, 1) variables, then

   X₁² + X₂² + ... + Xₙ² ~ χ²ₙ.
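Result 1 can be checked by simulation; a Python sketch (the sample sizes are chosen arbitrarily):

```python
import random

# Sum n squared independent N(0, 1) variables many times; the sums should
# behave like a chi-squared variable on n DF, with mean n and variance 2n.
random.seed(42)
n, reps = 5, 20_000
sims = [sum(random.gauss(0, 1) ** 2 for _ in range(n)) for _ in range(reps)]

mean = sum(sims) / reps
var = sum((s - mean) ** 2 for s in sims) / reps
print(abs(mean - n) < 0.2, abs(var - 2 * n) < 1.0)  # True True
```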
Table A.1 The properties of some common distributions

(a) Some common families of discrete distributions

Bernoulli: P(r) = pʳ(1 − p)¹⁻ʳ, r = 0 or 1; 0 < p < 1; mean p; variance p(1 − p); probability generating function (1 − p + ps).

Binomial: P(r) = (n choose r) pʳ(1 − p)ⁿ⁻ʳ, r = 0, 1, ..., n; 0 < p < 1; mean np; variance np(1 − p); pgf (1 − p + ps)ⁿ.

Poisson: P(r) = e⁻ᵘ μʳ/r!, r = 0, 1, ...; μ > 0; mean μ; variance μ; pgf exp[μ(s − 1)].

Geometric (number of trials to first success): P(r) = p(1 − p)ʳ⁻¹, r = 1, 2, ...; 0 < p < 1; mean 1/p; variance (1 − p)/p²; pgf ps/[1 − (1 − p)s].

Geometric (number of failures before first success): P(r) = p(1 − p)ʳ, r = 0, 1, ...; 0 < p < 1; mean (1 − p)/p; variance (1 − p)/p²; pgf p/[1 − (1 − p)s].

Negative binomial: P(r) = (k + r − 1 choose r) pᵏ(1 − p)ʳ, r = 0, 1, ...; 0 < p < 1; mean k(1 − p)/p; variance k(1 − p)/p²; pgf pᵏ/[1 − (1 − p)s]ᵏ.

Hypergeometric: P(r) = (m₁ choose r)(m − m₁ choose n − r)/(m choose n), r = 0, 1, ..., min(n, m₁); mean nm₁/m; variance nm₁(m − m₁)(m − n)/[m²(m − 1)].

(b) Some common families of continuous distributions

Uniform: f(x) = 1/(b − a), a < x < b; mean (a + b)/2; variance (b − a)²/12.

Normal: f(x) = exp[−(x − μ)²/2σ²]/√(2πσ²), −∞ < x < ∞; σ > 0; mean μ; variance σ².

Exponential: f(x) = λe⁻ᵆˣ with λ > 0, x > 0; mean 1/λ; variance 1/λ².

Gamma: f(x) = λe⁻ᵆˣ(λx)ʳ⁻¹/Γ(r), x > 0; λ > 0, r > 0; mean r/λ; variance r/λ².

Weibull: f(x) = λm xᵐ⁻¹ exp(−λxᵐ), x > 0; m > 0, λ > 0; mean λ⁻¹ᐟᵐ Γ(1 + 1/m).

(The original table also sketches the general shape of each distribution; for example, the gamma and Weibull densities have an interior mode for r > 1 and m > 1 respectively, and otherwise decrease from x = 0.)
2. If Y has a gamma distribution with parameters r and λ = 1, then 2Y has a χ² distribution with 2r DF, or χ²₂ᵣ for short.

3. If X ~ N(0, 1) and Y has a χ² distribution with ν DF, and X and Y are independent, then the random variable

   t = X/√(Y/ν)

   is said to have a t-distribution with ν DF.

4. If X₁, X₂ are independent χ² random variables with ν₁, ν₂ DF respectively, then the random variable

   F = (X₁/ν₁)/(X₂/ν₂)

   is said to have an F-distribution on (ν₁, ν₂) DF.

5. A variable, Y, has a lognormal distribution if X = ln Y ~ N(μ, σ²). Then E(Y) = exp(μ + σ²/2).

6. The exponential distribution is a special case of both the gamma distribution (with r = 1) and the Weibull distribution (with m = 1).

7. The Erlang distribution is a special case of the gamma distribution with r a positive integer.
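Result 5, for example, is easy to verify numerically; a Python sketch (the parameter values are chosen arbitrarily):

```python
import math
import random

# If X = ln Y ~ N(mu, sigma^2), then Y is lognormal with
# E(Y) = exp(mu + sigma^2 / 2), which exceeds exp(mu).
mu, sigma = 1.0, 0.5
theoretical = math.exp(mu + sigma ** 2 / 2)
print(round(theoretical, 2))  # 3.08

random.seed(7)
reps = 100_000
sample_mean = sum(math.exp(random.gauss(mu, sigma))
                  for _ in range(reps)) / reps
print(abs(sample_mean - theoretical) < 0.05)  # True
```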
There are many other families of probability distributions which are applicable to particular problems. A catalogue of distributions (discrete, continuous and multivariate) and their properties is given, for example, by Johnson and Kotz (1969, 1970, 1972). Multivariate distributions may be defined by the joint probability function in the discrete case (e.g. Prob(X = x, Y = y) = probability that X takes value x and Y takes value y) or the joint pdf in the continuous case. The multivariate normal is a particularly useful family with some remarkable properties which can be illustrated for the bivariate case. Suppose the random variables (X, Y) are bivariate normal. This distribution is specified by five parameters, namely the mean and variance of each variable and the correlation coefficient of the two variables (see section A.6). The formula for the bivariate normal is not particularly enlightening and will not be given here. It can be shown that the marginal distributions of X and Y are both univariate normal, as is the conditional distribution of X given Y = y or of Y given X = x, for any value of y or of x. If a vector random variable, X, of length p is multivariate normal, we write
X ~ N_p(μ, Σ)

where μ is the (p × 1) mean vector of X and Σ is the (p × p) covariance matrix of X, whose (i, j)th element denotes the covariance between the ith and jth elements of X, namely E[(X_i − μ_i)(X_j − μ_j)] in an obvious notation. Here the expectation operator is extended to a function of two random variables in an 'obvious' way. For example, in the discrete case

E[g(X, Y)] = Σ_{x,y} g(x, y) P(X = x, Y = y).
A digest of statistical techniques
Note that if X, Y are independent random variables, then Prob(X = x, Y = y) = Prob(X = x)Prob(Y = y), and it can then be shown that E[f(X)h(Y)] = E[f(X)]E[h(Y)]. Another generally useful rule for any two independent random variables is Var(X + Y) = Var(X) + Var(Y). An appropriately entertaining end to this section is provided by W. J. Youden's illustration of the normal curve (in the original, the words are typeset in the shape of the normal curve itself):

THE NORMAL LAW OF ERROR STANDS OUT IN THE EXPERIENCE OF MANKIND AS ONE OF THE BROADEST GENERALIZATIONS OF NATURAL PHILOSOPHY. IT SERVES AS THE GUIDING INSTRUMENT IN RESEARCHES IN THE PHYSICAL AND SOCIAL SCIENCES AND IN MEDICINE, AGRICULTURE AND ENGINEERING. IT IS AN INDISPENSABLE TOOL FOR THE ANALYSIS AND THE INTERPRETATION OF THE BASIC DATA OBTAINED BY OBSERVATION AND EXPERIMENT
– W. J. Youden
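The two rules for independent random variables quoted above can be verified exactly by enumerating a small joint distribution. This sketch (the pmfs are invented for illustration) builds the joint pmf under independence and checks both E[f(X)h(Y)] = E[f(X)]E[h(Y)] and Var(X + Y) = Var(X) + Var(Y):

```python
from itertools import product

# Two independent discrete random variables, as {value: probability} maps
px = {0: 0.2, 1: 0.5, 2: 0.3}
py = {1: 0.6, 4: 0.4}

def e(pmf, g=lambda v: v):
    """Expectation of g(V) over a pmf."""
    return sum(g(v) * p for v, p in pmf.items())

def var(pmf):
    m = e(pmf)
    return e(pmf, lambda v: (v - m) ** 2)

# Joint pmf under independence: P(X=x, Y=y) = P(X=x) P(Y=y)
joint = {(x, y): px[x] * py[y] for x, y in product(px, py)}

# E[f(X) h(Y)] = E[f(X)] E[h(Y)]  (here f and h are both the square)
lhs = sum(p * (x ** 2) * (y ** 2) for (x, y), p in joint.items())
rhs = e(px, lambda v: v ** 2) * e(py, lambda v: v ** 2)

# Var(X + Y) = Var(X) + Var(Y)
m_sum = sum(p * (x + y) for (x, y), p in joint.items())
v_sum = sum(p * (x + y - m_sum) ** 2 for (x, y), p in joint.items())
```

Both identities hold to rounding error; they fail in general when X and Y are dependent.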
A.4 Estimation
The two main branches of statistical inference are estimation and hypothesis testing. The latter is considered in the next section, A.5. Suppose we have a random sample x₁, ..., xₙ from a population whose distribution depends on an unknown parameter θ. A statistic is a function of the sample values, say T(x₁, ..., xₙ). One problem is to find a suitable statistic, say θ̂(X₁, ..., Xₙ), which provides a good point estimator of θ. The realization of this statistic for a particular sample, say θ̂(x₁, ..., xₙ), is called a point estimate. A point estimator is unbiased if E(θ̂) = θ. A statistic, T, is said to be sufficient if the conditional distribution of the sample given T does not depend on θ, so that the statistic contains all the information about θ in the sample. An estimator is said to be consistent if it tends to get closer and closer to the true value as the sample size increases, and efficient if it has relatively low variance (these are not rigorous definitions!).

There are several general methods of finding point estimators. A method of moments estimator is found by equating the required number of population moments (which are functions of the unknown parameters) to the sample moments and solving the resulting equations. The method of maximum likelihood involves finding the joint probability (or joint pdf in the continuous case) of the data given the unknown parameter, θ, and then maximizing this likelihood function (or usually its logarithm for exponential families) with respect to θ. (Note that the EM algorithm is an iterative two-stage (the E-step stands for expectation and the M-step for maximization) computational procedure for deriving maximum likelihood estimates when the observations are incomplete in some way. For example, there
may be missing values (which must be missing at random) or some observations may be censored (e.g. Cox and Oakes, 1984, Chapter 11).) The method of least squares estimates the unknown parameters by minimizing the sum of squared deviations between the observed values and the fitted values obtained using the parameter values. The method of least absolute deviations (or L₁ estimation) minimizes the corresponding sum of absolute deviations. The latter approach can be analytically difficult but is increasingly 'easy' on a computer. Whichever approach is adopted (and there are several not mentioned here), the wide availability of computer programs means that less attention need be paid to the technical details.

In addition to point estimates, many packages also calculate confidence intervals, which provide an interval within which the unknown parameter will lie with a prescribed confidence (or probability). Interval estimates are usually to be preferred to point estimates. Beware of interval estimates presented in the form a ± b where it is not clear if b is one standard error, or two standard errors, or gives a 95% confidence interval, or what. We consider one particular problem in detail, namely that of estimating the mean of a normal distribution. Suppose a random sample, size n, is taken from a normal distribution with mean μ and variance σ². Denote the sample mean and variance by x̄, s² respectively. The sample mean is an intuitive point estimate of μ and we write
μ̂ = x̄
where the 'hat' over μ means 'an estimate of'. This estimate arises from several different estimation approaches including maximum likelihood, least squares and the method of moments. It can be shown that the sampling distribution of the sample mean, which would be obtained by taking repeated samples of size n, is N(μ, σ²/n), where σ/√n is called the standard error of the sample mean. If σ is known (rare in practice), then it can be shown that the 95% confidence interval (CI) for μ is given by

x̄ ± 1.96σ/√n    (A.4.1)
while if σ is unknown, we use s/√n as the estimated standard error of x̄ to give

x̄ ± t_{0.025,n−1} s/√n    (A.4.2)

where t denotes the appropriate upper percentage point of a t-distribution with (n − 1) degrees of freedom, such that 2½% of the distribution lies above it. We consider two more problems in brief. The sample variance, s², is an unbiased point estimate of σ² and the 95% CI for σ² is given by
(n − 1)s²/χ²_{0.025,n−1}   to   (n − 1)s²/χ²_{0.975,n−1}    (A.4.3)
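Intervals (A.4.2) and (A.4.3) are easily computed by hand or in a few lines of code. The sketch below uses invented data with n = 20; the percentage points t_{0.025,19} = 2.093, χ²_{0.025,19} = 32.852 and χ²_{0.975,19} = 8.907 are taken from standard tables rather than computed:

```python
from math import sqrt

# Hypothetical sample of n = 20 observations (illustrative numbers only)
x = [9.8, 10.2, 10.5, 9.6, 10.1, 9.9, 10.4, 10.0, 9.7, 10.3,
     10.6, 9.5, 10.1, 9.9, 10.2, 10.0, 9.8, 10.4, 9.7, 10.3]
n = len(x)
xbar = sum(x) / n
s2 = sum((xi - xbar) ** 2 for xi in x) / (n - 1)   # unbiased sample variance
s = sqrt(s2)

# Percentage points from tables (19 DF)
t_crit = 2.093                     # t_{0.025,19}
chi2_lo, chi2_hi = 8.907, 32.852   # chi^2_{0.975,19}, chi^2_{0.025,19}

ci_mean = (xbar - t_crit * s / sqrt(n), xbar + t_crit * s / sqrt(n))   # (A.4.2)
ci_var = ((n - 1) * s2 / chi2_hi, (n - 1) * s2 / chi2_lo)              # (A.4.3)
```

Note that the interval for σ² is not symmetric about s², because the χ² distribution is skewed.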
The second situation involves the comparison of two groups of observations. A random sample, size n₁, is taken from N(μ₁, σ²) and a second random sample, size n₂, is taken from N(μ₂, σ²). Note that the population variances are assumed equal. If the sample means and variances are x̄₁, x̄₂, s₁², s₂² respectively, then (x̄₁ − x̄₂) is a point estimate of (μ₁ − μ₂) and its standard error is

σ√(1/n₁ + 1/n₂).

Then a 95% CI for (μ₁ − μ₂), assuming σ is known, is given by

(x̄₁ − x̄₂) ± 1.96σ√(1/n₁ + 1/n₂).    (A.4.4)

If σ is unknown, then the combined estimate of σ² is given by

s² = [(n₁ − 1)s₁² + (n₂ − 1)s₂²]/(n₁ + n₂ − 2)    (A.4.5)

and the 95% CI for (μ₁ − μ₂) is

(x̄₁ − x̄₂) ± t_{0.025,n₁+n₂−2} s√(1/n₁ + 1/n₂).    (A.4.6)
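Formulae (A.4.5) and (A.4.6) can be sketched as follows. The data are invented, and the critical value t_{0.025,20} = 2.086 (for n₁ + n₂ − 2 = 20 DF) is hardcoded from tables:

```python
from math import sqrt

def pooled_ci(x1, x2, t_crit):
    """95% CI (A.4.6) for mu1 - mu2, assuming equal population variances."""
    n1, n2 = len(x1), len(x2)
    m1, m2 = sum(x1) / n1, sum(x2) / n2
    v1 = sum((v - m1) ** 2 for v in x1) / (n1 - 1)
    v2 = sum((v - m2) ** 2 for v in x2) / (n2 - 1)
    # Combined (pooled) variance estimate, formula (A.4.5)
    s2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    half = t_crit * sqrt(s2) * sqrt(1 / n1 + 1 / n2)
    return (m1 - m2) - half, (m1 - m2) + half

# Illustrative samples; n1 + n2 - 2 = 20 DF, t_{0.025,20} = 2.086 from tables
a = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.4, 4.7, 5.0, 5.1]
b = [4.6, 4.9, 4.5, 4.8, 4.7, 4.4, 4.6, 4.8, 4.5, 4.7, 4.6, 4.9]
lo, hi = pooled_ci(a, b, 2.086)
```

If the interval excludes zero, the data suggest a real difference between the two group means at the 5% level.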
An important class of estimators contains those which are insensitive to departures from the assumptions on which the model is based. In particular, they usually accommodate (or even reject) outlying observations by giving them less weight than would otherwise be the case. A trimmed mean, for example, omits a specified percentage of the extreme observations, while a Winsorized mean replaces outlying values with the nearest retained observation. One important class of robust estimators are M-estimators (see Hoaglin et al., 1983, Chapters 9 to 12), which minimize an objective function which is more general than the familiar sum of squared residuals. For a location parameter θ, we minimize Σ_{i=1}^n ρ(x_i − θ) with respect to θ, where ρ is a suitably chosen function which is usually symmetric and differentiable. For example, ρ(u) = u² gives a least-squares estimate, but we arrange for ρ(u) to increase at a lower rate for large u in order to achieve robustness. If ψ(u) = dρ(u)/du, then the M-estimator is equivalently obtained by solving Σ_{i=1}^n ψ(x_i − θ) = 0. Another class of estimators are L-estimators, which are a weighted average of the sample order statistics (which are the sample values when arranged in order of magnitude). A trimmed mean is an example of an L-estimator.

There are a number of estimation techniques which rely on resampling the observed data to assess the properties of a given estimator (e.g. Efron and Gong, 1983). They are useful for providing nonparametric estimates of the bias and standard error of the estimator when its sampling distribution is difficult to find or when parametric assumptions are difficult to justify. An old idea is to split the observed sample into a number of subgroups, to calculate the estimator for each subgroup and to use the variance of these quantities to estimate the variance of the overall estimator. The usual form of jackknifing is an extension of this idea. Given a sample of n observations, the observations are dropped one at a time giving n
(overlapping) groups of (n − 1) observations. The estimator is calculated for each group and these values provide estimates of the bias and standard error of the overall estimator. A promising alternative way of re-using the sample is bootstrapping. The idea is to simulate the properties of a given estimator by taking repeated samples of size n with replacement from the observed empirical distribution, in which x₁, x₂, ..., xₙ are each given probability mass 1/n. (In contrast, jackknifing takes sample size (n − 1) without replacement.) Each sample gives an estimate of the unknown population parameter. The average of these values is called the bootstrap estimator, and their variance is called the bootstrap variance. A close relative of jackknifing, called cross-validation, is not primarily concerned with estimation, but rather with assessing the prediction error of different models. Leaving out one (or more) observations at a time, a model is fitted to the remaining points and used to predict the deleted point(s). These within-sample prediction errors provide an assessment of the model's prediction quality. The sum of squares of these errors can also be used to make a choice between different models or procedures, as in the PRESS approach (predicted residual sum of squares). Bayesian inference relies on Bayes' theorem which, as applied to inference, says that if E denotes some event and D some data, then p(E|D), the posterior probability of E, is proportional to p(E)p(D|E), where the prior probability, p(E), may express our prior degree of belief about E.
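The jackknife and bootstrap standard errors described above can be sketched in a few lines. The sample data, the number of bootstrap replicates and the seed are all arbitrary choices for illustration; a useful sanity check is that, for the sample mean, the jackknife standard error reduces exactly to s/√n:

```python
import random
import statistics
from math import sqrt

def jackknife_se(data, estimator):
    """Jackknife SE: drop one observation at a time, giving n overlapping
    groups of (n - 1); combine the n leave-one-out estimates."""
    n = len(data)
    reps = [estimator(data[:i] + data[i + 1:]) for i in range(n)]
    rbar = sum(reps) / n
    return sqrt((n - 1) / n * sum((r - rbar) ** 2 for r in reps))

def bootstrap_se(data, estimator, n_boot=2000, seed=1):
    """Bootstrap SE: resample size n WITH replacement, n_boot times."""
    rng = random.Random(seed)
    n = len(data)
    reps = [estimator([rng.choice(data) for _ in range(n)]) for _ in range(n_boot)]
    return statistics.stdev(reps)

sample = [3.2, 4.7, 2.9, 5.1, 4.0, 3.8, 6.2, 4.4, 3.5, 5.0]
mean = lambda d: sum(d) / len(d)

se_jack = jackknife_se(sample, mean)               # equals s / sqrt(n) for the mean
se_boot = bootstrap_se(sample, statistics.median)  # no formula needed for the median
```

The bootstrap is particularly convenient for estimators like the median, whose sampling distribution is awkward to derive analytically.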
A.5 Significance tests
This section introduces the terminology of hypothesis testing and also describes some specific types of test. The difficulties and dangers of the procedures are discussed in section 7.2 of Part I. A hypothesis is a conjecture about the population from which a given set of data are to be drawn. A significance test is a procedure for testing a particular hypothesis, called the null hypothesis, which is customarily denoted by H₀. The test consists of deciding whether the data are consistent with H₀. The analyst should normally specify H₀ and an alternative hypothesis, denoted by H₁, before looking at the data. In particular this specifies whether a one-tailed test (where we are only interested in departures from H₀ 'in one direction') or a two-tailed test is appropriate. A suitable test statistic should be selected to show up departures from H₀. The sampling distribution of the test statistic, assuming that H₀ is true, should be known. The observed value of the test statistic should then be calculated from the data. The level of significance of the result (the P-value) is the probability of getting a test statistic which is as extreme as, or more extreme than, the one observed, assuming that H₀ is true. If P < 0.05, we say that the result is significant at the 5% level and that we have some evidence to reject H₀. If P < 0.01, the result is significant at the 1% level and we have strong evidence to reject H₀. Some 'shading' is advisable in practice, given doubts about assumptions. Thus the values P = 0.049 and P = 0.051 should both be seen as on the borderline of significance, rather than as 'significant' and 'not significant'. If P > 0.05, we accept H₀, or rather 'fail to reject' H₀ if there are still
doubts about it (e.g. if P is only just greater than 0.05). Note that P is NOT the probability that H₀ is true. An error of type I (or of the first kind) is said to occur when H₀ is rejected incorrectly (i.e. when H₀ is actually true) because of an extreme-looking sample. An error of type II occurs when H₀ is incorrectly accepted (when H₁ is actually true) because of a sample which happens to look consistent with H₀. The power of a test is the probability of correctly rejecting H₀, and so equals [1 − probability(error of type II)]. With small sample sizes, the power may be disturbingly low, and this aspect of tests deserves more attention. Some specific tests in brief are as follows:
A.5.1 TESTS ON A SAMPLE MEAN
Suppose we have a random sample, size n, from N(μ, σ²) and wish to test

H₀: μ = k

against H₁: μ > k or H₁: μ < k (one-tailed), or H₁: μ ≠ k (two-tailed). A suitable test statistic is

Z = (x̄ − k)√n/σ   if σ is known    (A.5.1)

or

t = (x̄ − k)√n/s   if σ is unknown    (A.5.2)
where x̄, s are the sample mean and standard deviation. If H₀ is true, then Z ~ N(0, 1) and t ~ t_{n−1}, the latter statistic giving rise to what is called a t-test. Note the assumptions that observations are independent and (approximately) normally distributed. If these assumptions are unreasonable, then it may be safer to carry out a nonparametric test (see below), though most parametric tests are robust to moderate departures from distributional assumptions.
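The t-test of (A.5.2) is easily carried out by hand. In the sketch below the data and the null value k = 10 are invented, and the two-tailed 5% critical value t_{0.025,9} = 2.262 is hardcoded from tables:

```python
from math import sqrt

def t_statistic(x, k):
    """t = (xbar - k) sqrt(n) / s, to be compared with t_{n-1} (A.5.2)."""
    n = len(x)
    xbar = sum(x) / n
    s = sqrt(sum((v - xbar) ** 2 for v in x) / (n - 1))
    return (xbar - k) * sqrt(n) / s

# Hypothetical sample; H0: mu = 10, two-tailed test at the 5% level
x = [10.3, 9.8, 10.6, 10.1, 10.4, 9.9, 10.7, 10.2, 10.5, 10.0]
t = t_statistic(x, 10.0)
reject = abs(t) > 2.262   # t_{0.025,9} from tables
```

Here x̄ = 10.25 and t ≈ 2.61, so the two-tailed test just rejects H₀ at the 5% level.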
A.5.2 COMPARING TWO GROUPS OF OBSERVATIONS
First decide if a two-sample test or a paired comparison test is appropriate. The latter arises when each observation in one group has a natural paired observation in the other group, so that a one-sample test can be carried out on the paired differences. Then x̄ and s in formula (A.5.2) are the mean and standard deviation of the differences. The null hypothesis is usually that the population mean difference is zero (so that k is zero in (A.5.2)). A two-sample t-test is illustrated in Exercise B.1.
A.5.3 ANALYSIS OF VARIANCE (ANOVA)
The one-way ANOVA generalizes the two-sample t-test to compare more than two groups and is described later in section A.7. More generally, ANOVA can be used to test the effects of different influences in more complicated data structures. ANOVA generally involves one or more F-tests to compare estimates of variance. If s₁², s₂² are two independent estimates of variance based on ν₁, ν₂ degrees of freedom (DF) respectively, then the ratio s₁²/s₂² may be compared with the appropriate percentage points of an F-distribution with (ν₁, ν₂) DF to test the hypothesis that the underlying population variances are equal.

A.5.4 THE CHI-SQUARED GOODNESS-OF-FIT TEST
This is applicable to frequency or count data. The observed frequencies in different categories are compared with the expected frequencies, which are calculated assuming a given null hypothesis to be true. The general form of the test statistic is

χ² = Σ_{all categories} (observed frequency − expected frequency)²/expected frequency    (A.5.3)
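Statistic (A.5.3) is a one-line computation. The example below is invented (120 rolls of a die, testing the null hypothesis that it is fair); the one-tailed 5% point χ²_{0.05,5} = 11.07 for 5 DF is taken from tables:

```python
def chi_squared(observed, expected):
    """Sum of (O - E)^2 / E over all categories, formula (A.5.3)."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# 120 rolls of a die; under H0 (fair die) each face is expected 20 times
observed = [15, 22, 19, 24, 18, 22]
expected = [20] * 6

x2 = chi_squared(observed, expected)
# DF = 6 categories - 1 = 5; one-tailed 5% point chi^2_{0.05,5} = 11.07
reject = x2 > 11.07
```

Here χ² = 2.7, well below 11.07, so these frequencies give no evidence against a fair die.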
If H₀ is true, then this test statistic has an approximate χ²-distribution whose degrees of freedom are (number of categories − 1 − number of independent parameters estimated from the data). The test is illustrated in Exercises B.4, E.4, G.2, G.3 and G.7. Note that the χ²-test is always one-tailed, as all deviations from a given H₀ will lead to 'large' values of χ². Also note that categories with 'small' expected frequencies (e.g. less than 5) may need to be combined in a suitable way. When a significant result is obtained, the analyst should inspect the differences between observed and expected frequencies to see how H₀ is untrue. The analysis of categorical data is a wide subject of which the χ²-test is just a part. There are, for example, special types of correlation coefficient for measuring the strength of dependence between two categorical variables with frequency data recorded in a two-way contingency table. A log-linear model (see section A.9) may be used to model systematic effects in structured frequency data.

A.5.5 NONPARAMETRIC (OR DISTRIBUTION-FREE) TESTS
Nonparametric tests make as few assumptions as possible about the underlying distribution. The simplest test of this type is the sign test, which essentially looks only at the sign (positive, negative, or zero for a 'tie') of differences. This throws away some information and is therefore not efficient, but it can be useful in an IDA as it can often be done 'by eye'. For example, in a paired comparison test the analyst can look at the signs of the differences. If they nearly all have the same sign, then there is strong evidence to reject the null hypothesis that the true average difference is zero. The binomial distribution can be used to assess results by assuming that the probability of a positive difference under H₀ is 1/2, ignoring ties.
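The binomial calculation behind the sign test can be done exactly. This sketch (the paired differences are invented) computes a two-tailed P-value by summing binomial tail probabilities with p = 1/2, dropping ties as described above:

```python
from math import comb

def sign_test_p(diffs):
    """Two-tailed sign test P-value: P(a sign count as or more extreme than
    that observed | p = 1/2), with zero differences (ties) dropped."""
    nonzero = [d for d in diffs if d != 0]
    n = len(nonzero)
    pos = sum(1 for d in nonzero if d > 0)
    k = min(pos, n - pos)                              # the rarer sign
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical paired differences: 9 positive, 1 negative, 1 tie
diffs = [0.4, 1.1, 0.3, 0.8, -0.2, 0.5, 0.9, 0.6, 0.7, 1.2, 0.0]
p = sign_test_p(diffs)
```

With 9 positives out of 10 non-zero differences, P = 22/1024 ≈ 0.021, so H₀ (median difference zero) is rejected at the 5% level.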
It is more efficient to look at the magnitude of differences as well as the sign. Many tests are based on ranks, whereby the smallest observation in a sample is given rank one, and so on. Equal observations are given the appropriate average rank. The Wilcoxon signed rank test is a substitute for the one-sample t-test and is particularly suitable for paired differences. The absolute values of the differences, ignoring the signs, are ranked in order of magnitude. Then the signs are restored to the rankings, and the sums of the positive rankings and of the negative rankings are found. The smaller of these two sums is usually taken as the test statistic and may be referred to an appropriate table of critical values. Values of the test statistic less than or equal to the critical value imply rejection of the null hypothesis that the median difference is zero. The two-sample Wilcoxon rank sum test (or equivalently the Mann-Whitney U-test) is a substitute for the two-sample t-test for testing that two populations have the same median. The two samples are combined to give a single group and then ranks are assigned to all the observations. The two samples are then re-separated and the sum of the ranks for each sample is found. The smaller of these two sums is usually taken as the test statistic and may be referred to tables of critical values. The equivalent Mann-Whitney approach orders all the observations in a single group and counts the number of observations in sample A that precede each observation in sample B. The U-statistic is the sum of these counts and may also be referred to a table of critical values. The Kruskal-Wallis test is the generalization of this test for comparing k (> 2) samples and is therefore the nonparametric equivalent of a one-way ANOVA. There are various other nonparametric tests, and the reader is referred to a specialized book such as Hollander and Wolfe (1973). When should a nonparametric approach be used?
They are widely used in the social sciences, particularly in psychology, where data are often skewed or otherwise non-normal. They can be rather tricky to perform by hand (try finding a rank sum manually!) but are easy enough to perform using a computer package. They also have nice theoretical properties, because they are often nearly as efficient as the corresponding parametric approach even when the parametric assumptions are true, and they can be far more efficient when they are not. Despite this, they are often avoided in some scientific areas, perhaps because test statistics based on means and variances are intuitively more meaningful than quantities like rank sums.
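The rank-sum and Mann-Whitney formulations described above are equivalent, and the link is easy to demonstrate in code. This sketch (with invented, tie-free samples of size 6 each) computes both versions and checks the standard identity W_B = U + n_B(n_B + 1)/2, where U counts the (a, b) pairs in which the a-observation precedes the b-observation:

```python
def ranks(values):
    """Rank each value 1..n, equal observations sharing the average rank."""
    s = sorted(values)
    return {v: (2 * s.index(v) + s.count(v) + 1) / 2 for v in set(s)}

def wilcoxon_rank_sums(a, b):
    """Rank sums of samples a and b within the combined group."""
    r = ranks(a + b)
    return sum(r[v] for v in a), sum(r[v] for v in b)

def mann_whitney_u(a, b):
    """Number of (a, b) pairs in which the a-observation precedes the b-one."""
    return sum(1 for x in a for y in b if x < y)

# Hypothetical samples, no ties
a = [1.83, 0.50, 1.62, 2.48, 1.68, 1.88]
b = [0.878, 0.647, 0.598, 2.05, 1.06, 1.29]

w_a, w_b = wilcoxon_rank_sums(a, b)
u = mann_whitney_u(a, b)
```

The two rank sums must add up to N(N + 1)/2 = 78 for the 12 pooled observations, and either W or U may be referred to the appropriate table of critical values.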
A.6 Regression
Regression techniques seek to establish a relationship between a response variable, y, and one or more explanatory (or predictor) variables x₁, x₂, .... The approach is widely used and can be useful, but is also widely misused. As in all statistics, if you fit a silly model, you will get silly results. Guard against this by plotting the data, and by using background information and past empirical evidence. I have had some nasty experiences with data where the explanatory variables were correlated, particularly
with time-series data where successive observations on the same variable may also be correlated.

A.6.1 PRELIMINARY QUESTIONS

1. Why do you want to fit a regression model anyway? What are you going to do with it when you get it? Have models been fitted beforehand to other similar data sets?
2. How were the data collected? Are the x-values controlled by the experimenter and do they cover a reasonable range?
3. Begin the analysis, as usual, with an IDA, to explore the main features of the data. In particular, plot scatter diagrams to get a rough idea of the relationship, if any, between y and each x. Are there obvious outliers? Is the relationship linear? Is there guidance on secondary assumptions such as normality and whether the conditional 'error' variance is constant? This is straightforward with just one x-variable, but with two or more x-variables you should be aware of the potential dangers of collapsing multivariate data onto two dimensions when the x-variables are correlated.
A.6.2 THE LINEAR REGRESSION MODEL

The simplest case arises with one predictor variable, say x, where the scatter diagram indicates a linear relationship. The conditional distribution of the response variable, y, here a random variable, for a given fixed value of x has a mean value denoted by E(y|x). The regression curve is the line joining these conditional expectations and is here assumed to be of the form

E(y|x) = α + βx    (A.6.1)
where α is called the intercept and β is the slope. The deviations from this line are usually assumed to be independent, normally distributed with zero mean and constant variance, σ². These are a lot of assumptions! Given n pairs of observations, namely (x₁, y₁), ..., (xₙ, yₙ), the least squares estimates of α and β are obtained by minimizing

Σ(observed value of y − fitted value)² = Σ_{i=1}^n (y_i − α − βx_i)².

This gives

α̂ = ȳ − β̂x̄
β̂ = Σ(x_i − x̄)(y_i − ȳ)/Σ(x_i − x̄)².

Note that β̂ may be expressed in several equivalent forms, such as Σ(x_i − x̄)y_i/Σ(x_i − x̄)², since Σ(x_i − x̄)ȳ = ȳΣ(x_i − x̄) = 0. Having fitted a straight line, the residual sum of squares is given by Σ(y_i − α̂ − β̂x_i)²
which can be shown to equal (S_yy − β̂²S_xx), where S_yy, for example, is the total corrected sum of squares of the y's, namely Σ(y_i − ȳ)². The residual variance, σ², is usually estimated for all regression models by

s² = residual SS/residual DF = residual mean square

which can readily be found in the output from most regression packages, usually in the ANOVA table (see below). The residual DF is given generally by (n − number of estimated parameters), which for linear regression is equal to (n − 2). Other useful formulae include:

(i) 100(1 − α₀)% CI for α is

α̂ ± t_{α₀/2,n−2} s √[1/n + x̄²/Σ(x_i − x̄)²].

(Note: the probability associated with a confidence interval (CI) is usually denoted by α, but we use α₀ where necessary to avoid confusion with the intercept, α.)

(ii) 100(1 − α₀)% CI for β is

β̂ ± t_{α₀/2,n−2} s/√[Σ(x_i − x̄)²].

(iii) 100(1 − α₀)% CI for α + βx₀ is

α̂ + β̂x₀ ± t_{α₀/2,n−2} s √[1/n + (x₀ − x̄)²/Σ(x_i − x̄)²].

The latter CI is for the mean value of y, given x = x₀. Students often confuse this with the prediction interval for y, for which there is a probability (1 − α₀) that a future single observation on y (not the mean value!), given x = x₀, will lie in the interval

α̂ + β̂x₀ ± t_{α₀/2,n−2} s √[1 + 1/n + (x₀ − x̄)²/Σ(x_i − x̄)²].
The analysis of variance (ANOVA) partitions the total variability in the y-values into the portion explained by the linear model and the residual, unexplained variation. The ANOVA table is shown in Table A.6.1.

Table A.6.1  ANOVA table for linear regression

Source        SS               DF       MS        E(MS)
Regression    β̂²S_xx           1        β̂²S_xx    σ² + β²S_xx
Residual      by subtraction   n − 2    s²        σ²
Total         S_yy             n − 1
Numerous tests of significance are possible and many are performed routinely in computer output. Note that many arise in different, but equivalent, forms. For example, to test if the true slope of the line is zero (i.e. is it worth fitting a line at all?), you can perform a t-test on β̂, an F-test on the ratio of the regression MS to the residual MS, or simply see if the above CI for β includes the value zero.
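The least squares formulae above, together with R² and R² (adjusted) from section A.6.6 below, can be sketched directly (the data are invented for illustration):

```python
def fit_line(x, y):
    """Least squares for one predictor: beta_hat = S_xy / S_xx,
    alpha_hat = ybar - beta_hat * xbar."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    beta = sxy / sxx
    return ybar - beta * xbar, beta

# Hypothetical data (illustrative numbers only)
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
n, k = len(x), 1                       # k = number of explanatory variables
alpha, beta = fit_line(x, y)

ybar = sum(y) / n
resid_ss = sum((yi - alpha - beta * xi) ** 2 for xi, yi in zip(x, y))
total_ss = sum((yi - ybar) ** 2 for yi in y)

r2 = 1 - resid_ss / total_ss
# Adjusted R^2 replaces sums of squares by mean squares
r2_adj = 1 - (resid_ss / (n - k - 1)) / (total_ss / (n - 1))
```

For these data β̂ = 1.99 and α̂ = 0.05, and the adjusted coefficient is always smaller than the raw R².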
Mean-corrected model

There are some computational advantages in using a mean-corrected form of (A.6.1), namely

E(y|x) = α* + β(x − x̄).    (A.6.2)

The slope parameter is unchanged, while α and α* are related by α* − βx̄ = α. Then α̂* = ȳ and the fitted line is of the form ŷ − ȳ = β̂(x − x̄). Mean-corrected models are used routinely in many forms of regression.
A.6.3 CURVILINEAR MODELS

A linear model may be inappropriate for external reasons, or because the eye detects non-linearity in the scatter plot. One possibility is to transform one or both variables so that the transformed relationship is linear. The alternative is to fit a non-linear curve directly. The commonest class of curvilinear models are polynomials, such as the quadratic regression curve

E(y|x) = α + β₁x + β₂x².

Similar assumptions about 'errors' are usually made as in the linear case. Polynomial models can be fitted readily by most computer packages.
A.6.4 NON-LINEAR MODELS

These are usually defined to be non-linear in the parameters, and are thus distinct from curvilinear models. An example is

E(y|x) = 1/(1 + θx).

Models of this type are trickier to handle (e.g. Draper and Smith, 1981, Chapter 10; Ratkowsky, 1983). Whereas least-squares estimation for a linear model can be achieved by matrix manipulation, non-linear models require the solution of simultaneous non-linear equations. Some sort of optimization is then required, but the availability of direct search methods in computer packages makes this increasingly feasible.
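A minimal sketch of such a direct search, using the example model above: the data are invented (roughly consistent with θ = 0.5 plus small errors), the search interval [0, 2] is an assumption, and golden-section search is just one simple one-dimensional direct search method that assumes the sum of squares is unimodal over the interval:

```python
def sse(theta, xs, ys):
    """Sum of squared deviations from the model E(y|x) = 1/(1 + theta * x)."""
    return sum((y - 1 / (1 + theta * x)) ** 2 for x, y in zip(xs, ys))

def golden_min(f, lo, hi, tol=1e-8):
    """Golden-section search: a simple direct-search minimizer on [lo, hi],
    assuming f is unimodal there."""
    g = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2

# Hypothetical data, generated near theta = 0.5 with small errors
xs = [0.5, 1.0, 2.0, 4.0, 8.0]
ys = [0.81, 0.66, 0.51, 0.33, 0.21]

theta_hat = golden_min(lambda t: sse(t, xs, ys), 0.0, 2.0)
```

The fitted value lands close to 0.5; a package would typically use a more sophisticated method (e.g. Gauss-Newton), but the principle is the same.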
A.6.5 MULTIPLE REGRESSION

With k explanatory variables, the multiple linear regression model is

E(y|x₁, ..., x_k) = α + β₁x₁ + ... + β_k x_k

together with the usual assumptions about the errors. If the data are given by (y₁, x₁₁, ..., x_k1), ..., (yₙ, x_1n, ..., x_kn), the model may easily be fitted by least squares using a computer. Curvilinear terms may be introduced, for example, by letting x₂ = x₁². The x-variables are usually centred (or mean-corrected), and numerical considerations also suggest scaling the x-variables to have equal variance by considering (x_j − x̄_j)/s_j, where s_j = standard deviation of observed values of x_j. Then the fitted slope is scaled in an obvious way, but other expressions, such as sums of squares of the y-values, are unchanged. More complicated transformations of the
explanatory variables, such as the Box-Cox transformation (see section 6.8 of Part I), will occasionally be needed. Estimates of β₁, ..., β_k will only be uncorrelated if the design is orthogonal. This will happen if the x-values are chosen to lie on a symmetric, regular grid, or, more mathematically, if

Σ_{j=1}^n (x_sj − x̄_s)(x_tj − x̄_t) = 0   for all s, t such that s ≠ t.
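One practical consequence of orthogonality is that the least squares estimates decouple: with centred, mutually orthogonal columns, each slope reduces to Σx_j y/Σx_j², and the intercept to ȳ. The design and data below are invented for illustration (true coefficients 3, 2 and −1.5 plus small errors):

```python
# Hypothetical orthogonal design: both columns centred, cross-products zero
x1 = [-2, -1, 0, 1, 2, -2, -1, 0, 1, 2]
x2 = [-1, 1, -1, 1, -1, 1, -1, 1, -1, 1]
noise = [0.1, -0.05, 0.0, 0.07, -0.1, 0.02, 0.03, -0.04, 0.06, -0.02]
y = [3.0 + 2.0 * a - 1.5 * b + e for a, b, e in zip(x1, x2, noise)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Orthogonality of the design: centred columns with zero cross-product
assert sum(x1) == 0 and sum(x2) == 0 and dot(x1, x2) == 0

# With an orthogonal design, the least squares estimates decouple:
alpha_hat = sum(y) / len(y)
b1_hat = dot(x1, y) / dot(x1, x1)
b2_hat = dot(x2, y) / dot(x2, x2)
```

Each estimate can be computed, and interpreted, independently of the others, which is exactly what is lost when the x-variables are correlated.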
When the x-variables can be controlled, it is helpful to choose them so as to get an orthogonal design. This not only simplifies the analysis but also enables the effect of each x-variable to be assessed independently of the others. It is also desirable to randomize the order of the experiments so as to eliminate the effects of nuisance factors. However, in practice multiple regression is more often used on observational data where the x-variables are correlated with each other. In the past, explanatory variables were often called independent variables, but this misleading description has now been largely abandoned. With correlated x-variables, it is not safe to try and interpret individual coefficients in the fitted model, and the fitted model may be misleading despite appearing to give a good fit (Exercise C.3). Sometimes the x-variables are so highly correlated that the data matrix gives rise to a matrix of sums of squares and cross-products (section A.8) which is ill-conditioned (or nearly singular). Then it may be wise to omit one or more suitably chosen x's, or consider the use of special numerical procedures, such as ridge regression. Multicollinearity problems (e.g. Wetherill, 1986, Chapter 4) arise because of near or exact linear dependencies amongst the explanatory variables. They may be caused by the inclusion of redundant variables, by physical constraints or by the sampling techniques employed. With time-series data, particularly those arising in economics, there may be correlations not only between different series, but also between successive values of the same series (called autocorrelation). This provides a further complication. A multiple regression model may then include lagged values of the response variable (called autoregressive terms) as well as lagged values of the explanatory variables. Some explanatory variables may contribute little or nothing to the fit and need to be discarded.
Choosing a subset of the x-variables may be achieved by a variety of methods, including backward elimination (where the least important variable is successively removed until all the remaining variables are significant) or forward selection (where the procedure begins with no x-variables included). Sometimes there are several alternative models, involving different x-variables, which fit the data almost equally well. Then it is better to choose between them using external knowledge where possible, rather than relying completely on automatic variable selection. In particular, there may be prior information about the model structure, for example that some variables must be included. It is often tempting to begin by including a large number of x-variables. Although the fit may appear to improve, it may be spurious in that the fitted model has poor predictive performance over a
range of conditions. As a crude rule of thumb, I generally suggest that the number of variables should not exceed one quarter of the number of observations, and should preferably not exceed about four or five. In an exploratory study it is perhaps reasonable to include rather more variables just to see which are important, but do not believe the resulting fitted equation without checking it on other data sets. It has been suggested (Preece, 1984) that there are about 100 000 multiple regressions carried out each day, of which only 1 in 100 are sensible. While this may be a slight exaggeration, it does indicate the overuse of the technique, and statisticians should be just as concerned with the silly applications as with the sensible ones.

A.6.6 COEFFICIENT OF DETERMINATION
This is useful for assessing the fit of all types of regression model and is usually defined by

R² = explained SS/total SS.

The total (corrected) SS, namely Σ(y_i − ȳ)², is partitioned by an ANOVA into the explained (or regression) SS and the residual SS, where residual SS = Σ(observed y − fitted y)². Thus R² must lie between 0 and 1. The better the fit, the closer R² lies to one. In simple linear regression, it can be shown that R² = (correlation coefficient)² (see below). More generally, R² is the square of the correlation between the observed and fitted values of y. Thus R is sometimes called the multiple correlation coefficient.

One problem with interpreting R² is that it always gets larger as more variables are added, even if the latter are of no real value. An alternative coefficient produced by many packages is R²(adjusted), which adjusts the value of R² to take the number of fitted parameters into account. Instead of

R² = 1 − (residual SS)/(total SS),

calculate

R²(adjusted) = 1 − (residual MS)/(total MS)

which is always smaller than R² and less likely to be misleading. A 'high' value of R² is commonly taken to mean 'a good fit' and many people are impressed by values exceeding 0.8. Unfortunately it is very easy to get values exceeding 0.99 for time-series data which are quite spurious. Armstrong (1985, p. 487) gives a delightful set of rules for 'cheating' so as to obtain a high value of R². They include the omission of outliers, the inclusion of lots of variables and the use of R² rather than R²(adjusted).
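The two coefficients are simple to compute directly from the formulae above; a minimal sketch (the data are invented, and p counts all fitted parameters including the constant):

```python
import numpy as np

def r_squared(y, fitted, p):
    """R^2 and adjusted R^2 for a model with p fitted parameters."""
    n = len(y)
    resid_ss = np.sum((y - fitted) ** 2)
    total_ss = np.sum((y - np.mean(y)) ** 2)
    r2 = 1 - resid_ss / total_ss
    # the adjusted version replaces sums of squares by mean squares
    r2_adj = 1 - (resid_ss / (n - p)) / (total_ss / (n - 1))
    return r2, r2_adj

# straight-line fit to a small invented data set
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
b1, b0 = np.polyfit(x, y, 1)            # slope, intercept
fitted = b0 + b1 * x
r2, r2_adj = r_squared(y, fitted, p=2)
print(round(r2, 4), round(r2_adj, 4))
```

As the text warns, R²(adjusted) is always the smaller of the two, and the gap widens as more parameters are fitted to the same number of observations.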
A.6.7 MODEL CHECKING
After fitting a regression model, it is important to carry out appropriate diagnostic checks on the residuals (cf. section 5.3.3 of Part I). If outliers are present, then they
may be adjusted or removed, or alternatively some form of robust regression could be used. More generally it is of interest to detect influential observations whose deletion results in substantial changes to the fitted model. Formulae regarding residuals, influence, outliers and leverage can be presented more conveniently in the matrix notation of the general linear model and will therefore be deferred to section A.8. It may also be necessary to test the data for normality (Wetherill, 1986, Chapter 8) and for constant variance (Wetherill, 1986, Chapter 9). If the conditional variance is found not to be constant, then the data are said to be heteroscedastic and it may be appropriate to fit a model by weighted least squares, where the less accurate observations are given less weight (section A.8). For example, in linear regression it may be reasonable to assume that the conditional variance increases linearly with the value of the explanatory variable.
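Weighted least squares for this situation can be sketched as follows; the weights 1/x_i match the assumption that the conditional variance increases linearly with x, and the data are invented for illustration:

```python
import numpy as np

def weighted_least_squares(X, y, w):
    """Solve (X'WX) beta = X'Wy with W = diag(w)."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

rng = np.random.default_rng(7)
x = np.linspace(1.0, 10.0, 40)
# conditional variance grows linearly with x
y = 1.0 + 2.0 * x + rng.normal(scale=np.sqrt(0.2 * x))
X = np.column_stack([np.ones_like(x), x])
beta_w = weighted_least_squares(X, y, 1.0 / x)   # down-weight noisy points
print(beta_w)
```

Setting all the weights equal to one recovers ordinary least squares, so the same routine covers both cases.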
A.6.8 CORRELATION COEFFICIENT
This is a (dimensionless) measure of the linear association between two variables. The usual product-moment correlation coefficient is given by

r = Σ(x_i − x̄)(y_i − ȳ) / √[Σ(x_i − x̄)² Σ(y_i − ȳ)²]

It can be shown that −1 ≤ r ≤ +1. If a linear regression model is fitted then it can be shown that

r² = 1 − (residual SS)/(total SS) = coefficient of determination
Other measures of correlation are available for other types of data, such as discrete and ranked data. An example is Spearman's rank correlation coefficient, which is given by

r_s = 1 − 6Σd_i² / [n(n² − 1)]

where d_i is the difference in the rankings of the ith x- and y-observations. Rank correlations require less in the way of assumptions than product-moment correlations and should probably be used more often than they are. Tables of critical values are available to help decide which correlations are significantly large, but an adequate approximation for most purposes is that values outside the range ±2/√n are significant. It is harder to say how large a correlation needs to be in order to be judged 'interesting'. For example, if |r| < 0.3, then the fitted line explains less than 10% of the variation (r² = 0.09) and will probably be of little interest even if the sample size is large enough to make it significant. More generally it can be hard to assess and interpret correlations, as illustrated by Exercises C.1 and C.2. Finally we note that the most common mistake in interpreting 'large' correlations is to suppose that they demonstrate a cause-and-effect relationship.
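Both coefficients are easy to compute directly; a minimal sketch, with monotone but non-linear data invented to show Spearman's coefficient reaching 1 while the product-moment coefficient does not:

```python
import numpy as np

def pearson_r(x, y):
    """Product-moment correlation coefficient."""
    xd, yd = x - x.mean(), y - y.mean()
    return (xd @ yd) / np.sqrt((xd @ xd) * (yd @ yd))

def spearman_r(x, y):
    """Rank correlation via 1 - 6*sum(d^2)/(n(n^2-1)); assumes no ties."""
    rx = x.argsort().argsort()   # ranks 0..n-1; rank differences unaffected
    ry = y.argsort().argsort()
    d = rx - ry
    n = len(x)
    return 1 - 6 * np.sum(d ** 2) / (n * (n ** 2 - 1))

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.0, 4.0, 9.0, 16.0, 25.0])   # y = x^2: monotone, not linear
print(pearson_r(x, y), spearman_r(x, y))    # Spearman is exactly 1 here
```

The perfect rank correlation reflects the monotone relationship, while the product-moment coefficient falls short of 1 because the relationship is not linear.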
Although a correlation coefficient can be calculated when x is a controlled variable, it is usually more meaningful when both x and y are random variables whose joint distribution is bivariate normal. Then r provides a sensible estimate of the population correlation coefficient, which is usually denoted by ρ. In the bivariate normal case it can also be shown that the regression curves of y on x and of x on y are both straight lines and can be estimated by the regression techniques described above. To estimate the regression curve of x on y, all formulae are 'reversed', changing x to y and vice versa. The larger the correlation, the smaller will be the angle between the two regression lines.

A.6.9 LOGISTIC REGRESSION
This is a special type of regression which may be appropriate when the response variable is binary. It is briefly discussed in section A.9.

A.6.10 NON-PARAMETRIC REGRESSION
There is much current interest in this form of regression, where the observations are assumed to satisfy

y_i = g(x_i) + ε_i

where the form of the function g is determined from the data by smoothing rather than by being specified beforehand. The residual variation is usually assumed to have constant variance (although this assumption can be relaxed). There are several approaches, which generally trade off goodness-of-fit with some measure of smoothness. For example the spline smoothing approach (e.g. Silverman, 1985) chooses g so as to minimize Σ[y_i − g(x_i)]² in conjunction with a 'roughness' penalty, depending on the second derivative of g, which ensures that g is 'reasonably smooth'. This leads to a function, ĝ, called a cubic spline, which has the properties that it is a cubic polynomial in each interval (x_i, x_{i+1}) and that ĝ and its first two derivatives are continuous at each x_i. The x_i-values are called knots.
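Spline smoothing itself needs a specialist routine, but the same idea, estimating g by local averaging rather than by a formula fixed in advance, can be illustrated with a simple Gaussian kernel smoother. This is a different smoother from the spline method described above, chosen only because it fits in a few lines; the data and bandwidth are invented:

```python
import numpy as np

def kernel_smooth(x, y, h, grid):
    """Nadaraya-Watson estimate of g: a locally weighted mean of the y's.
    The bandwidth h plays the role of the smoothing parameter."""
    g = np.empty(len(grid))
    for k, x0 in enumerate(grid):
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)   # Gaussian kernel weights
        g[k] = np.sum(w * y) / np.sum(w)
    return g

rng = np.random.default_rng(3)
x = np.linspace(0.0, np.pi, 60)
y = np.sin(x) + rng.normal(scale=0.1, size=60)   # noisy observations of g
grid = x[10:-10]                                 # interior points (edges are biased)
ghat = kernel_smooth(x, y, h=0.3, grid=grid)
print(float(np.max(np.abs(ghat - np.sin(grid)))))
```

A small h tracks the data closely (rough ĝ); a large h gives a very smooth but possibly badly biased ĝ, the same trade-off the roughness penalty controls for splines.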
A.6.11 CALIBRATION
Regression is concerned with predicting y for a given value of x, often called the prediction problem. The reverse problem, called the calibration problem, is to decide which value of x leads to a specified mean value of y, say y₀. The 'obvious' classical estimator in the linear case is given by x̂ = (y₀ − α̂)/β̂, but note that this is not an unbiased estimator. There are several alternative approaches (e.g. Miller, 1986).

FURTHER READING
Most textbooks provide an introduction to regression. Weisberg (1985) provides a more thorough introduction. Wetherill (1986) deals with many of the practical
problems involved in multiple regression. Draper and Smith (1981) is the acknowledged reference text on regression. Cook and Weisberg (1982) and Atkinson (1985) provide a detailed treatment of residuals and influence in regression. The following poem provides a salutary end to this section.

The Ballade of Multiple Regression

If you want to deal best with your questions,
Use multi-regression techniques;
A computer can do in a minute
What, otherwise done, would take weeks.
For 'predictor selection' procedures
Will pick just the ones best for you
And provide the best-fitting equation
For the data you've fitted it to.

But did you collect the right data?
Were there 'glaring omissions' in yours?
Have the ones that score highly much meaning?
Can you tell the effect from the cause?
Are your 'cause' factors ones you can act on?
If not, you've got more work to do;
Your equation's as good, or as bad, as
The data you've fitted it to.

Tom Corlett, Applied Statistics, 1963, 12, p. 145 (first two verses only)
A.7 Analysis of variance (ANOVA)
ANOVA is a general technique for partitioning the overall variability in a set of observations into components due to specified influences and to random error (or haphazard variation). The resulting ANOVA table provides a concise summary of the structure of the data and a descriptive picture of the different sources of variation. In particular an estimate of the error variance (the residual mean square) is produced. For a general linear model with normal errors (see section A.8) this in turn allows the estimation of effects and the testing of hypotheses about the explanatory influences, usually by means of F-tests. ANOVA can be applied to experimental designs of varying complexity, and we have already seen it applied to a linear regression model (see section A.6) where the explanatory influence is the effect of the predictor variable. Here we consider the 'one-way' case in detail and refer briefly to more complicated designs.

Suppose we have k groups of observations with n_i observations in group i, such that n = Σ n_i, and let y_ij denote the jth observation in group i. Denote the ith group mean by ȳ_i = Σ_j y_ij/n_i and the overall mean by ȳ. The purpose of the experiment is probably to assess the differences between groups. (How large are the differences? Are they significantly large?) In order to do this, we start with an IDA, calculating summary statistics and plotting a series of box plots. If the results are not obvious, or if precise estimates of the residual variation and of CIs are required, then a formal ANOVA is needed, which compares the variability between groups with the variability within groups. The total corrected sum of squares of the y-values, namely Σ_{i,j}(y_ij − ȳ)², is partitioned into the between-groups SS and the residual within-groups SS in Table A.7.1 below. By writing the 'total' deviation as the sum of a 'within' and a 'between' deviation, squaring and summing over all i, j, it can be shown that the sums of squares 'add up'. Table A.7.1 also shows the appropriate degrees of freedom (DF) and the mean squares (MS = SS/DF). The residual MS, s², provides an estimate of the underlying residual variance and can be used in an F-test (to see if the between-groups MS, s_B², is significantly large) and/or to estimate confidence intervals for differences between group means. (This emphasizes that ANOVA is not just used for testing hypotheses.)

Table A.7.1 One-way ANOVA table

Source of variation            SS                      DF       MS
Between-groups                 Σ_i n_i(ȳ_i − ȳ)²       k − 1    s_B²
Within-groups (or residual)    Σ_{i,j}(y_ij − ȳ_i)²    n − k    s²
Total                          Σ_{i,j}(y_ij − ȳ)²      n − 1
The F-test is based on the ratio s_B²/s² and assumes the following model:

y_ij = μ + t_i + ε_ij    (i = 1, ..., k; j = 1, ..., n_i)    (A.7.1)

where μ = overall mean, t_i = effect of the ith group and ε_ij = random error for the jth observation in group i. The errors are assumed to be normally distributed with mean zero and constant variance, σ², and also to be independent. Note the large number of assumptions! If the group effects {t_i} are regarded as fixed, with Σt_i = 0, then we have what is called a fixed-effects model. However if the {t_i} are assumed to be a random sample from
N(0, σ_t²), then we have what is called a random-effects model (or variance-components model). In a one-way ANOVA, the null hypothesis for the fixed-effects model is that there is no difference between groups, so that all the t_i are zero. Then E(s_B²) = E(s²) = σ² under H₀. H₀ is rejected at the 5% level if the observed F-ratio = s_B²/s² is significantly large compared with F_{0.05; k−1, n−k}. A point estimate for t_i is given by (ȳ_i − ȳ). Perhaps of more importance are the differences between groups, and a point estimate of, say, (t₁ − t_i) is given by (ȳ₁ − ȳ_i). Confidence intervals may also be found using s/√n_i as the estimated standard error of ȳ_i. The general process of seeing which group means differ significantly from each other is referred to as making multiple comparisons. The simplest method, called the least significant difference approach, should only be used when the ANOVA F-test is significant. It says that the means of groups i and j differ significantly (at the 5% level) if their absolute difference exceeds t_{0.025, n−k} × s√(1/n_i + 1/n_j).
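The calculations of Table A.7.1 and the F-ratio can be sketched directly; the three groups of data below are invented for illustration:

```python
import numpy as np

def one_way_anova(groups):
    """Return between-groups MS, residual MS and the F-ratio
    for a list of 1-D arrays (one array per group)."""
    all_y = np.concatenate(groups)
    grand = all_y.mean()
    k = len(groups)
    n = len(all_y)
    between_ss = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    within_ss = sum(((g - g.mean()) ** 2).sum() for g in groups)
    s2_b = between_ss / (k - 1)       # between-groups mean square
    s2 = within_ss / (n - k)          # residual mean square
    return s2_b, s2, s2_b / s2

groups = [np.array([10.0, 12.0, 11.0]),
          np.array([14.0, 15.0, 16.0]),
          np.array([10.0, 11.0, 12.0])]
s2_b, s2, F = one_way_anova(groups)
print(F)   # F = 16.0 for these data
```

Comparing F = 16.0 with the appropriate F-distribution on (2, 6) degrees of freedom would complete the significance test.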
A two-way ANOVA is appropriate for data such as that arising from a randomized block design (section A.11), where there is one observation on each of k treatments in each of r blocks, and model (A.7.1) is extended to

y_ij = μ + t_i + b_j + ε_ij    (A.7.2)

where b_j denotes the effect of the jth block. Then the total variation is partitioned into sums of squares due to treatments, to blocks and to the residual variation. In model (A.7.2) the t_i and b_j are sometimes referred to as the main effects. In a replicated complete factorial experiment (section A.11), it is also possible to estimate interaction terms which measure the joint effect of the levels of two or more effects. Then model (A.7.2) would be extended to include terms of the form γ_ij, which denotes the joint effect of factor I at the ith level and factor II at the jth level. If there are no interaction terms, the two factors are said to be additive.

It is impossible and unnecessary to give detailed formulae for all the many types of design which can arise. What is important is that you should:

1. understand the main types of experimental design (section A.11)
2. understand the underlying model which is applicable to the given data structure; for example, you should know if a fixed-effects or random-effects model is appropriate and know which main effects and/or interaction terms are included
3. be able to interpret computer output; in particular you should be able to pick out the residual mean square and understand its key role in estimation and significance testing.
The use of ANOVA is illustrated in several exercises in Part II including B.2 (a one-way ANOVA), B.9 (a one-way ANOVA after transformation), E.1 (a two-way ANOVA), E.2 (a three-way ANOVA for a Latin square) and E.3 (an unbalanced two-way ANOVA).

FURTHER READING
Numerous textbooks cover ANOVA. One rather unusual book (described as 'the confessions of a practising statistician'), which gives much useful advice, is that by Miller (1986), entitled Beyond ANOVA, Basics of Applied Statistics.
A.8 The general linear model
This general class of models includes regression and ANOVA models as special cases. By using matrix notation, many general results may be expressed in a relatively simple way. Let y denote the (n × 1) vector of observations on a response variable y, and β denote a (p × 1) vector of (usually unknown) parameters. Then the general linear model can be written as

y = Xβ + e    (A.8.1)

where X is an (n × p) matrix of known quantities and e is an (n × 1) vector of random error terms. Note that each element of y is a linear combination of the (unknown) parameters plus an additive error term. In regression, the elements of X will include the observed values of the k predictor variables. If the regression model also includes a constant term, then X will include a column of ones and we have p = k + 1. In ANOVA the elements of X are chosen to include or exclude the appropriate parameters for each observation and so are usually 0 or 1. Each column of X can then be regarded as an indicator variable and X is usually called the design matrix. In the analysis of covariance, which can be regarded as a mixture of regression and ANOVA, X will include a mixture of predictor and indicator variables. Here the predictor variables are sometimes called covariates or concomitant variables.

It is often assumed that the elements of e are independent normally distributed with zero mean and constant variance, σ². Equivalently, using the multivariate normal distribution, we can write e ~ N(0, σ²Iₙ) where 0 is an (n × 1) vector of zeros, Iₙ is the (n × n) identity matrix and σ²Iₙ is the variance-covariance matrix of e. The least squares estimate of β is chosen to minimize the residual sum of squares and is obtained by solving the normal equations

(XᵀX)β̂ = Xᵀy    (A.8.2)

If X is of full rank p (assuming n > p), then (XᵀX) is square, symmetric and non-singular and can be inverted to give

β̂ = (XᵀX)⁻¹Xᵀy    (A.8.3)
The Gauss-Markov theorem says that these estimates are the best (i.e. minimum variance) linear unbiased estimates. The least squares estimates are also maximum likelihood estimates if the errors are normally distributed. The generalized least-squares estimate of β is appropriate when the variance-covariance matrix of e is of the more general form σ²Σ, where Σ is (n × n) symmetric and positive definite. Then (A.8.3) becomes

β̂ = (XᵀΣ⁻¹X)⁻¹XᵀΣ⁻¹y    (A.8.4)

In particular if Σ is diagonal, so that the errors are uncorrelated but perhaps have unequal variances, then we have weighted least squares. Note that (A.8.3) is a special case of (A.8.4) and that Var(β̂) = σ²(XᵀΣ⁻¹X)⁻¹. In regression, the solution of (A.8.3) and (A.8.4) is numerically more stable if mean-corrected values of the predictor variables are used (equation A.6.2), and then (XᵀX) is called the corrected cross-product matrix. It may also help to scale the predictor variables to have unit variance. The solution of (A.8.3) and (A.8.4) will become unstable if the predictor variables are highly correlated, so that there is near linear dependence between them. Then (XᵀX) may be ill-conditioned (or nearly singular), so that there is difficulty in finding its inverse. Ways of overcoming this problem were discussed briefly in section A.6. We return now to the case where Var(e) = σ²Iₙ. Then the vector of fitted values, ŷ, is given by
ŷ = Xβ̂ = X(XᵀX)⁻¹Xᵀy = Hy    (A.8.5)
where H = X(XᵀX)⁻¹Xᵀ is called the hat matrix because it predicts the fitted (hat) values of y from the observed values. The vector of raw residuals is given by

e = y − ŷ

and an unbiased estimate of σ² is given by

σ̂² = eᵀe/(n − p) = (residual SS)/(residual DF).
The diagonal elements of H, namely {h_ii}, are useful in a variety of ways. The effect of the ith observation is more likely to be 'large' if h_ii is 'large', and so h_ii is called the leverage or potential of the ith observation. The raw residuals can be misleading because they have different standard errors. Most computer packages therefore standardize to give scaled or studentized residuals, which will have common variance equal to one if the model is correct. The internally studentized residuals are given by

r_i = e_i / [σ̂ √(1 − h_ii)]
for i = 1, 2, ..., n, where e_i is the raw residual for the ith observation and σ̂ is the overall estimate of σ. The externally studentized residuals are given by

t_i = e_i / [σ̂(i) √(1 − h_ii)]

where σ̂(i) denotes the estimate of σ obtained without the ith observation. As a rough rule, studentized residuals which are larger than about three (positive or negative) are 'large' and worthy of further investigation. They may indicate an outlier. One measure of the influence of the ith observation is Cook's distance, which is given by

D_i = r_i² h_ii / [p(1 − h_ii)]
Values of D_i which exceed unity are generally regarded as indicating the presence of an influential observation. Note that it is possible for an observation with high influence to yield a high (standardized) residual and low leverage (an outlier in regard to the y-value), or a low residual and high leverage (an observation which appears to fit in with the model as fitted to the rest of the data but has a large effect, perhaps because its x-values are a 'long way' from the rest of the data; such an observation may be regarded as an outlier in the x-values). Thus influential observations may, or may not, give a y-value which is an outlier, and may, or may not, cast doubts on the analysis. It is always wise to find out why and how an observation is influential.
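The quantities of this section, leverages, studentized residuals and Cook's distances, follow directly from the formulae above; a minimal sketch, with an invented data set containing one point a 'long way' from the rest in x:

```python
import numpy as np

def influence_measures(X, y):
    """Leverages, internally studentized residuals and Cook's distances
    for the general linear model y = X beta + e."""
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T          # hat matrix
    e = y - H @ y                                  # raw residuals
    h = np.diag(H)                                 # leverages h_ii
    s2 = (e @ e) / (n - p)                         # unbiased estimate of sigma^2
    r = e / np.sqrt(s2 * (1 - h))                  # internally studentized
    D = r ** 2 * h / (p * (1 - h))                 # Cook's distances
    return h, r, D

# straight-line data with one point far away in x
x = np.array([1.0, 2.0, 3.0, 4.0, 20.0])
y = np.array([1.1, 1.9, 3.2, 3.9, 19.8])
X = np.column_stack([np.ones_like(x), x])
h, r, D = influence_measures(X, y)
print(h.round(2))   # the last observation has much the largest leverage
```

A useful check on such code is that the leverages must sum to p, the trace of the hat matrix.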
A.9 The generalized linear model
Many statistical models can be written in the general form:

observation = systematic component + random component

or in symbols:

Y_i = μ_i + ε_i    (A.9.1)

where Y_i denotes the ith observed random variable, μ_i = E(Y_i) and E(ε_i) = 0. In the general linear model, μ_i is assumed to be a linear function of the explanatory variables, namely μ_i = x_iᵀβ, where x_i is the (p × 1) vector of explanatory variables for the ith observation, and β is a (p × 1) vector of parameters. The errors are assumed to be independent N(0, σ²) variables. In the generalized linear model, the error distribution is allowed to be more general and some function of μ_i is assumed to be a linear combination of the β's. More precisely, the generalized linear model assumes that:
1. The random variables {Y_i} are independent and have the same distribution, which must be from the exponential family. The exponential family of distributions includes the normal, gamma, exponential, binomial and Poisson distributions as special cases.
2. There is a link function, g (which must be a monotone differentiable function), such that

g(μ_i) = x_iᵀβ    (A.9.2)

is a linear function of the x-values.

The quantity η_i = x_iᵀβ is sometimes called the systematic linear predictor, and then g(μ_i) = η_i. If the Y's and the ε's are normally distributed and g is the identity link function, then it is easy to see that the generalized linear model reduces to the general linear model and so includes ANOVA, regression, etc. However, the generalized model can also describe many other problems. For example, if the Y's follow a Poisson distribution, and g is the logarithmic function, then we have what is called a log-linear model. This is widely applied to count (or frequency) data in contingency tables. If μ_ij is the expected frequency in the ith row and jth column, then log μ_ij is modelled by the sum of row and column terms (the main effects) and perhaps also by interaction terms. The model can be motivated by noting that μ_ij = n p_ij, where n is the total frequency, and that, if rows and columns are independent, then p_ij = p_i. p_.j in an obvious notation, so that log p_ij = log p_i. + log p_.j is the sum of row and column effects. The log-linear model emphasizes that it is the ratio of frequencies which matters (Exercise G.7(b)).

Another important application is to binary data. Suppose each Y_i follows a binomial distribution with parameters n_i and p_i, and that p_i depends on the values of the explanatory variables. Then μ_i = n_i p_i and there are two link functions in common use. The logit transformation of p_i is defined by log[p_i/(1 − p_i)] and this equals the corresponding link function, which is
g(μ_i) = log[μ_i/(n_i − μ_i)] = log[n_i p_i/(n_i − n_i p_i)] = log[p_i/(1 − p_i)]    (A.9.3)
If g(μ_i) = x_iᵀβ is a linear function of the predictor variables, then the resulting analysis is called logistic regression or logit analysis. An alternative link function, which often gives results which are numerically very similar, is the probit transformation given by

g(p_i) = Φ⁻¹(p_i)    (A.9.4)
where Φ denotes the cdf of the standard normal distribution. Φ⁻¹(p_i) is usually called the probit of p_i and the resulting analysis is called a probit analysis. To illustrate the use of logistic regression or probit analysis, suppose we observe the proportion of rats dying at different doses of a particular drug (see also Exercise G.5). Suppose that n_i rats receive a particular dose level, say x_i, and that r_i
rats die. Then we would expect r_i to follow a binomial distribution with parameters n_i, p_i, where p_i is the population proportion which will die at that dose level. Clearly p_i will vary non-linearly with x_i as p_i is bounded between 0 and 1. By taking a logit or probit transform of p_i, we can fit a generalized linear model. For logistic regression we have logit(p_i) = log[p_i/(1 − p_i)] = α + βx_i. The median lethal dose, denoted by LD₅₀, is the dose level, x, at which half the rats die (i.e. p = 0.5). Then, since logit(0.5) = 0, we find

LD₅₀ = −α/β.

The GLIM package (Appendix B) allows the user to fit four distributions (normal, gamma, Poisson, and binomial) and eight link functions in certain combinations which make practical sense. The link functions include square root, exponent and reciprocal transformations as well as the identity, logarithmic, logit and probit transformations. As for a general linear model, the generalized linear model allows the explanatory variables to be continuous, categorical or indicator-type variables. After fitting the model, the user should look at the standardized residuals (i.e. raw residuals divided by the corresponding standard error) and assess the goodness-of-fit in a somewhat similar way to that used for the general linear model. However, goodness-of-fit is assessed, not by looking at sums of squares, but by looking at log-likelihood functions, and ANOVA is replaced by an analysis of deviance. Suppose we are interested in a proposed model with r (< n) parameters. Its log-likelihood is compared with the log-likelihood of the 'full' model, containing n parameters, for which μ_i = y_i for all i. The (scaled) deviance of the model is defined to be twice the difference in log-likelihoods, namely

Deviance = −2 log [likelihood of proposed model / likelihood of full model]    (A.9.5)
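The dose-response calculation above can be sketched by fitting the logistic model with iteratively reweighted least squares, the usual fitting algorithm for generalized linear models; the rat-mortality counts below are invented for illustration:

```python
import numpy as np

def fit_logistic(x, n, r, iters=25):
    """Fit logit(p_i) = a + b*x_i to binomial counts r_i out of n_i
    by iteratively reweighted least squares."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(iters):
        eta = X @ beta
        p = 1 / (1 + np.exp(-eta))
        W = n * p * (1 - p)                    # binomial weights
        z = eta + (r - n * p) / W              # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

dose = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
n = np.array([40.0, 40.0, 40.0, 40.0, 40.0])
deaths = np.array([4.0, 10.0, 20.0, 30.0, 36.0])   # symmetric about dose 3
a, b = fit_logistic(dose, n, deaths)
ld50 = -a / b
print(ld50)   # LD50 is 3.0 for this symmetric data
```

Because the observed proportions here lie exactly on a logistic curve, the fitted LD₅₀ recovers the centre of symmetry of the invented data.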
The null model is defined as the model containing no explanatory variables, for which μ_i is a constant (so r = 1). This model has the largest possible deviance and is often fitted first for comparison purposes. In order to compare a given model I with a more complicated model II containing m extra parameters, it can be shown that

deviance (model I) − deviance (model II) ~ χ²_m

if model I, with fewer parameters, is adequate. For the general linear model, note that the deviance simply reduces to the (scaled) residual sum of squares.

FURTHER READING
Dobson (1983) provides a readable introduction, while McCullagh and Nelder (1983) give a more thorough advanced treatment.
A.10 Sample surveys
Sample surveys are widely used in areas such as market research, sociology, economics and agriculture, both to collect (objective) factual information and (subjective) personal opinions. The idea is to examine a representative subset (called the sample) of a specified population. The (target) population is the aggregate of all individuals, households, companies, farms, or of whatever basic unit is being studied, and needs to be carefully defined. The basic units which comprise the population are variously called sampling units, elementary units, elements or simply units. A complete list of the sampling units is called a frame. This list should be as accurate as possible, but may well contain errors, so that the sampled population differs somewhat from the target population. For example, electoral registers, which are often used as frames, are always out-of-date to some extent. If a survey covers virtually all the units in the population, then we have what is called a complete survey or census. However, on grounds of speed, accuracy and cost, a sample is nearly always preferred to a census. If the sampling units are selected by appropriate statistical methods, we have a sample survey. When the sampling units are human beings, the main methods of collecting information are:

1. face-to-face interviewing
2. postal surveys
3. telephone surveys
4. direct observation.
Face-to-face interviewing is used widely, can give a good response rate, and allows the interviewer some flexibility in asking questions. Field checks are advisable to ensure that interviewers are doing their job properly. Postal surveys are much cheaper to run but generally give a much lower response rate (perhaps as low as 10%). Follow-up requests by post or in person may be necessary. Telephone surveys are increasingly used for many purposes because they are relatively cheap and yield a much higher response rate. However, the possibility of bias must be kept in mind because not everyone has a telephone, and the selection of 'random' telephone numbers is not easy. A pilot survey usually plays a vital role in planning a survey. This is a small-scale version of the survey as originally planned. It is essential for trying out the proposed questions and eliminating teething problems. It should answer the following questions: (a) Is the questionnaire design adequate? (b) How high is the non-response rate? (c) How variable is the population? (d) Should the objectives be changed in any way? The answers to (b) and (c) can be useful in determining the sample size required for a given accuracy. There are many sources of error and bias in sample surveys and it is essential to anticipate them and take precautions to minimize their effect. The simplest type of error, called sampling error, arises because a sample is taken rather than a census.
Errors of this kind are relatively easy to estimate and control. However, other sources of error, called non-sampling error, are potentially more damaging. Possible sources include:

1. the use of an inadequate frame
2. a poorly designed questionnaire
3. interviewer bias
4. recording and measurement errors
5. non-response problems.
A.10.1 TYPES OF SAMPLE DESIGN
There are many different types of sample design. The main aim is to select a representative sample, avoid bias and other non-sampling errors, and achieve maximum precision for a given outlay of resources. The two most important types of sampling procedure are random and quota sampling. In random sampling, the sample is preselected from the entire population using a random selection procedure which gives every member of the population a non-zero, calculable chance of being selected. However, in quota sampling, the choice of sampling units is left to the interviewer, subject to 'quota controls' designed to ensure that characteristics such as age and social class appear in the sample in a representative way.

A simple random sample involves taking a sample of n units from a population of N units without replacement, in such a way that all possible samples of size n have an equal chance of being selected. It can then be shown that the sample mean, x̄, of a particular variable, x, is an unbiased estimate of the underlying population mean, with variance

Var(x̄) = (1 − f)σ²/n

where f = n/N is the sampling fraction and σ² is the population variance of the x-values. The factor (1 − f) is called the finite population correction to the 'usual' formula for the variance of a sample mean. Simple random samples may be theoretically appealing, but are rarely used in practice for a variety of practical reasons.

In stratified random sampling, the population is divided into distinct subgroups, called strata, and then a simple random sample is taken from each stratum. The two main reasons for doing this are: (a) to use one's knowledge about the population to make the sample more representative and hence improve the precision of the results; (b) to get information about subgroups of the population when these are of interest in themselves. If the same sampling fraction is taken from each stratum, we have what is called proportional allocation.
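The finite population correction is easy to check by simulation; a minimal sketch with an invented population, where the population variance is computed with divisor N − 1 (the usual survey-sampling convention):

```python
import numpy as np

def var_sample_mean(s2, n, N):
    """Var(x-bar) = (1 - f) * s^2 / n for simple random sampling
    without replacement, where f = n/N is the sampling fraction."""
    return (1 - n / N) * s2 / n

rng = np.random.default_rng(0)
pop = rng.normal(size=200)             # an artificial population of N = 200
s2 = pop.var(ddof=1)                   # population variance (divisor N - 1)
n = 50
theory = var_sample_mean(s2, n, len(pop))
means = [rng.choice(pop, size=n, replace=False).mean() for _ in range(5000)]
print(np.var(means), theory)           # the two should agree closely
```

With f = 0.25 here, the correction shrinks the naive variance σ²/n by a quarter, which the simulated variance of the 5000 sample means reproduces.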
Multi-stage sampling arises when the population is regarded as being composed of a number of first-stage units (or primary sampling units or psu's), each of which is composed of a number of second-stage units, and so on. A random sample of first-stage units is selected, and then a random sample of second-stage units from selected first-stage units, and so on. This type of sample is generally less accurate than a simple random sample of the same size but has two important advantages: (a) It permits the concentration of fieldwork by making use of the natural grouping of units at each stage. This can reduce costs considerably and allows a large sample for a given outlay. (b) It is unnecessary to compile a frame for the entire population. Cluster sampling is a special type of multi-stage sampling in which groups or clusters of more than one unit are selected at the final stage and every unit is examined.

Quota sampling does not involve truly random selection and can give biased samples. However its simplicity means it is widely used. The cost per interview is lower, the sample can be taken more quickly, and no sampling frame is required. However, there is no valid estimate of error and bias may be unwittingly introduced. A widely used rule of thumb is to suppose that the standard error of a value derived from a quota sample is twice as large as the standard error that would result for a simple random sample of the same size.

Two other types of sampling are judgemental and systematic sampling. In the former, an 'expert' uses his/her knowledge to select a representative sample. This procedure is not random and can be dangerously biased. In systematic sampling, the elements in the sampling frame are numbered from 1 to N. The first unit in the sample is selected at random from the first k units. Thereafter every kth element is selected systematically. The value of k is N/n.
This procedure is also not random, but can be very convenient provided that there is no possibility of periodicity in the sampling frame order.
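Systematic selection itself is trivial to code; a minimal sketch (the frame and the seed are invented for illustration):

```python
import random

def systematic_sample(frame, n):
    """Select every kth unit after a random start, with k = N // n."""
    k = len(frame) // n
    start = random.randrange(k)        # random start among the first k units
    return frame[start::k][:n]

random.seed(42)
frame = list(range(1, 101))            # a numbered frame with N = 100 units
sample = systematic_sample(frame, 10)  # so k = 10
print(sample)
```

The fixed spacing k between selected units is exactly why periodicity in the frame order would be dangerous: if the frame cycled with period k, every selected unit would fall at the same point in the cycle.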
A.10.2 QUESTIONNAIRE DESIGN
The first requirement is to define carefully the objectives of the survey and write down the information that is required. The temptation to include too many questions should be resisted. It is one thing to say that all questions should be clear, concise, simple and unambiguous, but quite another to achieve this in practice. Of course all questions should be rigorously tested, and a pilot survey is essential. The design depends to some extent on whether it is to be filled in by the respondent, or by an interviewer, and on whether the data are to be coded at the same time as they are recorded. Most questions may be classified as factual or opinion. The latter are harder to construct, as they are much more sensitive to small changes in question wording and in the emphasis given by the interviewer. Most questions may also be classified as open (or free-answer) or closed. In the latter case, the respondent has to choose from a limited list of possible answers. The following principles are worth noting in regard to question wording:
1. Use simple, concise everyday language.
2. Make your questions as specific as possible. Avoid ambiguity by trying them out on several different people.
3. Avoid leading questions and the use of unfairly 'loaded' words. In fact it is quite difficult to make questions completely neutral.
4. The use of an implied alternative should be avoided. Thus the last two words of the question 'Do you think this book is well-written or not?' are vital, as without them the alternative is only implied, and too many people will tend to agree with the interviewer.
5. Do not take anything for granted, as people do not like to admit ignorance of a subject.
A.10.3
THE PROBLEM OF NON-RESPONSE
It is often impossible to get observations from every unit in the selected sample and this gives rise to the problem of non-response, which can arise for a variety of different reasons. If non-respondents have different characteristics to the rest of the population, then there is liable to be bias in the results. In postal surveys, if reminders are sent to people who do not reply to the first letter, then the results from the second wave of replies can be compared with those who replied at once. In personal interviews, non-response may arise because a respondent refuses to co-operate, is out at the time of call, has moved home, or is unsuitable for interview. There are several methods of coping with 'not-at-homes', including calling back until contact is made, substituting someone else such as a next-door neighbour, and subsampling the non-respondents rather than trying to contact them all.
A.10.4
CONCLUDING REMARKS
The usefulness of sample surveys is not in question, but it still pays to view survey results with some scepticism. Different sample designs and slightly different questions may yield results which differ substantially, particularly for sensitive opinion questions. Non-sampling error is generally more important than sampling variation. Indeed there is a well-known saying in the social sciences that 'any figure which looks interesting is probably wrong!'. While this may be an exaggeration, opinion survey results should be regarded as giving orders of magnitude rather than precise estimates, especially when non-response is present. As one example, a survey of patients at a local hospital produced a response rate of only 30% from a random sample of size 500. This low response rate made valid inference about population parameters very difficult. Nevertheless, the overwhelming reported dissatisfaction with one aspect of patient care was enough to justify immediate action.
FURTHER READING
There is much to be said for consulting the classic texts by Cochran (1963) for the theory of sampling, by Moser and Kalton (1971) for many practical details, and by
214
A digest of statistical techniques
Kish (1965) for a long reference source on both theory and practice. There are several more specialized books on recent practical developments such as telephone surveys, the use of longitudinal surveys, and the use of consumer panels.
A.11
The design of experiments
Experiments are carried out in all branches of science. Some are well designed but others are not. This section gives brief guidance on general principles, particularly for avoiding systematic error and increasing the precision of the results. Clinical trials are discussed as a special case in section A.12. Comparative experiments aim to compare two or more treatments, while in factorial experiments, the response variable depends on two or more variables or factors. The value that a factor takes in a particular test is called the level, and a treatment combination is a specific combination of factor levels. The experimental unit is the object or experimental material on which a single test, or trial, is carried out. Before designing an experiment, clarify the objectives carefully and carry out thorough preliminary desk research. Choose the treatments which are to be compared or the factors which are to be assessed. Choose the experimental units which are to be used. Select a suitable response variable, which may be a function of the measured variables. Decide how to apply the treatments or treatment combinations to the units, and decide how many observations are needed. If the observations can only be taken one at a time, then the order of the tests must also be decided. If the same experimental unit is used for more than one test, then carry-over effects are possible and some sort of serial design may be desirable. One important general principle for eliminating unforeseen bias is that the experimental units should be assigned by a procedure which involves some sort of randomization. If the same unit is used for several tests, then the randomization principle should also be applied to the order of the tests. Complete randomization is often impossible or undesirable and some sort of restricted randomization (e.g. using blocking; see below) is often preferred.
A second useful principle is that of replication, in that estimates will be more precise if more observations are taken. Observations repeated under as-near-identical conditions as possible are particularly helpful for assessing experimental error. A third general principle is blocking (see below), whereby any natural grouping of the observations is exploited to improve the precision of comparisons. The analysis of covariance (section A.8) is a fourth general approach to improving precision which makes use of information collected on 'concomitant' variables (i.e. variables which 'go together' with the response variable; often measurements made on the experimental units before a treatment is applied). The simplest type of comparative experiment is the simple (one-way) randomized comparative experiment in which a number of observations are taken randomly on each treatment. The ensuing analysis aims to compare the resulting group means, usually by means of a one-way ANOVA.
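The one-way comparison just described can be illustrated with a short NumPy sketch (the three treatment groups are invented purely for illustration; in practice a statistics package would produce the full ANOVA table):

```python
import numpy as np

def one_way_anova(groups):
    """Return the F statistic for a one-way ANOVA comparing group means."""
    all_obs = np.concatenate(groups)
    grand_mean = all_obs.mean()
    k, n = len(groups), len(all_obs)
    # between-treatment sum of squares (k - 1 degrees of freedom)
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    # within-treatment (residual) sum of squares (n - k degrees of freedom)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# three hypothetical treatment groups of four observations each
a = np.array([23.1, 24.5, 22.8, 23.9])
b = np.array([26.0, 25.4, 27.1, 26.3])
c = np.array([23.8, 24.9, 24.1, 23.5])
print(one_way_anova([a, b, c]))
```

A large F statistic, judged against the F distribution on (k - 1, n - k) degrees of freedom, indicates that the group means differ by more than within-group variation can explain.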
The most common type of comparative experiment is the randomized block design. The tests are divided into groups or blocks of tests which are 'close together' in some way, as for example tests made by the same person or tests made on a homogeneous group of experimental units. Then an equal number of observations are taken on each treatment in each block. The order or allocation within blocks is randomized. This design allows between-block variation to be removed so that a more precise comparison of the treatment means can be made. The ensuing analysis usually involves a two-way ANOVA. There are various, more complicated, types of comparative experiment such as balanced incomplete block designs and Latin square designs (Exercise E.2), but we do not have space to describe them here. One important type of factorial experiment is the complete factorial experiment in which every possible combination of factor levels is tested the same number of times. There are various more complicated types of factorial experiment which may involve the idea of confounding. Two effects are said to be confounded if it is impossible to distinguish between them (or estimate them separately) on the basis of a given design. In a confounded complete factorial experiment the tests are divided into blocks and it is desirable to arrange the design so that the block effects are confounded with (hopefully unimportant) higher-order interactions. In a fractional factorial experiment, a fraction (e.g. one half or one quarter) of a complete factorial is taken and then every effect is confounded with one or more other effects, which are called aliases of one another. The general idea is to choose the design so that it is possible to estimate main effects and important low-order interactions, either uniquely (in confounded complete factorials) or aliased with unimportant higher-order interactions (in fractional factorials).
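To make the idea of aliasing in a fractional factorial concrete, the following sketch constructs the half replicate of a 2^3 design (the -1/+1 coding of levels is conventional; the choice of defining relation I = ABC is illustrative):

```python
from itertools import product

# full 2^3 factorial with factor levels coded -1 / +1 for factors A, B, C
full = list(product([-1, 1], repeat=3))

# half replicate defined by I = ABC: keep only the runs with A*B*C = +1
half = [(a, b, c) for a, b, c in full if a * b * c == 1]
for run in half:
    print(run)
```

In these four runs the column of A levels is identical to the column of BC products, so the main effect of A cannot be distinguished from the BC interaction; that is precisely what aliasing means.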
Fractional factorials are involved in the so-called Taguchi methods, which aim to minimize performance variation at the product design stage. Split-plot designs arise when there are blocks, called whole plots, which can be subdivided into subplots. The levels of one factor or treatment are assigned to the whole plots so that the effect of this factor is confounded with the block effect. The levels of the other factors are applied to specified subplots. A complete factorial experiment is called a crossed design since every level of one factor occurs with every level of a second factor. An important alternative class of designs are nested (or hierarchical) designs. With two factors, denoted by A and B, factor B is nested within factor A if each level of B occurs only with one level of factor A. Then model (A.7.2) would be changed to

Y_ijk = μ + A_i + B_j(i) + ε_ijk    (A.11.1)

where, for example,
Y_ijk = kth observation on the response variable with factor A at the ith level and factor B at the jth level, and B_j(i) = effect of factor B at the jth level for a specified value of i. (The bracket in the subscript indicates nesting.) Notice that model (A.11.1) contains no terms of the form {B_j}.
There are a number of specialized designs, such as composite designs, which are used in the study of response surfaces, where the response variable, y, is an unknown function of several predictor variables and particular interest lies in the maximum (or minimum) value of y. Optimal designs are concerned with allocating resources so as to achieve the 'best' estimate of the underlying model. 'Best' may be defined in several ways but usually involves the precision of estimated parameters. For example, a D-optimal design is chosen to minimize the determinant of (X^T X)^{-1}, in the notation of section A.8, since the variance-covariance matrix of the estimated coefficients in equation (A.8.3) is proportional to (X^T X)^{-1}. Although optimal designs are of much theoretical interest, they seem to be little used in practice because the theory requires the precise prior formulation of features (such as the underlying model) which are usually known only partially. The design and analysis of experiments still relies heavily on the elegance of the solution of the least squares equations arising from balanced, orthogonal designs. In fact the computing power now available allows most 'reasonable' experiments to be analysed. While it is still desirable for designs to be 'nearly' balanced and 'nearly' orthogonal, the requirement is perhaps not as compelling as it used to be (e.g. Exercise G.3). In a practical situation with special constraints, it is usually possible to use common sense to construct a sensible design with or without reference to standard designs such as those listed in Cochran and Cox (1957).
FURTHER READING
The two classic texts by Cochran and Cox (1957) and Cox (1958) remain useful today. Other more recent texts include Box, Hunter and Hunter (1978), John and Quenouille (1977) and Montgomery (1984). Steinberg and Hunter (1984) review recent developments and suggest directions for future research. Hahn (1984) presents some nice examples to illustrate how the general principles of data collection need to be tailored to the particular practical situation.
A.12
Clinical trials
A clinical trial may be described as any form of planned study which involves human beings as medical patients. Clinical trials are widely used by pharmaceutical companies to develop and test new drugs and are increasingly used by medical researchers to assess a wide variety of medical treatments such as diets, surgical procedures, the use of chemotherapy and different exercise regimes. Tests on healthy human volunteers and on animals have many features in common with clinical trials. The history of clinical trials is fascinating (see Pocock, 1983, Chapter 2). One of the first diagnostic uses of statistics came in 1835 when a Frenchman, Pierre Louis, showed that blood-letting was harmful rather than beneficial to patients. More fundamentally, he argued that all therapies should be open to scientific evaluation. It
may seem incredible that blood-letting could have been used so widely without supporting evidence, but how many treatments in use today are based on medical folklore rather than hard fact? For example, one expensive drug, with rather nasty side effects, was enthusiastically prescribed by doctors in the 1950s for a certain condition (retrolental fibroplasia) because it appeared to give a 75% 'cure' rate. However, a proper clinical trial showed that 75% of patients were 'cured' with no treatment whatsoever. The use of the drug was discontinued. After the thalidomide tragedy in the 1960s, many governments have laid down much stricter regulations for testing new drugs. Partly as a result, clinical trials and toxicological tests now constitute an increasingly important area of experimental design, but are still less familiar to many statisticians. Thus this section is longer than might be expected! In my limited experience, all experiments on human beings are tricky to handle and doctors have much talent for 'messing up' experiments! Drug trials are carried out in different phases. For example, there may be pre-clinical trials on animals, followed by Phase I, which is concerned with drug safety in human beings. Phase II is a small-scale trial to screen drugs and select only those with genuine potential for helping the patient. Phase III consists of a full-scale evaluation of the drug. Phase IV consists of post-marketing surveillance (or drug monitoring) to check on such matters as long-term side effects and rare extreme reactions. Phase III is what many people think of as the clinical trial. There should be a control group for comparative purposes which receives the current standard treatment or alternatively a placebo. The latter is an inert substance which should have no effect other than that caused by the psychological influence of taking medicine. The statistician should not be content with simply analysing the results, but rather should help in planning the whole trial.
It is becoming standard practice to develop a written protocol which documents all information about the purpose, design and conduct of the trial. It should describe the type of patient to be studied, the treatments which are to be compared, the sample size for each treatment, the method of assigning treatments to patients (the design), the treatment schedule, the method of evaluating the patient's response to the treatment and the procedure for carrying out interim analyses (if any). The absence of a proper protocol is a recipe for disaster. Deviations from the protocol are in any case likely to occur (e.g. ineligible patients are included), and common-sense decisions have to be made as to what to do about them in order to avoid getting biased results. Patient withdrawals can also be a problem. The design of the trial is crucial. Randomization should always be involved so that each patient is randomly assigned to the new or standard treatment. There are many potential biases without randomization which may not be apparent at first sight, and non-randomized studies, such as historical retrospective trials, are more likely to give (spurious) significant results. In particular, the systematic alternation of treatments may not give comparable groups, particularly if the doctors know which treatment is being allocated. Many drug trials are carried out in a 'double-blind' way
so that neither the patient nor the doctor knows which treatment the patient is receiving. The doctor's judgement of the effect of the treatment is then less likely to be biased. When each patient receives just one treatment, there are various ways of carrying out randomization. Treatments can be allocated randomly over the whole sample or within smaller homogeneous groups (blocks or strata). There are also more specialized designs such as the two-period crossover trial, where each patient receives two treatments, one after the other, with the order being randomized. The information for each patient is usually recorded on a form. The design of a 'good' form is important. In particular, the data should be suitable for transfer to a computer. It is unwise to try to record too much information. As results become available, the possibility of carrying out interim analyses and then adjusting the sample size needs to be considered, and sequential designs are of much current interest. The final analysis should not concentrate too much on testing the effect of the new treatment. It is more important to estimate the size of the effect as well as to assess any side-effects. In publishing results, it is sad that published work hardly ever reports non-significant results or the outcomes of confirmatory studies. It appears that publication is influenced by finding significance. One well-known treatment for cancer is still being used because of one historical non-randomized study which gave a significant result, even though four subsequent clinical trials have shown it to have little or no effect. Carrying out tests on human beings inevitably raises many ethical questions. One has to balance the welfare of individuals (or animals) in the trial against the potential benefit to the whole population in the future. When do we need to get 'informed consent' from patients? When is randomization ethical?
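Randomization within blocks, as mentioned above, is often implemented as 'permuted blocks', which keeps the treatment groups balanced throughout recruitment. A minimal sketch (the block size and treatment labels are hypothetical):

```python
import random

def permuted_blocks(n_patients, treatments=("new", "standard"), block_size=4):
    """Assign patients to treatments in randomly permuted blocks.

    Within every block of consecutive patients each treatment appears
    equally often, so group sizes stay balanced as the trial proceeds.
    """
    per_block = block_size // len(treatments)
    schedule = []
    while len(schedule) < n_patients:
        block = list(treatments) * per_block
        random.shuffle(block)      # randomize the order within the block
        schedule.extend(block)
    return schedule[:n_patients]

print(permuted_blocks(8))
```

In a double-blind trial the resulting schedule would be held by a third party so that neither patient nor doctor knows the allocation.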
Some doctors are still strongly opposed to randomized trials which take away their right to prescribe a patient the treatment which is believed to be 'best'. However, in seeking to give the 'best' treatment, some doctors will worry that some patients will not get the 'marvellous' new treatment, while others conversely will worry that some patients get the 'dubious' new treatment before it has been fully tested. The issue can only be resolved by a proper randomized trial, which, although difficult and costly, can be very rewarding. While a few treatments (e.g. penicillin) give such striking results that no clinical trial is really necessary, many treatments introduced without proper evaluation are eventually discarded because they are found to have no effect or are even harmful. Clinical trials are just one way of carrying out research into epidemiology, which is the study of health and disease in human populations. Other types of epidemiological study are often observational in nature. For example, cohort studies follow a group of subjects through time to see who develops specific diseases and what risk factors they are exposed to.
FURTHER READING
Pocock (1983) and Friedman, Furberg and Demets (1985) provide excellent introductions to clinical trials. Gore and Altman (1982) discuss the use of statistics in
medicine more generally, and there are hopeful signs that sound statistical ideas are starting to permeate the medical profession. For example, Altman et al. (1983) lay down guidelines for the use of statistics in medical journals.
A.13
Multivariate analysis
Let X denote a p-dimensional random variable with mean vector μ, where X^T = [X_1, ..., X_p] and μ^T = [E(X_1), ..., E(X_p)] = [μ_1, ..., μ_p]. The (p x p) covariance (or dispersion or variance-covariance) matrix of X is given by

Σ = E[(X - μ)(X - μ)^T]
so that the (i, j)th element of Σ is the covariance of X_i with X_j. The (p x p) correlation matrix of X, denoted by P, is such that the (i, j)th element measures the correlation between X_i and X_j. Thus the diagonal terms of P are all one. One important special case arises when X has a multivariate normal distribution (section A.3). Suppose we have n observations on X. Denote the (n x p) data matrix by X. (Note that vectors are printed in bold type, but matrices are not.) The sample mean vector, x̄, the sample covariance matrix, S, and the sample correlation matrix, R, may be calculated in an obvious way. For example

x̄ = (1/n) X^T 1
where 1 denotes an (n x 1) vector of ones. A common objective in multivariate analysis is to simplify and understand a large set of multivariate data. Many techniques are exploratory in that they seek to generate hypotheses rather than test them (section 6.6 of Part I). Some techniques are concerned with relationships between variables, while others are concerned with relationships between individuals or objects (i.e. between the rows of X). There is also an important distinction between the case where the variables arise on an equal footing and the case where there are response and explanatory variables (as in regression). This section is primarily concerned with some specific multivariate techniques. However you should begin, as always, by 'looking' at the data. Examine the mean and standard deviation of each variable. Plot scatter diagrams for selected pairs of variables. Look at the correlation matrix. If, for example, most of the correlations are close to zero, then there is little linear structure to explain and you may be able to look at the variables one at a time. Principal component analysis is concerned with examining the interdependence of variables arising on an equal footing. The idea is to transform the p observed variables to p new, orthogonal variables, called principal components, which are linear combinations of the original variables (a^T X = Σ a_i X_i) and which are chosen in turn to explain as much of the variation as possible. Thus the first component, a_1^T X, is chosen to have maximum variance, subject to a_1^T a_1 = 1, and is often some sort of average of the original variables. Mathematically it turns out that a_1 is the
eigenvector of the covariance matrix of X (or more usually of the correlation matrix) which corresponds to the largest eigenvalue. More generally the coefficients of the different principal components are the eigenvectors of S (or of R) and the variance of each component is given by the corresponding eigenvalue. Now the sum of the eigenvalues of a square, positive semi-definite real matrix is equal to the sum of the diagonal terms (called the trace). Thus for a correlation matrix, whose diagonal terms are all unity, the trace is p, so that the proportion of the total variance 'explained' by the first principal component is just λ_1/p, where λ_1 is the largest eigenvalue of R. It is often found that the first two or three components 'explain' most of the variation in the original data, so that the effective dimensionality is much less than p. The analyst then tries to interpret the meaning of these few important components. I have also found it helpful to plot a scatter diagram of the first two components for different individuals in order to try and identify clusters and outliers of individuals. Note that if the X's are linearly dependent (e.g. if say X_3 = X_1 + X_2), then S (or R) will be singular and positive semi-definite (rather than positive definite) so that one or more eigenvalues will be zero. Factor analysis has a similar aim to principal component analysis, namely the reduction of dimensionality. However it is based on a 'proper' statistical model involving m (< p) common factors, whereby the jth variable may be expressed as
X_j = Σ_{i=1}^{m} λ_ji f_i + e_j
The weights {λ_ji} are called the factor loadings, the {f_i} are called the common factors and the {e_j} the specific factors. The proportion of the variability in X_j explained by the common factors is called the communality of the jth variable. The details of model fitting will not be given here. Note that many computer programs allow the user to rotate the factors in order to make them easier to interpret. However, there is a danger that the analyst will try different values of m (the underlying dimensionality) and different rotations until he gets the answer he is looking for! Also note that factor analysis is often confused with principal component analysis, particularly as the latter is sometimes used to provide starting values in factor analysis model building. In my experience, social scientists often ask for help in carrying out a factor analysis, even though they have not looked at the correlation matrix and do not really understand what is involved. If most of the correlations are 'small', then the variables are essentially independent and so there is no point in carrying out a factor analysis (or a principal component analysis, where the eigenvalues are likely to be nearly equal). If on the other hand all the correlations are 'large', then I would be suspicious that the variables are all measuring the same thing in slightly different ways. (This often happens with attitude questions.) Yet another possibility is that the variables split into groups such that variables within a group are highly correlated but variables in different groups are not. Such a description may be a better way of understanding the data than a factor analysis model. Only if the correlation matrix
contains high, medium and low correlations with no discernible pattern would I consider carrying out a factor analysis (and perhaps not even then!). Multidimensional scaling is concerned with the relationship between individuals. The idea is to produce a 'map', usually in two dimensions, of a set of individuals given some measure of similarity or dissimilarity between each pair of individuals. Classical scaling is appropriate when the dissimilarities are approximately Euclidean distances. Then there is a duality with principal component analysis in that classical scaling is essentially an eigenvector analysis of XX^T (ignoring mean-correction terms) whereas principal component analysis is an eigenvector analysis of X^T X. Ordinal scaling, or non-metric multidimensional scaling, only uses the ordinal properties of the dissimilarities and involves an iterative numerical procedure. Having produced a map, one aim is to spot clusters and/or outliers (as when plotting the first two principal components, but using a completely different type of data). Cluster analysis aims to partition a group of individuals into groups or clusters which are in some sense 'close together'. There is a wide variety of procedures which depend on different criteria, on different numerical algorithms and on different objectives. For example, some methods allocate individuals to a prescribed number of clusters, while others allow the number of clusters to be determined by the data. Other procedures aim to find the complete hierarchical structure of the data in a hierarchical tree or dendrogram. As a further source of confusion, I note that cluster analysis is variously called classification and taxonomy. In my experience the clusters you get depend to a large extent on the method adopted and the criteria employed.
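The principal component calculations described earlier in this section amount to an eigen-analysis of the sample correlation matrix. A minimal NumPy sketch (the data are simulated purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical data matrix X: n = 100 individuals, p = 3 variables,
# with the first two variables strongly correlated
x1 = rng.normal(size=100)
x2 = x1 + 0.3 * rng.normal(size=100)
x3 = rng.normal(size=100)
X = np.column_stack([x1, x2, x3])

R = np.corrcoef(X, rowvar=False)        # sample correlation matrix
eigenvalues, eigenvectors = np.linalg.eigh(R)
order = np.argsort(eigenvalues)[::-1]   # largest eigenvalue first
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# proportion of total variance 'explained' by the first component
# (the eigenvalues of R sum to its trace, which is p)
print(eigenvalues[0] / R.shape[0])
```

Because two of the three variables are nearly collinear, the first eigenvalue dominates and the effective dimensionality is closer to two than three, exactly the situation the text describes.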
Correspondence analysis is primarily a technique for displaying the rows and columns of a two-way contingency table as points in dual low-dimensional vector spaces, but may be extended to other non-negative data matrices of a suitable form. Given an (n x p) non-negative data matrix X, find the row and column sums and let R be the (n x n) diagonal matrix of row sums and C the (p x p) diagonal matrix of column sums. Then R^{-1} X and C^{-1} X^T are the row-profile and column-profile matrices, where we simply divide each observation by the appropriate row or column sum. The analysis then essentially consists of an eigenvector analysis of (R^{-1} X)(C^{-1} X^T) or (C^{-1} X^T)(R^{-1} X), whichever is the smaller matrix. There are many techniques based on the assumption that X has a multivariate normal distribution which are often natural generalizations of univariate methods based on the normal distribution. For example, to compare a sample mean vector x̄ with a population mean vector μ_0, the test statistic, called Hotelling's T^2, is given by n(x̄ - μ_0)^T S^{-1} (x̄ - μ_0) and is the 'natural' generalization of the square of the univariate t-statistic. The multivariate analysis of variance (MANOVA) is the natural extension of ANOVA. The problem then consists of comparing matrices containing sums of squares and cross-products by 'reducing' each matrix to a single number such as its determinant or its trace (the sum of the diagonal values). I have never found this appealing. When MANOVA rejects a null hypothesis, a technique called canonical variates analysis can be used to choose those linear compounds of the form a^T X which best show up departures from the null hypothesis when a univariate ANOVA is carried out on the observed values of a^T X. Discriminant
analysis is concerned with finding the 'best' linear compound a^T X for distinguishing between two populations. The multivariate analysis of covariance (MANOCOVA) is the multivariate generalization of the (univariate) analysis of covariance. The method of canonical correlations is used to examine the dependence between two sets of variables, say X_1 and X_2. Let U = a^T X_1 and V = b^T X_2 be linear compounds of X_1 and X_2 and let ρ denote the correlation between them. Then, for example, the first pair of canonical variates, say a_1^T X_1 and b_1^T X_2, are chosen so as to maximize ρ and the resulting value is called the (first) canonical correlation. Further canonical variates can be found, orthogonal to each other, in a similar way. Canonical correlation has never found much support amongst users because the highest correlations can relate uninteresting linear compounds which make little contribution to the total variability of the data. Several new methods are now being investigated to study the relationships between two data matrices. One important idea which arises, directly or indirectly, in many multivariate techniques is the use of the singular value decomposition. This says that if A is an (n x p) matrix of rank r, then A can be written as
A = U L V^T

where U, V are matrices of orders (n x r) and (p x r) respectively which are column orthonormal (i.e. U^T U = V^T V = I_r) and L is an (r x r) diagonal matrix with positive elements. In particular, if A is the (n x p) data matrix, with (i, j)th element x_ij, then Σ_{i,j} x_ij^2 equals the sum of the squared diagonal elements of L. If the latter are arranged in descending order of magnitude, it may be possible to approximate the variation in the data with the two or three largest elements of L together with the corresponding columns of U and V. If instead A is a (square, symmetric) variance-covariance matrix, then U and V are identical and are formed from the eigenvectors of A, while the diagonal elements of L are the eigenvalues of A. This decomposition is used in principal component analysis.
FURTHER READING
Two general introductory textbooks are by Chatfield and Collins (1980) and Mardia, Kent and Bibby (1979). A short, readable nonmathematical introduction is given by Manly (1986). The book by Greenacre (1984) is recommended for its treatment of correspondence analysis.
A.14
Time-series analysis
A time series is a set of observations made sequentially through time. A time series is said to be continuous when observations are taken continuously through time, but is said to be discrete when observations are taken at discrete times, usually equally spaced, even when the measured variable is continuous. A continuous series can
always be sampled at equal intervals to give a discrete series, which is the main topic of this section. The special feature of time-series analysis is that successive observations are usually not independent and so the analysis must take account of the order of the observations. The main possible objectives are (a) to describe the data, (b) to find a suitable model and (c) to forecast future values and/or control future behaviour of the series. If future values can be predicted exactly from past values, then the series is said to be deterministic. However, most series are stochastic in that the future is only partly determined by past values. The first step in the analysis is to construct a time plot of each series. Features such as trend (long-term changes in the mean), seasonal variation, outliers, smooth changes in structure and sudden discontinuities will usually be evident. Simple descriptive statistics may also be calculated to help in summarizing the data and in model formulation. An observed time series may be regarded as a realization from a stochastic process, which is a family of random variables indexed over time, denoted by {X_t} in discrete time. A stationary process has constant mean and variance and its other properties also do not change with time. In particular the autocovariance of X_t and X_{t+k}, given by

γ_k = Cov(X_t, X_{t+k}) = E[(X_t - μ)(X_{t+k} - μ)]    (A.14.1)
where μ = E(X_t), depends only on the time lag, k, between X_t and X_{t+k}. The autocorrelation coefficient at lag k is given by ρ_k = γ_k/γ_0 and the set of coefficients {ρ_k} is called the autocorrelation function (abbreviated acf). The spectrum of a stationary process is the discrete Fourier transform of {γ_k}, namely

    f(ω) = (1/π) Σ_{k=−∞}^{∞} γ_k e^{−iωk}    for 0 < ω < π    (A.14.2)
Note that this can be written in several equivalent ways, and that γ_k is the inverse Fourier transform of f(ω). For multivariate time series, there is interest in relationships between series as well as within series. For example the cross-correlation function of two stationary series {X_t} and {Y_t} is a function ρ_XY(k) which measures the correlation between X_t and Y_{t+k}. The cross-spectrum is the discrete Fourier transform of ρ_XY(k).

There are many useful classes of model, both stationary and non-stationary. The simplest model, used as a building brick in many other models, is the purely random process, or white noise, which is henceforth denoted by {Z_t} and defined as a sequence of independent, identically distributed random variables with zero mean and constant variance. The acf of {Z_t} is given by
    ρ_k = 1 for k = 0, and ρ_k = 0 otherwise

which corresponds to a constant spectrum.
224
A digest of statistical techniques
An autoregressive process of order p (AR(p)) is defined by

    X_t = α_1 X_{t−1} + ⋯ + α_p X_{t−p} + Z_t    (A.14.3)

where α_1, ..., α_p are constants and {Z_t} denotes a purely random process.
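To make the AR(p) definition concrete, here is a small sketch (the choice α = 0.6, the run length and the initial value are illustrative, not from the text) that simulates an AR(1) series driven by Gaussian white noise and checks that its sample lag-1 autocorrelation is close to the theoretical value α:

```python
import random

def simulate_ar1(alpha, n, seed=1):
    """Simulate an AR(1) process X_t = alpha*X_{t-1} + Z_t,
    where {Z_t} is Gaussian white noise (a purely random process)."""
    random.seed(seed)
    x, series = 0.0, []
    for _ in range(n):
        x = alpha * x + random.gauss(0.0, 1.0)
        series.append(x)
    return series

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation r_1 = c_1 / c_0."""
    n = len(xs)
    mean = sum(xs) / n
    c0 = sum((v - mean) ** 2 for v in xs) / n
    c1 = sum((xs[t] - mean) * (xs[t + 1] - mean) for t in range(n - 1)) / n
    return c1 / c0

xs = simulate_ar1(0.6, 20000)
r1 = lag1_autocorr(xs)   # should be close to alpha = 0.6 for a long series
```

For an AR(1) process the theoretical autocorrelation at lag k is α^k, so a long simulated series gives an easy sanity check on acf code.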
A further important class is that of state-space models, for which optimal forecasts may be computed using a recursive estimation procedure called the Kalman filter. The latter is widely used in control engineering. Unfortunately, there is no standard notation. The simple univariate state-space model considered here assumes that the observation Y_t at time t is given by

    Y_t = h_t′θ_t + n_t    (A.14.7)

where θ_t denotes what is called the state vector, which describes the 'state of nature' at time t, h_t denotes a known vector and n_t denotes the observation error. The state vector cannot be observed directly (i.e. is unobservable) but is known to be updated by the equation
    θ_t = G_t θ_{t−1} + w_t    (A.14.8)
where the matrix G_t is assumed known and w_t denotes a vector of deviations. Equation (A.14.7) is called the observation (or measurement) equation, while (A.14.8) is called the transition (or system) equation. The error, n_t, is assumed to be N(0, σ_n²), while w_t is assumed to be multivariate normal with zero mean and known variance-covariance matrix W_t, and to be independent of n_t. One simple example is the linear growth model, sometimes called a structural trend model, for which
    Y_t = μ_t + n_t    (A.14.9)

where μ_t = μ_{t−1} + β_{t−1} + w_{1t}
and β_t = β_{t−1} + w_{2t}.

Here the state vector θ_t′ = (μ_t, β_t) consists of the local level, μ_t, and the local trend, β_t, even though the latter does not appear in the observation equation. We find that h_t′ = (1, 0) and
    G_t = | 1  1 |
          | 0  1 |

are both constant through time. The state-space model can readily be generalized to the case where Y_t is a vector, and many standard time-series models, such as regression, ARIMA and structural models, can be put into this formulation.

Let θ̂_t denote the minimum mean square error estimator of θ_t based on information up to and including Y_t, with variance-covariance matrix P_t. The Kalman filter updating procedure has two stages, which may be derived via least-squares theory or using a Bayesian approach. The prediction equations (stage I) estimate θ_t at time (t − 1), in an obvious notation, by
    θ̂_{t|t−1} = G_t θ̂_{t−1}    with    P_{t|t−1} = G_t P_{t−1} G_t′ + W_t.

When Y_t becomes available, the prediction error is

    e_t = Y_t − h_t′ θ̂_{t|t−1}
and the stage II updating equations are:

    θ̂_t = θ̂_{t|t−1} + K_t e_t    and    P_t = P_{t|t−1} − K_t h_t′ P_{t|t−1}

where

    K_t = P_{t|t−1} h_t / (h_t′ P_{t|t−1} h_t + σ_n²)
is called the Kalman gain matrix, which in the univariate case is just a vector.

The choice of an appropriate model depends on prior information, the objectives, the initial examination of the time plot and an assessment of various more complicated statistics such as autocorrelations. The sample autocorrelation function (acf) or correlogram is defined by

    r_k = c_k/c_0    for k = 0, 1, 2, ...

where

    c_k = Σ_{t=1}^{n−k} (x_t − x̄)(x_{t+k} − x̄)/n.
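Before moving on, the Kalman recursions given above can be checked numerically. The following sketch is a deliberately simplified scalar version (a local-level model, assuming h_t = 1, G_t = 1 and a scalar state; the data values and variances are illustrative), implementing the stage I prediction and stage II updating equations directly:

```python
def kalman_local_level(ys, sigma_n2, w_var, theta0=0.0, p0=1e6):
    """Scalar Kalman filter for the local-level model
    Y_t = theta_t + n_t,  theta_t = theta_{t-1} + w_t
    (h_t = 1, G_t = 1 in the notation of the text).
    A diffuse prior (large p0) lets the data dominate at the start."""
    theta, p = theta0, p0
    estimates = []
    for y in ys:
        # Stage I: prediction (G_t = 1, so the state forecast is unchanged)
        theta_pred = theta
        p_pred = p + w_var
        # Prediction error and Kalman gain
        e = y - theta_pred
        k = p_pred / (p_pred + sigma_n2)
        # Stage II: updating
        theta = theta_pred + k * e
        p = p_pred * (1.0 - k)
        estimates.append(theta)
    return estimates

est = kalman_local_level([5.1, 4.8, 5.3, 9.0, 9.2, 8.9],
                         sigma_n2=1.0, w_var=0.5)
```

With the diffuse prior, the first filtered estimate is essentially the first observation, and after the jump in level the estimates move towards the new level at a rate governed by the gain.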
Roughly speaking, values of r_k which exceed 2/√n in absolute magnitude are significantly different from zero. The Durbin-Watson test essentially looks at 2(1 − r_1) and so has expected value 2 for random series. The partial acf is another useful diagnostic tool. The partial autocorrelation at lag k is the correlation between X_t and X_{t+k} in excess of that already explained by autocorrelations at lower lags. An analysis based primarily on the correlogram is sometimes called an analysis in the time domain. In particular the Box-Jenkins approach, based on fitting ARIMA models, involves a three-stage model-building procedure, namely (a) identifying a suitable ARIMA model (by looking at the correlogram and other diagnostic tools), (b) estimating the model parameters, and (c) checking the adequacy of the model (primarily by looking at the one-step-ahead errors).

An analysis in the frequency domain is based primarily on the sample spectrum, which can be obtained either by taking a truncated weighted Fourier transform of the acf or by smoothing a function called the periodogram, which is obtained from a Fourier analysis of the observed time series. For long series the Fast Fourier Transform can be used to speed calculations. Note that the correlogram and sample spectrum can be tricky to interpret, even by an 'expert'. Multivariate diagnostics are even harder to interpret.

Before carrying out a time-series analysis, it may be advisable to modify the data either by applying a power transformation (e.g. Y_t = log(X_t)) or by applying a linear digital filter (e.g. Y_t = Σ_j c_j X_{t−j}, where the {c_j} are the filter weights). In particular a variety of filters are available for detrending or deseasonalizing time series. Note that
low-pass filters are filters which remove high-frequency variation, while high-pass filters remove low-frequency variation. First-order differencing, for example, can be regarded as a filter with c_0 = 1 and c_1 = −1. This filter removes trend and is of high-pass form.

Our final topic is forecasting. There are many different procedures available, which may be categorized as univariate (or projection), multivariate (or causal) or judgemental. A univariate forecast is based only on the present and past values of the time series to be forecasted. Forecasts may also be categorized as automatic or non-automatic. Most univariate forecasting procedures can be put into automatic mode, as may be necessary when forecasting large numbers of series in stock control. The choice of method, and hence of underlying model, depends on a variety of practical considerations including objectives, prior information, the properties of the given data as revealed by the time plot(s), the number of observations available, and so on.

Exponential smoothing is one simple widely-used projection method. The forecast one step ahead made at time n is denoted by x̂(n, 1) and is a geometric sum of past observations, namely

    x̂(n, 1) = αx_n + α(1 − α)x_{n−1} + α(1 − α)²x_{n−2} + ⋯.

This can be rewritten in a more useful updating form as

    x̂(n, 1) = αx_n + (1 − α)x̂(n − 1, 1)    (recurrence form)

or

    x̂(n, 1) = x̂(n − 1, 1) + αe_n    (error-correction form)

where e_n = x_n − x̂(n − 1, 1) is the one-step-ahead forecasting error. In Winters (or Holt-Winters) forecasting, the local level, local trend and local seasonal factor are all updated by exponential smoothing. Variants of exponential smoothing are used more in practice than other methods such as Box-Jenkins and state-space forecasting. Multivariate forecasting methods, based on multivariate ARIMA, regression or econometric models, are much more difficult to handle than univariate methods and do not necessarily give better forecasts, for a variety of reasons (e.g. the underlying model changes, or the predictor variables may themselves have to be forecasted). Of course forecasts involve extrapolation and should be regarded as conditional statements about the future, assuming that past trends continue. Hence the well-known definition: 'Forecasting is the art of saying what will happen and then explaining why it didn't'! It is salutary to end with the following rhyme:

    A trend is a trend is a trend.
    The question is, will it bend?
    Will it alter its course
    Through some unforeseen cause
    And come to a premature end?
        – Alex Cairncross
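Returning to exponential smoothing, the error-correction form above takes only a few lines to implement. Here is a minimal sketch (the data values, α = 0.3, and the choice of initializing the smoothed value with the first observation are all illustrative assumptions, not prescribed by the text):

```python
def exp_smooth_forecasts(xs, alpha):
    """One-step-ahead forecasts by simple exponential smoothing,
    using the error-correction form
        xhat(n, 1) = xhat(n-1, 1) + alpha * e_n."""
    xhat = xs[0]           # illustrative initialization: first observation
    forecasts = []
    for x in xs[1:]:
        e = x - xhat       # one-step-ahead forecasting error e_n
        xhat = xhat + alpha * e
        forecasts.append(xhat)
    return forecasts

f = exp_smooth_forecasts([10.0, 12.0, 11.0, 13.0], alpha=0.3)
# f[0] = 10 + 0.3*(12-10) = 10.6, and so on recursively
```

Setting α = 1 would reproduce the naive 'forecast = last observation' rule, while small α gives heavily damped forecasts.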
FURTHER READING
Chatfield (1984) gives a general introduction. Granger and Newbold (1986) discuss forecasting. Priestley (1981) also covers more advanced topics such as multivariate series, control, Kalman filtering and non-linearity.
A.15
Quality control and reliability
These two topics are an important part of industrial statistics. Statistical quality control is concerned with controlling the quality of a manufactured product using a variety of statistical tools such as the Shewhart control chart. One important class of problems is acceptance sampling, which is concerned with monitoring the quality of manufactured items supplied by a manufacturer to a consumer in batches. The problem is to decide whether the batch should be accepted or rejected on the basis of a sample randomly drawn from the batch. A variety of sampling schemes exist. If the items in a sample are classed simply as 'good' or 'defective', then we have what is called sampling by attributes. However, if a quantitative measurement (such as weight or strength) is taken on each item, then we have what is called sampling by variables. In a single sampling attributes plan, a sample of size n is taken and the batch is accepted if the number of defectives in the sample is less than or equal to an integer c called the acceptance number. Double sampling is a two-stage extension, while in sequential sampling a decision is taken after each observation as to whether to accept or reject or continue sampling.

The performance of an attributes sampling scheme may be described by the operating characteristic (or OC) curve, which plots the probability of accepting a batch against the proportion of defectives in the batch. The proportion of defectives in a batch which is acceptable to the consumer is called the acceptable quality level (AQL). The probability of rejecting a batch at this quality level is called the producer's risk. The percentage of defectives in a batch which is judged to be unacceptable to the consumer is called the lot tolerance percent defective (LTPD). The probability of accepting a batch at this quality level is called the consumer's risk. Some sampling schemes allow rejected batches to be subject to 100% inspection and rectification.
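For a single sampling attributes plan, the OC curve ordinate is simply a binomial tail probability: the batch is accepted when the number of defectives X in the sample satisfies X ≤ c, so P(accept) = P(X ≤ c) with X ~ Bin(n, p). A short sketch (the plan n = 50, c = 2 and the quality levels are illustrative, not from the text):

```python
from math import comb

def prob_accept(n, c, p):
    """OC curve ordinate for a single sampling attributes plan:
    the probability of accepting a batch with proportion defective p,
    i.e. P(X <= c) where X ~ Binomial(n, p)."""
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(c + 1))

# Illustrative plan n = 50, c = 2: acceptance probability falls as quality worsens.
good = prob_accept(50, 2, 0.01)   # near 1 at a low proportion defective
bad = prob_accept(50, 2, 0.10)    # much smaller at 10% defective
```

Evaluating prob_accept over a grid of p values traces out the whole OC curve, from which the producer's and consumer's risks for chosen AQL and LTPD values can be read off.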
Then the average outgoing quality (AOQ) for a particular underlying value of p is the overall average proportion of defectives in batches actually received by the consumer. The AOQ limit (AOQL) is the highest (worst) value of the AOQ. Some schemes allow for different levels of inspection (e.g. normal, tightened or reduced) according to the recent quality observed.

A second important branch of statistical quality control is concerned with process control. The problem is to keep a manufacturing process at a specified stable level. Samples are taken regularly so as to detect changes in performance. The causes of these changes should then be found so that appropriate corrective action can be taken. The most commonly used tool is the (Shewhart) control chart, on which a
variable, which is characteristic of the process quality, is plotted against time. If the observed variable has target value T and residual standard deviation σ when the process is under control, then the graph may have warning lines inserted at T ± 2σ and action lines at T ± 3σ. Rather than plot every single observation, it is often convenient to plot the results from regular small samples of the same size n (usually between about 5 and 30). The average quality can be checked by plotting successive sample means on a control chart, called an X̄-chart. This will have action lines at T ± 3σ/√n. The quality variability can also be checked by plotting successive sample ranges on a control chart, called an R-chart. An alternative type of control chart is the cumulative sum or cusum chart. If X_t denotes the process quality characteristic at time t, then the cumulative sum of deviations about the target, T, is given by

    S_t = Σ_{i=1}^{t} (X_i − T)

and S_t is plotted against t. The local mean of the process corresponds to the local gradient of the graph, which should be about zero when the process is under control. There are also special control charts for sampling by attributes, such as the p-chart which plots the proportion defective in successive samples.

As in most areas of statistics, basic theory needs to be supplemented by an understanding of practical problems. For example, in acceptance sampling, batches are often not homogeneous and it may be difficult to get random samples. Some checks on quality inspectors may also be desirable, as in my experience inspection standards may vary considerably, either because of negligence or because the inspectors genuinely apply different standards as to what constitutes a defective.

There has recently been a change in emphasis with the realization that statistical techniques alone will not solve quality problems. Rather there has to be a 'thought revolution in management'. The success of Japanese industry depends to a large extent on the successful implementation of a complete quality control 'package'. It is important to get managers and workers working together and to create an environment in which people are not afraid to tell their superiors about faults and problems. Japanese authors advocate the use of 'quality circles' in which all levels of employees get together to discuss improvements. The emphasis should be on good product design, and on preventing faults rather than just monitoring them. There is much to be said, where appropriate, for using simple statistical methods which are intelligible to engineers and technicians.

The topic of reliability is of vital importance in manufacturing industry. The reliability of an item is a measure of its quality and may be defined in various ways. It is usually the probability that it will function successfully for a given length of time under specified conditions.
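As a minimal illustration of this definition, anticipating the exponential lifetime model discussed below: for exponentially distributed lifetimes with constant failure rate λ, the reliability is R(t) = exp(−λt). The numbers here (a mean life of 1000 hours) are purely illustrative:

```python
from math import exp

def reliability_exponential(lam, t):
    """R(t) = P(T > t) = exp(-lam * t) for exponentially
    distributed lifetimes with constant failure rate lam."""
    return exp(-lam * t)

# Illustrative: mean life 1000 hours (lam = 0.001); probability of
# surviving to 500 hours is exp(-0.5), roughly 0.61.
r = reliability_exponential(0.001, 500.0)
```

Note that the mean life under this model is 1/λ, so quoting a failure rate and quoting a mean life are interchangeable.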
One way of measuring reliability is to test a batch of items over an extended period and to note the failure times. This process is called life testing. The distribution of failure times may then be assessed, and in particular the mean life and the proportion failing during the warranty period may be estimated. Many similar problems arise in epidemiology, in medical experiments on animals or
humans, where survival analysis examines lifetimes for different medical conditions and/or different drug treatments.

Suppose an item begins to function at time t = 0, and let the random variable, T, denote the lifetime (or survival time or time-to-failure). The cdf and pdf of T will be denoted by F(t), f(t) respectively. The function R(t) = 1 − F(t) = Prob(T > t) is called the reliability function (or survivor function). The function h(t) = f(t)/R(t) is called the hazard function (or conditional failure rate function, since h(t)Δt is the conditional probability of failure in the interval t to t + Δt given that the item has survived until time t). The four functions are equivalent complementary ways of describing the probability distribution of failure times (i.e. the distribution of T).

There are various probability distributions which may or may not be suitable to describe the distribution of T. In the notation of table A.1, a simple model is that T has an exponential distribution, in which case it can be shown that h(t) is a constant, λ. A more realistic assumption is that h(t) increases with time, and then the Weibull distribution may be appropriate with m > 1, since it can be shown that h(t) = mλt^{m−1}. The normal and lognormal distributions are also sometimes used.

There are many types of life test. For example, N items may be tested, with or without replacement, until a specified number of failures have occurred (sample-truncated) or until a specified time has elapsed (time-truncated). Some sort of sequential procedure may be sensible. Failure times are rarely independent. There may be initial failures due to 'teething' problems or wear-out failures due to fatigue. If some items do not fail before the end of the test, then the resulting lifetimes are truncated or censored and they may complicate the analysis. The empirical reliability function (i.e. the observed proportion lasting to at least time t) is useful to plot the data graphically, but may need to be modified with censored data or when there may be failures for more than one reason (a competing risk situation). The Kaplan-Meier estimate of the reliability function is a non-parametric estimate which takes account of censored observations in an 'obvious' way, namely, if d_j deaths occur at time t_j (where t_1 < t_2 < ⋯) and there are n_j subjects at risk at t_j, then

    R̂(t) = Π_{j: t_j ≤ t} (n_j − d_j)/n_j.

There are various ways of plotting empirical functions, such as the reliability function or hazard function, on various types of probability paper. For example, Weibull paper is designed so that the empirical cdf is approximately linear if the underlying distribution is Weibull. A more sophisticated analysis may try to relate the reliability or hazard function to the values of one or more explanatory variables. For example, the log-linear proportional hazards model, or Cox's model, assumes that

    h(t; x) = h_0(t) exp(β′x)

where h_0(t) is a baseline hazard function under standard conditions, x is the vector of explanatory variables and β is a parameter vector to be estimated from the data.

A different type of reliability problem is to estimate the reliability of a system given the reliabilities of the components which make up the system. For example, the reliability of a space rocket (e.g. the probability that it completes a mission
successfully) has to be estimated from the reliabilities of the individual components. A set of components is said to be connected in series if the failure of any component causes failure of the system. A set of components is connected in parallel if the system works provided that at least one of the components works. Redundant components are often connected to a system in parallel in order to improve the reliability of the system. It is relatively easy to evaluate a system's reliability when its components function independently, but unfortunately this is often not the case.

FURTHER READING
There are many books on quality control, such as that by Duncan (1974). A non-technical account of the Japanese approach to quality management is given by Ishikawa (1985). A somewhat similar message about the need for radical change is given by Deming (1982). Deming proposes 14 guidelines for top management and they are listed by Barnard (1986) in an entertaining paper on the history of industrial statistics and methods for improving the quality performance of manufacturing industry. Deming's guidelines include such maxims as 'Drive out fear, so that everyone may work effectively for the company' and 'Institute a vigorous program of education and retraining'. There are several good books on reliability and life testing, including those by Barlow and Proschan (1975), Mann, Schafer and Singpurwalla (1974) and Nelson (1982).
Tailpiece

    I am an observation.
    I was captured in the field.
    My conscience said 'cooperate'
    My instinct said 'don't yield'.
    But I yielded up my data
    Now behold my sorry plight
    I'm part of a statistic
    Which is not a pretty sight.

    The Bootstrap and the Jackknife
    Oh, the tortures I've endured
    They analyse my variance
    Until my meaning is obscured.
    But I've a plan to beat them
    I'll climb up in the trees
    Pretend I am a chi-square
    And get freedom by degrees.
        – after T.P.L.
APPENDIX B
MINITAB and GLIM
This appendix gives brief reference notes on two important packages, called MINITAB and GLIM. Full details may be found in the appropriate reference manuals. Note that both packages will be updated in due course, in which case some commands may change. Of course many other packages could have been included, but the author has concentrated on two packages he is familiar with.
B.1
MINITAB
MINITAB is a general-purpose, interactive statistical computing system which is very easy to use and which is widely used in teaching. These notes refer to Release 5.1 (see also the helpful book by Ryan et al., 1985). Log in to your computer and go into the MINITAB package.
Data
Data and other numbers are stored in columns, denoted by c1, c2, ..., c1000, in a worksheet consisting of not more than 1000 columns. It is also possible to store up to 100 matrices, denoted by m1, m2, ..., m100, and up to 1000 constants, denoted by k1, ..., k997 (k998-k1000 store special numbers). The total worksheet size available depends on the computer, but is usually 'large'.
Commands
When you want to analyse data, you type the appropriate commands. There are commands to read, edit and print data, to manipulate the columns of data and do arithmetic, to plot the data, and to carry out various statistical analyses such as regression, t-tests and ANOVA.
Help
The command HELP HELP gives information about help commands. HELP OVERVIEW gives general help. A command such as HELP SET gives help on a particular command such as SET.
Prompts
Before punching a command, wait for the computer to prompt you with MTB>. When entering data with the SET, READ, or INSERT commands, the prompt is DATA>.
Examples
1. Suppose you want to add some lengths held in c1 to the corresponding breadths held in c2 and put the sums into a new column, say c3. The command is simply: let c3 = c1 + c2 (note the space after let). For example, if c1 = (1, 2) and c2 = (5, 9), then c3 = (6, 11).
2. let c4 = 2*c1
   Here, each element of c1 is multiplied by 2 and the results are put into c4.
3. (a) mean c1 – finds the average of the values in c1 and prints it.
   (b) mean c1, k2 – finds the average of the values in c1 and puts it into k2.
   (c) mean c1 [, k2] – the two commands in (a) and (b) can be written in a shorthand way using the square brackets, which indicate that what is inside is optional. The brackets themselves are never punched.
Termination of lines
All lines are terminated and sent to the computer by the Return key.
Error messages
If a command is not properly formulated, you will be so informed.
Finishing
When you have finished using Minitab, type STOP. This command exits you from Minitab.
More on commands
There are about 150 recognized commands and only an important subset will be reviewed here. The computer accepts commands of various lengths but only looks at the first four letters (or fewer if the command is less than four letters long, such as LET). Thus the commands HISTOGRAM and HIST are equivalent. Commands may be in upper or lower case or a mixture (e.g. LET or let or Let). If you ever get the prompt CONTINUE?, say YES. Many commands have subcommands to
234
MINITAB and GLIM
increase their versatility. Then the main command must end in a semicolon and the subcommand in a full stop. The prompt for a subcommand is SUBC>.
Data entry
1. Data can be entered using the SET command. Here is an example:
   MTB> set c1
   DATA> 2.6 3.9 4.9 5.7
   DATA> 1.7 2.8 5.2 3.9
   DATA> end
   This specifies the first eight elements of c1. Numbers are entered in free format separated by a blank(s) or a comma or both. Do not try to get more than about 60 characters on a line. It is better to put the same number of numbers on each line in the same format so as to help check the input. There are various short-cut tricks such as:
   MTB> set c14
   DATA> 12:48
   DATA> end
   which puts consecutive integers from 12 to 48 into c14.
2. To enter a data matrix, each column (or row) can be entered separately using the SET command. Alternatively, one could use the READ command, e.g. READ c1 c2 c3 – and then each row of data should contain three numbers, one for each column. Alternatively you can use e.g. READ 8 3 m8, to read in an (8 × 3) matrix in eight rows of three elements into matrix m8.
3. SET 'filename' c1 – reads a file named filename into c1. Note the apostrophes.
Editing and manipulating data
If you make a mistake in entering data, or wish to manipulate the data in some way, there are various useful commands such as LET, INSERT, DELETE, COPY, STACK, UNSTACK, CODE, SORT. For example:
let c1(3) = 1.3 – changes 3rd element of c1 to 1.3
insert 9 10 c41 – allows you to insert data between rows 9 and 10 of c41; you get the DATA> prompt; finish with END
insert c41 – allows you to add data to the end of c41
delete 2:9 c12 – deletes rows 2 to 9 of c12
copy c1 c2 – copies c1 into c2 (there are several possible subcommands to allow the inclusion or exclusion of specified rows)
stack c1 c2 c3 – joins c2 onto c1 and puts into c3
sort c1 c2 – puts elements of c1 in rank order in c2
Output
print c2-c4 c7 – prints c2, c3, c4, c7 in columns
write 'filename' c2-c5 – puts c2-c5 into a file called filename. This can be read in again using READ 'filename' c1-c4, for example. The filename must be kept exactly the same but the column numbers can be changed.
Save and retrieve
You can write the entire worksheet to a file with the SAVE command, e.g. SAVE 'DATA' – note apostrophes, but no column numbers. This file can be retrieved by RETR 'DATA'. When you list your files, you will find it called DATA.MTW. Do not try to print it as it is in binary and can only be read by RETR.
More on the LET command
If e denotes either a column or a constant, then the general form is: LET e = arithmetic expression, which may involve +, −, *, /, **, brackets, and functions such as sqrt, loge, sin, absolute value (or abso), etc. For example:
let c5 = (c1 − mean(c1))
let k2 = 3*k1 + mean(c1)
More commands
k denotes a constant (e.g. k7), c a column, and e either. Square brackets denote an optional argument. Commas are included in some commands but are optional. There should be at least one space between each item.
ERASE c3-c5 – erases c3, c4, c5
RESTART – erases the whole worksheet
INFORMATION – gives current status of worksheet including all columns in use
Functions
func e, e – evaluates the function of e and puts into e. Available functions include SQRT (square root), LOGT (log to base 10), LOGE, ANTILOG, EXPO, ROUND, SIN, COS, TAN, ASIN, NSCORE, etc.; often easier to use the LET command, e.g. LET c1 = SQRT(c2)
Column operations
count c1 [, k1] – counts number of elements in c1 and (optionally) puts into k1
Other commands of similar form are MEAN, SUM, STDEV (st. deviation), MEDI(an), MAXI(mum), MINI(mum). The DESCRIBE command is also very useful. It gives summary statistics, e.g. DESCRIBE c1.
Tables
Data must be integer-valued.
TALLY c1 – prints a discrete frequency distribution
TABLE c3 c4 – gives a two-way table
TABLE c, ..., c – gives a multi-way table
Graphs
HIST c1 – histogram of c1
STEM c5 – stem-and-leaf plot
BOXP c7 – box plot
BOXP c7 c9 – box plot of c7 at each level of c9; c9 must be discrete and same length as c7
DOTPLOT c1 – dot plot
PLOT c1 c3 – plots a scatter diagram of the observations in c1 against those in c3
There are plotting commands to adjust the shape of graphs from the default shape and to provide flexible ways of specifying scales. High-resolution graphics may also be available.
Naming columns
It is often helpful to give columns names, e.g. name c1 'length' – thereafter you can refer to c1 as c1 or 'length' – must include apostrophes.
Distributions and random numbers
There are four commands dealing with various statistical distributions. RAND generates random numbers from a specified distribution into one or more columns. PDF computes the pdf; CDF computes the cdf and INVCDF computes the inverse of a distribution function. The allowed distributions include Bernoulli, binomial, Poisson, discrete uniform, uniform, normal, t, F, chi-square, Cauchy, exponential, gamma, Weibull, beta, lognormal, logistic. For example:
rand k c1; bernoulli 0.6. – generates k zero-one values with Prob(one) = 0.6
rand k c7; normal 35 2. – generates k N(35, 2²) observations and puts into c7
pdf; bino n=10 p=0.4. – prints binomial probabilities for B(n = 10, p = 0.4) (Note: if n is large and p > ½, use q = 1 − p for (n − X) or use a Poisson approximation)
pdf; pois 7. – prints Poisson probabilities for Pois(mean = 7) (Note: mean must be < 100 and preferably much smaller.)
rand k c1; unif a b. – random numbers from uniform distribution on (a, b)
For continuous distributions (or discrete), pdf calculates the pdf at values stored in ex, say, and puts them into ey, e.g. pdf ex ey; followed by the definition of the distribution.
Estimation and hypothesis testing
zint [k1] k2 c8 – k1% CI for μ assuming sigma = k2; data in c8; k1 = 95 is default
tint [k] c7 – k% CI for μ, sigma unknown
ztes k1 k2 c8 – normal test of mean = k1 assuming sigma = k2
ttes k1 c6 – t-test of mean = k1
Both ztes and ttes assume a two-sided alternative. If H1: μ < k1, use subcommand ALTE = −1. If H1: μ > k1, use subcommand ALTE = +1.
TWOS [k] c1, c2 – two-sample t-test and k% CI for difference in means. Data in c1 and c2. The subcommand ALTERNATIVE can be used for a one-sided test. The subcommand POOLED can be used if a pooled estimate of common variance is to be calculated.
Regression
regr c5 2 c7 c11 [c20 c21] – does a multiple regression of c5 on two explanatory variables, c7 and c11. All columns must have the same length. Optionally put standardized residuals in c20 and fitted values in c21. There are many possible subcommands.
corr c1 c2 – calculates correlation coefficient
Matrices
read 3 4 m2 – reads (3 × 4) matrix into m2; data must be three rows of four observations
prin m2 – prints m2
inve m2 m3 – inverts m2 and puts it into m3; m2 must be square
tran m2 m3 – transposes m2; puts into m3
add m1 m2 m3 – adds m1 to m2, puts into m3; similarly for SUBT and MULT
eigen m1 c1 m2 – calculates eigenvalues and vectors of (symmetric) m1, puts into c1 and m2
ANOVA
aovo c1 c2 c3 – one-way ANOVA of three groups of observations; group 1 observations in c1, etc.; or equivalently use ONEW on stacked data, e.g.
onew c1 c2 – data in c1, corresponding elements of c2 denote group numbers
twow c1 c2 c3 [c4, c5] – two-way ANOVA of c1 data; block numbers in c2, treatment numbers in c3; optionally put residuals into c4 and fitted values into c5
Time series
If data are a time series, then:
acf c2 – calculates acf of data in c2
pacf c2 – partial acf
diff 1 c2 c4 – puts first differences of c2 into c4
arim 2 1 1 c2 – fits an ARIMA(2, 1, 1) model to the data in c2
arim p d q, P D Q, S, c2 – fits a seasonal ARIMA model; S = season length
tsplot c2 – time-series plot of c2
Other options
chis c1 c2 c3 – chi-square test
mann c1 c2 – Mann-Whitney test
rank c1 c3 c5 – calculates ranks for elements in each column
There are many other commands and subcommands. You should (a) consult the reference manual, (b) use HELP commands, or (c) guess (!!). Many commands are
obvious, e.g. to stop outlying values being trimmed from a stem-and-leaf plot, try: stem c5; notrim.
Logging
You can log a session with the PAPER, or OUTFILE 'filename', commands.
Example B.1
The MINITAB analysis for the data in Exercise E.1 is as follows.
set c1
35.2 57.4 27.2 ... 44.4 (read data in three rows of 10 observations)
end
set c2
(1,2,3)8 (gives 1 repeated eight times, then 2 repeated eight times, then 3 repeated eight times)
end
set c3
3(1,2,3,4,5,6,7,8) (gives 1 to 8 repeated three times)
end
twow c1, c2, c3, c4 (gives two-way ANOVA; put residuals into c4)
prin c4
etc.

B.2
GLIM
GLIM is a powerful program for fitting generalized linear models. The user has to specify the response variable, the error distribution, the link function and the predictor variables. These notes describe Release 3.77.
Commands
All commands start with a $. A command may consist of several letters but only the first three are recognized. Commands are also terminated by a $, but you do not usually need to type this as the initial $ of the next command also serves to terminate the previous command. The exception is when you want a command to be implemented straight away. It will take too long to explain all commands, but many are self-explanatory.
Vectors The observations and explanatory variables are stored in vectors. User-defined vectors consist of a letter followed optionally by further letters and/or digits. Upper and lower case letters are interchangeable. Only the first four characters are recognized. A VARIATE can contain any real number while a FACTOR contains integers from the set {1, 2, ..., k} for a specified k. A system-defined vector consists of the function symbol (%) followed by two letters.
Scalars They store a single number. They are denoted by % followed by a single letter, such as %A. Only 26 are available and they do not need to be declared before use. System-defined scalars, such as %GM, consist of % followed by two letters.
Functions The symbol % also denotes a function as in %EXP, %LOG, %SIN, %SQRT, etc.
Reading in data The UNITS command sets a standard length for all vectors. Variates and factors may be declared by the VAR and FAC commands, e.g. $FAC 20 A 3$ declares a factor of length 20 (not needed if $UNITS 20$ has been declared) with three levels. The DATA command specifies the set of identifiers for the vectors whose values you wish to assign and which will be read by the next READ statement. The READ command then reads the data. Use the LOOK command to check the data, and the CALC command to correct errors (e.g. $CALC X(4) = 19.7$ assigns the new value 19.7 to the fourth element of the X-vector). Data can also be read from a file. The examples below illustrate these commands.
Other commands $PLOT X Y
plots a scatter diagram
$CALC X=Y+Z
forms a new vector
$CALC X=X**2+1
a transformation
$CALC X = %GL(k,n)
useful for assigning factor values. This gives the integers 1 to k in blocks of n, repeated until the UNITS length is achieved. For example if UNITS=14, %GL(3,2) gives 1,1,2,2,3,3,1,1,2,2,3,3,1,1.
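The %GL pattern is easy to emulate outside GLIM; a minimal Python stand-in (the function name gl is my own, purely for illustration):

```python
def gl(k, n, units):
    # GLIM's %GL(k, n): the integers 1..k in blocks of n,
    # recycled until the UNITS length is reached
    return [(i // n) % k + 1 for i in range(units)]

print(gl(3, 2, 14))  # [1, 1, 2, 2, 3, 3, 1, 1, 2, 2, 3, 3, 1, 1]
```

This is exactly how Example B.2 below assigns factor levels: %GL(3,8) labels the rows and %GL(8,1) labels the columns of a 3 x 8 table stored row by row.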
Defining the model The YVARIATE command specifies the response variable. The ERROR command specifies the error distribution, which may be normal (N), Poisson (P), binomial (B) or gamma (G). For the binomial case the name of the sample size variable, in the proportion r/n, must also be specified (e.g. $ERROR B N). The LINK command specifies the link function (e.g. I for identity, L for log, G for logit, R for reciprocal, P for probit, S for square root, E for exponent, together with the value of the exponent). Only meaningful combinations of ERROR and LINK are allowed (e.g. binomial with probit or logit). The FIT command fits a model with specified predictor variables (e.g. $FIT Z W). $FIT A*B fits factors A and B with interaction. $FIT + A adds factor A to the previously fitted model.
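For reference, the link functions listed above have simple closed forms; a Python sketch (illustrative only, not GLIM; the probit uses the standard library's inverse normal cdf):

```python
import math
from statistics import NormalDist

# link functions g(mu), keyed by the GLIM letter
links = {
    'I': lambda mu: mu,                       # identity
    'L': lambda mu: math.log(mu),             # log
    'G': lambda mu: math.log(mu / (1 - mu)),  # logit
    'R': lambda mu: 1 / mu,                   # reciprocal
    'P': NormalDist().inv_cdf,                # probit
    'S': lambda mu: math.sqrt(mu),            # square root
}

# both the logit and the probit carry a proportion onto the whole real line
print(links['G'](0.5))               # 0.0
print(round(links['P'](0.975), 2))   # 1.96
```

The link maps the mean of the response onto the scale of the linear predictor, which is why binomial errors pair naturally with the logit or probit.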
Displaying the results The DISPLAY command produces results of the previous fit. The scope of the output is specified by letters which include:
E estimates of parameters
R fitted values and residuals
V variances of estimates
C correlation matrix of parameter estimates.
The program also produces the following variables which you can display (using LOOK), use in calculations, or store for future reference:
%DF degrees of freedom for error
%DV the deviance
%FV fitted values in vector of standard length
etc.
Example B.2 The GLIM analysis for the data in Exercise E.1 is as follows:
$UNITS 24 $FAC COL 8 ROW 3
$DATA OBS
$READ
35.2 57.4 ...    (three rows of eight observations)
44.4
$CALC ROW = %GL(3,8): COL = %GL(8,1)
$YVAR OBS $ERR N $LINK I
$FIT $                  (fits null model; deviance is total corrected SS)
$FIT ROW: +COL $        (fits row terms and then adds column effects)
$DISPLAY ER $
$CALC E = OBS - %FV$    (calculates residuals)
$LOOK E $               (tabulates residuals)
etc.
The ANOVA table in its usual form can be obtained from the deviances produced by GLIM by appropriate subtractions. You may prefer the output from the MINITAB package in this case; see Example B.1 above.
Example B.3 The GLIM analysis for the data in Exercise E.2 is (in brief) as follows:
$UNITS 9 $FAC DAY 3 ENG 3 BURN 3
$DATA Y BURN
$READ
16 1    (observations in row order with corresponding burner number)
17 2
13 2
$CALC DAY = %GL(3,3)
$CALC ENG = %GL(3,1)
$YVAR Y $ERR N $LINK I
$FIT $
$FIT DAY $
$FIT +ENG $
$FIT +BURN $
$DISPLAY ER$
etc.
Example B.4 The GLIM analysis for the data in Exercise G.5 is (in brief) as follows:
$UNITS 7
$DATA X N R
$READ
0.9 46 17    (seven lines of data)
1.1 72 22
...
4.0 38 30
$YVAR R $ERR B N $LINK G$    (this specifies logit link; also try P for probit)
$FIT $
$FIT X $
$DISPLAY ER$
$PLOT R %FV$       (plots r against fitted values)
$CALC P = R/N $    (generates new proportion variable)
etc.
APPENDIX C
Some useful addresses
The practising statistician should consider joining one of the many national and international statistical societies. Potential benefits may include (a) access to a library, (b) the receipt of up-to-date journals, (c) a regular newsletter giving details of forthcoming lectures and conferences, and (d) the opportunity to meet other statisticians. Most countries have a national statistical society and the relevant address can probably be obtained via the statistics department of your local university or by writing to: International Statistical Institute, 428 Prinses Beatrixlaan, PO Box 950, 2270 AZ Voorburg, Netherlands. Full membership of this international society (the ISI) is by election only, but there are many sections and affiliated organizations which are open to all. The ISI publishes an annual directory of statistical societies. The sections include:
Bernoulli Society for Mathematical Statistics and Probability
The International Association for Statistical Computing
The International Association of Survey Statisticians
The affiliated organizations include many national societies such as:
Statistical Society of Australia
Statistical Society of Canada
Swedish Statistical Association
There are also some general, or special-interest, societies of a more international character which are also mostly affiliated to the ISI. They include the following (note that the addresses are correct at the time of writing, but it is advisable to check where possible):
American Statistical Association, Suite 640, 806 15th Street NW, Washington DC 20005, USA
American Society for Quality Control, 230 W. Wells St, Milwaukee, Wisconsin 53203, USA
Royal Statistical Society, 25 Enford Street, London W1H 2BH, UK
Biometric Society, Suite 621, 806 15th Street NW, Washington DC 20005-1188, USA (especially for those interested in biological applications)
Institute of Mathematical Statistics, Business Office, 3401 Investment Boulevard #7, Hayward, California 94545, USA
Indian Statistical Institute, 203 Barrackpore Trunk Road, Calcutta 700 035, India
Institute of Statisticians, 36 Churchgate Street, Bury St Edmunds, Suffolk IP33 1RD, UK (they set exams leading to a professional statistics qualification)
APPENDIX D
Statistical tables
Most textbooks provide the more commonly used statistical tables, including percentage points of the normal, t, χ² and F-distributions. In addition there are many more comprehensive sets of tables published, including:
Fisher, R. A. and Yates, F. (1963) Statistical Tables for Biological, Agricultural and Medical Research, 6th edn, Oliver and Boyd, London.
Neave, H. R. (1978) Statistics Tables, George Allen and Unwin, London.
Pearson, E. S. and Hartley, H. O. (1966) Biometrika Tables for Statisticians, Vol. 1, 3rd edn, Cambridge University Press, Cambridge.
There are also many specialized sets of tables, including, for example, tables of the binomial probability distribution, and numerous tables relating to quality control. A useful collection of mathematical functions and tables is:
Abramowitz, M. and Stegun, I. A. (1965) Handbook of Mathematical Functions, Dover, New York.
The following tables give only an abridged version of the common tables, as in my experience crude linear interpolation is perfectly adequate in most practical situations. The tables also give fewer decimal places than is often the case, because quoting an observed F-value, for example, to more than one decimal place usually implies spurious accuracy.
Table D.1  Areas under the standard normal curve

z       Prob (obs. > z) x 100
0.0     50
0.5     30.9
1.0     15.9
1.28    10.0
1.5     6.7
1.64    5.0
1.96    2.5
2.33    1.0
2.57    0.5
3.0     0.14
3.5     0.02
The tabulated values show the percentage of observations which exceed the given value, z, for a normal distribution with mean zero and standard deviation one; thus the values are one-tailed.
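These percentages come straight from the normal cdf; a quick Python check using the standard library's complementary error function (a sketch for verification, not part of the tables):

```python
import math

def upper_tail_percent(z):
    # 100 * Prob(obs. > z) for N(0, 1), via the complementary error function
    return 50 * math.erfc(z / math.sqrt(2))

for z in (0.0, 1.0, 1.96, 2.33):
    print(z, round(upper_tail_percent(z), 1))  # 50.0, 15.9, 2.5, 1.0, as tabulated
```

The same function, evaluated at any intermediate z, shows that the crude linear interpolation recommended above is indeed adequate to the accuracy quoted.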
Table D.2  Percentage points of Student's t-distribution

         Two-tailed probabilities
         0.10    0.05    0.02    0.01
         One-tailed probabilities (α)
v        0.05    0.025   0.01    0.005
1        6.34    12.71   31.82   63.66
2        2.92    4.30    6.96    9.92
3        2.35    3.18    4.54    5.84
4        2.13    2.78    3.75    4.60
6        1.94    2.45    3.14    3.71
8        1.86    2.31    2.90    3.36
10       1.81    2.23    2.76    3.17
15       1.75    2.13    2.60    2.95
20       1.72    2.09    2.53    2.84
30       1.70    2.04    2.46    2.75
60       1.67    2.00    2.39    2.66
∞        1.64    1.96    2.33    2.58
The one-tailed values t(α; v) are such that Prob(t_v > t(α; v)) = α for Student's t-distribution on v degrees of freedom. The two-tailed values are such that Prob(|t_v| > t(α; v)) = 2α, since the t-distribution is symmetric about zero. Interpolate for any value of v not shown above.
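The one-tailed/two-tailed relationship can be checked numerically; a Python sketch that integrates the t density by Simpson's rule (standard library only; the function name t_tail is my own):

```python
import math

def t_tail(t, v, steps=2000):
    # Prob(T_v > t) for Student's t on v df: 0.5 minus the integral of the
    # density over [0, t], evaluated by Simpson's rule (steps must be even)
    c = math.gamma((v + 1) / 2) / (math.sqrt(v * math.pi) * math.gamma(v / 2))
    f = lambda x: c * (1 + x * x / v) ** (-(v + 1) / 2)
    h = t / steps
    s = f(0) + f(t) + sum((4 if i % 2 else 2) * f(i * h) for i in range(1, steps))
    return 0.5 - s * h / 3

one = t_tail(2.23, 10)                     # about 0.025, matching the table
print(round(one, 3), round(2 * one, 2))    # 0.025 0.05
```

The two-tailed probability is exactly twice the one-tailed value, as the note above states, because the density is symmetric about zero.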
Table D.3  Percentage points of the χ²-distribution

             α
v        0.05     0.01
1        3.8      6.6
2        6.0      9.2
3        7.8      11.3
4        9.5      13.3
6        12.6     16.8
8        15.5     20.1
10       18.3     23.2
12       21.0     26.2
14       23.7     29.1
16       26.3     32.0
20       31.4     37.6
25       37.6     44.3
30       43.8     50.9
40       55.8     63.7
60       79.1     88.4
80       101.9    112.3
The values χ²(α; v) are such that Prob(χ²_v > χ²(α; v)) = α for the χ²-distribution on v degrees of freedom. The χ²-distribution is not symmetric, but the lower percentage points are rarely needed and will not be given here. Note that E(χ²_v) = v. Interpolate for any value of v not shown above or use an appropriate approximation. For large v, the χ²-distribution tends to N(v, 2v), but a better approximation can be obtained using the result that [√(2χ²_v) - √(2v - 1)] is approximately N(0, 1).
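The quality of this approximation can be checked against the tabulated 5% point for v = 40 (a quick Python check, for illustration only):

```python
import math

# For large v, sqrt(2 * X2) - sqrt(2v - 1) is approximately N(0, 1).
# Plugging in the tabulated value X2(0.05; 40) = 55.8 should therefore
# give something close to the upper 5% normal point, 1.64.
v, x2 = 40, 55.8
z = math.sqrt(2 * x2) - math.sqrt(2 * v - 1)
print(round(z, 2))  # 1.68, close to 1.64
```

The small discrepancy is the approximation error at v = 40; it shrinks as v grows, and run in reverse the same formula recovers percentage points for values of v beyond the table.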
Table D.4  Percentage points of the F-distribution

(a) 5% values (α = 0.05)

             v1
v2       1      2      4      6      8      10     15     30     ∞
1        161    199    225    234    239    242    246    250    254
2        18.5   19.0   19.2   19.3   19.4   19.4   19.4   19.5   19.5
3        10.1   9.5    9.1    8.9    8.8    8.8    8.7    8.6    8.5
4        7.7    6.9    6.4    6.2    6.0    6.0    5.9    5.7    5.6
5        6.6    5.8    5.2    4.9    4.8    4.7    4.6    4.5    4.4
6        6.0    5.1    4.5    4.3    4.1    4.1    3.9    3.8    3.7
8        5.3    4.5    3.8    3.6    3.4    3.3    3.2    3.1    2.9
10       5.0    4.1    3.5    3.2    3.1    3.0    2.8    2.7    2.5
12       4.7    3.9    3.3    3.0    2.8    2.7    2.6    2.5    2.3
15       4.5    3.7    3.1    2.8    2.6    2.5    2.4    2.2    2.1
20       4.3    3.5    2.9    2.6    2.4    2.3    2.2    2.0    1.8
30       4.2    3.3    2.7    2.4    2.3    2.2    2.0    1.8    1.6
40       4.1    3.2    2.6    2.3    2.2    2.1    1.9    1.7    1.5
∞        3.8    3.0    2.4    2.1    1.9    1.8    1.7    1.5    1.0

(b) 1% values (α = 0.01)

             v1
v2       1      2      4      6      8      10     15     30     ∞
1        4050   5000   5620   5860   5980   6060   6160   6260   6370
2        98.5   99.0   99.2   99.3   99.4   99.4   99.4   99.5   99.5
3        34.1   30.8   28.7   27.9   27.5   27.2   26.9   26.5   26.1
4        21.2   18.0   16.0   15.2   14.8   14.5   14.2   13.8   13.5
5        16.3   13.3   11.4   10.7   10.3   10.0   9.7    9.4    9.0
6        13.7   10.9   9.1    8.5    8.1    7.9    7.6    7.2    6.9
8        11.3   8.6    7.0    6.4    6.0    5.8    5.5    5.2    4.9
10       10.0   7.6    6.0    5.4    5.1    4.8    4.6    4.2    3.9
12       9.3    6.9    5.4    4.8    4.5    4.3    4.0    3.7    3.4
15       8.7    6.4    4.9    4.3    4.0    3.8    3.5    3.2    2.9
20       8.1    5.8    4.4    3.9    3.6    3.4    3.1    2.8    2.4
30       7.6    5.4    4.0    3.5    3.2    3.0    2.7    2.4    2.0
40       7.3    5.2    3.8    3.3    3.0    2.8    2.5    2.2    1.8
∞        6.6    4.6    3.3    2.8    2.5    2.3    2.0    1.7    1.0
The values F(α; v1, v2) are such that Prob(F_{v1,v2} > F(α; v1, v2)) = α for an F-distribution with v1 (numerator) and v2 (denominator) degrees of freedom. The F-distribution is not symmetric, and lower percentage points can be found using F(1 - α; v1, v2) = 1/F(α; v2, v1), where the order of the degrees of freedom is reversed. Interpolate for any values of v1, v2 not shown above.
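As a worked example of the reciprocal rule, the lower 5% point of F(4, 10) follows from the tabulated upper 5% point of F(10, 4) (a quick Python check, for illustration only):

```python
# F(0.95; 4, 10) = 1 / F(0.05; 10, 4);
# table (a) gives F(0.05; 10, 4) = 6.0 at v1 = 10, v2 = 4
f_upper = 6.0
f_lower = 1 / f_upper
print(round(f_lower, 2))  # 0.17
```

So an observed variance ratio below about 0.17 would be significant at the 5% level in the lower tail, without any separate table of lower percentage points being needed.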
References
Aitkin, M. and Clayton, D. (1980) The fitting of exponential, Weibull and extreme value distributions to complex censored survival data using GLIM. Appl. Stat., 29, 156–63.
Altman, D. G., Gore, S. M., Gardner, M. J. and Pocock, S. J. (1983) Statistical guidelines for contributors to medical journals. Br. Med. J., 286, 1489–93.
Anderson, C. W. and Loynes, R. M. (1987) The Teaching of Practical Statistics, Wiley, Chichester.
Anscombe, F. J. (1973) Graphs in statistical analysis. Am. Statn., 27, 17–21.
Armstrong, J. S. (1985) Long-Range Forecasting, 2nd edn, Wiley, New York.
Atkinson, A. C. (1985) Plots, Transformations and Regression, Oxford University Press, Oxford.
Barlow, R. E. and Proschan, F. (1975) Statistical Theory of Reliability and Life Testing, Holt, Rinehart and Winston, New York.
Barnard, G. A. (1986) Rescuing our manufacturing industry – some of the statistical problems. The Statistician, 35, 3–16.
Barnett, V. (1982) Comparative Statistical Inference, 2nd edn, Wiley, Chichester.
Barnett, V. and Lewis, T. (1985) Outliers in Statistical Data, 2nd edn, Wiley, Chichester.
Becker, R. A. and Chambers, J. M. (1984) S: An Interactive Environment for Data Analysis and Graphics, Wadsworth, Belmont, Cal.
de Bono, E. (1967) The Use of Lateral Thinking, Jonathan Cape, London. (Republished by Pelican Books.)
Box, G. E. P. (1980) Sampling and Bayes' inference in scientific modelling and robustness (with discussion). J. R. Stat. Soc. A, 143, 383–430.
Box, G. E. P. (1983) An apology for ecumenism in statistics, in Scientific Inference, Data Analysis and Robustness (eds G. E. P. Box, T. Leonard and C. F. Wu), Academic Press, New York.
Box, G. E. P., Hunter, W. G. and Hunter, J. S. (1978) Statistics for Experimenters, Wiley, New York.
Carver, R. (1978) The case against statistical significance testing. Harv. Ed. Rev., 48, 378–99.
Chambers, J. M., Cleveland, W. S., Kleiner, B. and Tukey, P. A. (1983) Graphical Methods for Data Analysis, Wadsworth, Belmont, Cal.
Chapman, M. (1986) Plain Figures, HMSO, London.
Chatfield, C. (1978) The Holt-Winters forecasting procedure. Appl. Stat., 27, 264–79.
Chatfield, C. (1982) Teaching a course in applied statistics. Appl. Stat., 31, 272–89.
Chatfield, C. (1983) Statistics for Technology, 3rd edn, Chapman and Hall, London.
Chatfield, C. (1984) The Analysis of Time Series, 3rd edn, Chapman and Hall, London.
Chatfield, C. (1985) The initial examination of data (with discussion). J. R. Stat. Soc. A, 148, 214–53.
Chatfield, C. (1986) Exploratory data analysis. Eur. J. Op. Res., 23, 5–13.
Chatfield, C. and Collins, A. J. (1980) Introduction to Multivariate Analysis, Chapman and Hall, London.
Chung, K. L. (1979) Elementary Probability Theory with Stochastic Processes, 3rd edn, Springer-Verlag, New York.
Cleveland, W. S. (1985) The Elements of Graphing Data, Wadsworth, Belmont, Cal.
Cleveland, W. S. and McGill, R. (1987) Graphical perception: the visual decoding of quantitative information on graphical displays of data. J. R. Stat. Soc. A, 150, 192–229.
Cleveland, W. S., Diaconis, P. and McGill, R. (1982) Variables on scatterplots look more highly correlated when the scales are increased. Science, 216, 1138–41.
Cochran, W. G. (1963) Sampling Techniques, Wiley, New York.
Cochran, W. G. and Cox, G. M. (1957) Experimental Designs, 2nd edn, Wiley, New York.
Cook, D. and Weisberg, S. (1982) Residuals and Influence in Regression, Chapman and Hall, London.
Cooper, B. M. (1976) Writing Technical Reports, Pelican, Harmondsworth.
Cox, D. R. (1958) The Planning of Experiments, Wiley, New York.
Cox, D. R. (1977) The role of significance tests. Scand. J. Stat., 4, 49–70.
Cox, D. R. (1981) Theory and general principles in statistics. J. R. Stat. Soc. A, 144, 289–97.
Cox, D. R. (1986) Some general aspects of the theory of statistics. Int. Stat. Rev., 54, 117–26.
Cox, D. R. and Hinkley, D. V. (1974) Theoretical Statistics, Chapman and Hall, London.
Cox, D. R. and Oakes, D. (1984) Analysis of Survival Data, Chapman and Hall, London.
Cox, D. R. and Snell, E. J. (1981) Applied Statistics, Chapman and Hall, London.
Cramer, H. (1946) Mathematical Methods of Statistics, Princeton University Press, Princeton.
Daniel, C. and Wood, F. S. (1980) Fitting Equations to Data, 2nd edn, Wiley, New York.
Deming, W. E. (1982) Quality, Productivity and Competitive Position, MIT Center for Advanced Engineering Study, Cambridge, Mass.
Diaconis, P. and Efron, B. (1985) Testing for independence in a two-way table: new interpretations of the chi-square statistic (with discussion). Ann. Stat., 13, 845–74.
Dineen, J. K., Gregg, P. and Lascelles, A. K. (1978) The response of lambs to vaccination at weaning with irradiated Trichostrongylus colubriformis larvae: segregation into 'responders' and 'non-responders'. Int. J. Parasit., 8, 59–63.
Dobson, A. J. (1983) An Introduction to Statistical Modelling, Chapman and Hall, London.
Draper, N. R. and Smith, H. (1981) Applied Regression Analysis, 2nd edn, Wiley, New York.
Duncan, A. J. (1974) Quality Control and Industrial Statistics, 4th edn, Irwin, Homewood, Ill.
Efron, B. and Gong, G. (1983) A leisurely look at the bootstrap, the jackknife and cross-validation. Am. Statn, 37, 36–48.
Ehrenberg, A. S. C. (1982) A Primer in Data Reduction, Wiley, Chichester.
Ehrenberg, A. S. C. (1984) Data analysis with prior knowledge, in Statistics: An Appraisal (eds H. A. David and H. T. David), Iowa State University Press, Iowa, pp. 155–82.
Erickson, B. H. and Nosanchuck, R. A. (1977) Understanding Data, McGraw-Hill Ryerson, Toronto.
Everitt, B. S. and Dunn, G. (1983) Advanced Methods of Data Exploration and Modelling, Heinemann, London.
Feller, W. (1968) An Introduction to Probability Theory and its Applications, 3rd edn, Wiley, New York.
Fienberg, S. E. and Tanur, J. M. (1987) Experimental and sampling structures: parallels diverging and meeting. Int. Stat. Rev., 55, 75–96.
Friedman, L. M., Furberg, C. D. and Demets, D. L. (1985) Fundamentals of Clinical Trials, 2nd edn, PSG Publishing, Littleton, Mass.
Gilchrist, W. (1984) Statistical Modelling, Wiley, Chichester.
Gnanadesikan, R. (1977) Methods for Statistical Data Analysis of Multivariate Observations, Wiley, New York.
Goodhardt, G. J., Ehrenberg, A. S. C. and Chatfield, C. (1984) The Dirichlet: a comprehensive model of buying behaviour (with discussion). J. R. Stat. Soc. A, 147, 621–55.
Gore, S. M. and Altman, D. G. (1982) Statistics in Practice, British Medical Association, London.
Gowers, E. (1977) The Complete Plain Words, Pelican, Harmondsworth.
Granger, C. W. J. and Newbold, P. (1986) Forecasting Economic Time Series, 2nd edn, Academic Press, New York.
Green, P. J. and Chatfield, C. (1977) The allocation of university grants. J. R. Stat. Soc. A, 140, 202–9.
Greenacre, M. (1984) Theory and Applications of Correspondence Analysis, Academic Press, London.
Hahn, G. J. (1984) Experimental design in the complex world. Technometrics, 26, 19–31.
Hamaker, H. C. (1983) Teaching applied statistics for and/or in industry, in Proceedings of the First International Conference on Teaching Statistics (eds D. R. Grey, P. Holmes, V. Barnett and G. M. Constable), Teaching Statistics Trust, Sheffield, pp. 655–700.
Hand, D. J. and Everitt, B. S. (1987) The Statistical Consultant in Action, Cambridge University Press, Cambridge.
Hawkes, A. G. (1980) Teaching and examining applied statistics. The Statistician, 29, 81–9.
Healey, D. (1980) Healey's Eye, Jonathan Cape, London.
Henderson, H. V. and Velleman, P. F. (1981) Building multiple regression models interactively. Biometrics, 37, 391–411.
Hoaglin, D. C., Mosteller, F. and Tukey, J. W. (eds) (1983) Understanding Robust and Exploratory Data Analysis, Wiley, New York.
Hollander, M. and Proschan, F. (1984) The Statistical Exorcist: Dispelling Statistics Anxiety, Marcel Dekker, New York.
Hollander, M. and Wolfe, D. A. (1973) Nonparametric Statistical Methods, Wiley, New York.
Hooke, R. (1983) How to Tell the Liars from the Statisticians, Marcel Dekker, New York.
Huff, D. (1959) How to Take a Chance, Pelican Books, London.
Huff, D. (1973) How to Lie with Statistics, 2nd edn, Penguin Books, London.
International Statistical Institute (1986) Declaration on professional ethics. Int. Stat. Rev., 54, 227–42.
Ishikawa, K. (1985) What is Total Quality Control? The Japanese Way (translated by D. J. Lu), Prentice Hall, Englewood Cliffs, NJ.
John, J. A. and Quenouille, M. H. (1977) Experiments: Design and Analysis, Griffin, London.
Johnson, N. L. and Kotz, S. (1969, 1970, 1972) Distributions in Statistics (four volumes), Wiley, New York, pp. 327–42.
Joiner, B. L. (1982a) Practising statistics or what they forgot to say in the classroom, in Teaching of Statistics and Statistical Consulting (eds J. S. Rustagi and D. A. Wolfe), Academic Press, New York.
Joiner, B. L. (1982b) Consulting, statistical, in Encyclopedia of Statistical Sciences, Vol. 2 (eds S. Kotz and N. L. Johnson), Wiley, New York, pp. 147–55.
Jones, B. (1980) The computer as a statistical consultant. BIAS, 7, 168–95.
Kanji, G. K. (1979) The role of projects in statistical education. The Statistician, 28, 19–27.
Kish, L. (1965) Survey Sampling, Wiley, New York.
Little, R. J. A. and Rubin, D. B. (1987) Statistical Analysis with Missing Data, Wiley, New York.
McCullagh, P. and Nelder, J. A. (1983) Generalized Linear Models, Chapman and Hall, London.
McNeil, D. R. (1977) Interactive Data Analysis, Wiley, New York.
Manly, B. F. J. (1986) Multivariate Statistical Methods, Chapman and Hall, London.
Mann, N. R., Schafer, R. E. and Singpurwalla, N. D. (1974) Methods for Statistical Analysis of Reliability and Life Data, Wiley, New York.
Mantle, M. J., Greenwood, R. M. and Currey, H. L. F. (1977) Backache in pregnancy. Rheumatology and Rehabilitation, 16, 95–101.
Mardia, K. V., Kent, J. T. and Bibby, J. M. (1979) Multivariate Analysis, Academic Press, London.
Meier, P. (1986) Damned liars and expert witnesses. J. Am. Stat. Assoc., 81, 269–76.
Miller, R. G. Jr (1986) Beyond ANOVA, Basics of Applied Statistics, Wiley, Chichester.
Montgomery, D. C. (1984) Design and Analysis of Experiments, 2nd edn, Wiley, New York.
Morgan, B. J. T. (1984) Elements of Simulation, Chapman and Hall, London.
Morrison, D. E. and Henkel, R. E. (1970) The Significance Test Controversy, Butterworths, London.
Moser, C. A. (1980) Statistics and public policy. J. R. Stat. Soc. A, 143, 1–31.
Moser, C. A. and Kalton, G. (1971) Survey Methods in Social Investigation, Heinemann, London.
Nelder, J. A. (1984) Statistical computing. J. R. Stat. Soc. A, 147, 151–60.
Nelder, J. A. (1986) Statistics, science and technology: The Address of the President (with Proceedings). J. R. Stat. Soc. A, 149, 109–21.
Nelson, W. (1982) Applied Life Data Analysis, Wiley, New York.
Patterson, H. D. and Silvey, V. (1980) Statutory and recommended list trials of crop varieties in the UK (with discussion). J. R. Stat. Soc. A, 143, 219–52.
Peters, W. S. (1987) Counting for Something: Principles and Personalities, Springer-Verlag, New York.
Plewis, I. (1985) Analysing Change: Measurement and Explanation using Longitudinal Data, Wiley, Chichester.
Pocock, S. J. (1983) Clinical Trials: A Practical Approach, Wiley, Chichester.
Preece, D. A. (1981) Distributions of final digits in data. The Statistician, 30, 31–60.
Preece, D. A. (1984) Contribution to the discussion of a paper by A. J. Miller. J. R. Stat. Soc. A, 147, 419.
Preece, D. A. (1986) Illustrative examples: illustrative of what. The Statistician, 35, 33–44.
Priestley, M. B. (1981) Spectral Analysis and Time Series, Vols 1 and 2, Academic Press, London.
Pullum, T. W., Harpham, T. and Ozsever, N. (1986) The machine editing of large-sample surveys: the experience of the World Fertility Survey. Int. Stat. Rev., 54, 311–26.
Ratkowsky, D. A. (1983) Non-Linear Regression Modelling, Marcel Dekker, New York.
Rustagi, J. S. and Wolfe, D. A. (eds) (1982) Teaching of Statistics and Statistical Consulting, Academic Press, New York.
Ryan, B. F., Joiner, B. L. and Ryan, T. A. Jr (1985) Minitab Handbook, 2nd edn, Duxbury Press, Boston.
Schumacher, E. F. (1974) Small is Beautiful, Sphere Books, London.
Scott, J. F. (1976) Practical projects in the teaching of statistics at universities. The Statistician, 25, 95–108.
Silverman, B. W. (1985) Some aspects of the spline smoothing approach to non-parametric regression curve fitting (with discussion). J. R. Stat. Soc. B, 47, 1–52.
Snedecor, G. W. and Cochran, W. G. (1980) Statistical Methods, 7th edn, Iowa State University Press, Iowa.
Snee, R. D. (1974) Graphical display of two-way contingency tables. Am. Statn, 28, 9–12.
Snell, E. J. (1987) Applied Statistics: A Handbook of BMDP Analyses, Chapman and Hall, London.
Sprent, P. (1970) Some problems of statistical consultancy. J. R. Stat. Soc. A, 133, 139–64.
Steinberg, D. M. and Hunter, W. G. (1984) Experimental design: review and comment (with discussion). Technometrics, 26, 71–130.
Tufte, E. R. (1983) The Visual Display of Quantitative Information, Graphics Press, Cheshire, Conn.
Tukey, J. W. (1977) Exploratory Data Analysis, Addison-Wesley, Reading, Mass.
Tukey, J. W. and Mosteller, F. (1977) Data Analysis and Regression, Addison-Wesley, Reading, Mass.
Velleman, P. F. and Hoaglin, D. C. (1981) Applications, Basics, and Computing of Exploratory Data Analysis, Duxbury Press, Boston, Mass.
Wainwright, G. (1984) Report-Writing, Management Update Ltd, London.
Weisberg, S. (1985) Applied Linear Regression, 2nd edn, Wiley, New York.
Wetherill, G. B. (1982) Elementary Statistical Methods, 3rd edn, Chapman and Hall, London.
Wetherill, G. B. (1986) Regression Analysis with Applications, Chapman and Hall, London.
Wetherill, G. B. and Curram, J. B. (1985) The design and evaluation of statistical software for microcomputers. The Statistician, 34, 391–427.
Author index
This index does not include entries in the reference section (pp. 250–54) where source references may be found. Where the text refers to an article or book by more than one author, only the first-named author is listed here.
Abramowitz, M. 246 Aitkin, M. 153 Altman, D. G. 76, 219 Anderson, C. W. 5, 83, 173 Andrews, D. F. 165 Anscombe, F. J. 121 Armstrong, J. S. 199 Atkinson, A. C. 202 Barlow, R. E. 231 Barnard, G. A. 231 Barnett, V. 5, 28, 29, 56, 58 Bayes, Thomas 58 Becker, R. A. 64 de Bono, E. 54 Box, G. E. P. 7, 15, 58, 181, 216 Broadbent, S. 165 Burch, P. R. J. 165 Cairncross, Alex 227 Carver, R. 53 Chambers, J. M. 40 Chapman, M. 40 Chatfield, C. 29, 34, 45, 47, 126, 136, 137, 145, 159, 165, 181, 227, 228 Chernoff, H. 165 Chung, K. L. 185 Cleveland, W. S. 40, 120 Cochran, W. G. 177, 213, 216 Cook, D. 202 Cooper, B. M. 74 Corlett, T. 202 Cox, D. R. 5, 21, 23, 45, 53, 56, 68, 83, 95, 96, 124, 147, 169, 181, 189, 216 Cramer, H. 95 Daniel, C. 16 Deming, W. E. 231 Diaconis, P. 107 Dineen, J. K. 97
Dobson, A. J. 209 Draper, N. R. 197, 202 Duncan, A. J. 231 Efron, B. 190 Ehrenberg, A. S. C. 15, 34, 40, 57, 74 Erickson, B. H. 45 Everitt, B. S. 126 Feller, W. 185 Fienberg, S. E. 12 Finney, D. J. 144 Fisher, Sir Ronald 57, 246 Friedman, L. M. 218 Galton, Francis 57 Gilchrist, W. 16, 21 Gnanadesikan, R. 126 Goodhardt, G. J. 20 Gore, S. M. 218 Gowers, E. 74 Granger, C. W. J. 228 Green, P. J. 121 Greenacre, M. 222 Greenfield, Tony 70 Hahn, G. J. 216 Hamaker, H. C. 9 Hand, D. J. 70 Hawkes, A. G. 166 Healey, D. 12 Henderson, H. V. 119 Hoaglin, D. C. 45, 190 Hocking, R. R. 119 Hollander, M. 77, 194 Hooke, R. 77, 171 Huff, D. 77, 166 Ishikawa, K. 231
John, J. A. 216 Johnson, N. L. 187 Joiner, B. L. 70 Jones, B. 70 Kanji, G. K. 83 Kendall, M. G. 182 Kish, L. 214 Kotz, S. 182 Kruskal, W. H. 182 Little, R. J. A. 31 Louis, Pierre 216 McCullagh, P. 209 McNeil, D. R. 45 Manly, B. F. J. 222 Mann, N. R. 231 Mantle, M. J. 130 Mardia, K. V. 222 Meier, P. 167 Miller, R. G. Jr 201, 205 Montgomery, D. C. 216 Morgan, B. J. T. 170 Morrison, D. E. 53 Moser, C. A. 58, 214
Neave, H. R. 246 Nelder, J. A. 15, 51, 59 Nelson, W. 231 Neyman, Jerzy 57 Nightingale, Florence 57 Oldham, P. D. 166 Patterson, H. D. 142, 151 Pearson, Egon 57, 246 Pearson, Karl 57 Peters, W. S. 58 Plewis, I. 12 Pocock, S. J. 216, 218 Preece, D. A. 31, 81, 166, 199 Priestley, M. B. 228 Pullum, T. W. 29 Ratkowsky, D. A. 197 Rustagi, J. S. 70 Ryan, B. F. 62, 232 Schumacher, E. F. 78 Scott, J. F. 174 Silverman, B. W. 201 Snedecor, G. W. 181 Snee, R. D. 95, 107 Snell, E. J. 63, 109 Sprent, P. 70 Steinberg, D. M. 216 Tufte, E. R. 38, 40 Tukey, J. W. 44, 45, 183 Velleman, P. F. 45 Wainwright, G. 74 Wald, Abraham 58 Weisberg, S. 85, 121, 201 Wetherill, G. B. 20, 59, 94, 198, 200, 201 Youden, W. J. 188
Subject index
Abstract journals 66 Acceptance sampling 228 Addition law 184 Additive factors 204 Alias 215 Algorithm 60 Alternative hypothesis 191 American Statistical Association 66, 244 Analysis of covariance 205, 214 Analysis of deviance 209 Analysis of variance see ANOVA Andrews curves 37 ANOVA 104–5, 181, 193, 196, 202–5 ARIMA model 159, 224, 226 Autocorrelation function 223, 226 Autoregressive process 224 Backache data 130 Backward elimination 198 Balanced incomplete block design 177, 215 Bar chart 88, 89 Bayesian approach 56 Bayes theorem 56, 191 Bernoulli distribution 186 Bimodal distribution 89 Binary data 108, 136, 152, 208 Binary variable 24 Binomial distribution 186 Blocking 214 BMDP package 63 Bonferroni correction 52 Bootstrapping 191 Box-Cox transformation 43, 115 Box-Jenkins modelling 159, 226 Box plot 36, 104, 115, 138 Boyle's law 9, 16 Calibration 201 Canonical correlations 222 Carry-over effects 214
Case studies 83 Categorical data 106, 193 Categorical variable 24 cdf 181, 185 Censored data 108, 143, 151 Census 210 Chernoff faces 37 Chi-square distribution 185, 187, 248 Chi-square test 107, 152, 166, 167, 170, 193 CI 181 Clinical trial 216–19 Cluster analysis 41, 221 Cluster sampling 212 Coding 26 Coefficient of determination 199 Collaboration 68, 69 Comparative experiment 174, 214 Competing risks 109 Complete factorial experiment 151, 215 Composite design 216 Computers 59–64 Computer packages 60–64 Concomitant variable 205 Conditional probability 166, 184 Confidence interval 189 Confirmatory analysis 15 Confounding 12, 215 Consistent estimator 188 Consulting 68–70 Contingency table 106 Continuous distribution 185 Continuous variable 24 Control chart 228 Cook's distance 207 Correlation coefficient 33, 117–21, 199, 200–201, 219 Correspondence analysis 41, 107, 221 Cost-benefit analysis 78 Costs 9 Count data 106, 193, 208
Covariance matrix 219 Covariate 205 Cross-validation 191 Cumulative distribution function 185 Cusum chart 229 Data analytic methods 40 Data collection 10–12, 174–7 Data mining 16 Data processing 25–7 Data scrutiny 23, 126 Decision theory 56 Definitive analysis 14, 48–58 Degrees of freedom see DF Demographic variable xii Dependent variable see Response variable Descriptive statistics 32–40, 84–92, 182–4 Design matrix 205 Design of experiments 10–12, 214–16 Deviance 209 Deviation 18 DF 181, 185 Discrete distribution 185 Discrete variable 24 Distribution-free methods 50, 193 Dot plot 36, 111 Double-blind trial 217 Durbin-Watson test 226
EDA 44 Editing 26 Ehrenberg's approach 57 EM algorithm 31, 188 Epidemiology 218 Erlang distribution 187 Error 18, 28–30 of type I 192 of type II 192 of type III 8 Estimation 19, 188–91 Estimator 188 Ethical problems 69, 70, 218 Event 184 Expectation or expected value 185 Experimental unit 214 Experiments 10–12, 83 Expert systems 64, 70 Explanatory variable 25, 194 Exploratory analysis 15 Exploratory data analysis 44 Exponential distribution 186–7 Exponential family 208 Exponential growth 160, 224 Exponential smoothing 227 Factor analysis 220 Factorial experiment 214 F-distribution 187, 248 Fence 183 Finite population correction 211 Fixed effects model 203 Forecasting 154–9, 227 Forward selection 198 Fractional factorial experiment 215 Frame 210 Frequency distribution 88 Frequentist approach 56 F-test 203 Gamma distribution 186–7 Gauss-Markov theorem 206 Generalized least-squares 206 Generalized linear model 207–9 General linear model 205–7 GENSTAT 62 Geometric distribution 186 GLIM 63, 239–43 Goodness-of-fit test see Chi-square test Graphs 36–8 Hat matrix 206 Hazard function 230 Heteroscedastic data 200 Hierarchical design 215 Hinge 183 Histogram 36, 87 Holt-Winters method 159, 227 H-spread 183 Hypergeometric distribution 186 Hypothesis testing see Significance tests
IDA 22–47 Incomplete block design 177 Independent events 184 Independent variable 198 Index journals 66 Indicator variable 205 Inductive inference 184 Inference 184, 188 Influential observation 20, 30, 207 Institute of Mathematical Statistics 60, 66, 245 Interaction 204 Interim tests 52
Subject index

International Statistical Institute 66, 244
Interquartile range 182
Interval estimate 189
Interval scale 24
Inversion 28
Jackknifing 190
Journals 65–7, 165
Judgemental sampling 212
Kalman filter 225
Kruskal–Wallis test 194
Kurtosis 183
Lateral thinking 54, 80
Latin square 142
Least absolute deviations 189
Least significant difference 204
Least squares 189
L-estimators 190
Level 214
Level of significance 191
Leverage 206
Library 65–7, 165
Lie factor 38
Life testing 229
Likelihood function 188
Linear regression 122, 195–7
Link function 208, 209
Literature search 9
Logistic regression 208
Logistic response model 111
Logit analysis 164, 172, 208
Log-linear model 107, 111, 173, 208
Lognormal distribution 187
Longitudinal data 11, 12
Main effect 204
Mann–Whitney test 114, 194
MANOVA 221
Mathematical model 15
Maximum likelihood estimation 188
Mean
  population 185
  sample 32, 182
Mean square 181, 203
Median 32, 182
M-estimators 190
Meta analysis 15
MINITAB 62, 232–9
Missing observations 26, 30–31
Mode 182
Model building 15–21
Model formulation 17
Model validation 17, 19
Moments, method of 188
Moving average process 224
MS 181, 203
Multicollinearity 198, 206
Multidimensional scaling 41, 221
Multiple comparisons 204
Multiple regression 121–5, 197–9
Multiplication law 184
Multistage sampling 212
Multivariate analysis 40–41, 219–22
Multivariate normal 187, 221
Mutually exclusive events 184
Negative binomial distribution 88, 186
Nested design 215
Noise 16
Nominal variable 24
Nonlinear model 197
Nonparametric approach 50, 193
Nonparametric regression 201
Normal distribution 185–7, 189, 247
Normality, tests for 20
Normal probability paper 183
Nuisance factor 12
Null hypothesis 191
Numeracy 75–8
Objectives 8, 69
Observational study 10–11
Official statistics 67, 75, 85, 91, 165
One-tailed (or one-sided) test 191
One-way ANOVA 103, 116, 202–4, 215
Opinion ratings 24
Optimal design 216
Order statistics 183
Ordinal variable 24
Orthogonal design 198
Outlier 20, 28–30, 50, 102, 183, 207
Paired comparisons test 192, 193
pdf 181, 185
Percentile 183
Pilot survey 176, 210
Placebo 217
Point estimate 188
Poisson distribution 88, 186
Polynomial regression 197
Postal survey 210, 213
Power 192
Precision 31
Prediction interval 196
Predictor variable 194
Preliminary analysis 14, 45
PRESS approach 191
Primary assumption 18
Principal component analysis 40, 219
Prior information 9, 69
Probability 161, 184–5
Probability density function 185
Probability generating function 185
Probability paper 183
Probability plot 37, 183–4
Probit analysis 208
Problem formulation 8–9
Problem-oriented exercises 82
Projects 83
Proportional hazards model 109, 153, 230
Prospective trial 11
Protocol 217
P-value 191
Qualitative variable 25
Quality control 166, 228–31
Quality of data 27
Quantile 183–4
Quantitative variable 23–4
Quartile 183
Questionnaire design 176, 212–13
Quota sampling 211, 212
Random effects model 204
Random numbers 163, 169–70
Random sampling 211
Randomization 12, 214, 217
Randomized block experiment 144, 147, 176, 215
Range 33, 182
Rankits 184
Ranks 114, 194
Ratio scale 24
Regression 18, 42, 117–25, 194–202
  linear 122, 195–7
  multiple 121–5, 197–9
  polynomial 197
Reliability 228–31
Replication 214
Report writing 71–4
Resampling 190
Residual 19, 20, 206
  plot 124
Response surface 216
Response variable 25, 194
Retrospective trial 11
Ridge regression 198
Robust methods 20, 30, 50, 190
Rounding 34
Royal Statistical Society 66, 244
S package 63
Sample
  mean 32
  size, choice of 11
  space 184
  survey 10–12, 210–14
Sampling unit 210
SAS package 64
Scatter diagram 37
Science Citation Index 67, 165
Seasonality 224
Secondary assumption 18
Sequential design 218
Signal 16
Sign test 193
Significance tests 42, 51–3, 191–4
Significant sameness 51
Simpson's paradox 171
Simulation 170
Singular value decomposition 222
Skew distribution 33, 88, 90, 115
Skewness, coefficient of 183
Social Trends 67
Software 61
Sources of published data 67
Spearman's rank correlation 200
Spectrum 223, 226
Splines 201
Split-plot design 215
SPSS package 63
Standard deviation 33, 182
Standard error 189
Standard normal distribution 185
State space model 224
Statistic 188
Statistical information 67
Statistical model 16
Stem-and-leaf plot 36, 87, 90
Step 183
Stratified sampling 211
Studentized residuals 206
Subjective judgement 78
Sufficient 188
Summary statistics 32
Surveys see Sample survey
Systematic sampling 212
Tables 34–6, 91
Taguchi methods 215
Target population 175, 210
t-distribution 187, 247
Teaching statistics ix, 3
Technique-oriented exercises 82
Telephone survey 210, 214
Test of significance see Significance tests
Test statistic 191
Time plot 37, 154
Time-series analysis 154–60, 198, 222–8
Transcription error 28
Transformations 43–4, 90
Treatment combination 214
Trend 224
Trimmed mean 182, 190
t-test 192
Two-sample t-test 101, 111, 112, 192
Two-tailed (or two-sided) test 191
Two-way ANOVA 142, 145, 204, 215
Two-way table 110, 138, 164
Type I error 192
Type II error 192
Type III error 8
Typing 26, 174
Unbiased 188
Uniform distribution 186
Units 210
Variance 33, 189
  analysis of see ANOVA
  components model 204
Weibull distribution 186–7, 230
Weighted least squares 206
Welch test 111
Wilcoxon signed rank test 194
Winsorization 30, 190