Abraham, Manja D., Hendrien L. Kaal, & Peter D.A. Cohen (2002), Licit and illicit drug use in the Netherlands 2001. Amsterdam: CEDRO/Mets en Schilt. Pp. 61-79.
© Copyright 2002 CEDRO Centrum voor Drugsonderzoek.
Licit and illicit drug use in the Netherlands 2001
Chapter 2: Response and representativity
This chapter describes the representativity and the quality of the data obtained and used in this research. Ideally, the response data represent the research population down to the level of a range of demographic variables such as age, gender, marital status and address density. It is therefore important that the sample is drawn in a way that secures this representativity, and that the subsequent procedures do not introduce errors that jeopardise it. The first part of the chapter focuses on the extent to which the sample and the response are representative of the population. The next part deals with the different types of response and non-response. Finally, the quality of the response is considered in some detail, with a focus on non-valid cases and erroneous data entry.
Table 2.1 shows an overview of the research population, the sample and the response for each stratum studied. It presents figures for the 13.5 million Dutch citizens aged 12 and over who were registered at the Municipal Population Registry (GBA) on January 1st 2001 (the research population). The total number of registered Dutch inhabitants at that moment was almost 16 million.
As explained in chapter 1, a sample of 40,573 persons was drawn by means of a two-stage stratified sampling method, including oversampling of those aged 12 to 19 and those living in Amsterdam or Rotterdam. In all, 17,655 people were successfully interviewed. The response in Amsterdam and Rotterdam is lower than could be expected on the basis of their share in the sample.
Whether and how the sample and the response are representative of the population is illustrated in tables 2.2 to 2.9. Each table gives, for the total sample and each sub-sample, the population, sample and response figures by a number of demographic characteristics.
First, the population is compared with the sample on age, gender and marital status. The sampling method and size ensured that the sample covers the population. However, as a result of the oversampling of the age group 12-19, the relative sizes of the age groups 12-15 and 16-19 in the sample are almost twice those of the same groups in the population. Directly related to this is a dissimilarity between the sample and the population in marital status. This dissimilarity is of minor importance because its effects can be predicted and are corrected by weighting.
Second, attention should be paid to the comparison of the sample with the response. Dissimilarities between sample and response can result from selective non-response, which can bias the outcomes of the survey. The distributions of sample characteristics are compared and tested by means of Student's t-tests (age) and chi-square tests (gender and marital status) at p<.01. These comparisons reveal significant differences in age, gender and marital status. A number of patterns emerge in all sub-samples. In general, youngsters (age 12-19) responded well, while young adults (age 20-29) and elderly people (age 60+) were less willing to respond. Women were more willing to participate in the survey than men. Finally, compared to unmarried and married people, divorced and widowed people co-operated less often. These patterns have also been found in other, similar surveys, e.g. the NPO 1997 survey and the POLS surveys performed by Statistics Netherlands.
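The kind of sample-versus-response test described above can be sketched as a chi-square goodness-of-fit test. The gender counts below are invented for illustration; the report's own tests were run on the actual sample and response distributions.

```python
# Illustrative chi-square test: does the gender distribution of the
# response differ significantly from that of the sample?
# All counts here are hypothetical, not taken from the report.

def chi_square_statistic(observed, expected):
    """Sum of (O - E)^2 / E over all categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

sample_counts = [20000, 20573]      # hypothetical (male, female) in the sample
response_counts = [8300, 9355]      # hypothetical (male, female) in the response

# Expected response counts if the response mirrored the sample exactly
n_response = sum(response_counts)
n_sample = sum(sample_counts)
expected = [n_response * c / n_sample for c in sample_counts]

chi2 = chi_square_statistic(response_counts, expected)

# Critical value of the chi-square distribution with 1 degree of
# freedom at p < .01 is about 6.635.
CRITICAL_1DF_P01 = 6.635
print(f"chi2 = {chi2:.2f}, significant: {chi2 > CRITICAL_1DF_P01}")
```

With these invented counts the statistic far exceeds the critical value, i.e. women would be significantly over-represented in the response, the same direction as the pattern reported above.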
Although various precautions were taken to increase response (see chapter 1), the risk of selectivity generated by non-response could not be eliminated. In an effort to minimise the effects of this selectivity, the data were weighted for a limited number of population characteristics (age, gender, marital status and address density: post-stratification). Each individual weight is calculated so that the joint distribution of these combined characteristics in the response corresponds exactly to that in the population. Weighting can be expected to reduce other possible biases in the sample characteristics as well. Nonetheless, a thorough non-response survey is necessary to shed more light on non-response bias.
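The post-stratification idea can be sketched as follows: each respondent receives a weight equal to the population share of their cell divided by the response share of that cell, so that the weighted response distribution matches the population exactly. The cell definitions and all counts below are invented for illustration; the actual weighting used the full combination of age, gender, marital status and address density.

```python
# Post-stratification sketch with two illustrative cells per dimension.
from collections import Counter

population = {  # hypothetical population counts per (age group, gender) cell
    ("12-19", "m"): 800_000, ("12-19", "f"): 760_000,
    ("20+", "m"): 5_900_000, ("20+", "f"): 6_040_000,
}
respondents = (  # hypothetical cell membership of each respondent
    [("12-19", "m")] * 2600 + [("12-19", "f")] * 2900 +
    [("20+", "m")] * 5400 + [("20+", "f")] * 6755
)

pop_total = sum(population.values())
resp_counts = Counter(respondents)
resp_total = len(respondents)

weights = {
    cell: (population[cell] / pop_total) / (resp_counts[cell] / resp_total)
    for cell in population
}

# After weighting, each cell's weighted share equals its population share.
for cell, w in weights.items():
    weighted_share = resp_counts[cell] * w / resp_total
    assert abs(weighted_share - population[cell] / pop_total) < 1e-12
```

Note that the oversampled 12-19 cells end up with weights below 1 and the under-responding adult cells with weights above 1, which is exactly how oversampling and selective non-response are corrected at the same time.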
2.3 Response and non-response
The complex design of the NPO makes it difficult to categorise and analyse all different forms of response and non-response. For better insight, the gross sample has been categorised into four subclasses on the basis of the following criteria:
The mixed design of the NPO complicates the analysis of non-response. The many ways in which respondents can participate in the survey led to a change in non-response categories. Thus, the response and non-response categories differ between the CAPI and MM samples. As the CAPI method used in 2001 was similar to the one used in 1997, it was largely predictable which non-response categories would be found for the CAPI sample. On the basis of prior experience, CAPI non-response was subdivided into the following five categories.
Frame errors in the CAPI sample are straightforward: moved, unknown at address, address not found, person deceased, and others. The response and non-response situation is less transparent for the MM sample. MM non-response is subdivided into the same five categories, but the reasons why people fall into a certain category can differ from those in the CAPI sample. For example, the category 'no contact achieved' includes no reaction to the invitation to participate, no phone answered, or no questionnaire returned in the reminder phase. The category 'contact achieved', including 'soft refusals' where the contact did not result in any interview, now covers not only appointments without successful interviews, but also people who indicated their desired mode of interview but never actually completed it.
Table 2.10 shows the classification of the gross sample into the four response categories described above, per sub-sample. The proportion of successful interviews in the gross sample in the highest address density areas, and especially in the cities, contrasts with the rest of the Netherlands. The gross Amsterdam sample resulted in 34.5 per cent successful interviews, the Rotterdam sample in 40.8 per cent and the other highest-address-density sample in 46.6 per cent. The other rates are higher, up to 51.2 per cent. The high number of unused addresses in Amsterdam paradoxically followed from the need to draw a second, additional sample because of the low response in the initial sample. The number of frame errors is generally lowest in the low address density sub-samples. The proportion of non-valid cases is lowest in Amsterdam, which can be explained by the relatively large share of CAPI respondents in the Amsterdam sample.
The NPO 2001 survey is not unique in terms of its (non-)response patterns. First, the low response rate of 47.1 per cent of the valid gross sample is undesirable but common in the Netherlands in 2001. The percentage is in line with findings in the CBS POLS 2001 study (between 50 and 55 per cent; personal communication, Beukenhorst). It is becoming increasingly hard to reach and question a substantial part of a sample: the response rate in the 1997 NPO survey was 59 per cent, and in the CBS POLS 1998 survey it was 58.6 per cent (Abraham and Jol 2000). Second, the lower response in the large cities and other areas with higher address density is consistent with expectations; the same phenomenon can be found in the CBS POLS 2001 survey and in the NPO held in 1997.
Given the low response rate in this survey, as in all sample surveys, the need for a sound non-response analysis is self-evident. The sample frame used for the non-response survey consisted of the non-response category minus the non-valid cases. The effect of non-response on drug use prevalence estimates was tested at the national level and for the methodological subgroups (CAPI versus MM). A concise report of the non-response survey can be found in chapter 3.
2.4 Data quality
The complex features of the NPO survey method might be suspected to introduce new sources of error. For that reason, three data quality checks were carried out after the data had been received from the fieldwork organisation. The first check tested whether the persons in the database were 'legitimate', i.e. whether they are the same persons as those drawn in the initial sample; the second and third checks concern item missings and logical errors, respectively.
The first validity check looks at the legitimacy of the person answering the questionnaire. To avoid statistically biased and inaccurate estimates, the persons drawn in the two-stage stratified sample must participate in the survey themselves, not someone selected by them. Interviews completed by people who were not selected in the sample might bias the results of the survey; a paper questionnaire, for example, is easily passed on to an acquaintance who is interested in the research subject or the incentive. A questionnaire is considered valid if the person answering it matches the gender and age characteristics given in the sample (up to three years older or younger is allowed). Because of non-matching gender and age, 627 persons were removed from the data (initially 18,471 cases). Of these 627 interviews, 29 were completed by CAPI, 2 by telephone, 126 by floppy disk, 2 by Internet and 468 by paper questionnaire. Remarkably, the intervention of an interviewer (CAPI, telephone) increases the probability that the right person is interviewed, but is not a sufficient condition. For the sake of clarity it is important to remember that all interviews are self-reported. To establish the respondents' legitimacy, the assumption was made that respondents stated their true gender and age in the interview. It stands to reason that if this is not the case, the reliability of other (self-reported) answers could also be doubted.
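The legitimacy rule described above can be sketched as a simple predicate: keep a questionnaire only if the reported gender matches the sampled person's and the reported age is within three years of the sampled age. The field names and records below are assumptions for illustration; the report does not specify the data layout.

```python
# Sketch of the first validity check (legitimacy of the respondent).

def is_legitimate(sampled, reported, max_age_gap=3):
    """True if the respondent plausibly is the sampled person."""
    return (sampled["gender"] == reported["gender"]
            and abs(sampled["age"] - reported["age"]) <= max_age_gap)

print(is_legitimate({"gender": "f", "age": 34}, {"gender": "f", "age": 36}))  # True
print(is_legitimate({"gender": "f", "age": 34}, {"gender": "m", "age": 34}))  # False: gender
print(is_legitimate({"gender": "m", "age": 20}, {"gender": "m", "age": 25}))  # False: gap > 3
```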
The second validity check quantifies item non-response by counting the number of missings on designated variables. A variable missing occurs when the respondent skips a question, willingly or in error. Some variables are more prone to item non-response than others; income, for example, is left open in 18 per cent of the applicable cases. This item non-response analysis focuses on the lifetime drug use variables. It was decided that an acceptable data set could contain at most three missings out of the 14 lifetime drug use questions, a protocol in line with the previous NPO. As a result, 183 completed questionnaires were found to be non-valid. Of these, only 5 are CAPI, 7 disk and 9 telephone; most missings were found in the paper questionnaires (162 cases). This is not surprising, because this is the only interview method without the possibility of checking and cautioning when a routing mistake is made or a question has been left open.
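The threshold rule above (at most three missings out of the 14 lifetime questions) can be sketched as a simple count. The representation of answers, with `None` marking a skipped question, is an assumption for illustration.

```python
# Sketch of the second validity check (item non-response).
MAX_MISSINGS = 3
N_LIFETIME_QUESTIONS = 14

def valid_on_missings(lifetime_answers):
    """lifetime_answers: list of 14 answers, None where skipped."""
    assert len(lifetime_answers) == N_LIFETIME_QUESTIONS
    return sum(a is None for a in lifetime_answers) <= MAX_MISSINGS

case_kept = [True, False] * 6 + [None, None]   # 2 missings -> kept
case_dropped = [None] * 4 + [False] * 10       # 4 missings -> non-valid
print(valid_on_missings(case_kept), valid_on_missings(case_dropped))  # True False
```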
The last validity check examined the consistency of answers to questions in the questionnaire. A respondent's answer is considered inconsistent when it contradicts an earlier answer, e.g. when the age of first (or last) use of a specific drug is higher than the age of the respondent. Once again, up to three mistakes were allowed. In total, 21 paper interviews had to be dismissed for illogical answers.
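Using the example given above (age of first use exceeding the respondent's age), the consistency check can be sketched as follows. The record layout is an assumption for illustration; the actual check covered further contradictions in the questionnaire.

```python
# Sketch of the third validity check (logical consistency).

def count_inconsistencies(record):
    """Count answers contradicting the respondent's reported age."""
    errors = 0
    for drug, age_first in record["age_first_use"].items():
        if age_first is not None and age_first > record["age"]:
            errors += 1  # first use cannot postdate the interview
    return errors

record = {"age": 25,
          "age_first_use": {"cannabis": 17, "cocaine": 31, "ecstasy": None}}
n_errors = count_inconsistencies(record)
print(n_errors, "dismissed" if n_errors > 3 else "kept")  # 1 kept
```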
Table 2.11 tabulates the number of non-valid and dismissed questionnaires by mode of interviewing. It should be stressed that modes were not randomly assigned to respondents, and therefore the respondent groups are not of equal size for each mode. For the Netherlands, percentages are given to indicate the relative occurrence of non-valid and dismissed questionnaires in each mode.