
Limitations

Every research project faces certain limitations. This section outlines the limitations deriving from general methodological problems as well as from the specific circumstances of our project. This critical reflection on the applied research design is important for understanding the quality of the evidence provided and the extent to which its interpretation is restricted.

Inclusion and exclusion of factors in the survey

Several perspectives in the field of sociology perceive knowledge not as fact but as socially constructed by a scientific community. It is therefore important to consider the context of research interests, which influence the generation of knowledge. Since those influences are partly reflected in a research design, reflecting on the research design itself is essential to understand how the applied design and methodology shaped the findings (Knoblauch, 2008).

Since the questionnaires in our research form the framework for assessing public opinion on wind energy, it is necessary to evaluate which aspects were not included, in order to estimate whether aspects that might play an important role in influencing the social acceptance of wind energy were left out of the applied research designs. To identify missing factors and subjects, the questionnaires were compared with the factors found in the literature review of this project. This comparison shows that the questionnaires of both research projects covered a wide range of factors and that nearly all factors listed in the review could be found in the questionnaires. The remaining missing aspects are pointed out below; a detailed overview of the factors not included can be found in the Annex.

One of the major findings of the review is that factors concerning procedural fairness were only scarcely included in the questionnaires. The questionnaires only assess whether people felt informed, not whether they were actually able to engage in the planning process. This also excludes the factor of whether participants felt their contribution was taken seriously. The missing information likewise limits conclusions on earlier experience with the planning process, and on the factor "trust": at most, an indicator of mutual trust between the public, authorities, and wind farm developers may be derived from whether respondents feel informed or not. The sparse assessment of procedural factors is especially problematic because the literature review of this project showed that procedural factors play a major role in recommendations to increase acceptance.

Further, for the question under which conditions respondents would find wind turbines meaningful in their area (cf. question 12 in the Havelland-Fläming questionnaire), the given answer options do not cover mitigation measures, e.g. creating dedicated wildlife habitats as compensation for negative impacts on wildlife, or participatory aspects such as more inclusive planning processes. The answer options focus on economic and distributive fairness aspects; however, the exclusion of these factors may also derive from the wording of the question.

Other factors not included are, e.g., the design of individual turbines (height, rotor diameter), which might be of interest when assessing the opinion of the local population with regard to the repowering of wind turbines. As only the acceptable number of wind turbines was asked, without any relation to their height, this could lead to misleading conclusions (cf. question 19 in the Havelland-Fläming questionnaire).

Some factors do not occur directly in the questionnaire but might be assessed through the combination of several questions, e.g. "NIMBY" ("Not in my backyard": opposition to everything newly built in the neighbourhood), "PIMBY" ("Please in my backyard": support for everything newly built), "place attachment" and "wind energy acceptance". In the case of Potsdam, the developed index was assessed. It is worth mentioning that conclusions on such factors, which rest on an operationalised concept, are more limited than answers stated directly with regard to a factor (cf. question 17 of the Potsdam questionnaire). Some factors were excluded not for lack of interest but because of the research methodology applied by the research groups, as assessing them may not have been feasible. An example of such a factor is "impact on property values".

A mitigating feature may be seen in the open questions of the questionnaires, which offered respondents the possibility to mention the excluded factors themselves. This applies especially to the question on the advantages and disadvantages of wind energy in the respondent's neighbourhood (cf. question 7 in Potsdam and question 6 in Havelland-Fläming).

Sample & survey method

Conducting a sample analysis and drawing conclusions from a sample is standard in science. However, a fitting and precise sampling methodology is crucial, as a poorly applied one will distort the results (Micheel, 2010). A key concept within sampling techniques is representativeness, where every individual of a population has the same probability of being part of the sample. Representativeness is usually pursued through random sampling techniques, so that the sample mirrors the total population. However, a random sample does not guarantee that characteristics such as age or gender are proportionate to the populations of the regions. It should also be pointed out that an especially small sample size complicates drawing conclusions about the total population and must be seen as a limitation (Micheel, 2010).
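The point about random sampling can be illustrated with a minimal sketch. The household IDs and age-group proportions below are invented for illustration only, not the project's actual data: a simple random sample gives every unit the same selection probability, yet the realised sample's age distribution need not exactly match the population's.

```python
import random

# Hypothetical population: 1,000 households, each tagged with an age group.
# Proportions are invented for illustration only.
random.seed(42)
population = [{"id": i, "age_group": random.choice(["<30", "30-60", ">60"])}
              for i in range(1000)]

# Simple random sample: every household has the same selection probability.
sample = random.sample(population, k=100)

def share(units, group):
    """Proportion of units belonging to a given age group."""
    return sum(u["age_group"] == group for u in units) / len(units)

for g in ["<30", "30-60", ">60"]:
    print(f"{g}: population {share(population, g):.2f}, sample {share(sample, g):.2f}")
```

Even under ideal random selection, the printed sample shares deviate somewhat from the population shares; with small samples such deviations grow, which is the limitation noted above.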

Havelland-Fläming and Uebigau-Wahrenbrück used a random sampling method. Such methods do not guarantee a sample that mirrors the regional populations' proportions of age and gender. However, both sub-regions, as rural areas of the state of Brandenburg, show a more or less realistic age and gender distribution in the chosen areas. Most people in the survey areas are elderly people or pensioners, who also answered most of the questionnaires (cf. question 23 of Havelland-Fläming). Since the return of questionnaires from the region Uebigau-Wahrenbrück was very poor (21 out of 217), it was classified as a sub-region of Havelland-Fläming, which was afterwards renamed "rural areas". As stated before, the possibility of drawing statistical conclusions for Uebigau-Wahrenbrück is limited due to the low response rate and the resulting small sample size. For the sake of completeness, the findings of Uebigau-Wahrenbrück were shown within the results of the section "rural areas" but not considered for the comparison with the survey of 2005.

Another limitation may be seen in the applied survey method, which may influence the responses and distort the findings as well. The survey was based on written questionnaires delivered to and subsequently collected from the chosen households. The chosen questionnaire type may influence the findings indirectly: written questionnaires are expected to draw higher participation from people who are interested in, or hold a strong position on, the researched subject. This may cause an overrepresentation of very positive and very negative opinions and an underrepresentation of neutral views. Another limitation of this survey type is that it creates an uncontrollable situation (Micheel, 2010): the time, place and mood of the respondents while filling out the questionnaire may influence their answers (Bachleitner & Aschauer, 2009) and could not be controlled.

In Potsdam the sampling method was a mixture of arbitrary and conscious sampling techniques. The sampling took place in several public spaces around Potsdam. Initially, the research group approached pedestrians arbitrarily; this is a common method for student research groups due to limited capacities such as small budgets or a lack of respondents (Micheel, 2010). As a second step, the student research group adjusted the sampling during the survey period by deliberately selecting respondents in order to approach the gender and age proportions of the total population of Potsdam and to achieve a more balanced outcome among the districts. As can be seen, this sampling technique is not random and may also limit the conclusions drawn from the results. Furthermore, another limitation may be seen in the age distribution of the final sample, in which most participants were young people (aged 20-30). Given that the mean age in Potsdam is around 42.45 years (Potsdam.de, 2015), it may be concluded that young participants are overrepresented in the survey. This does not invalidate the findings but must be taken into account when interpreting the results. The survey method used was a standardised interview, which also entails some constraints. The major concern is the influence of the interviewer on the respondents during the interview, such as the interviewer's mood, appearance and intonation, which may distort the outcome. Standardised interviews reduce the influence of the interviewer more than open interviews do (Micheel, 2010). The age and gender of the interviewers may also influence the respondents: the high response rate among young people might be explained by the interviewers belonging to this societal group, which made it easier to engage these age groups in interviews.
Another important point, which is presumably more likely to occur in face-to-face interviews, is that respondents might have lied because of the social desirability of an answer (Micheel, 2010).

Cross-sectional studies

As both surveys took place during a short period (end of May to end of June 2016), they have to be seen as cross-sectional studies. Such studies are common in social science, but they only provide a snapshot perspective instead of a long-term evaluation. In general, they are of limited use for assessing social processes, because social processes follow a specific dynamic and may change over time. Cross-sectional studies are nevertheless often applied in social science to approximate social processes; it simply has to be kept in mind that all results delivered by a cross-sectional analysis are limited in this respect (Micheel, 2010).

During the survey period, the results may have been distorted by short-term trends (especially in the media) and external events. During data collection, a few media trends and events took place which might have influenced the answering behaviour of the respondents. Examples at the national level are the Erneuerbare-Energien-Gesetz (EEG) and the demonstration of the "Ende Gelände" movement. The discussion of the EEG might have influenced the debate about renewable energy acceptance and willingness to pay, as in 2013 when the reform was discussed due to increasing electricity prices. The "Ende Gelände" demonstration is not directly related to wind energy, but it does concern energy production and fossil fuels; the movement's occupation of a coal mine in south-eastern Brandenburg took place from 13 to 16 May 2016. At the state level there are further events which may also have influenced the views of the participants, e.g. the referendum against wind energy initiated by the public initiative "Rettet Brandenburg" as well as several ongoing approval processes for new wind farms and their participation phases (cf. Media Analysis).

Style of questions

The wording of the questions as well as their order may influence the respondents' answers (Micheel, 2010). The questionnaires used for the surveys in Potsdam and in rural regions of Brandenburg were adapted versions of a questionnaire from a preliminary survey conducted in 2005 in some regions of Brandenburg (for more detail see: Overall Methodology and Rural Areas). The adaptation for the Potsdam survey included a redesign and the removal of several questions to better fit the urban region. For the rural areas in Brandenburg, the questionnaire was only slightly changed in order to remain comparable with the preliminary survey; changes were made only where the preliminary design was poor or the questions were no longer topical. Therefore, if misleading conceptions of questions were contained in the earlier survey, they will appear in the new survey as well (cf. Annex).

Most questions within the questionnaires were closed questions, comprising not only multiple-choice items but also answer scales (cf. figure 1). Such questions do not offer respondents the possibility to express their detailed view on an issue, because the answer options are limited. A lack of desired answer options or overly limited scales can thus lead to a loss of information or distort the answering behaviour of the respondents (Micheel, 2010). As the questionnaires of both research groups contain several questions based on answer scales, this issue has to be discussed as well. The number of scale steps offered as answer options is often debated within the social sciences. Micheel (2010) notes that a rating scale containing four answer options is dominant within sociological research. Furthermore, the literature recommends that an uneven number of scale steps should be used if a person may hold a "neutral" opinion on a topic. If a neutral option is given, however, respondents with less pronounced opinions tend to choose it, which might distort the results (Micheel, 2010).

Fig. 1: Example of a closed question

In the Potsdam questionnaire, an unclear scale step occurred. In a statement question about advantages or disadvantages, respondents could choose between the answer options "I agree", "I do not agree" and "I do not know" (cf. figure 2). In the evaluation of this question, many respondents chose "I do not know" instead of "I do not agree". The answer option "I do not know" might thus have distorted the findings of this question in the same way a "neutral" option would.

Fig. 2: Response options of question 7

The question "How far should wind turbines be away from residential areas in your opinion?" (cf. Potsdam question no. 13; Havelland-Fläming question no. 17; Uebigau-Wahrenbrück question no. 16; figure 3) has to be considered critically. The given answer options are "800 m", "1,000 m", "1,600 m", "3,000 m" and "3,000 m & more". The points of criticism are the non-linear increase between the answer options and the lack of justification for why these particular distances were chosen at all.

Fig. 3: Question: "How far should wind turbines be away from residential areas in your opinion?"

A few open questions were also included in the questionnaires. These questions require that respondents be able to express themselves, which may favour specific social groups, e.g. those with higher education, as these groups in general tend to answer such questions more often. However, as the open questions in these questionnaires were very short and limited to about three per questionnaire, this issue probably plays only a small role in our research. Another problem with open questions is the grouping and coding of the given answers in the evaluation phase (cf. question 17 of the Potsdam questionnaire). This process is time-consuming and may introduce distortion during clustering (Micheel, 2010). Furthermore, the open questions in the applied research projects had a poor response rate, which limited the statistical interpretation and analysis of these questions.
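Where the distortion during coding enters can be sketched with a minimal keyword-based coder. The codebook, the "first match wins" rule and the example answers are all invented for illustration; real coding schemes are far richer, but every such rule is a clustering decision of the kind criticised above.

```python
from collections import Counter

# Hypothetical coding scheme: category -> keywords (invented for illustration).
CODEBOOK = {
    "noise": ["noise", "loud", "sound"],
    "landscape": ["landscape", "view", "scenery"],
    "economy": ["jobs", "income", "economy"],
}

def code_answer(answer):
    """Assign an open answer to the first matching category, else 'other'.

    'First match wins' is itself a clustering decision that can distort
    results when an answer touches several categories at once.
    """
    text = answer.lower()
    for category, keywords in CODEBOOK.items():
        if any(kw in text for kw in keywords):
            return category
    return "other"

answers = [
    "The turbines are too loud at night",
    "They spoil the landscape and the view",
    "They bring jobs to the region",
    "I have no opinion",
]
print(Counter(code_answer(a) for a in answers))
```

An answer mentioning both noise and landscape would be counted only once, under "noise"; changing the category order changes the counts, which is exactly the coder-dependent distortion meant here.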

Comparison 2005 to 2016 (Havelland-Fläming)

Comparing the Havelland-Fläming surveys of 2005 and 2016, the questionnaires used were very similar and only rarely changed, in order to guarantee high comparability. As stated before, the findings of Uebigau-Wahrenbrück were excluded from this comparison.

In contrast to the survey of 2005, the current survey covered fewer regions: only Niederer Fläming and Dahme/Mark were surveyed in 2016. For the comparison this means that the results of the 2016 survey can only be compared with part of the findings of the earlier survey and are limited to the region-specific findings of these two regions (cf. Discussion).

Further, it is worth stating that the survey in 2016 had a much lower response rate than the survey in 2005 (cf. figure 4). This mainly results from the fact that in 2005 school children from the researched villages collected the questionnaires, which led to a response rate of 80%, unusually high for written surveys (Petermann, 2005).

Fig. 4: Comparison of response rates of the 2005 and 2016 surveys

As response rates for written surveys in general tend to be lower and depend strongly on the topic, the response rate of nearly 30% in 2016 may be seen as sufficient for a written survey. This holds especially considering that a study which assessed the effect of a decrease in response rate from 53% to 35% in another piece of research concluded that only very little distortion of the findings took place (Petermann, 2005). Further, the decline of the response rate found in our case follows a general trend of declining response rates in these kinds of surveys (Aust & Schröder, 2009).
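The response rates discussed in this section reduce to simple arithmetic; the sketch below recomputes the figure implied above for Uebigau-Wahrenbrück (21 questionnaires returned out of 217 distributed).

```python
def response_rate(returned, distributed):
    """Response rate in percent: returned questionnaires / distributed ones."""
    return 100 * returned / distributed

# Uebigau-Wahrenbrück: 21 questionnaires returned out of 217 distributed.
print(round(response_rate(21, 217), 1))  # 9.7

# For comparison: the 2005 survey reached roughly 80%,
# while the 2016 survey reached nearly 30%.
```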

Overall comparison

For the comparison of rural and urban areas, it is important to outline that this comparison is very limited. As the questions within the questionnaires differed with regard to their scales, only a few were usable for a comparison. Comparing connections and correlations between variables across several samples is common practice in statistical analysis to verify findings. In this research, however, the sampling techniques differ, and the samples therefore suffer from different distortions. Comparisons of the findings, and the conclusions drawn from them, should thus be made carefully, as the distortion introduced by the sampling techniques might have affected a given variable more strongly in one sample than in the other, which would influence the comparison as well (Kriz, 1983; Müller-Benedict, 2001).

Beyond the different sampling techniques and their consequences, the index used in this comparison also carries several burdens. As the index was built from several questions, with some questions weighted more heavily than others, the findings are very limited, especially because the weighting of the questions within the index was based on theoretical reasoning rather than on the statistical techniques recommended in the literature (Micheel, 2010). Still, the results of the social acceptance index were compared with the findings of the single surveys to assess whether it distorted or biased the findings; this comparison showed that the index reflected the findings of the surveys well.
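How such a theoretically weighted index behaves can be sketched as follows. The question names, weights and example responses are invented for illustration and are not the project's actual operationalisation.

```python
# Hypothetical acceptance index: a weighted mean of scale answers
# (1 = strong rejection ... 4 = strong acceptance). The weights encode the
# 'theoretical' judgement criticised in the text: they are set by reasoning,
# not derived statistically (e.g. via factor analysis).
WEIGHTS = {
    "general_attitude": 2.0,   # weighted more heavily than the others
    "turbines_nearby": 1.0,
    "more_turbines_ok": 1.0,
}

def acceptance_index(answers):
    """Weighted mean of a respondent's scale answers, on the 1-4 scale."""
    total_weight = sum(WEIGHTS.values())
    return sum(WEIGHTS[q] * answers[q] for q in WEIGHTS) / total_weight

respondent = {"general_attitude": 4, "turbines_nearby": 2, "more_turbines_ok": 3}
print(acceptance_index(respondent))  # (2*4 + 2 + 3) / 4 = 3.25
```

Changing the weights shifts every respondent's score, which is why a theoretically chosen weighting limits the conclusions drawn from the index.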

Applied statistical techniques

The following provides a short reflection on the statistical techniques applied during the evaluation phase. In a first step, all questions were analysed with a frequency analysis. This technique may be seen as relatively unproblematic, since it only counts the given answers; no distortion of the findings arises from the technique itself (Müller-Benedict, 2001). The applied subsampling techniques, which likewise only count the answers of a specific group, can be seen as similarly unproblematic. However, when a concept such as the "NIMBY attitude" was estimated via subsampling, it must be appreciated that the operationalisation of the concept can still lead to a distortion of the findings (Kriz, 1983; Micheel, 2010).

Some limitations appeared in the advanced statistical analysis, which tried to estimate the correlation and significance of connections between variables. These techniques face the problem that a statistically assessed correlation between variables does not automatically imply a causal connection in terms of content (Müller-Benedict, 2001). First, such correlations can arise randomly, which is often the case in large data sets. Secondly, the assessed variables can show a correlation without influencing each other, which is the most critical and also the most common limitation of this technique. Such a spurious correlation usually appears when both assessed variables depend on a third, unconsidered variable that influences them at the same time: this leads to an observed change in both analysed variables and creates the appearance of a correlation between them, even though there is none (Müller-Benedict, 2001).


References

  • Aust, F. & Schröder, H., 2009, Sinkende Stichprobenausschöpfung in der Umfrageforschung - ein Bericht aus der Praxis. In: Weichbold, M., Bacher, J. & Wolf, C. (eds.), Umfrageforschung: Herausforderungen und Grenzen, VS Verlag für Sozialwissenschaften, Wiesbaden: 195-213.
  • Bachleitner, R. & Aschauer, W., 2009, Zur Situationsspezifität von Raum, Zeit und Befindlichkeit in der Umfrageforschung. In: Weichbold, M., Bacher, J. & Wolf, C. (eds.), Umfrageforschung: Herausforderungen und Grenzen, VS Verlag für Sozialwissenschaften, Wiesbaden: 515-539.
  • Knoblauch, H., 2008, Wissen. In: Baur, N., Korte, H., Löw, M. & Schroer, M. (eds.), Handbuch Soziologie, VS Verlag für Sozialwissenschaften, Wiesbaden: 465-483.
  • Kriz, J., 1983, Statistik in den Sozialwissenschaften. Westdeutscher Verlag, Opladen.
  • Micheel, H., 2010, Quantitative empirische Sozialforschung. Ernst Reinhardt Verlag, München.
  • Müller-Benedict, V., 2001, Grundkurs Statistik in den Sozialwissenschaften. Westdeutscher Verlag, Wiesbaden.
research/limitations.txt · Last modified: 2017/01/06 20:43 by denise.schniete