A research fault arising from people who were selected for a study but did not agree to take part. Those who declined may have differed in important ways from those who participated, so the study's results may hold only for part of the selected group. For example, if the selected group is people with depression and the most depressed were too tired or hopeless to answer a survey, then any answers about energy or hope in depression would not give a full picture.
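The depression example above can be sketched as a small simulation. The severity scale, the response-probability rule, and all numbers here are hypothetical assumptions for illustration, not from any real study.

```python
import random

random.seed(0)

# Hypothetical simulation of non-response bias: severity scores roughly 0-10,
# with response probability falling as severity rises, so the most severe
# cases are under-represented among respondents.
population = [random.gauss(5, 2) for _ in range(10_000)]
respondents = [score for score in population
               if random.random() < max(0.1, 1 - score / 10)]

pop_mean = sum(population) / len(population)
resp_mean = sum(respondents) / len(respondents)
# The respondent mean understates the population mean, giving the
# incomplete picture described in the example above.
```

Under this assumed response rule, `resp_mean` comes out below `pop_mean`: the survey "sees" a less severe group than actually exists.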
If you try to survey 100 people and 40 of them don't respond, those 40 could differ in some important way from the 60 who did respond. That's non-response bias, a problem often ignored in survey research. Non-response bias can be estimated by comparing data on the achieved sample with other data (e.g. from a census) on the same population.
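A minimal sketch of the census-comparison idea just mentioned: compute each group's share among the respondents and subtract the benchmark share for the same population. The age bands and all figures below are hypothetical.

```python
def demographic_gaps(respondent_counts, benchmark_shares):
    """Sample share minus benchmark share per group; gaps far from
    zero suggest the respondents are not representative."""
    total = sum(respondent_counts.values())
    return {group: respondent_counts[group] / total - share
            for group, share in benchmark_shares.items()}

# Hypothetical census benchmark shares and counts among the 60 respondents.
census = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
respondents = {"18-34": 12, "35-54": 20, "55+": 28}

gaps = demographic_gaps(respondents, census)
# gaps["18-34"] is about -0.10: younger people are under-represented here.
```

Large gaps do not measure the bias in the survey's substantive answers directly, but they flag which groups are missing and can feed into weighting adjustments.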
The bias created by the failure of part of a sample to respond to a survey or answer a question. If those responding and those not responding have different characteristics, the responding cases may not be representative of the population from which they were sampled.
Bias caused when respondents who answer an online questionnaire have very different attitudes or demographic characteristics from those who do not respond.
An error due to the inability to elicit information from some respondents in a sample, often due to refusals.
The bias that results from differences between those who agree to participate in a survey and those who don’t.
A major potential source of bias, particularly in postal surveys, in that responders' opinions may differ from non-responders'. For example, it is typically those with extreme opinions, or those who feel most involved with your organisation, who respond.