The capability of a computer, or information or telecommunications system, to perform consistently and precisely according to its specifications and design requirements, and to do so with high confidence.
the extent to which a measuring device yields the same results wherever the same quantity is measured.
The degree to which a student would obtain the same score if the test were readministered (assuming no further learning, practice effects, or other change). It is a measure of the stability or consistency of scores.
Reliability has many meanings in statistics. Here we say that data are reliable if repeated measurements on the units generate similar values.
Consistency of a test in measuring performance.
Consistency of diagnosis. If diagnoses can be assigned reliably, an individual will be assigned the same diagnosis across differing circumstances (e.g., different diagnosticians).
The extent to which a measure is stable and consistent over time in similar conditions.
Dependability of scores, their relative freedom from error; the consistency of an individual’s performance on a test
Consistency of classification of masters and nonmasters
the extent to which a test gives consistent results.
The degree to which test scores for a group of test takers are consistent over repeated applications of a measurement procedure and hence are inferred to be dependable and repeatable for an individual test taker; the degree to which scores are free of errors of measurement for a given group. See Generalizability theory.
The characteristic that the same or similar results can be obtained through repeated experiments or tests.
Reliable measures are measures that produce consistent responses over time.
The ability of separate clinicians or researchers to consistently diagnose the same disorder after observing the same pattern of symptoms in patients.
The actual degree of dependability with which the equipment performs (note: actual, versus hoped-for). May be expressed in failures per 100,000 hours.
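As a rough illustration of how such a figure can be derived (the failure count and operating hours below are hypothetical, not from the source):

```python
# Hypothetical field data: 4 failures observed over 100,000 operating hours.
failures = 4
operating_hours = 100_000

rate_per_100k_hours = failures / operating_hours * 100_000  # failures per 100,000 hours
mtbf_hours = operating_hours / failures                     # mean time between failures

print(rate_per_100k_hours)  # 4.0
print(mtbf_hours)           # 25000.0
```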
The degree to which the results obtained by a measurement procedure can be repeated by other studies.
the capacity of a measuring device, or indeed of a whole research study, to produce the same results if used on different occasions with the same object of study. Reliability enhances confidence in validity, but is insufficient on its own to show validity, since some measurement strategies can produce consistently wrong results. Establishing intercoder or interrater reliability may be important in some studies where unambiguous meanings for codes in a coding scheme are at stake, so that exercises in which the same material is coded by more than one person and the results compared for consistency may be carried out.
The degree to which an instrument will yield the same result when applied to the same participant or sample of participants more than once. Equally, reliability is achieved when an instrument can be used in different settings or populations so long as each administration does not differ on any relevant variables.
A measure of how consistent the results obtained in an assessment are in a norm-referenced evaluation situation; consistency of a student's ranking within the group of students against which the student is being compared (see Dependability).
extent to which a measurement (such as an instrument or a data collection procedure) produces consistent results over repeated observations or administrations of the instrument under the same conditions each time.
Reliable/reliability in psychology refers to whether something measures what it claims to be measuring in a consistent fashion.
Reliability is the degree to which an assessment or instrument consistently measures an attribute. Inter-rater Reliability. Inter-rater reliability is the degree to which an assessment yields similar results for the same individual at the same time with more than one rater. Test-Retest Reliability. Test-retest reliability is the degree to which an assessment yields similar results from one testing occasion to another in the absence of intervening growth or instruction.
The extent to which the same data could be collected on another occasion and produce comparable outcomes.
The measure of consistency of an assessment tool. The tool should give similar results over time with similar populations in similar circumstances.
An assessment tool's consistency of results over time and with different samples of students.
the accuracy with which an item or test is measuring what it is measuring, i.e., the likelihood that the obtained result would be replicated if the item or test were given again to the same students; the consistency of scores obtainable from a test. It is usually an estimate, on a scale of zero to one, of the likelihood that the test would rank test takers in the same order from one administration to another proximate one.
The degree to which an instrument consistently measures in the same way on repeated trials (e.g., a math test given to a student one day would yield roughly the same score if given to the same student the next day).
consistency in measurements and tests; specifically, the extent to which two applications of the same measuring procedure rank persons in the same way. v. reliable.
Fundamentally, consistency. That is, if a test is given repeatedly, under the same circumstances, predictable results will be obtained.
The extent to which a test, measurement, or classification system produces the same scientific observation each time it is applied. Some specific kinds of reliability include test-retest, the relationship between the scores that a person achieves when he or she takes the same test twice; interrater, the relationship between the judgments that at least two raters make independently about a phenomenon; split half, the relationship between two halves of an assessment instrument that have been determined to be equivalent; alternate form, the relationship between scores achieved by people when they complete two versions of a test that are judged to be equivalent; internal consistency, degree to which different items of an assessment are related to one another.
A measure of whether the answers or results will be the same if the test or experiment is repeated.
Quality of the collection of evaluation data when the protocol used makes it possible to produce similar information during repeated observations in identical conditions. Reliability depends on compliance with the rules of sampling and tools used for the collection and recording of quantitative and qualitative information. Sound reliability implies exhaustive data collection and the appropriateness of the evaluative questions asked. This notion is important not only for primary data but also for secondary data, the reliability of which must be carefully checked. Related Terms: Objectivity, Soundness, Representativeness
The reproducibility of a study’s results.
The extent to which different operationalisations of the same concept produce consistent results. To provide results that are statistically reliable, a reasonably large and representative sample is required. [Table: sampling tolerances (± percentage points, 95% confidence level) applicable to results at or near 10/90%, 30/70%, and 50%, for sample sizes of 100, 300, 600, 1,000, 1,500, and 2,000.] For example, if the results of a survey of a representative sample of 1,000 residents show that 50% are satisfied with a particular service, the range within which the true figure would lie, if all residents in the authority had been interviewed, would be ±3 points, 95 times out of 100. In fact, the "true" figure is more likely to lie at the mid-point of the range, rather than at either extreme.
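A minimal sketch of how such a tolerance can be approximated for a simple random sample, using the usual normal-approximation margin of error; the function name and rounding are illustrative, not from the source:

```python
import math

def sampling_tolerance(p_percent: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% confidence tolerance, in percentage points, for a
    survey percentage observed in a simple random sample of size n."""
    return z * math.sqrt(p_percent * (100 - p_percent) / n)

print(round(sampling_tolerance(50, 1000), 1))  # ~3.1 points, close to the ±3 cited above
```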
The consistency of the test instrument; the extent to which it is possible to generalize a specific behavior observed at a specific time by a specific person to observations of similar behavior at different times or by different observers.
the extent to which a data gathering method will give the same results when the process is repeated. Reliability includes the amount of error (random or systematic [bias]) that is inherent in the method used for data collection.
In testing, answers the questions: Is the test consistent over time? If the same students take the test a second time, will they score the same?
the quality of producing almost identical results in successive repeated trials.
an indication of the consistency of scores across evaluators over time or across different versions of a test. For example, a test is reliable when different teachers or other evaluators give student responses the same or similar scores no matter when the assessment takes place or who does the scoring.
A measure of the ability of a test or other appraisal instrument to evaluate what is being measured on a consistent basis.
Whether a measure produces the same or similar responses with multiple administrations of the same or similar instrument.
The degree to which observations or measures are consistent or stable. Also see coefficient alpha.
An indication of how consistent test scores will be, given different testing conditions or editions of a test. A test or measure is reliable when it is consistent (i.e., repeated measurements would show the same achievement or several observers of a classroom situation would closely agree with ratings recorded for individuals on the same criterion).
(consistency, precision): Consistency among observations, normally measured as a correlation coefficient among variables. Reliability is a function of the number of raters or ratings, so reliability can be improved by increasing the number of observations. Technically, reliability is a characteristic of the judgment made from data collection and not of the data collection procedure. (See correlation coefficient.)
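One common way to express this dependence on the number of raters is the Spearman-Brown prophecy formula; the sketch below is illustrative and not taken from the source glossary:

```python
def spearman_brown(single_rater_reliability: float, n_raters: int) -> float:
    """Projected reliability when n independent ratings are averaged,
    given the reliability of a single rating."""
    r = single_rater_reliability
    return (n_raters * r) / (1.0 + (n_raters - 1) * r)

print(round(spearman_brown(0.60, 1), 2))  # 0.60 with a single rater
print(round(spearman_brown(0.60, 4), 2))  # 0.86 when four ratings are averaged
```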
Sometimes used loosely, this actually refers to the reproducibility of a measurement procedure. It is NOT the same as validity or applicability of a study.
The consistency in results of a measuring instrument, including the tendency of a measurement to produce the same results when it measures twice some entity or attribute believed not to have changed in the interval between measurements (Grinnell, 1990)
The extent to which measurements are consistent or repeatable; also, the extent to which measurements differ from occasion to occasion as a function of measurement error.
The extent to which a change in value of an indicator is caused by a change in what it measures and not due to measurement error. Reliability of polls or surveys is often an issue, since small changes in the wording of questions can elicit remarkably different responses.
The degree to which observations are repeatedly classified the same.
The consistency with which a measuring instrument (such as a psychometric test) performs its function, gauged, for example, by comparing test scores from the same subjects at different times.
The consistency of a measure; the measure yields the same result on different occasions or applications (when no real change has occurred).
In assessment, refers to the extent to which a test shows consistency in its measurements, i.e., whether there is variation in scores over repeated testings.
The extent to which a measure, procedure or instrument yields the same result on repeated trials.
The extent to which a measure obtains similar results over repeat trials.
The extent to which a test is dependable, stable, and consistent when administered to the same individuals on different occasions.
The extent to which the same result will be achieved when repeating the same measure or study again. For example, someone completing the same assessment tool twice within a short period of time should get roughly the same result if the tool is reliable.
The degree to which the same result is found when a measurement is repeated under identical conditions.
Reliability refers to the degree to which test scores are consistent, dependable, or repeatable. Reliability is a function of the degree to which test scores are free from errors of measurement.
the extent to which a product performs or works consistently
refers to the idea that a tool is consistent in its report. This may be an issue in looking at whether a tool administered at different times produces similar scores. It may also be an issue when two or more evaluators look at the same evidence, as to whether the scores of each evaluator are consistent with the others. It may also be an issue as to whether each of several items used to construct a scale loads onto that scale in a consistent manner.
Extent to which a system does not fail during operation. Tolerances must be established for all system functions during requirements definition, and data and process integrity controls must be designed to assure that the system functions within the established tolerances.
The extent to which an assessment or measurement method is dependable in terms of inter-rater consistency or relatively free from random errors of measurement.
The extent to which a test yields consistent results and thus is replicable
The degree to which the test consistently measures what it is supposed to measure.
Consistency or dependability of data and evaluation judgements, with reference to the quality of the instruments, procedures and analyses used to collect and interpret evaluation data. Information is reliable when repeated observations using the same instrument under identical conditions produce similar results.
Capable of being relied on; dependable; may be repeated with consistent results.
Reproducibility or stability of data measures.
Reliability is a statistical term defining the degree to which assessment scores are consistent, stable, dependable and relatively free from random errors of measurement. An unreliable assessment cannot be valid.
The characteristic of a test or examination that ensures that chance factors affecting the performance of those taking it are reduced as much as possible. Such factors can include differences in the circumstances in which people take the test and inconsistencies among those who mark it. So common 'examination conditions' and steps to make criteria as clear as possible and to compare and, if necessary, modify individual markers' assessments of the performance of those taking the test improve reliability. One way of checking the reliability of a test is to see if the same range of scores is achieved by two different but entirely comparable groups of people. Reliability should not be confused with validity.
Reliability refers to two technical criteria: consistency and "generalizability." The first criterion seeks consistency in results. For example, will a student's score on a test today be close to his/her score tomorrow? "Generalizability" seeks to ensure that an assessment's questions that cover a subset of skills can capture or "generalize" a broader universe of skills.
The ability of a data gathering tool to obtain consistent results.
The extent to which a test or measurement result is reproducible.
the extent to which an observation that is repeated in the same, stable population yields the same result (i.e., test-retest reliability). Also, the ability of a single observation to distinguish consistently among individuals in a population.
reliability represents the accuracy of an assessment methodology and is normally expressed as a correlation coefficient ranging from 0 to 1. For example, it could represent the extent to which interviewers' ratings of candidates are correlated. For a psychometric test, it can be estimated by calculating the extent to which questions all seem to be measuring the same thing or by identifying whether the rankings of individuals' scores in a group remain similar if they are tested on another occasion.
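A minimal sketch of the second estimation route mentioned here, correlating scores from two testing occasions; the scores below are hypothetical:

```python
import numpy as np

# Hypothetical scores for the same five candidates on two testing occasions.
occasion_1 = np.array([52, 61, 70, 75, 88])
occasion_2 = np.array([55, 59, 72, 74, 90])

# Test-retest reliability estimated as the correlation between occasions;
# a value near 1 means the candidates' rankings stayed almost the same.
print(round(np.corrcoef(occasion_1, occasion_2)[0, 1], 3))
```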
Reliability is a measure of the consistency and dependability of a test score's representation of a student’s knowledge or ability. Reliability is the analysis of scores over such factors as time, different administrations of the same test, different tasks or questions that measure the same skill, or different score raters of the same performance question.
The extent to which the same result is achieved when a measure is repeatedly applied to the same group.
the consistency or stability of a measure or test from one use to the next. When repeated measurements of the same thing give identical or very similar results, the measure is said to be reliable.
The ability of an outcome procedure to consistently give the same value upon repeated measurements of the same phenomenon. Reliability depends both upon accuracy and precision which may be adjusted separately for some instruments. Reliability must be established in order to ensure that variation in an outcome assessment over time reflects a true change rather than measurement error.
Consistency of results across measurements and among and between assessors (interrater reliability). (See also Validity.)
The degree to which a test measures something consistently; has to do with whether or not testing and other means of assessment are consistent, and the degree to which they are consistent.
Refers to the reproducibility of results with any criterion or method. Also see validity.
The extent to which a test measures consistently.
A statistical term used in assessing an instrument, meaning consistency or predictability. E.g. a survey question has 100% reliability if the survey is repeated and each respondent gives the same answer both times. See validity.
In assessment, the consistency of an assessment outcome; for example, different assessors using the same evidence making the same judgement, or the same assessor making the same judgement about the same evidence on different occasions.
A measure of how dependably a system performs.
Yielding comparable results each time. In examinations, reliability is consistency; the same result is achieved on successive trials.
Consistency of measurement (i.e., something is reliable if you repeat the intervention with the same subject and get a similar or equal finding).
Reliable assessment uses methods and procedures that engender confidence that competency standards are interpreted and applied consistently from learner to learner and context to context.
freedom from the random error component of a measurement instrument.
the ratio of sample or test variance, corrected for estimation error, to the total variance observed.
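In classical test theory this ratio is commonly written as follows (a standard formulation, not necessarily the source's own notation), with true-score, error, and total observed variances:

```latex
\rho_{XX'} = \frac{\sigma^2_T}{\sigma^2_X} = 1 - \frac{\sigma^2_E}{\sigma^2_X}
```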
refers to whether the study, if repeated, would achieve the same results.
The reliability of a network can be measured by one factor: the number of packets lost in a time period. If two or more packets are sent out on the network at the same time, they collide and either destroy each other or corrupt the data held in the packets. Thus, protocols that guarantee reliability must make sure that a packet was received and, if not, must send it again. The two main factors that affect reliability are the number of messages sent and how complex the network is. (Gossweiler 2)
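A minimal sketch of the resend-until-acknowledged idea described here, using a simulated lossy channel; the function, loss rate, and retry budget are illustrative assumptions, not any particular protocol's API:

```python
import random

def send_with_retries(payload: bytes, loss_rate: float = 0.2, max_tries: int = 5):
    """Resend a packet until delivery is confirmed or the retry budget runs out."""
    for attempt in range(1, max_tries + 1):
        delivered = random.random() > loss_rate  # simulate whether the packet got through
        if delivered:
            return attempt  # acknowledgement received on this attempt
    return None  # still unconfirmed: treat the packet as lost

result = send_with_retries(b"hello")
if result is None:
    print("packet lost")
else:
    print("delivered on attempt", result)
```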
Reliability refers to the accuracy and consistency of a measurement or test.
regardless of where or when a candidate is assessed or by whom they are assessed (within reason), the result will always be similar. The assessment gives a true picture of the candidate's performance.
Whether a test or instrument used to collect data, such as a questionnaire, gives the same results if repeated on the same person several times. A reliable test gives reproducible results.
The consistency or stability of assessment results--across time, within a test or other assessment procedure, or across different forms of an assessment. A different type of reliability, known as interrater reliability, is critical for performance-based assessment. It is an estimate of the consistency of the scores assigned by two or more raters. High interrater reliability indicates that the raters used the same criteria to evaluate a performance and that they understood and applied the criteria similarly.
Consistency and dependability of data collected through repeated use of a scientific instrument or data collection procedure under the same conditions. Absolute reliability of evaluation data is hard to obtain. However, checklists and training of evaluators can improve both data reliability and validity. Sound reliability implies exhaustive data collection and the appropriateness of the evaluative questions asked.
The extent to which data collected are reproducible or repeatable.
The consistency or stability of test scores when using the same test on the same individual over time.
The degree to which a measure yields consistent results.
an attribute of a process or system that consistently produces the same result.
Something that can be reproduced, and each time will give the same results.
The extent to which a test consistently measures whatever it is designed to measure.
(noun) The extent to which an experiment, test, or other measuring procedure yields the same results on repeated trials.
In a fault-tolerant, distributed system, reliability is a measure of how many times a given computation succeeds out of the number of times it is attempted. The acceptable level of reliability will vary between applications, and even between users. A reliable system is one in which any party may achieve arbitrarily high reliability by investing sufficient resources.
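A small sketch of this success-ratio view, plus the way retries (one form of "investing sufficient resources") can push reliability arbitrarily high under an independence assumption; the numbers are illustrative:

```python
def reliability(successes: int, attempts: int) -> float:
    """Reliability as the fraction of attempted computations that succeed."""
    return successes / attempts

def reliability_with_retries(p_single: float, tries: int) -> float:
    """Assuming independent attempts that each succeed with probability p_single,
    the chance that at least one of `tries` attempts succeeds."""
    return 1.0 - (1.0 - p_single) ** tries

print(reliability(997, 1000))                      # 0.997
print(round(reliability_with_retries(0.9, 3), 4))  # 0.999
```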
An indicator of score consistency over time or across multiple evaluators. Reliable assessment is one in which the same answers receive the same score regardless of who performs the scoring or how or where the scoring takes place. The same person is likely to get approximately the same score across multiple test administrations.
The ability of a device to perform within the desired range over a measured period of time.
The extent to which a measurement instrument yields consistent, stable, and uniform results over repeated observations or measurements under the same conditions each time.
The degree to which a test measures the same information consistently time after time.
The degree that a response on the same task will produce the same results or scores.
reliability is a measure of whether the research design has been administered correctly and information recorded accurately. In other words, if the research were repeated, would it reach the same conclusions?
The degree of consistency with which a test measures a trait or attribute. Assuming that a trait or attribute remains constant, a perfectly reliable test of that measure will produce the same score each time it is given.
The degree to which test results are consistent with repeated measurements.
The degree to which electric power is made available to those who need it in sufficient quantity and quality to be dependable and safe. The degree of reliability may be measured by the frequency, duration, and magnitude of adverse effects on consumer services.
The consistency of test scores obtained by the same individuals on different occasions or with different sets of equivalent items; accuracy of scores.
The extent to which a measurement procedure yields the same results on repeated trials
The ability to detect errors in survey measurements. A high reliability is typically associated with a survey network where many independent determinations of the same station coordinates are possible. An example of zero reliability is a single radiation to a point, as any errors in the measurement data cannot be detected.
Describes whether a measurement gives approximately the same result in repeated tests.
The extent to which an assessment would produce a similar score on several occasions or when undertaken by several different assessors. Source: Wright, P.W.D. & P.D. (2003) Glossary of assessment terms, Wrightslaw Associates. http://wrightslaw.com/links/glossary.assessment.htm Accessed on 12/02/03.
The degree of confidence that can be assigned to an estimate.
The degree to which the measure is free from random error.
A measure of the reproducibility of a measurement. It is measured by kappa for nominal measures and by correlation for numerical measures.
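A compact sketch of both measures; the ratings and measurements below are hypothetical, and kappa is implemented directly rather than taken from a particular library:

```python
import numpy as np

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on nominal categories."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    categories = np.union1d(a, b)
    observed = np.mean(a == b)
    expected = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (observed - expected) / (1.0 - expected)

# Nominal ratings -> kappa; numerical measurements -> correlation.
print(round(cohen_kappa(["yes", "no", "yes", "yes", "no"],
                        ["yes", "no", "no", "yes", "no"]), 2))
print(round(np.corrcoef([1.0, 2.1, 3.0, 4.2], [1.1, 1.9, 3.2, 4.0])[0, 1], 3))
```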
The quality of producing almost identical results in successive repeated trials.
The quality of a measurement process that would produce similar results from (1) repeated observations of the same condition or event, or from (2) multiple observations of the same condition or event by different means. Reliability also refers to the extent that a data collection instrument will yield the same results each time it is administered. In qualitative research, reliability refers to the extent that different researchers, given exposure to the same situation, would reach the same conclusions.
Steady, predictable and consistent electric service and prices.
The extent to which a measurement is consistent, dependable, and relatively free from errors of measurement.
An attribute of a network or network component that consistently performs according to its specifications. Reliability has long been considered a critical attribute that must be considered when making, buying or using any hardware, software or network component.
Consistency of data values across measurement instruments or human observers. (EHR/NSF Evaluation Handbook, Chapter Seven: Glossary)
Extent to which a variable or set of variables is consistent in what it is intended to measure. If multiple measurements are taken, the reliable measures will all be very consistent in their values.
The degree to which the results of an assessment are dependable and consistently measure particular student knowledge and/or skills. Reliability is an indication of the consistency of scores across raters, over time, or across different tasks or items that measure the same thing. Thus, reliability may be expressed as (a) the relationship between test items intended to measure the same skill or knowledge (item reliability), (b) the relationship between two administrations of the same test to the same student or students (test/retest reliability), or (c) the degree of agreement between two or more raters (rater reliability). An unreliable assessment cannot be valid.
The degree to which an assessment yields dependable and consistent results. (McTighe & Ferrara)
The measure of a network's availability. Often measured in terms of the number of nines; for example, "five nines" reliability means that the network is available 99.999% of the time.
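As a quick illustration of what such a figure means in practice (the conversion below is simple arithmetic, not from the source):

```python
def annual_downtime_minutes(availability: float) -> float:
    """Minutes of downtime per year implied by an availability fraction."""
    minutes_per_year = 365 * 24 * 60
    return (1.0 - availability) * minutes_per_year

print(round(annual_downtime_minutes(0.99999), 2))  # "five nines" ~= 5.26 minutes per year
```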
A test’s reliability concerns the consistency with which it measures whatever it is supposed to be measuring. A reliable assessment is dependable and will yield similar results each time it is used. Perfect reliability is represented by a reliability coefficient of 1.0, but in practice this is never achieved, although figures upwards of about 0.85 are commonly obtained.
The consistency with which a test measures an item.
(Machine Safety) Ability of a machine, its circuits and components, to consistently perform its function within its specifications without failing.
In sociological research, the extent to which a study or research instrument yields consistent results.
In statistics, reliability is the consistency of a set of measurements or measuring instrument. Reliability does not imply validity. That is, a reliable measure is measuring something consistently, but not necessarily what it is supposed to be measuring.