Definitions for "Significance test"
A test of the reliability of estimates of statistical parameters. Such tests proceed by assuming that the estimates are not significant, that is, that they are what would be expected from sampling a particular population, and then, from the properties of that population, determining the probability of such occurrences. The hypothesis (that the estimates are not significant) is rejected only when an observed result is found to be significant, that is, when the obtained result belongs to an objectively specified unfavorable class (the critical region or rejection region) having a fixed, small probability of occurrence in random samples from the hypothesized population. When the result falls in the acceptance region, it is not significant and the hypothesis cannot be rejected. The boundaries of the two regions are set so that the total probability (unity) is divided between them appropriately, say 0.95 and 0.05, or 0.99 and 0.01. The probability assigned to the critical region, commonly either 0.05 or 0.01, is called the significance level. See chi-square test, Student's t-test, analysis of variance.
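To make the terms concrete, here is a minimal sketch of a fixed-level significance test, written as a two-sided one-sample t-test in Python with NumPy and SciPy; the data, seed, null value, and sample size are all hypothetical choices for illustration. The statistic is compared against the boundary of the critical region at the 0.05 significance level, exactly as described above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.4, scale=1.0, size=30)  # hypothetical data

mu0 = 0.0    # null hypothesis: population mean equals 0
alpha = 0.05 # significance level: probability assigned to the critical region
n = sample.size

# One-sample t statistic for the null hypothesis mean mu0.
t_stat = (sample.mean() - mu0) / (sample.std(ddof=1) / np.sqrt(n))

# Boundary of the two-sided critical region under the null distribution.
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)

# Reject the null hypothesis only if the statistic falls in the critical region.
if abs(t_stat) > t_crit:
    print(f"t = {t_stat:.3f} exceeds +/-{t_crit:.3f}: significant at level {alpha}")
else:
    print(f"t = {t_stat:.3f} lies in the acceptance region: not significant at level {alpha}")
```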
A significance test allows us to determine the probability of obtaining a value of a test statistic (e.g., t, F, chi-square) at least as extreme as the one observed, given that the null hypothesis is true.
A procedure for assessing the evidence in a set of data against a null hypothesis. The result of a significance test is a P-value. Cf. hypothesis test.
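The P-value in the last two definitions can be computed directly from the null distribution of the test statistic. Below is a minimal sketch, continuing the hypothetical t-test above; the observed statistic and degrees of freedom are assumed values for illustration, not results from real data.

```python
from scipy import stats

t_stat, df = 2.1, 29  # hypothetical observed statistic and degrees of freedom

# Two-sided P-value: probability, under the null hypothesis, of a statistic
# at least as extreme as the one observed.
p_value = 2 * stats.t.sf(abs(t_stat), df=df)
print(f"P-value = {p_value:.4f}")  # compare with a chosen significance level, e.g. 0.05
```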