In statistical inference, the null hypothesis is a general statement or default position that there is no relationship between two measured phenomena. Rejecting or disproving the null hypothesis, and thus concluding that there are grounds for believing that a relationship exists between two phenomena (e.g. that a potential treatment has a measurable effect), is a central task in the modern practice of science, and gives a precise sense in which a claim is capable of being proven false. The null hypothesis is generally assumed to be true until evidence indicates otherwise. In statistics, it is often denoted H0 (read "H-nought", "H-null", or "H-zero").

The concept of a null hypothesis is used differently in two approaches to statistical inference. In the significance testing approach of Ronald Fisher, a null hypothesis is rejected if the observed data are significantly unlikely under its assumption; the null hypothesis is never accepted or proved, only rejected or not rejected (a worked sketch appears below). In the hypothesis testing approach of Jerzy Neyman and Egon Pearson, a null hypothesis is contrasted with an alternative hypothesis, and the data are used to decide between the two hypotheses with controlled error rates. Proponents of each approach criticize the other. Nowadays, though, a hybrid approach is widely practiced and presented in textbooks. The hybrid is in turn criticized as incorrect and incoherent; for details, see Statistical hypothesis testing.

Statistical inference can be done without a null hypothesis, thus avoiding the criticisms under debate. One such approach is the following: for each candidate hypothesis, specify a statistical model that corresponds to the hypothesis; then, use model selection techniques to choose the most appropriate model. The most common selection techniques are based on either the Akaike information criterion (AIC) or the Bayes factor.
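To make the Fisher-style significance test described above concrete, here is a minimal sketch in Python, assuming simulated two-group data and SciPy's two-sample t-test; the variable names, the simulated effect size, and the conventional 0.05 threshold are illustrative choices, not part of the text above.

```python
# A minimal sketch of Fisher-style significance testing on simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=0.0, scale=1.0, size=50)  # no treatment effect
treated = rng.normal(loc=0.5, scale=1.0, size=50)  # hypothetical effect

# H0: the two groups share the same mean (no relationship between
# treatment and outcome). The p-value is the probability of observing
# data at least this extreme if H0 were true.
t_stat, p_value = stats.ttest_ind(treated, control)

alpha = 0.05  # conventional significance level (an illustrative choice)
if p_value < alpha:
    print(f"p = {p_value:.4f}: reject H0 at the {alpha} level")
else:
    print(f"p = {p_value:.4f}: fail to reject H0 (H0 is never 'proved')")
```

Note that the asymmetry in the output mirrors Fisher's framing: a small p-value licenses rejecting H0, but a large one only means the data failed to discredit it.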
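The model-selection alternative can likewise be sketched in a few lines, assuming statsmodels' ordinary least squares and simulated data; the two candidate models (intercept-only versus linear) and all names here are illustrative, not drawn from the text.

```python
# A minimal sketch of null-hypothesis-free model selection via AIC.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=100)
y = 2.0 + 0.8 * x + rng.normal(scale=1.0, size=100)  # data with a real trend

# Candidate 1: intercept-only model (analogous to "no relationship").
m0 = sm.OLS(y, np.ones_like(y)).fit()
# Candidate 2: linear model with a slope (analogous to "a relationship").
m1 = sm.OLS(y, sm.add_constant(x)).fit()

# Lower AIC indicates a better trade-off of fit against complexity;
# neither model is "accepted" or "rejected", only compared.
print(f"AIC intercept-only: {m0.aic:.1f}")
print(f"AIC linear:         {m1.aic:.1f}")
best = "linear" if m1.aic < m0.aic else "intercept-only"
print(f"Preferred model by AIC: {best}")
```

The design point is that every candidate hypothesis is treated symmetrically as a model, and the criterion simply ranks them, which is what allows this approach to sidestep the accept/reject asymmetry of null hypothesis testing.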