In 1865 an Austrian monk, Gregor Mendel, presented the results of painstaking experiments on the inheritance of the garden pea. His audience heard the results but did not understand them. In 1866, Mendel published his work in an obscure German journal, where it was ignored and forgotten. Mendel died in 1884 without knowing the pivotal role his work would play in founding the modern discipline of genetics.
By 1899, some geneticists were beginning to realize the necessity of mathematically analyzing inheritance in order to understand how evolution might work (Bateson, 1899). They did not realize that Mendel had already solved this problem. Then, in 1900, three leading scientists of the day, Hugo de Vries, Carl Correns, and Erich von Tschermak, independently rediscovered Mendel’s paper and realized how important it was. With the rediscovery of Mendel’s principles, genetics as a scientific discipline exploded into activity. Within two years, the first study of Mendelian inheritance in humans (Garrod, 1902), describing the inheritance of alkaptonuria, was published. This paper, too, was far ahead of its time; its importance would be recognized only as the one gene-one polypeptide principle was developed in the latter part of the twentieth century.
Now, more than a century later, Mendel’s work seems elementary to modern-day geneticists, but its importance cannot be overstated. The principles generated by Mendel’s pioneering experimentation are the foundation for the genetic counseling so important today to families with health disorders having a genetic basis. They are also the framework for the modern research that is making inroads in treating diseases previously believed to be incurable. In this era of genetic engineering – the incorporation of foreign DNA into chromosomes of unrelated species – it is easy to lose sight of the basics of the process that makes it all possible.
Recent advances in molecular genetics have resulted in the production of insulin and human growth hormone by genetic engineering techniques. Cancer patients are being treated with cells that have been removed from their own bodies, genetically altered to enhance their tumor-destroying capacity, and then reinserted in the hope that microscopic tumors escaping the surgeon’s scalpel may be destroyed.
This newfound technology has not been without controversy, however. Release into the environment of genetically engineered microorganisms that may make crops resistant to disease-causing organisms (or even capable of withstanding temperatures that normally would freeze plants) has met with strong opposition.
In the future, you may be called upon to help make decisions about issues like these. To make an educated judgement, you must understand the basics, just as Mendel did. This exercise will give you a better understanding of the basic laws that govern the inheritance of characteristics by successive generations.
The corncob is not itself the fruit of the corn plant, nor are the kernels the seeds. Each kernel of corn is really a fruit, which develops from the ovary of one of the female flowers of the plant. There are a great number of inheritable characteristics in corn (Zea mays). In this experiment we will investigate two: the color of the kernel and the starch content of the endosperm, which determines whether the kernel is wrinkled or smooth.
The endosperm is a nutritional reserve for the developing corn seedling that provides energy to the seedling immediately after germination. This reserve is drawn on until the developing plant begins to generate its own energy by photosynthesis. Three layers of cells protect the endosperm. The innermost layer, the aleurone layer, contains purple pigments called anthocyanins. The amount of anthocyanin in the aleurone layer and the amount of starch present in the endosperm are genetically determined and are inherited according to Mendelian rules.
The purpose of this experiment is to demonstrate experimentally that some characteristics of ears of corn are inherited according to the Mendelian laws of inheritance. By the end of this experiment you will have determined whether: 1. the quantities of purple and yellow corn kernels correspond to the Mendelian ratios of the F2 generation of a monohybrid cross, and 2. the quantities of purple, yellow, wrinkled, and smooth kernels correspond to the Mendelian ratios of the F2 generation of a dihybrid cross.
The Monohybrid Cross
You will be provided with an ear of corn whose kernels show two different colors, yellow and purple. Place a colored pin at the end of one row of kernels and count and record the number of each type of kernel in the row. Place an uncolored pin at the end of the next row and continue counting. After each row is completed, move the uncolored marker pin to the next row, until you return to the row marked by the colored pin. Each member of your group should count the kernels independently. Record the results of the counts in Table 1 below.
Determine the expected numbers for each phenotype using Mendel’s law of segregation and perform a Chi-square calculation to determine whether the class data fit Mendel’s law of segregation. Include a statement describing what the Chi-square calculation indicates about the class data.
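The arithmetic can be sketched in Python. The kernel counts below are made-up placeholders, not class data; substitute your group's totals before drawing any conclusion:

```python
# Chi-square test for a monohybrid cross, where Mendel's law of
# segregation predicts a 3:1 purple:yellow ratio in the F2 generation.
observed = {"purple": 310, "yellow": 90}   # placeholder counts

total = sum(observed.values())

# Expected counts under the 3:1 hypothesis.
expected = {"purple": total * 3 / 4, "yellow": total * 1 / 4}

# Sum one (O-E)2/E term per phenotype class.
chi_square = sum((observed[p] - expected[p]) ** 2 / expected[p]
                 for p in observed)
print(f"X2 = {chi_square:.3f}")  # compare to the table value for df = 1
```

With these placeholder counts the expected numbers are 300 and 100, giving an X2 of about 1.33.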
Chi-square (X2) Calculation:
The Dihybrid Cross
You will be provided with an ear of corn whose kernels are not only purple or yellow but also either smooth or wrinkled. The possible combinations are purple/smooth, purple/wrinkled, yellow/smooth, and yellow/wrinkled. Sweet corn kernels wrinkle when they dry, while starchy kernels remain smooth.
Place a colored pin at the end of one row and count and record the phenotypes of all the kernels in that row in the table below. Place an uncolored pin at the end of the next row and continue counting. After each row is complete, move the uncolored pin to the end of the next row and continue counting until you reach the row marked by the colored pin. Each ear should be counted by every member of your group.
Calculate the expected number of individuals of each phenotype using Mendel’s law of independent assortment and perform a Chi-square calculation to determine whether the class data fit Mendel’s law of independent assortment. Include a statement describing what the Chi-square calculation indicates about the class data.
Chi-square (X2) Calculation:
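As with the monohybrid cross, the calculation can be sketched in Python. Again, the counts are made-up placeholders; the expected 9:3:3:1 ratio follows from Mendel's law of independent assortment:

```python
# Chi-square test for a dihybrid cross (expected 9:3:3:1 ratio in the F2).
observed = {"purple/smooth": 152, "purple/wrinkled": 43,
            "yellow/smooth": 51, "yellow/wrinkled": 18}  # placeholders
ratio = {"purple/smooth": 9, "purple/wrinkled": 3,
         "yellow/smooth": 3, "yellow/wrinkled": 1}

total = sum(observed.values())
# Each expected count is the total times that class's share of 16.
expected = {p: total * r / 16 for p, r in ratio.items()}

chi_square = sum((observed[p] - expected[p]) ** 2 / expected[p]
                 for p in observed)
print(f"X2 = {chi_square:.3f}")  # four classes, so df = 4 - 1 = 3
```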
The Chi-square Test
When scientists set out to solve a problem, they formulate a hypothesis that suggests a possible solution to the problem. They then carry out experiments and collect data to test if the hypothesis is correct and, therefore, a solution to the problem. As part of the development of the hypothesis, the scientists should be able to make some predictions about the data that they will collect.
A common problem when analyzing data is that they do not always match the predictions from the hypothesis exactly. The question, then, is whether the data still fit the predictions or differ from them. This is where statistical analysis is used.
There are a large number of statistical tests, each with its own specific use. One important consideration is which statistical test is the most appropriate for the data. This usually depends on the type of data and how it was collected. For most of our experiments the most appropriate test is the chi-square (X2) test. The formula for the chi-square test is:
X2 = Σ(O-E)2/E
where O is the observed, or experimental, result and E is the expected, or hypothetical, result for each data set.
To illustrate how to use the formula, let’s look at an example using a coin. A coin has two sides, a head and a tail. According to the laws of probability, the chance of flipping a coin and having it land head up is ½ (0.5) and the same for a tail. Based on this law, if we tossed a coin 100 times, half the time it should be a head and half the time it should be a tail. Our hypothesis is that there is an equal probability of tossing a head or a tail and our expected results, for 100 repetitions, would be 50 heads and 50 tails. Now we take a coin and toss it 100 times and get 52 heads and 48 tails. These results differ from what we expected but do they still fit with our hypothesis?
|Coin Face Showing |Observed |Expected |
|Head |52 |50 |
|Tail |48 |50 |
|Total |100 |100 |
To test the validity of our results we carry out a chi-square test as follows:
X2 = (52-50)2/50 + (48-50)2/50
= (2)2/50 + (-2)2/50
= 4/50 + 4/50
= 8/50 or 0.16
Notice that you have to set up the (O-E)2/E term for each data set (that is, the head data set and the tail data set) and that the result of each of these is added together to give the X2 value.
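The same bookkeeping can be written in Python as a check on the hand arithmetic:

```python
# The coin-toss example from the text: 100 tosses, 1:1 hypothesis.
observed = {"head": 52, "tail": 48}
expected = {"head": 50, "tail": 50}

# One (O-E)2/E term per data set, summed to give X2.
chi_square = sum((observed[face] - expected[face]) ** 2 / expected[face]
                 for face in observed)
print(f"X2 = {chi_square:.2f}")  # prints X2 = 0.16, as calculated above
```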
Now that we have a value for X2, what does it tell us about our data? The main consideration here is to remember what we are testing with the chi-square test. Many people think that it is testing the original hypothesis, which is incorrect. What we are testing is a statistical hypothesis called the NULL hypothesis. Simply put, this hypothesis says that any deviation of the observed (experimental) results from the expected (hypothetical) results is due to random chance alone. The X2 value is used to determine the probability that a deviation this large would arise by chance alone. Statisticians and biologists have set an arbitrary value of 5% as the probability level at which the NULL hypothesis is rejected; that is, if the probability of the deviation being due to chance alone is 5% or less, then the NULL hypothesis is rejected.
To determine the probability level, we use an X2 table, which tabulates probabilities, X2 values, and another property of the data called the degrees of freedom. The degrees of freedom equal the number of data sets minus one, or the number of (O-E)2/E terms minus one. For instance, in our example there are two data sets, so there is one degree of freedom.
Now let’s look at the X2 table and find our probability value. First, we find the degrees of freedom for our data in the DF column; then we move across that row until we find where our X2 value falls. We then read off the probability value (as a decimal) from the top of that column. The table in the lab manual only shows values for probabilities of 0.1 or less, since 0.05 (or 5%) is the key value. The rule of thumb here is that any X2 value that gives a probability greater than 0.05 means the NULL hypothesis is accepted; a probability less than 0.05 means the NULL hypothesis is rejected.
For our example, the X2 value was 0.16 with one degree of freedom. From the table, the probability is much greater than 0.05; in terms of the NULL hypothesis, the probability is very high that the deviation in the observed data is due to random chance alone. This, in turn, means that our experimental data fit the results predicted by our original hypothesis and support that hypothesis.
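For one degree of freedom only, the table lookup can be cross-checked in Python: a chi-square variable with df = 1 is the square of a standard normal variable, so its tail probability is erfc(√(X2/2)), available in the standard library. This is a sketch for checking your table reading, not a replacement for the table:

```python
import math

# Tail probability P(X2 at least this large) for df = 1 ONLY.
# For other degrees of freedom you still need the X2 table.
def chi_square_p_df1(x):
    return math.erfc(math.sqrt(x / 2))

p = chi_square_p_df1(0.16)   # the coin example: X2 = 0.16, df = 1
print(f"p = {p:.2f}")        # about 0.69, far above the 0.05 cutoff
```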