Answer:
1. Information gain would choose Sound as the tree's root, because the information gain of Sound (0.75) is higher than that of Fur (0.5) and Color (0.25). A sketch of how such gains are computed follows this list.
2. The decision tree that would be constructed by recursively applying information gain to select roots of sub-trees is as follows:
```
Root: Sound
* Meow: Cat
* Bark: Fur
    * Fine: Dog
    * Coarse: Dog
```
3. The new example [Sound=Bark, Fur=Coarse, Color=Brown] would be classified as Dog: starting at the root, it takes the Bark branch, then the Coarse branch of the Fur test, which ends in a Dog leaf.
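As a concrete illustration, here is a minimal Python sketch of the entropy and information gain computations behind the root choice. The four-example training table is hypothetical (the original dataset is not reproduced in this answer), so the printed gains will not match the 0.75/0.5/0.25 figures exactly; the code only shows how the quantities are computed.
```
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of a list of class labels, in bits.
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(examples, attribute, target="Class"):
    # Entropy of the whole set minus the weighted entropy of the
    # partitions produced by splitting on the given attribute.
    total = len(examples)
    base = entropy([e[target] for e in examples])
    remainder = 0.0
    for value in {e[attribute] for e in examples}:
        subset = [e[target] for e in examples if e[attribute] == value]
        remainder += (len(subset) / total) * entropy(subset)
    return base - remainder

# Hypothetical examples; the real training table is assumed, not given here.
data = [
    {"Sound": "Meow", "Fur": "Fine",   "Color": "Brown", "Class": "Cat"},
    {"Sound": "Meow", "Fur": "Coarse", "Color": "Black", "Class": "Cat"},
    {"Sound": "Bark", "Fur": "Fine",   "Color": "Black", "Class": "Dog"},
    {"Sound": "Bark", "Fur": "Coarse", "Color": "Brown", "Class": "Dog"},
]

for attr in ("Sound", "Fur", "Color"):
    print(attr, information_gain(data, attr))
```
The attribute that maximizes this quantity is selected as the root; here that is Sound.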
Here is a more detailed explanation of how the decision tree is constructed:
1. The first step is to calculate the information gain of each attribute. The information gain of an attribute measures how much splitting on that attribute reduces uncertainty (entropy) about the class. The higher the information gain, the more useful the attribute is for classification.
2. The attribute with the highest information gain is chosen as the tree's root. In this case, the attribute with the highest information gain is Sound.
3. The data is then partitioned into one group per value of the root attribute. In this case, the examples are split into a Meow group and a Bark group, which happen to correspond to the cats and the dogs.
4. The process is then repeated recursively for each group that still contains a mix of classes, choosing the remaining attribute with the highest information gain as the root of that sub-tree (see the sketch after this list).
5. The recursion stops when a group is pure (all of its examples share one class) or no attributes remain, at which point the branch ends in a leaf labeled with the majority class.
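To make steps 4 and 5 concrete, here is a minimal ID3-style sketch that builds the tree recursively and classifies the new example. It continues the earlier sketch (reusing Counter, information_gain, and data from there) and is an illustration of the procedure, not the original assignment's reference solution.
```
def build_tree(examples, attributes, target="Class"):
    labels = [e[target] for e in examples]
    # Base case: the group is pure, or no attributes remain.
    # Either way, return a leaf labeled with the majority class.
    if len(set(labels)) == 1 or not attributes:
        return Counter(labels).most_common(1)[0][0]
    # Recursive case: split on the attribute with the highest gain.
    best = max(attributes, key=lambda a: information_gain(examples, a, target))
    remaining = [a for a in attributes if a != best]
    tree = {"attribute": best, "branches": {}}
    for value in {e[best] for e in examples}:
        subset = [e for e in examples if e[best] == value]
        tree["branches"][value] = build_tree(subset, remaining, target)
    return tree

def classify(tree, example):
    # Walk branches until a leaf (a plain class label) is reached.
    while isinstance(tree, dict):
        tree = tree["branches"][example[tree["attribute"]]]
    return tree

tree = build_tree(data, ["Sound", "Fur", "Color"])
print(classify(tree, {"Sound": "Bark", "Fur": "Coarse", "Color": "Brown"}))
```
On the hypothetical data this prints Dog for [Sound=Bark, Fur=Coarse, Color=Brown], matching the classification given above.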
Decision trees are a powerful tool for classification: because each path through the tree combines tests on several attributes, they can classify data that is not linearly separable. In this example a single attribute (Sound) already separates the classes, but the same recursive procedure handles harder datasets by stacking further tests, as the sub-tree on Fur illustrates.