To calculate the mean, you add up all the numbers in a sample (in this case, a season) and divide the sum by the number of values. For season 1, adding all the sales together gives you 2015. Dividing this by the number of games gives you 2015/5 = 403.
For season 2's mean, the numbers add up to 1940. Dividing this by the number of games gives you 1940/5 = 388.
When the season means are compared, we can see that season 1 had a slightly greater mean. This means the average game in season 1 had better attendance than the average game in season 2.
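If you'd like to double-check the arithmetic, here's a quick Python sketch. The per-game numbers below are hypothetical, chosen only so they match the season totals (2015 and 1940) and the smallest/largest values given in the problem, since the full data set isn't repeated here:

```python
# Hypothetical per-game figures -- made up to match the stated totals
# (2015 and 1940) and the min/max values (382/419 and 302/532).
season_1 = [419, 382, 405, 410, 399]
season_2 = [532, 302, 370, 368, 368]

# Mean = sum of the values divided by how many values there are.
mean_1 = sum(season_1) / len(season_1)  # 2015 / 5 = 403.0
mean_2 = sum(season_2) / len(season_2)  # 1940 / 5 = 388.0
print(mean_1, mean_2)  # 403.0 388.0
```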
To find the range, we take the smallest and largest value in each season and subtract the smallest from the largest.
For season 1, we take the largest value, 419, and the smallest value, 382, and subtract: 419 - 382 = 37. Our range for season 1 is 37.
For season 2, we take its largest value, 532, and its smallest, 302, and subtract: 532 - 302 = 120. Our range for season 2 is 120.
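The same idea in Python, reusing the hypothetical lists from the sketch above:

```python
# Range = largest value minus smallest value, using the same
# hypothetical lists from the mean example above.
range_1 = max(season_1) - min(season_1)  # 419 - 382 = 37
range_2 = max(season_2) - min(season_2)  # 532 - 302 = 120
print(range_1, range_2)  # 37 120
```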
When comparing the seasons by range, season 2 has a much larger value. This means season 2's attendance figures are much more spread out, while season 1's values are clustered closer together.
Hope this helps. Let me know if you have any questions.