Here is a hidden fact, or maybe a forgotten reality, about digital cameras: do you need more megapixels? The short answer is no, unless you need them for a very specific purpose. For general use, we don't need anything more than a 2-3 megapixel camera.
To make the argument easier to follow, I will round the numbers. An average computer monitor has a resolution of about 1600×1200, which is roughly 2,000,000 pixels, or 2 megapixels. These are the physical dots on your screen (tiny light-emitting elements) that your graphics card sets to different colors; together, they make up the image on your computer screen. In fact, you can see the pixels on your screen if you look closely with a magnifier. They are even easier to see on a TV screen, which has a much lower resolution.
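The arithmetic is easy to verify. Here is a quick sketch in Python, using the 1600×1200 monitor from the example above:

```python
# Sanity check: a 1600x1200 monitor holds just under 2 million pixels.
width, height = 1600, 1200
pixels = width * height

print(pixels)               # 1920000
print(pixels / 1_000_000)   # 1.92 "megapixels", rounded to 2 in this article
```

The same calculation works for any camera: multiply the sensor's width by its height in pixels and divide by one million.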
If you have a 2-megapixel image on your camera, you can see it on your screen dot by dot, pixel by pixel: a perfect match. With a 2-megapixel camera you miss nothing; all the information the camera captured is displayed on your screen.

Now what happens with a 3-megapixel image, or a 3-megapixel camera? Since the screen is not able to show every pixel, it has to drop roughly one in every three; it simply can't fit 3,000,000 dots into 2,000,000 available spaces. Similarly, with a 4-megapixel image the computer has to drop every second pixel to shrink all that information into the limited space. The capture (what the camera records) and the reproduction (what the screen outputs) no longer match, so what is the point of a 3, 4, or even 10 megapixel camera when you can't even see the full result? And we certainly can't judge picture quality just by looking at the screen, because our eyes are not designed to distinguish individual pixels among 2,000,000 of them.
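The proportions in the argument above can be sketched with a few lines of Python. This is a simplification (real image scaling averages neighboring pixels rather than literally skipping them), but the fraction of camera data that cannot reach the screen works out the same either way:

```python
# How much of a camera's image can a 2-megapixel screen NOT display?
SCREEN_PIXELS = 2_000_000  # the article's rounded monitor resolution

def dropped_fraction(camera_megapixels: float) -> float:
    """Fraction of camera pixels that exceed the screen's capacity."""
    camera_pixels = camera_megapixels * 1_000_000
    if camera_pixels <= SCREEN_PIXELS:
        return 0.0  # the screen can show every pixel
    return (camera_pixels - SCREEN_PIXELS) / camera_pixels

for mp in (2, 3, 4, 10):
    print(f"{mp} MP camera: {dropped_fraction(mp):.0%} of pixels lost on screen")
```

A 3-megapixel image loses about a third of its pixels on a 2-megapixel screen, a 4-megapixel image half, and a 10-megapixel image fully 80%.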