From Bruce Dawson (a senior software design engineer at Microsoft), writing on the blog of Andre Vrignaud (Microsoft's Director of Technical Strategy for Xbox Live):
Many developers, gamers, and journalists are confused by 1080p. They think that 1080p is somehow more challenging for game developers than 1080i, and they forget that 1080 (i or p) requires significant tradeoffs compared to 720p. Some facts to remember:
* 2.25x: that's how many times as many pixels there are in 1920x1080 as in 1280x720
* 55.5%: that's how much less time you have to spend on each pixel when rendering 1920x1080 compared to 1280x720--the point being that at higher resolutions you have more pixels, but you can't make each one look as good
* 1.0x: that's how much harder it is for a game engine to render a game in 1080p as compared to 1080i--the number of pixels is identical so the cost is identical. There is no such thing as a 1080p frame buffer. The frame buffer is 1080 pixels tall (and presumably 1920 wide) regardless of whether it is ultimately sent to the TV as an interlaced or as a progressive signal.
* 1280x720 with 4x AA will generally look better than 1920x1080 with no anti-aliasing (there are more total samples).
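The numbers in the list above can be checked with a few lines of arithmetic. This is just an illustrative sketch (not anything from the original post):

```python
# Sanity-check the pixel and sample counts quoted above.

def pixels(width, height):
    """Total pixels in a frame buffer of the given dimensions."""
    return width * height

p720 = pixels(1280, 720)    # 921,600 pixels
p1080 = pixels(1920, 1080)  # 2,073,600 pixels

ratio = p1080 / p720                  # 2.25x as many pixels at 1080
time_lost = (1 - p720 / p1080) * 100  # ~55.5% less time per pixel

# With 4x anti-aliasing, 720p takes more total samples than 1080 with none:
samples_720_4xaa = p720 * 4   # 3,686,400 samples
samples_1080_noaa = p1080     # 2,073,600 samples
```

The per-pixel time budget falls by exactly 5/9 (about 55.5%), since the GPU has a fixed amount of time per frame to spread across 2.25x the pixels.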
A few elaborations:
Any game could be made to run at 1920x1080, but it is a tradeoff. It means you can show more detail (although you need larger textures and models to really get this benefit), but it also means you have much less time to run complex pixel shaders. Most games can't justify running at higher than 1280x720--it would actually make them look worse because of the compromises they would have to make in other areas.
1080p is a higher-bandwidth connection from the frame buffer to the TV than 1080i. The frame buffer itself, however, is identical. 1080p will look better than 1080i--interlaced flicker is not a good thing--but it makes precisely zero difference to the game developer. Just as most Xbox 1 games let users choose 480i or 480p because it was no extra work, 1080p versus 1080i is no extra work. It's just different settings on the display chip.
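The bandwidth difference is simple to quantify. A rough sketch comparing raw pixel rates (real signal timings also include blanking intervals, which this ignores):

```python
# Wire bandwidth vs frame-buffer size for 1080i and 1080p at 60 Hz.

WIDTH, HEIGHT, REFRESH = 1920, 1080, 60

# The frame buffer is the same size either way.
framebuffer_pixels = WIDTH * HEIGHT  # 2,073,600

# 1080p: every refresh carries a full frame.
progressive_rate = WIDTH * HEIGHT * REFRESH        # pixels/second on the wire

# 1080i: every refresh carries one field (half the lines).
interlaced_rate = WIDTH * (HEIGHT // 2) * REFRESH  # pixels/second on the wire
```

1080p needs exactly twice the link bandwidth of 1080i, but the game renders into the same 1920x1080 buffer in both cases--which is why the choice costs the developer nothing.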
Inevitably somebody will ask about field rendering. Since interlaced formats display the even lines on one refresh pass and the odd lines on the next, can't games just render half of the lines each time? Probably not, and even if you could, you wouldn't want to. You probably can't do field rendering because it requires maintaining a rock-solid 60 fps: if you ever miss a frame it will look horrible, as the odd lines are displayed in place of the even, or vice versa. That is a significant challenge when rendering extremely complex worlds with over 1 million pixels per field (2 million pixels per frame), and it is probably not worth it. And even if you can, you shouldn't: the biggest problem with interlaced is flicker, and field rendering makes it worse, because it disables the 'flicker fixer' hardware that intelligently blends adjacent lines. Field rendering has been done in the past, but it was always a compromise solution.
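To make the field-rendering idea concrete, here is a toy sketch of which lines each field carries, and where the "over 1 million pixels per field" figure comes from. The function name is illustrative, not a real API:

```python
# Toy model of field rendering: each vsync carries only the lines of one
# parity. If a frame is missed, the wrong parity lands on screen -- the
# artifact described above.

WIDTH, HEIGHT = 1920, 1080

def lines_for_field(field_index):
    """Even fields carry the even-numbered lines, odd fields the odd ones."""
    parity = field_index % 2
    return list(range(parity, HEIGHT, 2))

even_lines = lines_for_field(0)  # lines 0, 2, 4, ... 1078
odd_lines = lines_for_field(1)   # lines 1, 3, 5, ... 1079

pixels_per_field = len(even_lines) * WIDTH  # 1,036,800 -- over 1 million
pixels_per_frame = WIDTH * HEIGHT           # 2,073,600 -- about 2 million
```

Each field is 540 lines, so a field renderer must finish 1,036,800 pixels every 16.7 ms without ever slipping--which is the rock-solid-60-fps requirement the paragraph describes.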