Haha, you thought I was done with that? Sadly, no: some game developers and game marketers like to say things about computers, game development, and console parts that just aren't true, or at the very least are not the whole story.
The thing that prompted me to write this particular blog post was a video by Perils of Pokey on cinematic games, in which he presents a statement from Ubisoft about the CPU and why it dictates graphical performance. Part of that statement was true: game designers really are limited by the CPU. But the argument it presents and the points it makes are just plain false. Specifically, it claims that Assassin's Creed: Unity could run at 100 FPS if they were only going off the graphics card (false), and that dialing back the graphics to improve the frame rate wouldn't do anything (also false). Because of this, I want to take the time to outline what each part of a computer does and how those functions relate to video games.
Please note, this description of parts is not comprehensive, and it is not intended to substitute for an actual education. I'm just presenting my knowledge of computer parts to explain why Ubisoft's partial truth irritates me and, if I get anything wrong, I apologize. Also, I will only be talking about the parts that affect gameplay directly, so things like the motherboard and CPU fans will largely be left out of this equation. While those parts can affect gameplay in certain situations, they really only do so at very specific points, such as when you need to cool off your processor or you're building a gaming PC.
For this discussion, we are going to be looking at four major parts:
- CPU: Central Processing Unit, the part that performs every calculation necessary for your game to function properly.
- GPU: Graphics Processing Unit, or graphics card, the part that outputs the necessary data to the screen in the form of pixels.
- RAM: Random Access Memory, the part that holds the data for whatever part of the game you're interacting with right now.
- Hard Drive: the storage unit that holds all of your game's data, including the executable file and any of your saves.
The CPU, which will be known from now on as the processor, is responsible for calculating every single aspect of a game so that it can run smoothly. It runs the code that drives the gameplay, it calculates every polygon of every model, it calculates what the textures and bump maps are supposed to do, it responds to input to play the right animations at the right time, and it tracks all other pieces of data, such as where everything is in the game world.
The GPU, which will be known from now on as the graphics card, is directly connected to the processor, like every other part. The graphics card talks to the processor to gather the information it needs to know what should be on screen at any given moment, and then outputs all of that as an image, usually at a rate of 30 to 60 images per second, a rate known as the frame rate. And here's where the first niggle starts to appear.
In that original post, he says that they're bound to the processor, which is true: your game can only do what your processor allows it to do. But one of the points he makes is that the graphics card could output the game at 100 FPS if that's what they were going off of. This is completely false, and shows that he either has no idea how computers work or is hoping that his audience doesn't.
The idea he presents (I can't remember the exact wording) is that the polygons of a given model are held in the graphics card. This is blatantly false. You see, when 3D graphics were first introduced, a certain amount of processing power had to be spent creating the polygons of a given model and figuring out their size and location. From what I can tell, the number of calculations depends on the number of points that connect to form the polygons. For the purposes of this discussion, though, we'll simplify it to how game artists usually budget available power: in triangles, meaning every polygon is a collection of three points, or vertices.
Each of those polygons costs a certain number of calculations, which means that the more polygons a model has, the more calculations are required to place them, and the fewer calculations are left over for everything else.
In order to figure out by how much, though, you have to look at two core specs of a given processor: the clock speed, which roughly corresponds to how many calculations a single core can do per second, and the core count, or how many cores are performing calculations simultaneously.
Let me give you an example: let's compare the PS3's processor to the PS4's. The PS3's Cell processor actually had a higher clock speed (3.2 GHz, or roughly 3.2 billion cycles per second), but it was built around a single general-purpose core, while the PS4's processor has 8 cores running at around 1.6 GHz each. Because the PS4 has 8 cores rather than one, in theory it can spread far more calculations across its processor at once. However, this really only applies if a given developer knows how to use all of the cores in the processor at once, which many do, I'm assuming.
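To put rough numbers on the clock-speed-times-cores idea, here's a toy calculation. It treats one clock cycle as one calculation, which real processors don't actually honor (they can do more or fewer per cycle depending on the instruction), so treat these as loose theoretical ceilings, not benchmarks. The clock figure is just an example number.

```python
def theoretical_ops_per_second(clock_hz: float, cores: int) -> float:
    """Naive ceiling: assume one calculation per clock cycle, per core."""
    return clock_hz * cores

# Hypothetical chips: same clock speed, different core counts.
eight_core = theoretical_ops_per_second(1.6e9, 8)
one_core = theoretical_ops_per_second(1.6e9, 1)

print(eight_core / one_core)  # -> 8.0: eight cores, eight times the ceiling
```

Under this simplification, doubling cores doubles the ceiling, which is exactly why multi-core-aware engines matter.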
Once the locations of all of the necessary polygons have been calculated, they are rendered to the screen by the GPU in the form of pixels. Many good graphics cards can handle up to 1080p graphics, which means the GPU can handle around 2 million pixels per frame. However, there's a difference between pixels and polygons.
Polygons, in terms of 3D graphics, are collections of points that take up space in a 3D environment. A single pixel is just a square of a certain color on your screen. What's the difference? Once the processor has figured out where all the polygons are, the graphics card works out what's supposed to be on screen and outputs it as a 2D image made of pixels.
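To make the pixel side concrete, here's a quick sketch of how many pixels the graphics card has to fill for each frame at common output resolutions. These are standard resolution dimensions, not figures tied to any particular game or console.

```python
# Pixels the graphics card must fill each frame, per resolution.
resolutions = {
    "720p": (1280, 720),
    "1080p": (1920, 1080),
    "4K": (3840, 2160),
}

for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h:,} pixels")
# 1080p comes out to 2,073,600 pixels -- the "around 2 million" above.
```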
What this means is that if you reduce the polygon count on a given model, you free up calculations in the processor for other things, such as the code required to run gameplay and animations at the right time, and yes, that can improve the frame rate. Because the frame rate is variable, and a new frame can only go out each time a full pass of calculations is finished, there's no way the graphics card could take over those tasks on its own; all it does is output what's supposed to appear on screen.
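One useful way to see the processor's role in frame rate is as a time budget: everything (gameplay code, animation, polygon placement) has to finish within a fixed slice of time for each frame, and lighter models leave more of that slice for everything else. A minimal sketch:

```python
def frame_budget_ms(target_fps: float) -> float:
    """Milliseconds available to finish every calculation for one frame."""
    return 1000.0 / target_fps

print(round(frame_budget_ms(30), 1))  # 33.3 ms per frame at 30 FPS
print(round(frame_budget_ms(60), 1))  # 16.7 ms per frame at 60 FPS
```

Going from 30 to 60 FPS halves the budget, which is why trimming per-frame work (like polygon placement) translates directly into frame rate.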
"So does that mean the Processor is responsible for loading times?"
No. Loading times are the result of having to swap one set of assets in RAM for another. The best comparison I can give is Demon's Souls vs. The Last of Us. Demon's Souls has a fairly large world in terms of how much has to be loaded in, but FromSoftware made a pretty big mistake in how they handled those large assets: namely, they divided them into large chunks. The result was a bunch of loading screens where, whenever you want to go somewhere, you have to sit and wait for the current level to be swapped out for the one you're heading toward. The Last of Us does something very similar but handles it much better, because Naughty Dog figured out something FromSoftware hadn't at the time.
You see, The Last of Us has one long loading screen that appears right as you boot up your save. You sit through that once, and then you can enjoy the rest of the game without having to deal with anything else. If there were no loading screens at all, like in Jak & Daxter, I'd be more inclined to believe they used a method radically different from the one in Demon's Souls; Jak & Daxter masks its loading with long tunnels (in the case of the hoverbike sections in The Precursor Legacy) or with check-in stations (like in Jak 2). However, the fact that The Last of Us shows a visible loading screen before the game starts leads me to believe that it's actually doing something similar to Demon's Souls, loading a large amount of data in one go.
So then what's the difference? Well, Demon's Souls took one large chunk of data and divided it into a bunch of smaller (though still really big) chunks, which resulted in a plethora of long loading screens. The Last of Us, however, loads everything that's supposed to be in the game into RAM all at once. No matter how far you progress, even if you can't go back, the stuff you've passed, the stuff you haven't seen yet, and the stuff that's on screen right now are all in RAM at the same time. That means one long loading screen up front, but after that, gameplay goes largely uninterrupted.
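Here's a back-of-the-envelope sketch of the two strategies described above. The disk speed and level sizes are invented numbers purely for illustration; the point is that both approaches read the same total amount of data, they just distribute the waiting differently.

```python
DISK_MB_PER_SEC = 100.0  # assumed read speed, invented for illustration

def load_time_s(megabytes: float) -> float:
    """Seconds spent on a loading screen for a read of this size."""
    return megabytes / DISK_MB_PER_SEC

# Chunked (Demon's Souls style): five big levels, one load per transition.
chunk_waits = [load_time_s(800) for _ in range(5)]

# Everything-upfront (The Last of Us style): one load at boot.
upfront_wait = load_time_s(5 * 800)

print(f"chunked: {len(chunk_waits)} waits of {chunk_waits[0]:.0f}s each")
print(f"upfront: one wait of {upfront_wait:.0f}s")
```

Same total wait either way; the upfront method just pays it all before gameplay starts instead of interrupting you five times.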
That said, if you're an aspiring game developer or game designer like me, I personally do not recommend using this method for every single game you make. Games that require zero loading times or that use procedurally generated assets, like The Last of Us and Rogue Legacy respectively, will need everything in RAM at once. The drawback is that if the RAM you're working with is not large enough to contain every single asset in the game, the game may never finish loading, or it may simply crash.
To be fair, the PS4 and Xbox One both have around 8 GB of RAM. I'm not entirely certain how much of that is usable by any given game, so let's be generous to both and say that 5 GB is available. That 5 GB pool is then the limit on how much data your assets are allowed to take up if you use this method. If your assets, all combined, exceed that limit, I suggest you do what Demon's Souls did: divide the content into smaller chunks, and then disguise the loading times. Some ways you can do this:
- A loading screen (not recommended).
- A hallway that, once entered, unloads the previous level and loads in the next one by the time you reach the far door (better, but still not perfect).
- Reducing the amount of data your assets take up in the first place (probably your best option, but not everyone is going to want to do that).
- Covering it up with an unskippable cutscene (do so at your own discretion).
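As a sketch, that decision could look like a simple pre-flight check. Note that the 5 GB figure is this post's generous guess, not a published console spec, and the asset sizes below are made up for illustration.

```python
USABLE_RAM_BYTES = 5 * 1024**3  # the generous 5 GB guess, not a real spec

# Made-up asset sizes for a hypothetical game, in bytes.
assets = {
    "world_geometry": int(2.1 * 1024**3),
    "textures": int(1.8 * 1024**3),
    "audio": int(0.6 * 1024**3),
}

total = sum(assets.values())
if total <= USABLE_RAM_BYTES:
    print("Fits: load everything upfront, one loading screen.")
else:
    print("Too big: split into chunks and hide the loads.")
```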
"So then, what does the Hard Drive do?"
Well, compared to the other three parts we just discussed, the hard drive actually has surprisingly little to do with gameplay performance. Your hard drive holds the executable file for digital games, and it holds your save data, but other than that it doesn't really do much during play. If you're physically distributing your game, a dual-layer Blu-ray disc can hold 50 GB of data, so as long as you don't exceed that, you should be fine. Digital games do, in theory, have a much higher limit, since they're restricted not by an optical medium but only by the storage space that's available. However, the fact that most digital-only games don't even reach 20 GB of required space suggests that a good game doesn't need that much room to hold everything.
"So, why did Ubisoft say that?"
Well, it's not clear exactly why, other than to push the graphics angle and try to dissuade people from wanting higher frame rates. One reason I might speculate is that the processor is the most important part of a computer or any interactive device. Without the processor, the graphics card might show a single image in a best-case scenario, but everything else would be useless.
So I hope that clears some things up. If I got some things wrong, I apologize but I think I got most of what I wanted to say fairly on the mark.