Tuesday Time 8 - Shitty Post Rising: Reblogance

“Neither Friday Nor Close To On Time” Edition

So if you aren’t in my Discord: the reason I’ve been radio silent for a few months was a combo of being in a huge funk, dying from work, and waiting for the next release of the engine I’m using. That’s pretty much it, let’s go.

A bevy of new features (lol!!!!!!!!!!!!!!)

Bevy 0.6 has finally released with the new renderer!

And the good news is it only took a few days of pain to get freelancers running again, woo!

Image of updated game

Most of the new features don’t affect me at all!

There was actually a massive 2D rendering performance upgrade, and some of the previous stress tests I was doing that chugged now run flawlessly, which is pretty huge. Potentially means bigger maps, more characters, more shit going on, higher allowed zoom out, etc.

Timestep Hell

I decided that my current method of “speeding up time” has way too many shortcomings in terms of loss of precision and changing behavior.

Timestep

For a quick crash course introduction: the “timestep” in a game is the interval at which it updates. In the old days this was as fast as the processor could compute, and for many console generations it was the framerate of the game, but in modern times it’s generally considered good practice to separate your game logic from your render logic. There are a lot of different ways of achieving this, and the generally accepted “best” method can be kind of complicated and breaks down a bit when you introduce the idea of changing the speed of game time, as colony management games need to.

What’s wrong?

Currently, the way I’ve implemented the timestep is that game time ticks forward once a second, and all game logic is “frame timed”: it uses how long the last frame took as a multiplier to change values incrementally each frame, and that is then multiplied by the “time scale multiplier,” which just acts as a flat multiplier on the delta time (explained later). So if the last frame took 1/60th of a second, instead of advancing 1/60th of the amount it should advance in one second, with a multiplier of 5 it advances 5/60ths, or 1/12th of the amount. This technically works, but there are some issues with the approach.
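In rough code form, the whole scheme boils down to one multiply (a minimal sketch, not the actual game code; the names are made up):

```rust
// Minimal sketch of the "frame timed" approach: every changing value is
// scaled by the frame delta and a flat time scale multiplier.
fn advance(value: &mut f32, rate_per_second: f32, delta_seconds: f32, time_scale: f32) {
    *value += rate_per_second * delta_seconds * time_scale;
}

fn main() {
    let mut x = 0.0;
    // Last frame took 1/60th of a second and game speed is 5x, so x
    // advances 5/60ths (1/12th) of a second's worth in a single frame.
    advance(&mut x, 60.0, 1.0 / 60.0, 5.0);
    println!("{x}"); // ~5.0
}
```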

Error

This method actually works perfectly for linearly changing values. Because it’s just advancing in bigger steps, it happens faster. For example, if something increases by 60 each second, and instead of adding 1 sixty times a second we add 10 sixty times a second, it reaches 60 in 1/10th of a second instead of one second. Magic!

Here’s a scrungy Excel graph showing this from some data I generated with a script

linear graph

And normalized to the same time lengths, we can see that they’re achieving things at roughly the same rate and scale.

linear graph normalized

The problem is that for any kind of value that doesn’t change linearly, error and loss of precision can creep in very easily. And once that loss of precision results in error, the error accumulates.

exponential graph

Hmm, those look like roughly the same curves…

exponential normalized

Oh wait, no, they’re not at all.

exponential close up

Reminder that these should be dead on the exact same line.
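To put numbers on it: compounding in bigger steps is not the same as compounding more often, because (1 + 5x)^n ≠ (1 + x)^(5n). Here’s a tiny standalone sketch (made-up growth rate, not game code) that reproduces what the graphs are showing:

```rust
// Compare 5x game speed done two ways over one real second
// (five game seconds of exponential growth).
fn main() {
    let rate = 0.5_f64; // 50% growth per game second
    let dt = 1.0 / 60.0; // normal tick length in seconds
    let speed = 5.0;

    // Approach A: 60 ticks with the delta scaled by 5 (the current method).
    let mut a = 1.0_f64;
    for _ in 0..60 {
        a += a * rate * dt * speed;
    }

    // Approach B: 300 ticks at the normal delta (ticking faster).
    let mut b = 1.0_f64;
    for _ in 0..300 {
        b += b * rate * dt;
    }

    // Same amount of simulated game time, noticeably different results.
    println!("scaled delta: {a:.4}, faster ticks: {b:.4}"); // ~11.58 vs ~12.06
}
```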

Not to mention the issues this introduces for determinism and bugfixing, where something as simple as clicking a button on the UI changes how a massive number of values across the game are even calculated.

Then there’s the maintenance and cognitive load. Similar to using a raw delta time, I now have to include this multiplier at every single point a value changes. Pain in the ass and very prone to bugs.

Fixing Time

The way that most other games with speed up/slow down buttons avoid this problem is to instead adjust the rate at which the game ticks. So if all calculations assume you’re doing the math 30 times a second, cranking it up to 300 means the game is now running 10x as fast. Of course, way more calculations are happening and it’s more intensive, which is why you tend to see a performance hit when you press those buttons, but the guarantee that all the timescales run exactly the same is a pretty big one.

As a matter of fact, many games crank the tick rate higher than the actual rendering rate of the game. Rimworld is hard capped with vsync at all times, but the Ultra speed setting cranks the game all the way up to 480 ticks per second, significantly higher than even the refresh rate of my 144hz monitor. And doing this is actually fairly straightforward in a standard game loop. I’ll elaborate a bit more on this later, but it basically involves just running the things that tick multiple times within one frame.

Ah, so all I need to do is get the systems in bevy to run repeatedly until they’ve run enough for the frame. Except…

Scheduling Difficulties

Currently, the way that Bevy schedules systems is basically to form a graph of dependent systems per “stage,” then build a “schedule” which defines what order systems run in and which systems can run in parallel across threads. The actual “tick” step runs through 5 predefined stages in order, with input in the first stage, game logic by default in the third, and rendering in the fifth.
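For reference, pinning a system to one of those stages looks something like this in 0.6 (a trivial sketch; my_game_logic is just a placeholder):

```rust
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        // Explicitly targeting one of the predefined stages;
        // CoreStage::Update is the default home for game logic.
        .add_system_to_stage(CoreStage::Update, my_game_logic)
        .run();
}

// Runs at most once per schedule pass, i.e. once per frame.
fn my_game_logic() {}
```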

What’s actually being planned is to entirely axe the concept of stages and do everything as one big scheduled graph, but this is an insanely complicated problem, so it’s currently being discussed to absolute death. For most usages this won’t matter; it means that rendering and game logic can be fully pipelined and parallelized, which is interesting and cool and means game logic chugging wouldn’t affect framerate. However, I have a unique problem.

The biggest limitation of the current scheduler is that it has no way of firing a system multiple times within one world update. If the game is running at 60 fps, the absolute most a game system can update is 60 times a second. Uh oh, you may already see where this is going…

Fix that Timestep

This is completely unworkable for having an accurately adjustable game speed. But explaining why requires a bit of background.

A brief history

Game time used to always be tied to the processor speed. You always knew you were on specific hardware and that it would update some specified number of times a second. If you know a CPU cycle is always going to take some fraction of a second, then you know how much time has passed in a specified number of CPU cycles.

Eventually, processors started becoming fast enough that relying on the cycles resulted in impractically small fractions of a second. And on top of that, if things ever ran on something that was a different speed than you expected, strange things would happen. This is where the classic “DOS game runs stupidly fast” problem comes from.

Conveniently, this was around the time that processors started exposing a “real time clock,” which is a very high precision way of measuring how much time has passed. So what you would do is simply run your game logic, do your rendering, and then wait around until enough time passed to hit a target framerate. For a long time this target was 20-30 fps; it took until around the PS2 era for 60 to become common, and it dropped back to 30 on seventh-gen consoles. You could then do all of your math and timings assuming each frame lasts exactly 1/30th or 1/60th of a second, which is why you hear terms like “this has invulnerability for 8 frames” and so on.

Delta

So what’s an alternative? Well, you can take the time before the frame, take it after, and then take the difference. You now have a measure of how long the previous frame took in seconds, or the “delta time” as it’s frequently called. If you know how long the last frame took, you can make a fairly informed guess about how long the next one will take as well.

Why is this useful? Well, if you multiply your per-second rate of change by a time interval, you get how much to change within that time period. So if you want to change a value by 100 per second and the last frame took 1/100th of a second, multiplying 100 by 1/100 gets you 1 per frame; at 100 fps you still update the value by 100 every second, just split across every individual frame! Wow!
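In code, the whole trick is one multiply (a minimal sketch):

```rust
// Framerate-independent change: units-per-second times seconds-this-frame.
// At 100 fps (a delta of 1/100th of a second), a rate of 100/s adds 1 per frame.
fn update(value: &mut f32, rate_per_second: f32, delta_seconds: f32) {
    *value += rate_per_second * delta_seconds;
}
```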

Easy, right? Heheh, WRONG!

Problems…

Variable delta timing is neat, but it has plenty of associated problems. Even if you hard-limit things so that lag spikes don’t result in clipping through walls, the variable interval is unpredictable. Systems sensitive to subtle changes in timing and floating point imprecision, like physics and input handling, are vulnerable to all kinds of problems, down to something as fundamental as input “feeling different” when the framerate is different. This is an issue you still see in a lot of games!

So how can we have the fixed intervals of a locked framerate game loop, but the independent unlocked framerate of rendering as fast as possible?

Well duh, you just go right back to fixed clocks again…but for the game loop only.

Back to the Fixed Time: Accumulating Boogaloo

What if you instead measured the amount of time the last render took, and then ran the game tick as many times as needed to cover that time period? So if the tick rate is 60 and the last frame took 1/30th of a second, run twice, and so on. Aha, but then what if the render took less time than a tick, or some odd fraction of one? Simple: hold a running count of how long it’s been since the last game tick, and tick once it gets above the interval. Boom, magic. Or something.
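Sketched out as a bare game loop (standalone pseudo-game, not Bevy code), the pattern looks like this:

```rust
use std::time::Instant;

const TIMESTEP: f64 = 1.0 / 30.0; // 30 game ticks per second

fn main() {
    let mut accumulator = 0.0_f64;
    let mut previous = Instant::now();

    loop {
        // The renderer/real time clock "produces" time...
        let now = Instant::now();
        accumulator += now.duration_since(previous).as_secs_f64();
        previous = now;

        // ...and the fixed-rate tick "consumes" it, running as many times
        // as it takes to catch up to real time.
        while accumulator >= TIMESTEP {
            game_tick(TIMESTEP);
            accumulator -= TIMESTEP;
        }

        render();
    }
}

fn game_tick(_dt: f64) { /* fixed-rate game logic goes here */ }
fn render() { /* draw as fast as the hardware allows */ }
```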

If you’re interested in a bit more of the nitty gritty implementation, and some slightly more detailed information, Gaffer On Games has a classic article about the problem of consistent timesteps in game engines. It has a great code-focused explanation of this technique along with other timestep methods. If you’re not, the biggest thing to take from the article is that the renderer/real time clock produces time and your systems consume it.

In plainer English: your game systems need to run repeatedly until they’ve “consumed” all the time they’re owed, as determined by how long a frame took and how far behind they are. For a more practical example, let’s say you have a physics system running 30 times per second. If for whatever reason the whole game hangs and a rendered frame takes a full three seconds to output, the physics is now “behind” by three seconds and needs to run 90 times in one frame to chew through that accumulated time. Then you’re back on schedule and it’s like you never “lost” that time.

Many games also limit how much you can process in one frame to avoid “death spiral” problems, where one frame takes a long time, it gets all of the game behind, and then it spends so much time calculating how to catch up that the next frame takes a long time, and then it spends so much time calculating how to catch up that…you get the idea. So if a frame is taking longer than some arbitrary limit to process, it will just hard cut off and leave the extra time as “leftovers” for the next frame to pick up.
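Bolted onto the loop above, that guard is just a cap in the consume step (the cutoff of 8 ticks is an arbitrary example value):

```rust
// Cap how many ticks a single frame may run; any time still owed stays in
// the accumulator as "leftovers" for later frames to chew through.
const MAX_TICKS_PER_FRAME: u32 = 8;

fn consume_time(accumulator: &mut f64, timestep: f64) {
    let mut ticks = 0;
    while *accumulator >= timestep && ticks < MAX_TICKS_PER_FRAME {
        game_tick(timestep);
        *accumulator -= timestep;
        ticks += 1;
    }
}

fn game_tick(_dt: f64) { /* fixed-rate game logic */ }
```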

So problem solved! Right? Sort of…

Back to my ass

As I said, Bevy can only run systems one time per frame. So if your systems ever get behind, you need a higher framerate than the system tickrate for them to ever catch up. This makes low framerates a pretty serious threat to the game even being playable, and it also brings a much bigger issue to the table, the one I was trying to sidestep by using a multiplier instead of just ticking faster.

Systems cannot tick faster than the framerate, ever.

This means that if your system ticks at, say, 30 times a second, a player managing to run the game at 200 fps (massively overkill, overtaxing their hardware for not much gain) could only speed up game time by a maximum of 6x before it basically breaks indefinitely. And that says nothing about players with lower spec hardware, or what happens once that framerate starts dipping because there’s a lot on screen or whatever.

So I need some way to get bevy to tick faster than the framerate. Yay?

The solution?

I don’t really know.

Ideally Bevy would simply get the needed features built in, but I’m not waiting another few months for a feature release and the refactor needed to handle this natively in the built-in scheduler. The previously mentioned rework is currently at the RFC stage, but the RFC hasn’t even been accepted yet.

Thankfully, Bevy gives you the ability to pass in your own custom scheduler! That means I can write a custom scheduler that lives outside the normal one and can run the needed systems multiple times! Woo!

Unfortunately, I have no clue how to write a custom scheduler! Not woo!
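The extremely hand-wavy shape in my head is some kind of wrapper stage that owns an accumulator and runs its inner systems in a loop. Something vaguely like this against 0.6’s Stage trait, completely untested, with MultiTickStage and GameSpeed being names I just made up:

```rust
use bevy::ecs::schedule::{Stage, SystemStage};
use bevy::prelude::*;

// Hypothetical resource holding the player-selected speed multiplier.
struct GameSpeed(f64);

// Hypothetical stage that runs its inner systems as many times as the
// accumulated, speed-scaled time demands, instead of once per frame.
struct MultiTickStage {
    inner: SystemStage,
    timestep: f64, // seconds of game time per tick
    accumulator: f64,
}

impl Stage for MultiTickStage {
    fn run(&mut self, world: &mut World) {
        let delta = world
            .get_resource::<Time>()
            .map(|t| t.delta_seconds_f64())
            .unwrap_or(0.0);
        let speed = world.get_resource::<GameSpeed>().map(|s| s.0).unwrap_or(1.0);

        // Produce speed-scaled time, then consume it tick by tick,
        // with a cap to dodge the death spiral.
        self.accumulator += delta * speed;
        let mut ticks = 0;
        while self.accumulator >= self.timestep && ticks < 16 {
            self.inner.run(world);
            self.accumulator -= self.timestep;
            ticks += 1;
        }
    }
}
```

In theory it’d get registered with something like add_stage_before, but whether any of this plays nice with Bevy’s parallel executor is exactly the thing I need to research.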

I’ll probably spend the next week just doing a bunch of research on ECS scheduling theory and how Bevy’s API implements it; hopefully I’ll have something promising by next week.

ok

Well, it’s back. Unfortunately for my sanity. Probably going to try to get time in order by next week maybe.
