This semester I took a class on computer security with a completely open-ended final project. Since I somehow trick all of my final projects into being about games, the problem my group decided to look at and try to solve was that of fake scores on leader board services such as the iOS Game Center, Kongregate, and others.
As outlined in this article, cheating can be pretty easy and prevalent on these leader boards, particularly on the iOS Game Center. Speaking as someone who was playing Super Hexagon way too much earlier this semester primarily because of competition at the top of the leader boards, I know firsthand how polluted leader boards can negatively affect a game. For example, here's the leader board from when I managed to reach number two in the world:
And here it is now:
As you can imagine, seeing a score five orders of magnitude higher than what seems humanly possible can be a real deterrent for people trying to play legitimately.
With that in mind, what did we do to try to fix this problem? If you want to read a big technical document, check out the paper I wrote on the project. Otherwise, here's the condensed version.
Most exploits work by just communicating with the leader board server pretending to be the game and saying "I got [X] Score!" In most systems that we researched, the server will simply say "Okay!" and that's it. Way too easy.
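To make the attack concrete, here's a minimal sketch of what a forged submission looks like. The endpoint shape and field names are hypothetical stand-ins, since every service differs, but the point is that nothing in the request ties it to an actual play session:

```python
# Sketch of the forgery attack described above. A cheater never has to run
# the game at all; anyone who can speak HTTP can claim any score.
# The field names and endpoint are hypothetical examples.
import json

def forge_score_payload(player: str, score: int) -> bytes:
    """Build the JSON body a forged score submission would carry."""
    return json.dumps({"player": player, "score": score}).encode()

# A cheater would simply POST this body to something like
# https://scores.example.com/submit and the naive server says "Okay!"
```

Without any proof of play attached, the server has no way to distinguish this from a legitimate posting.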
An obvious attempt to prevent this is to have the game somehow prove that it's the one talking to the server (as opposed to some shady cheater). This is usually done by having the game cryptographically sign score postings with a secret it shares with the server, an approach taken by Newgrounds, Adultswim.com, and others. The problem with this approach is that the secret shared by the game and the server has to be stored on the player's machine. If the player is relatively resourceful, it's not too hard to peek into memory or use a decompiler to find the secret, leak it to the internet, and then we're back to square one.
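The signing approach usually looks something like the following sketch, here assuming an HMAC-SHA256 scheme (the exact schemes these services use differ in their details). Note where the weakness lives: the same secret has to ship inside the game binary on the player's machine:

```python
# Minimal sketch of shared-secret score signing, assuming HMAC-SHA256.
import hmac
import hashlib

# The weak point: this secret is embedded in the game shipped to players,
# so a resourceful player can extract it from memory or a decompiler.
SECRET = b"shipped-inside-the-game-binary"

def sign_score(player: str, score: int) -> str:
    """Client side: sign the score posting with the shared secret."""
    message = f"{player}:{score}".encode()
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify_score(player: str, score: int, signature: str) -> bool:
    """Server side: recompute the signature and compare in constant time."""
    expected = sign_score(player, score)
    return hmac.compare_digest(expected, signature)
```

Once the secret leaks, a cheater can call the equivalent of `sign_score` themselves and forge valid-looking postings, which is exactly the back-to-square-one problem.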
For our project, we built a system that verifies claimed scores based on a simple assumption. We assumed that a player can be said to have "earned" a score if they are able to produce a series of inputs that will reproduce the score when run on a trusted version of the game's code. This assumption isn't always valid, particularly if players are cheating by playing the game with bots, but it seems to be reasonable enough to prevent straight-up forgeries.
Using this assumption, we built a library for the Flixel game engine that makes use of Flixel's replay system (originally a debugging feature) to record input logs and send them to the score server where they can be verified by a trusted version of the game code.
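In spirit, the server-side check works like this toy sketch. It assumes the game logic is deterministic, so feeding the recorded inputs through a trusted copy of the update function must land on the claimed score (the `step` function below is a made-up stand-in for real game logic, not the Flixel API):

```python
# Toy sketch of replay-based verification, assuming deterministic game logic.
# The server replays the submitted input log through a trusted copy of the
# game's update function and accepts the score only if it reproduces it.

def step(state: int, key: str) -> int:
    """Hypothetical deterministic game update: one point per 'tap' input."""
    return state + 1 if key == "tap" else state

def verify(claimed_score: int, input_log: list[str]) -> bool:
    """Re-simulate the whole session from the input log and compare scores."""
    state = 0
    for key in input_log:
        state = step(state, key)
    return state == claimed_score
```

A real game's state is much richer than an integer, of course, but as long as the simulation is deterministic the same replay-and-compare idea applies.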
*The server hard at work*
The system seems to work pretty well. Depending on the game, the logs sent to the server grow at rates somewhere between 30KB and 1MB per hour, and there's still plenty of room to optimize them further. Also, we were able to modify my game Roller Derby Riot Queens (sadly very vulnerable to score forgery) to use the system with fewer than 10 additional lines of code. Not too bad!
Other game engines could easily be extended to provide the same protection. If you're interested in learning more, I'd recommend checking out the full paper.
Thanks for reading!