Anti-grief code

I was reading about how people can hack the code to supercharge their ships.

If they can do that already, it wouldn't hurt for the developers to put in some code that detects out-of-parameter weapon damage. Hackers will be invincible anyway, so the developers could simply make any player invulnerable whenever they get shot for an impossible amount of damage.

Would it help?
 
Having some sort of check that what the clients are hitting each other with matches what they should be hitting each other with makes sense, but it'd likely be more useful as a tracking mechanism for after-the-fact investigation than as a practical means of countering cheating in real time.

I.e. if client A tells client B "okay, I hit your ship for 3000 damage", then client B should log that information and pass it to the server. The server says "hang on a minute, none of client A's weapons are capable of delivering 3000 damage" and raises a red flag, and client A gets put on ze watchlist.
To stop client B from maliciously reporting people who aren't cheating, it's, again, a watchlist rather than an autoban. If the devs notice that everyone client B faces is mysteriously "cheating", yet those players were only ever reported by client B, then they can probably disregard B's reports.
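The watchlist idea above could be sketched roughly like this. The weapon damage table, report format, and the "suspect reporter" threshold are all invented for illustration, not anything from an actual game:

```python
# Hypothetical server-side sanity check: flag damage reports that exceed
# what the named weapon could possibly deliver, and keep enough history
# to spot reporters who are themselves lying.
MAX_WEAPON_DAMAGE = {"pulse_laser": 120, "beam_laser": 250, "railgun": 900}

watchlist = {}  # attacker_id -> list of (reporter_id, claimed_damage)

def review_damage_report(attacker_id, reporter_id, weapon, claimed_damage):
    """Flag a report whose damage exceeds the weapon's cap."""
    cap = MAX_WEAPON_DAMAGE.get(weapon, 0)
    if claimed_damage > cap:
        watchlist.setdefault(attacker_id, []).append((reporter_id, claimed_damage))
        return "flagged"
    return "ok"

def reporter_looks_suspect(reporter_id):
    """If several flagged 'cheaters' were only ever flagged by this one
    reporter, the reporter itself is probably the liar."""
    sole = [a for a, reports in watchlist.items()
            if {r for r, _ in reports} == {reporter_id}]
    return len(sole) >= 3  # threshold is arbitrary for the sketch
```

Nothing here bans anyone automatically; both functions only feed a human review queue, which matches the watchlist-not-autoban idea.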
 
Real time would be better if it's possible; it'd save someone pulling their hair out. Weapon damage needs to be between 0 and some cap. In a first implementation that cap could just be the maximum the strongest weapon can hit for. Then, at least, the ridiculous hacking is eliminated.
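That first-pass clamp is about as simple as it sounds. A minimal sketch, with a placeholder number for the strongest weapon's single hit:

```python
# Reject or clamp any incoming damage claim outside [0, GLOBAL_MAX].
# GLOBAL_MAX is a made-up value standing in for the strongest weapon's
# maximum single-hit damage.
GLOBAL_MAX = 900.0

def sanitize_damage(claimed: float) -> float:
    """Clamp an incoming damage value into the legal range. This doesn't
    catch subtle cheating, but it eliminates the ridiculous cases."""
    return min(max(claimed, 0.0), GLOBAL_MAX)
```

A per-weapon cap (as in the watchlist check) would tighten this further, but even the single global ceiling stops the "one-shot anything" hack.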

Point is, they don't have to stop hacking, they just have to stop hacking being effective. Or mitigate its effectiveness as much as possible.

I think the client would need to check a great many things transmitted to it. But a player's ship can't blow up unless his own client acknowledges and processes the data it is sent by a peer.

But it gets more difficult still. An attack from multiple sources that have all been hacked is a problem too. What if a hacker generates 10 laser blasts from 10 NPC ships, each one under the individual cap? Being attacked by 10 ships could legitimately happen.

How, then, does a client deduce an out-of-parameter event? It needs to do so without relying on information from peers, because all peer information is potentially tainted.
 
I am almost certain that if this sort of logic screen were in place the client would be prone to errors, but I would prefer it to err in favor of its own player. It should lean toward granting immunity if the damage is very high and/or the number of attackers seems implausible for the context of the encounter. There are surely more logical deductions that can be made from the data available to the client. For example, how can the data be accurate when 100 laser blasts a second are occurring but the client only knows of 3 attackers? In such a case, the client can refuse to participate on behalf of the player.
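The 100-blasts-from-3-attackers deduction can be sketched as a local rate check. Everything here uses only what the client itself has observed, so it doesn't depend on peer honesty; the fire-rate cap is an assumed game constant:

```python
import time
from collections import deque

# Assumed constant: no weapon in the game fires faster than this.
MAX_SHOTS_PER_ATTACKER_PER_SEC = 5

class IncomingFireMonitor:
    """Track recent incoming hits and known attackers; if hits arrive
    faster than the known attackers could physically fire, the client
    leans in its own player's favor and refuses to apply the damage."""

    def __init__(self):
        self.recent_hits = deque()    # timestamps of incoming hits
        self.known_attackers = set()  # ships this client has seen firing

    def register_hit(self, attacker_id, now=None):
        now = time.monotonic() if now is None else now
        self.known_attackers.add(attacker_id)
        self.recent_hits.append(now)
        # keep a one-second sliding window
        while self.recent_hits and now - self.recent_hits[0] > 1.0:
            self.recent_hits.popleft()

    def hits_are_plausible(self):
        ceiling = len(self.known_attackers) * MAX_SHOTS_PER_ATTACKER_PER_SEC
        return len(self.recent_hits) <= ceiling
```

By design this errs toward false "implausible" verdicts, i.e. toward granting the local player immunity, which is the preferred failure mode described above.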

We have to assume that a hacker can send any data he wants. The receiving client must do some validation.
 
A variant of this could be that both players have to report the damage done and the current shield/hull health to the server; a simple check that both clients report the same numbers would clear both players of doing anything strange.

This shouldn't require any game logic server-side, only a simple comparison of numbers. Over time it would build a collection of players who run into these kinds of issues. So basically we'd have a very simple sanity check done server-side; over time patterns will emerge in who gets flagged, automatic checks can be run on the data, and it should be quite obvious which client is using hacked data. This is stuff that can be done later: we want to detect cheating, but we don't need to act on it immediately. Irregularities are logged with the ships involved etc., then sent to a queue for analysis. The order of the queue is based on how many analysis requests a client has had, so we act faster on clients that keep triggering the sanity check.
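A rough sketch of that dual-report comparison and the priority queue. The report fields and client IDs are made up; the point is that the server only compares numbers and logs mismatches, with no game logic:

```python
from collections import defaultdict

analysis_queue = []             # (priority, mismatched report pair)
flag_counts = defaultdict(int)  # client_id -> prior irregularities

def compare_reports(attacker_report, defender_report):
    """Return True when both sides agree on the numbers. On a mismatch,
    log it and queue it for analysis, prioritising clients that keep
    tripping the check. Field names are hypothetical."""
    keys = ("damage", "shield_after", "hull_after")
    if all(attacker_report[k] == defender_report[k] for k in keys):
        return True
    for cid in (attacker_report["client_id"], defender_report["client_id"]):
        flag_counts[cid] += 1
    priority = max(flag_counts[attacker_report["client_id"]],
                   flag_counts[defender_report["client_id"]])
    analysis_queue.append((priority, (attacker_report, defender_report)))
    analysis_queue.sort(key=lambda pair: -pair[0])  # repeat offenders first
    return False
```

Note that a mismatch flags both clients, since at this stage the server can't tell which one lied; the pattern analysis over time is what separates them.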


We cannot trust either client; that is why I suggest both clients send the data to the server for the basic comparison, since either client, or even both, could be lying about detecting "anomalies" in the other client's data. And as long as we keep the P2P traffic model, we cannot trust that a client sends the server the same data it sends the other client, or that a receiving client isn't lying to the server about what it received.


This is a very simplified explanation, and the actual implementation will most likely be a lot more complex than I've outlined here, as there are many potential pitfalls that could generate false positives, etc.
 
I don't know to what degree the game servers participate. Having a game server involved might help, but I don't know what the company wants to spend on that. My impression is that the game servers are not very involved in what goes on, so I was only thinking in terms of two participants: hacked client vs. standard client.
 