All the submitted data is available in a sheet, and you're free to examine it. Any random manipulation should for the most part cancel out as white noise, unless you think a bunch of people banded together to misrepresent the mode% in a specific way for whatever reason. Even if we assume only 80% of the submissions are legit, it will still paint a very similar picture.
Even in the absence of deliberate manipulation - and I agree that it's unlikely that anyone bothered to do that for this particular survey - that does nothing to guarantee a representative sample, and being able to read the submitted data doesn't help with that. (It just shows that the analysis of the submitted data honestly represents the numbers in that data, which I don't think has ever been contested.)
If I put together a "modes and professions" survey and take it to my Colonia-focused Discord, I'd expect close to 100% for the Explorer profession (because they got out here!) and probably about 60%-70% for Open (based on comparing Colonia-region traffic reports and CG participant numbers with the number of commanders I actually see in systems).
If I take the same survey to the PvP League server, I'd expect far fewer Explorers, a near-100% sample of PvPers, and a far higher percentage in Open.
If I take the same survey to the Mobius faction server, I'd expect a much more balanced set of professions than the previous two, and a vast majority for Private Group.
The sampling errors in those three cases are of course completely obvious. But all self-selecting surveys are subject to the same class of errors: they're massively dependent on who sees them, who promotes them to their friends, and so on. At least in the three examples above, you know which ED sub-communities the survey went to and can perhaps account for that; in this case, you don't really even know that.
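To make that concrete, here's a minimal sketch (with entirely made-up per-community Open rates and a made-up population figure, purely for illustration) of how a self-selected sample just reflects whichever sub-community happened to answer, not the player base as a whole:

```python
import random

random.seed(1)

# Hypothetical sub-communities with invented Open-play rates,
# purely to illustrate self-selection bias.
communities = {
    "Colonia explorers": 0.65,
    "PvP League":        0.95,
    "Mobius PG":         0.10,
}

# Invented "true" population rate, standing in for what Frontier
# could measure directly from the full player data.
population_open_rate = 0.40

for name, rate in communities.items():
    # 1000 self-selected respondents drawn from one community only
    sample = [random.random() < rate for _ in range(1000)]
    print(f"{name:>18}: observed Open = {sum(sample) / len(sample):.0%}")

print(f"  whole population: Open = {population_open_rate:.0%}")
```

Each "survey" faithfully reports its own respondents, and none of them lands anywhere near the population figure - which is the whole problem.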
For a non-Elite Dangerous example, the margin of error of a survey with a genuinely random sample of 1000 participants is about +/-3%. The accuracy of opinion polling in the recent UK General Election was in the worst case +/-10% - from companies who specialise in getting their samples as close to "genuinely random" as possible and who put their professional reputations on the line for it. Some random internet poll which doesn't even *try* to get a random sample? +/-100% is the accuracy normally quoted for those.
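For reference, that +/-3% figure is just the standard worst-case margin of error for a proportion; a quick sketch assuming the usual 95% confidence level (z = 1.96) and the worst case p = 0.5:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    # Worst-case 95% margin of error for a proportion estimated
    # from a genuinely random sample of n respondents.
    return z * math.sqrt(p * (1 - p) / n)

print(f"{margin_of_error(1000):.1%}")  # ~3.1%, the +/-3% quoted above
```

None of that machinery applies to a self-selected sample, which is why no equivalent error bar can honestly be quoted for a survey like this one.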
Frontier, meanwhile, don't need to resort to sampling, as they have the full population data available. That allows theoretically perfect accuracy - though still room for endless quibbling about what the statistics mean. (Obviously they *could* lie in public about what they found - though I'm not sure what the motivation would be to do so.)