Powerplay: Should Frontier freeze Powerplay for one cycle and fix the botched cycle-tick issues?

  • Total voters: 196
  • Poll closed.
Hello, I don't understand. What happened to Powerplay? I saw this post on Reddit this morning.

I left PP because I don't have the time to continue.

Thanks in advance.
 

There were two extra hours in the Powerplay cycle, and ALL the triggers seem to have changed. None of this was announced by FDev at all. It also seems there were bugs with preparation and expansion as well.

The two extra hours caused some snipe attempts to fail, which let some expansions succeed. The trigger changes are HUGE: undermining became easier, while fortifying got roughly five times harder in some systems.
 
Voted!

Yes, of course!
It is by far the most reasonable solution to the current situation.
 
I really wish the people who voted "No" would post a comment. The current situation, where FD "fixes" Powerplay manually, is ludicrous and not an option at all.
 
It makes me wonder whether FDev will actually pay attention to it.

Unlikely, based on the updates from Zac. But hey, what do we know? We only play it.

Oh, and paid for it to be made.

And keep funding it.

This is a bit like an AGM for a company where everyone votes against the CEO getting a 60% pay rise, and he gets it anyway. Because that's what he wants.
 
It's amazing that any of this is done manually. Absolutely nothing about any of it should need manual intervention. Complete fail.

And it's a little strange that they can't just go back to the state at 7:00 GMT and ignore everything done after (refunding anyone's credits, etc.). Aren't all transactions timestamped? They should have data on who turned in what merits at what time, down to the second at the very least, and should have been able to correct their error. Then again, none of this is surprising.

Ummm... no. They never stated it's done manually. Actually, Zac's question about which system in particular suggests that this is largely automated and the devs are not involved in this part of the codebase. It is a classic example of a cascading error: the update messed up the trigger, the trigger didn't work, so preparations were not counted correctly, and so the expansions didn't happen. Probably because of that, the other parameters also shifted to an unexpected state. That's what we know so far.

As for rolling back the data, sorry, but that's not how a complex project like this works. The full backups are probably huge and don't happen that often. Even if they do keep the historical data for, I don't know, debugging purposes, it is probably not enough to restore the state. They would probably need to go back to the state of the last cycle, which would mean rolling back the entire galaxy for everyone and then somehow replaying the actions done by players, if that's even possible. Or maybe rolling back to a pre-Engineers backup and then re-applying the update, scrapping all actions done by players since? Also, correct me if I'm wrong, but the Engineers update came AFTER the previous cycle tick, so that data is probably now invalid because of changes to the databases related to Engineers, for example. Even if they pulled off the Sisyphean effort of writing a migration script scraping all available historical info from their system (no small task, mind you), it probably wouldn't be enough to restore the system's state. These are of course only my assumptions, but they are backed by ~15 years of experience working in IT with large datasets. I'm just saying it's not as trivial as you paint it to be.

It's a big failure, but big failures do happen IRL, sadly. The devs are only human, you know... And if they pull a miraculous rabbit out of their... hats, there will still be haters.
 

Except we've seen errors in a number of previous cycles that don't make much sense if it's automatic, since that code is just math and rules, and you shouldn't get issues where odd things happen.

As for the backups, you're assuming the data was lost and they have to return to some snapshot state. That doesn't seem to be what happened. The cycle tick didn't do what it was supposed to and take the server offline; it just continued on. All of this activity should be timestamped in the transaction server, so you should simply be able to go into that database, kill all the activity that occurred after the 7:00 GMT timestamp, refund those players as needed, and restart the Powerplay cycle-end code using the transaction-server data, now stopped at the correct time. It seems from their response that they could do this, but it's too much work.
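The procedure described above is essentially point-in-time correction from an append-only transaction log. A minimal sketch of the idea, with all names and the record layout purely hypothetical (nothing here reflects FDev's actual schema):

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical merit turn-in record; FDev's real schema is unknown.
@dataclass
class MeritTransaction:
    cmdr: str
    system: str
    merits: int
    timestamp: datetime

def rollback_and_refund(log, cutoff):
    """Split a timestamped transaction log at the cutoff: keep
    everything before it, and return the late entries so their
    merits could be refunded to the commanders who turned them in."""
    kept = [tx for tx in log if tx.timestamp < cutoff]
    refunds = [tx for tx in log if tx.timestamp >= cutoff]
    return kept, refunds

log = [
    MeritTransaction("CMDR_A", "Sol", 500, datetime(2016, 5, 26, 6, 30)),
    MeritTransaction("CMDR_B", "Sol", 300, datetime(2016, 5, 26, 8, 15)),
]
kept, refunds = rollback_and_refund(log, datetime(2016, 5, 26, 7, 0))
# kept holds only CMDR_A's turn-in; CMDR_B's 300 merits would be refunded.
```

The catch, of course, is that this only works if per-transaction timestamps were stored in the first place, which is exactly what the posts below dispute.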
 
This last cycle has been the culmination of a year of bugs, glitches, rule changes, and all-around general brokenness. We need real fixes, and I feel we are all happy to wait for that to happen.

Running repairs won't cut it this time. The wheels have fallen off the Powerplay bandwagon. Enough is enough.

Please, for the love of Braben, put Powerplay on hold and work it out properly. Once the issues are resolved, let's kick off Powerplay again with a system that works.
 

You're making a lot of assumptions that this is one huge monolithic dataset, but if you watch FDev's AWS presentation (I run an AWS Partner myself, so it was interesting) you'll see that the data is held in a lot of different stores and in different database types (MySQL vs. NoSQL: MongoDB and DynamoDB, IIRC). Those should all be backed up individually, so we're not talking about one massive dataset. I think it's more likely that they just don't store each transaction for PP, and that when UM/fort levels are adjusted, all that gets updated are the commander's PP data and the system's, not who dropped that data or when. That could make rolling it back very difficult, depending on how that data is snapshotted/backed up, but that's an architectural decision FDev has made, and it doesn't stop it being irritating that they can't do a rollback when there has been an error in their systems.
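The distinction being drawn here is between storing only running totals and keeping an event log. A toy illustration of the two designs (entirely hypothetical; the function and variable names are invented, not FDev's code):

```python
from datetime import datetime

# Design 1: aggregate-only. Each turn-in just bumps a running total;
# once applied, there is no record of who contributed what, or when.
fort_totals = {}

def apply_fort_aggregate(system, merits):
    fort_totals[system] = fort_totals.get(system, 0) + merits

# Design 2: event log. Every turn-in is recorded; totals are derived,
# so the state at any past instant can be recomputed from the log.
fort_events = []

def apply_fort_event(system, merits, when):
    fort_events.append((system, merits, when))

def total_as_of(system, cutoff):
    return sum(m for s, m, t in fort_events if s == system and t < cutoff)

apply_fort_event("Sol", 500, datetime(2016, 5, 26, 6, 30))
apply_fort_event("Sol", 300, datetime(2016, 5, 26, 8, 15))
pre_tick = total_as_of("Sol", datetime(2016, 5, 26, 7, 0))
# With the event log, the pre-7:00 state is recoverable;
# with aggregate-only storage it would already be gone.
```

If FDev chose something like design 1, the information needed for a clean rollback simply never existed, which would explain why a "just replay the transactions" fix is off the table.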
 

The thought has crossed my mind also (that they only adjust the numbers and reward the commander accordingly, not storing the transaction per se). And you're right that it is a highly distributed environment; that's why I've assumed the only option is to roll back the whole thing. Without an overview of the architecture we can only assume :) If (as is probable) the system is tightly interconnected, then rolling back only one asset would be a nightmare and/or impossible.

They should pause Powerplay immediately; I now know from fellow Winters CMDRs that they have done so in the past. Do a proper assessment of what can and can't be fixed, by someone who actually understands how Powerplay should work :p, then apply the fix and communicate it to us, preferably in advance so we can prepare for the battleground we will be thrown into.
 
They should just take the whole of Powerplay offline and fix it properly, with adequate time, rather than doing a rush job patching it with duct tape.
 