I was wondering whether your answer was in support or in contradiction. It does to some degree look like support, yet it started with a "but", so I decided to make sure and ask for clarification.
Nope.
QA finds bugs and reports them.
Management makes the decision whether or not to release the product in that state.
That's how responsibilities should be divided.
And that's where the problem is. FDev management isn't going to care and will roll out the patch regardless; history proves that in abundance. There are obviously people in the upper echelons of management/control who have a timeline, and darned be the results.
Doesn't seem likely - if issues had been reported by QA then typically a company would issue a list of 'known exceptions' or somesuch to minimise any bad publicity, and I see no sign of that. Also no fast turnaround on issues that would indicate they knew about them already. Either QA missed things, or there were last minute changes that went through without testing - but we'll never know so no point in speculation. I'd rather we just tried to help make sure it happens less in the future.
Or there are differences between the QA environment and the live environment which mean that some issues don't occur during testing. (Which IMHO is the most likely scenario, and certainly more likely than them never even firing things up on some of the supported platforms.)
The big question in that case is whether those differences can be addressed in practice.
And if they can't, what can be done instead? (I'll come back to this.)
Another factor involved may be the co-location of all the testers.
In terms of solutions, if it were all just client-side stuff it wouldn't be too bad, but the server changes complicate things.
My suggestion would be to follow a process along these lines:
Prelim:
1. Identify general external test needs, inc. platforms, key OSes, hardware types, internet bandwidth, geographical regions, etc.
2. Agree the Test Plan, responsibilities and schedule.
Main:
1. Complete final internal testing (inc. go/no-go decision) and packaging.
2. Commence server upgrade.
3. Release client package to External Testers.
4. Notify External Testers when server upgrade complete, and commence external testing.
5. External testers complete external testing and submit results.
6. Internal review of results and go/no-go decisions.
Testing passes & go decision:
1. Final prep for huge load on servers.
2. Prep comms on any known issues that have been accepted internally.
3. Comms to players that the client upgrade is ready to download, inc. notification of any known issues.
4. Release client package to all customers.
Immediately post release:
1. Monitor server load, deal with server side issues etc.
2. Troubleshoot any new player issues.
Fully post-release:
1. Review, inc.: was the right decision made with regard to any accepted issues; were there issues that weren't picked up by the external testers but which occurred in the full release, and if so what the causes were and how to mitigate them in future; etc.
2. General lessons learnt and update of strategy for future releases.
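Just to make the go/no-go and "known issues" steps above concrete, here's a minimal Python sketch of how external test results could feed both the release decision and the player comms. All the names and the blocking rule are my own assumptions for illustration - nothing to do with FD's actual tooling.

```python
from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):
    MINOR = 1
    MAJOR = 2
    CRITICAL = 3

@dataclass
class Issue:
    summary: str
    severity: Severity
    accepted: bool = False      # management has signed it off as a "known issue"

@dataclass
class TestRun:
    platform: str               # e.g. "PC", "PS4", "Xbox"
    issues: list[Issue] = field(default_factory=list)

def go_no_go(runs: list[TestRun]) -> tuple[bool, list[Issue]]:
    """Return (go?, known issues to include in the release comms).

    Rule assumed for this sketch: any critical issue that management has
    not explicitly accepted blocks the release; everything that *was*
    accepted must appear in the "known issues" comms to players.
    """
    all_issues = [issue for run in runs for issue in run.issues]
    blockers = [i for i in all_issues
                if i.severity is Severity.CRITICAL and not i.accepted]
    known = [i for i in all_issues if i.accepted]
    return (not blockers, known)

# Example: one external test run per platform
runs = [
    TestRun("PC", [Issue("HUD flicker on ultrawide", Severity.MINOR, accepted=True)]),
    TestRun("PS4", [Issue("Crash on hyperspace jump", Severity.CRITICAL)]),
]
go, known_issues = go_no_go(runs)
print("GO" if go else "NO-GO", [i.summary for i in known_issues])
```

In practice the rule would obviously be richer (per-platform thresholds, regressions vs. new issues, and so on), but the point is that the "accepted" flag is an explicit management decision which then drives the comms, rather than something QA decides on its own.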
In terms of negative impact, I reckon it would cause a fair bit of delay on launch day at the first attempt, dropping to roughly an extra hour of downtime on launch day after a few goes and some refinement of the process.
Having said all that, I don’t know that FD don’t already do something similar.
I obviously haven’t covered risk management and approach in the event of critical issues and/or a no-go decision, but that’s enough for now.
Also, apologies to anyone reading for whom the level of detail is too high or too low - it's an unknown/mixed audience, so I've gone for a middle-ish level.
Nope.
QA finds bugs and reports them.
Management makes the decision whether or not to release the product in that state.
That's how responsibilities should be divided.
I agree it is a management thing, but it shows a big problem with company policy. Management should not let something go live without testing, particularly if notable core things in the database and back-end have been worked on since. It raises the question of whether management actually know what could go wrong or whether they simply don't care.
It's basically the software equivalent of the Challenger disaster. The coders and internal testers must have known it wasn't ready, yet it still got the go-ahead.
If only more developers had the ethos of Wube, the developers of Factorio. They have a company policy that nothing leaves beta while they are aware of even a single bug, and they run quite extensive open beta tests that last for long periods.