
Thread: The Star Citizen Thread v8

  1. #8161
    All it proves, I feel, is that CIG/RSI/Whatever they are this week, are still spending time and money on things that aren't giving me the games they owe me

  2. #8162
    Sad to hear that John Bain, Total Biscuit, has died. The one and only time I met him, at Pax Prime 2013, someone asked him what game he was most looking forward to playing. He replied "Star Citizen".

  3. #8163
    Originally Posted by Cmdr-Wotherspoon View Post (Source)
    Sad to hear that John Bain, Total Biscuit, has died. The one and only time I met him, at Pax Prime 2013, someone asked him what game he was most looking forward to playing. He replied "Star Citizen".
    Absolutely terrible.

  4. #8164
    So sorry about TB

    CIG have a presentation out today about the wonders of the Optane SSD.

    They are trying to explain their I/O subsystem that's so good, it's just perceived as bad.

    Now, what really makes me wonder is why they are focused on local workstation performance. They had videos earlier on showing off their distributed storage platform and how their virtualized centralized environment reduced devtime - despite it somehow being distributed.

    They apparently have 1.34 TB of build data that needs to sync per branch. Their old system hosted 24 SSDs in SATA RAID 6, and a sync took 6 hours and 13 minutes. Moving to Optane - they have 4 in VROC RAID, and a sync takes 3 hours and 30 minutes.

    That's all very nice, but not particularly juicy, and makes absolutely no sense.

  5. #8165
    Originally Posted by Cmdr-Wotherspoon View Post (Source)
    Sad to hear that John Bain, Total Biscuit, has died. The one and only time I met him, at Pax Prime 2013, someone asked him what game he was most looking forward to playing. He replied "Star Citizen".
    RIP, but I'm very sure he isn't the only one. We have people here who have openly stated that they're getting on in years and are no longer sure they'll ever see SC - hell, there's a meme posted a few pages back.

    And then you look at this scam masquerading as developing a dream game :-/

    Speaking of which, I have a disturbing question: what if Crobber died? Would he be Judas or Jesus? You know what I mean.

  6. #8166
    Originally Posted by Asp Explorer View Post (Source)
    All it proves, I feel, is that CIG/RSI/Whatever they are this week, are still spending time and money on things that aren't giving me the games they owe me

    ...so entitled

  7. #8167
    Originally Posted by Asp Explorer View Post (Source)
    So sorry about TB

    CIG have a presentation out today about the wonders of the Optane SSD.

    They are trying to explain their I/O subsystem that's so good, it's just perceived as bad.

    Now, what really makes me wonder is why they are focused on local workstation performance. They had videos earlier on showing off their distributed storage platform and how their virtualized centralized environment reduced devtime - despite it somehow being distributed.

    They apparently have 1.34 TB of build data that needs to sync per branch. Their old system hosted 24 SSDs in SATA RAID 6, and a sync took 6 hours and 13 minutes. Moving to Optane - they have 4 in VROC RAID, and a sync takes 3 hours and 30 minutes.

    That's all very nice, but not particularly juicy, and makes absolutely no sense.
    Commentary stolen (and hopefully censored) from elsewhere:

    Originally Posted by Nasty Ebil Goon
    Batch workers - this is literally multithreading 101 and has extraordinarily little to do with Optane performance either way. I need to look up the exact talk this was a part of, because so far this is like a crayon scribble being hung in the Louvre.

    18:15 - "...parallelize a bunch of your game code without your game coders necessarily understanding the underlying process." Dear god, NO. This type of parallelization is horrible in any moderately complex game: whatever time you save by not having to care about how threading happens is eaten up tenfold by the dozens of edge cases you create when you need to communicate with objects on other threads. Except now it's in a system where you don't necessarily know which thread those objects will be on or in what order they're supposed to execute, so you either have to write complex data synchronization code on top of that to pass data between objects efficiently, or litter your code with spinlocks that will utterly trash performance.
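    For anyone in the thread who hasn't done this stuff: here's a minimal sketch of the "explicit" alternative being argued for - workers pulling from a visible queue and handing results back through one short, obvious critical section, instead of hidden parallelism plus spinlocks. Pure illustration in Python; nothing here is CIG's actual code.

    ```python
    import queue
    import threading

    def worker(inbox: "queue.Queue", results: list, lock: threading.Lock):
        """Pull tasks from an explicit queue; no hidden thread affinity."""
        while True:
            task = inbox.get()
            if task is None:          # sentinel: shut down cleanly
                break
            value = task * task       # stand-in for real game work
            with lock:                # one short, explicit critical section
                results.append(value)

    def run_batch(tasks, n_workers: int = 4):
        inbox: "queue.Queue" = queue.Queue()
        results: list = []
        lock = threading.Lock()
        threads = [threading.Thread(target=worker, args=(inbox, results, lock))
                   for _ in range(n_workers)]
        for t in threads:
            t.start()
        for task in tasks:
            inbox.put(task)
        for _ in threads:
            inbox.put(None)           # one sentinel per worker
        for t in threads:
            t.join()
        return sorted(results)

    print(run_batch(range(5)))        # → [0, 1, 4, 9, 16]
    ```

    The point isn't that this is fast - it's that the coder can see exactly where data crosses threads, which is what the hands-off approach hides.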

    18:45 - "Background worker system runs when batch workers run out of work." This is a good idea in theory. In practice, it needs a bunch more management on top of it to ensure that some workers are always available; otherwise those assets will never get loaded. It would have been important to specify that they break their workers into separate pools based on task priority, to ensure that starvation like that does not occur.
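    To make the pool-splitting point concrete: a toy sketch where batch work and background streaming each get dedicated workers, so the streaming queue can never be starved no matter how much batch work piles up. Again, hypothetical Python, not anything from the talk.

    ```python
    import queue
    import threading

    def make_pool(q: "queue.Queue", results: list, n: int):
        """Start n workers bound permanently to one queue."""
        def worker():
            while True:
                item = q.get()
                if item is None:      # sentinel ends the worker
                    break
                results.append(item)  # stand-in for doing the work
        threads = [threading.Thread(target=worker) for _ in range(n)]
        for t in threads:
            t.start()
        return threads

    def drain(q: "queue.Queue", threads, n: int):
        for _ in range(n):
            q.put(None)
        for t in threads:
            t.join()

    batch_q, stream_q = queue.Queue(), queue.Queue()
    batch_done, stream_done = [], []
    batch_pool = make_pool(batch_q, batch_done, 3)   # bulk batch work
    stream_pool = make_pool(stream_q, stream_done, 1)  # reserved for asset streaming

    for i in range(6):
        batch_q.put(("physics_step", i))
    stream_q.put(("load_asset", "hangar.tex"))  # always has a worker available

    drain(batch_q, batch_pool, 3)
    drain(stream_q, stream_pool, 1)
    print(len(batch_done), len(stream_done))    # → 6 1
    ```

    One shared pool with "background runs when batch is idle" gives you the starvation case the commentary describes; separate pools per priority class is the boring, standard fix.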

    19:45 - "It decrypts and decompresses said block even if it's not in memory yet" - This makes sense if they're using a compression/encryption scheme that can be decoded per chunk rather than requiring the entire thing in memory. That is actually a good optimization. The problem is that they're optimizing a terrible design. Just before this, he mentioned that a ship is a "300 MB asset made up of different files". There is no plausible reason NOT to bundle assets into logical, serial chunks for production releases. It's a relatively low-effort optimization, but it provides an enormous performance improvement, because not everyone has an SSD or an Optane to mask the load-time cost of cache misses and long seek times. This is game optimization 101 and has been standard practice for 15 years, especially in console development, where disc/HDD seek times are so horrific that I have literally seen total loading times drop by a factor of 30 after moving to a proper bundling system.
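    For the non-devs following along, here's roughly what "bundle assets into logical, serial chunks" means - files packed back to back, each chunk compressed independently so any one asset can be pulled out with a single seek plus a per-chunk decompress. Toy format invented for this post (JSON index, zlib chunks); real engines use their own binary formats.

    ```python
    import io
    import json
    import struct
    import zlib

    def write_bundle(assets: dict) -> bytes:
        """Pack assets serially; per-chunk compression, index footer at the end."""
        buf = io.BytesIO()
        index = {}
        for name, data in assets.items():
            chunk = zlib.compress(data)              # independent per-chunk compression
            index[name] = (buf.tell(), len(chunk))   # (offset, length) for one seek
            buf.write(chunk)
        index_blob = json.dumps(index).encode()
        buf.write(index_blob)
        buf.write(struct.pack("<I", len(index_blob)))  # 4-byte index-size footer
        return buf.getvalue()

    def read_asset(bundle: bytes, name: str) -> bytes:
        """One index lookup, one contiguous read, one per-chunk decompress."""
        (index_len,) = struct.unpack("<I", bundle[-4:])
        index = json.loads(bundle[-4 - index_len:-4])
        offset, length = index[name]
        return zlib.decompress(bundle[offset:offset + length])

    bundle = write_bundle({"hull.mesh": b"vertices" * 100, "hull.tex": b"pixels"})
    print(read_asset(bundle, "hull.tex"))            # → b'pixels'
    ```

    Contrast that with a "300 MB asset made up of different files": on an HDD, every separate file is another seek, which is exactly the cost an Optane is being used to paper over.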

    20:25 - ROFL. Now you can see exactly that in practice. Those load time disparities would be far, far smaller if they were organizing and bundling their assets properly. This entire talk is basically "make up for your lazy architecture by requiring your users to buy expensive hardware to get a non-terrible experience".

    20:40 - He's quick to point out that "Star Citizen only has one load" - FALSE. He literally just explained how assets are streamed in during gameplay. It's the same thing. I guarantee this was marketing speak trying to conflate loading screens with the process of loading content.

    21:35 - The entire point of the spinning pink indicator is to ensure that it constantly spins so that users/devs don't think the game crashed. It can't even do that right. How did they screw up something so basic?

    22:35 - Okay, I feel a tiny bit of his pain here, because everything always goes wrong during presentations, but it's amusing in this context: he essentially ended up spending more time advertising just how abysmally slow the game loads on HDDs.

    23:00 - Legitimate improvements to version control for developers. That's correct. What is worrying is that they need to perform a custom copy of > 60GB of data several times a day per developer in the first place. I'm struggling to imagine how their build system could be screwed up enough to require that.
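    The obvious question is why it's a full copy at all: the standard trick is delta syncing - hash each file and copy only what changed, rsync-style. A minimal sketch of the hashing side, assuming nothing about CIG's actual tooling (names and data here are made up):

    ```python
    import hashlib

    def digest(data: bytes) -> str:
        """Content hash of one file's bytes."""
        return hashlib.sha256(data).hexdigest()

    def changed_files(local: dict, remote: dict) -> list:
        """Names of remote files whose content differs from the local copy.

        Only these would need to be transferred; identical files are skipped.
        """
        return [name for name, data in remote.items()
                if digest(data) != digest(local.get(name, b""))]

    # Hypothetical build trees: one file changed, one unchanged, one new.
    local = {"a.pak": b"old", "b.pak": b"same"}
    remote = {"a.pak": b"new", "b.pak": b"same", "c.pak": b"added"}
    print(changed_files(local, remote))   # → ['a.pak', 'c.pak']
    ```

    If most of a 60 GB build is unchanged between syncs, the transfer collapses to the changed fraction, and the choice of SSD vs Optane on the receiving end stops mattering much.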

    26:00 - 1.34TB of data per branch, which has to be rebuilt every time a setting is changed. Okay. So I'm guessing they've never heard of having a build manager whose job is specifically to ensure that this sort of thing doesn't happen frequently, and that when it does, it causes the least impact possible?

    26:25 - "We do this for every release, and since we release every 3 months, that's kind of a big deal!" Or you could be like everyone else and run the build overnight, because 3 hrs vs 6 hrs shouldn't make much of a difference if it only happens once every three months.

    I can't watch anymore. Their build and content loading pipeline is [BLEEP]ED in all caps and they're using Optanes to try to mitigate it. I assume this was an Intel-sponsored event where they don't care what they say so long as it's positive for their product.

    Like, it has legitimate uses for devs. That part was actually mostly okay. But relying on it for the end-user experience is stupid. Stupid, stupid, stupid. It's just another of their lovely unforced errors where the proper solution is sitting right there in front of them on the table, and they choose to do something idiotic instead because it's slightly easier in the short term, even though it causes ten thousand problems later on.

    For comparison, take a peek at a GDC talk from actually competent developers who do things like occasionally release games: http://www.gdcvault.com/play/1024577/Geometry-Caching

  8. #8168
    Originally Posted by Tippis View Post (Source)
    Commentary stolen (and hopefully censored) from elsewhere:
    As always, the trick is to develop the code in such a way that the management overhead is dwarfed by the savings made elsewhere. There is no point doing any of this if the savings aren't noticeable. If there is no advantage, why engage in a Red Queen's race where you spend a fair amount of resources just to stay where you are?

    This is where good design and good management need to shine. Unfortunately, as far as I can see anyway, both are missing at CIG. They seem to have plenty of talented coders... but they lack experienced team leads, good design, and decent management.