However, there's not much use for a single-user email server. Unless you were to set up a chatbot that sends emails to you, in lieu of friends. Not sure how rewarding that would be though.
Actually, I can provide two very good reasons for a single-user email server. One I used to run at the LINX (my predecessor did as well, and so did my successor): we had a private email server set up specifically to be the receiver for log files. When something broke, the error logs would immediately route to the server along with any pertinent info and then be delivered via email to our mail client, which we could then read anywhere within the building. The advantages were that the server could run on a toaster, so it took nothing out of the information systems procurement budget, and because literally nothing else was run on said toaster oven, we could reliably guarantee the email would ping within 60 seconds of the log landing on the server. We could check the logfiles on our phones, do diagnostics at the coffee machine, and have a solution in our heads by the time we got back to the NOC.
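To make that a bit more concrete, here's a minimal sketch of the "logs go straight to the private mail server" idea. It assumes a local SMTP listener on port 25, and the addresses, subject line and log path are all made up for illustration, not what we actually ran:

```python
# Minimal sketch: read the current error log and hand it to a local
# mail server for delivery. Assumes an SMTP listener on localhost:25;
# the addresses and log path below are hypothetical placeholders.
import smtplib
from email.message import EmailMessage
from pathlib import Path

LOG_PATH = Path("/var/log/app/error.log")    # hypothetical log file
SMTP_HOST = "localhost"                       # the single-purpose mail box
ALERT_TO = "ops@example.internal"             # mailbox the phones poll
ALERT_FROM = "alerts@example.internal"

def mail_error_log() -> None:
    """Package the error log as an email and send it via the local server."""
    msg = EmailMessage()
    msg["Subject"] = "Error log alert"
    msg["From"] = ALERT_FROM
    msg["To"] = ALERT_TO
    msg.set_content(LOG_PATH.read_text(errors="replace"))

    # Deliver to the local server; it stores the mail for whatever
    # client (desktop or phone) collects it next.
    with smtplib.SMTP(SMTP_HOST, 25, timeout=30) as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    mail_error_log()
```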
The other primary reason for a local email server is laboratory testing: you set up a load of dummy email accounts, script them to hammer the server with junk, and test the server for bugs, faults and whatever else. This is particularly important if you're doing things like kernel patches and security upgrades; under no circumstances whatsoever do you -ever- do that on a live server without thoroughly testing it beforehand. (We had a junior admin who suggested this once; the CIO told him that if he performed maintenance unbidden, he'd be collecting his things the next day. The blood drained from his face kinda rapidly.)
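For the lab-testing side, here's a rough sketch of what the scripted hammering could look like. The host name, dummy accounts and message counts are invented, and you'd only ever point something like this at a test box, never production:

```python
# Rough sketch: blast a *test* mail server with scripted junk between
# dummy accounts. Host, accounts and volumes are made up; lab use only.
import smtplib
from email.message import EmailMessage

TEST_HOST = "mail.lab.internal"   # hypothetical lab-only server
TEST_PORT = 25
DUMMY_ACCOUNTS = [f"dummy{i}@lab.internal" for i in range(50)]
MESSAGES_PER_ACCOUNT = 200

def blast_test_server() -> None:
    """Send a burst of throwaway messages between the dummy accounts."""
    with smtplib.SMTP(TEST_HOST, TEST_PORT, timeout=30) as smtp:
        for sender in DUMMY_ACCOUNTS:
            for n in range(MESSAGES_PER_ACCOUNT):
                msg = EmailMessage()
                msg["Subject"] = f"load test {n}"
                msg["From"] = sender
                msg["To"] = DUMMY_ACCOUNTS[n % len(DUMMY_ACCOUNTS)]
                msg.set_content("scripted junk payload\n" * 20)
                smtp.send_message(msg)

if __name__ == "__main__":
    blast_test_server()
```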
That's all well and good, but you just know a fully-offline game is getting hacked within minutes. There Will Be Spoilers. Not wanting to facilitate that happening is not DRM. Besides, how do we know what is going on server-side? The tiny amount of bandwidth used by the server-client interactions might be a result of huge processing and memory requirements under the hood. My PC might struggle as it is right now - simulating the machinations of the factions and NPCs across a galaxy might be beyond it, I fear. And I'm sure you can fake it, but that's a different game entirely, and I don't think we gave FD enough money to make 2 separate games. Anyway, this is starting to look like yet another offline mode whinefest, so let's move on...
Any client gets hacked within minutes. Hell, World of Warcraft got hacked within minutes, there are private servers for it, and it's not exactly suffering for that. Now, on the topic of not knowing what's going on server-side, you are entirely correct: we don't know. However, we -can- infer some basic things from the network traffic, which, as we both agree, currently deals in passive components save for the P2P component for multiplayer. If I were to make some educated guesses based on what traffic comes through and goes back, the following would be my summary:
* The server is needed to validate all players' transactions, to ensure consistency and to prevent cheating and exploits. Again, no arguments there. I think we're both settled on this point.
* Low-level abstracted data such as markets in places the players occupy and the "bubbles" around each player. As the players get more spread out, there are more bubbles to process, so more data is processed at the lowest level of abstraction and more load lands on the servers. AWS cloud solutions work well in this sense because they scale up and down based on what's required, but they don't always do it -quickly-. Then again, there's not likely a need for that unless there's a mass player migration and a huge spawning of new bubbles.
* Persistent NPC location data. So far this part feels very debatable; most of the NPCs genuinely feel proc-gen.
* Stellar cartographic data - this is potentially the biggest "data sink", because you can't abstract it away: even when the player bubbles vanish, the data has to remain persistent, so there's got to be some kind of database which tracks the discoveries of stellar objects. Thankfully those discoveries don't provide residual revenue to the people who find them, otherwise you'd have to tie each one to a name, and that would result in a -huuuuuuuuge- database set.
Now, as to simulating or faking it, that's actually not the hard part. Procedural generation these days can do a very, very good job of "hiding the magic" behind the curtain if it's done right. If you've seen the Nemesis system in Shadow of Mordor you'll know what I mean by this: it's a good example of procedural generation being mixed with dynamic and emergent events to create enemies with distinct personalities and looks based on your own actions and influence. It's certainly not beyond the realms of possibility and most definitely not a computationally impossible problem; games are advancing rapidly, and what looks impossible to solve now is easy six months down the line. Limit Theory is rapidly redefining what proc-gen is capable of as well.
Chatbots may have many simultaneous 1-to-1 interactions, or multiple 1-to-many interactions, or could even interact with other chatbots. But the database of all such interactions can be shared and learned from - that's the point of having all those interactions, no? You might find them lurking on IRC channels sometimes, learning and occasionally chiming in. Maybe back in the days of ELIZA you would have had a point, but chatbots have evolved a bit since then. The "official" Turing test rules may be more restrictive than what I describe, but who was really talking about that? The general concept of such a test shouldn't be confused with running it under strict scientific conditions.
The best ones are written to be either "stateless" AI, which can effectively come to a conversation without prior knowledge and make intelligent decisions based on the information that's provided, or "stateful", where they accrue information over the course of the conversation and refer back to it. You're discussing the latter, and there are plenty of good examples out there, but critically, a good chunk of them can be downloaded and run locally if you so desire; there's no need to run them on some huge supercomputer, as long as you're willing to spend time teaching them. They're open source in many cases, which allows people to study the code and, if they can, improve upon it.
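If it helps to see the distinction, here's a toy contrast between a stateless reply function and a stateful bot that remembers what you told it earlier. It's purely illustrative, not how any particular chatbot is actually built:

```python
# Toy contrast between "stateless" and "stateful" chatbot behaviour;
# an illustration of the idea only, not any real chatbot's design.
def stateless_reply(message: str) -> str:
    """Decides from the current message alone, with no memory of the chat."""
    if "?" in message:
        return "Good question - what do you think?"
    return "Tell me more."

class StatefulBot:
    """Accrues facts during the conversation and refers back to them."""
    def __init__(self) -> None:
        self.facts: dict[str, str] = {}

    def reply(self, message: str) -> str:
        # Extremely crude "learning": remember statements of the form
        # "my X is Y" and play them back later when X comes up again.
        words = [w.strip("?.,!") for w in message.lower().split()]
        if len(words) >= 4 and words[0] == "my" and words[2] == "is":
            self.facts[words[1]] = " ".join(words[3:])
            return f"Noted, your {words[1]} is {self.facts[words[1]]}."
        for key, value in self.facts.items():
            if key in words:
                return f"You told me earlier your {key} is {value}."
        return stateless_reply(message)

if __name__ == "__main__":
    bot = StatefulBot()
    print(bot.reply("My name is Ada"))        # stores a fact
    print(bot.reply("Do you know my name?"))  # refers back to it
```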
Not really. When I think of DRM, I think of a game that has no requirement for online connectivity but demands it purely to authenticate that the copy is valid. That's different from a game that you experience primarily as a lone player vs NPCs, but which has major elements requiring server-side content that would impair the experience if attempted as a pure offline thing.
That's where we agree, somewhat, but that's only right now. I have no idea what the background simulation will behave like, and nor does anyone outside of Frontier staffers. And maybe not even them, if it is coded to react to everyone's actions. (I would guess that's where the "living" part of the "living galaxy" comes in.)
None of that bit has much to do with DRM, though. It all still smells like a zombie argument...
At least as of newsletter 49, I did not see anything that couldn't be simulated or that really screamed at me, "This needs a server". There are dynamic markets, but in all fairness you can simulate -those- too. The words "Living Dynamic Galaxy" still cut far too close to "Glassbox Engine", but perhaps FDEV will surprise everyone.