Programming Language

Personally I'm thinking it's a game-making utility of some sort - that would explain bugs appearing in code that was already working.

Because no programmer prior to this has ever introduced new bugs while making changes to existing code...

- - - - - Additional Content Posted / Auto Merge - - - - -

Python is used for Linux programming, in this case probably for custom server modules to support the server side of the game - it has nothing to do with what's on your Windows machine.

Python is cross platform, I regularly use it on Windows. Moreover there's a utility called py2exe which bundles your Python program, along with the Python runtime, into a standalone executable.

Not saying Frontier actually do this, but Python is definitely not exclusive to Linux.
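For anyone curious, the classic py2exe recipe is just a small setup script (the script name below is made up, and py2exe itself has to be installed):

```python
# setup.py - distutils-era py2exe usage; run with: python setup.py py2exe
from distutils.core import setup
import py2exe  # registers the "py2exe" command with distutils

setup(console=['my_tool.py'])  # 'my_tool.py' is a placeholder script name
```

That drops a standalone .exe, along with the runtime it needs, into a dist/ folder.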
 
64-bit by definition must be faster, surely? A 32-bit CPU processes 32 bits per cycle as opposed to 64 bits per cycle, so per clock cycle a 32-bit quad core can process 128 bits where a 64-bit quad core would process 256 bits?

I'm no expert, I'm just asking. I know that 64-bit can address more RAM though.

Bits per cycle isn't exactly equivalent to processing power. And the ability to address more RAM can actually be attributed to the memory architecture, memory bus width and even the file system, so again, not perfectly equivalent, or a reason to (or not to) use 32 vs 64.

It's also a myth that 64bit is somehow only part of "modern" processing. PC100 memory had a 64bit bus width (formerly known as FSB or front side bus), whereas USB 3.0 has a 1bit bus width, yet the transfer rate on USB 3.0 far exceeds any PC100 I've ever seen, and our CPUs still run a 100/133 bus (well, actually a multiplier, but effectively the same).

Bus width and bit rate (one and the same, essentially... except...) mean nothing without frequency. A 64-bit 1MHz processor is crap compared to a 16-bit 200MHz processor.

Bits in CPUs effectively refer to the number of open "lanes" of communication a processor can use. If your program is capable of *addressing* 64 simultaneous communication lanes, it can take advantage of a 64-bit processor, but that doesn't mean it will get processed FASTER, as your program isn't the only thing being processed, and the CPU isn't the only thing doing the processing. But let's not confuse bit rate directly with bus width...

If the bus width of your memory is limited to 32-bit, but you're running DDR2 with a 64-bit processor, you're golden. If you're running DDR3 on a 64-bit system (like we are today) you have fewer issues with delayed packets resulting in slower processing, as you have additional lanes open at all times... you can see where this is leading...

So, all things being equal (or not...) 64bit everything just means we're balancing the scale - but not really. DDR architecture combined with PCI /PCI-e bus speeds and SATA bus speeds mean that if everything was actually able to take advantage of 64bit architecture, we'd effectively hit a bottleneck on bus speed.

Hence the term, "unified architecture" - we're not there yet, and our systems can't support it anyway. You're not losing a thing from 32bit software, at least when you're talking games.

Now, if you want to start talking audio recording....

- - - - - Additional Content Posted / Auto Merge - - - - -

Python is cross platform, I regularly use it on Windows. Moreover there's a utility called py2exe which bundles your Python program, along with the Python runtime, into a standalone executable.

Not saying Frontier actually do this, but Python is definitely not exclusive to Linux.

And what language is executed in Windows when you do that?

In terms of game dev, Python is *almost* exclusively used for server-side programming.
 
I wonder if they are using large-volume flat data structures. Think Google/Hadoop.
I would love to start hitting that database using Tableau or Information Builders and start building data analysis on commander behavior

Google uses a proprietary protocol called SPDY. They made it.

- - - - - Additional Content Posted / Auto Merge - - - - -

E:D is only available on Windows, so the Python is definitely not cross-platform yet.

Your language translator is broken.
 
If a game is being coded to run natively on Windows, it's almost always C++ at the core.
It could also be C or .NET or a dozen other languages before it's compiled into native code.

Python is used for Linux programming, in this case probably for custom server modules to support the server side of the game - it has nothing to do with what's on your Windows machine.

Ruby is a web app language and would really never be used for a game - it's also NOT used on their site. (Forums are in PHP using standard vBulletin and elitedangerous.com is in PHP, cached with Varnish, and the UI is done in jQuery - no Ruby there, either).

Python is popular on Linux but it's not limited to it, the same way Ruby on Rails is popular but Ruby can be used for a lot of other things.
 
It could also be C or .NET or a dozen other languages before it's compiled into native code.



Python is popular on Linux but it's not limited to it, the same way Ruby on Rails is popular but Ruby can be used for a lot of other things.

You think they're coding this game in Ruby?
 

Mu77ley

Volunteer Moderator
Python is used for Linux programming, in this case probably for custom server modules to support the server side of the game - it has nothing to do with what's on your Windows machine.

Python is a general-purpose, cross-platform language used for many things. It's available on many platforms, with Windows, OS X and Linux being the main ones.

Ruby is a web app language and would really never be used for a game - it's also NOT used on their site. (Forums are in PHP using standard vBulletin and elitedangerous.com is in PHP, cached with Varnish, and the UI is done in jQuery - no Ruby there, either).

Nope. Ruby is not a specific web app language, it's another general purpose, cross-platform language that can be used for many things.

It is used for web apps as the Ruby on Rails web framework is quite popular, but it's no more a web app language than Python with Django or Perl with Mojolicious.

*edit* Side note - SQL isn't really a programming language, it's a database structure and not one used for games - this would be on the server side.

Again, this is incorrect. SQL stands for Structured Query Language, and it's a special-purpose programming language used for manipulating relational databases.
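To illustrate, here's SQL doing real work via Python's built-in sqlite3 module (nothing to do with what FD actually run - the table and values are invented):

```python
import sqlite3

# an in-memory database, purely for demonstration
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE commanders (name TEXT, credits INTEGER)')
conn.execute("INSERT INTO commanders VALUES ('Jameson', 1000)")
conn.execute("INSERT INTO commanders VALUES ('Ryder', 250)")

# SQL is declarative: you state the result you want, not how to loop for it
total = conn.execute('SELECT SUM(credits) FROM commanders').fetchone()[0]
print(total)  # 1250
```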

Considering they're running Apache servers it's likely they are using MySQL server-side. You might think they'd use MSSQL "because it's a Windows game" and you'd be wrong - that would be terribly expensive and inefficient. MySQL is far lighter and free.

MySQL is not really suitable for any heavy load database (in fact, one DB guru I know thinks it's not suitable for use as a database at all, but that's another conversation), and it now has the added issue of being owned by Oracle so its future is in doubt (hence the MariaDB fork).

Postgres is a far better choice of open source database than MySQL.
 
one of the recent innovations AMD introduced into their GPUs was better 16-bit processing. crazy, yes? in fact, highly desirable because smaller mobile devices actually don't need 32-bit graphics so programmers for it can switch to using 16-bit numbers and save battery!

I'd be interested in more detail here, perhaps a link? Specifically what part of the GPU are you talking about?

- AMD GPUs have had 16-bit, dither-capable render/frame buffers and ROP processing since day one.

- AMD GPUs don't support OpenGL ES lowp precision (16-bit floating point) mode in fragment shaders (the shader compiler just substitutes mediump).

- AMD sold off all its mobile/embedded GPU technology to Qualcomm a long time ago; it is now known as Adreno (an anagram of Radeon): http://en.wikipedia.org/wiki/Adreno

let's take your specific question. why is 64-bit faster? because when you need to deal with numbers bigger than can be contained in 32-bit, it saves you a huge amount of low-level bit-wise arithmetic, saves you using multiple memory registers, saves multiple fetches from memory when fewer would do. it's like having a larger bucket - you make fewer trips to the river and back. and you don't have to fill it up, but if you need more water the capacity is there. you don't have to carry two buckets.
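the bucket analogy in miniature: on a 32-bit machine, one 64-bit add becomes two 32-bit adds with a carry between them. a Python sketch of what the hardware has to do:

```python
# emulating a single 64-bit add using only 32-bit words and a carry
MASK32 = 0xFFFFFFFF

def add64_via_32bit_words(a, b):
    lo = (a & MASK32) + (b & MASK32)               # add the low halves
    carry = lo >> 32                               # did the low half overflow?
    hi = ((a >> 32) + (b >> 32) + carry) & MASK32  # add high halves plus carry
    return (hi << 32) | (lo & MASK32)

print(add64_via_32bit_words(2**32 + 5, 2**33 + 7) == (2**32 + 5) + (2**33 + 7))  # True
```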

It's never this simple. Sure, back in the good old days - before uncoupled memory controllers, instruction/data caches, speculative out-of-order deep instruction pipelines, etc. - you could make a statement like "a 2n-bit architecture is going to be faster than an n-bit architecture" and you would be right most of the time.

You could actually do a complete dissertation/master's thesis on the complexities and performance implications of 32-bit vs 64-bit on just one CPU type alone, let alone multiple CPU types spread across multiple OSes. I'll just leave you with one of the most common issues with 32-bit vs 64-bit.

Using pointers (which means any type of data structure more complicated than an array) on a 64-bit architecture requires more memory/data-cache processing/bandwidth and can also increase memory usage significantly.
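You can see the cost from Python itself - struct.calcsize('P') reports the platform pointer size, and doubling it doubles the per-pointer overhead of any linked structure (the node count below is arbitrary):

```python
import struct

# bytes per native pointer: 4 on a 32-bit build, 8 on a 64-bit build
POINTER_SIZE = struct.calcsize('P')

# a linked structure of a million nodes with two pointers each (next + data)
# pays this much for the pointers alone - twice as much on 64-bit
nodes = 1_000_000
pointer_overhead = nodes * 2 * POINTER_SIZE
print(POINTER_SIZE, pointer_overhead)
```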

what if the ED programmers are certain that their software will never need to reference more than 4GB of memory

This is exactly FD's reasoning (plus the fact that a significant percentage of users are still on 32-bit OSes). I have a sneaking suspicion though that when station/ship walking and planetary landings make an appearance, a 64-bit OS will become mandatory.

and that nearly all of their maths will be with numbers inside of -2,147,483,647 to +2,147,483,647 (that's the range that 32bits gives you, it's actually pretty big). Or more likely that it will never need more than 128 decimal points which is the other way to use it?

There is an FD developer post somewhere that states they use 32bit ints, 32bit floats and 64bit doubles for ED.

If you are talking about IEEE 32-bit floats, "128 decimal points" is incorrect. The exponent ranges from -126 to +127 and you get just over 7 decimal digits of precision; see http://en.wikipedia.org/wiki/IEEE_floating_point
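Both halves of that can be demonstrated from the standard library - 32-bit int wraparound done by hand, and how little of 0.1 survives a round trip through an IEEE 754 single:

```python
import struct

def wrap_int32(x):
    # what a C int32 does on overflow: keep the low 32 bits,
    # interpreted as two's complement
    x &= 0xFFFFFFFF
    return x - 2**32 if x >= 2**31 else x

def to_float32(x):
    # round-trip a Python double through IEEE 754 single precision
    return struct.unpack('<f', struct.pack('<f', x))[0]

print(wrap_int32(2**31 - 1))   # 2147483647
print(wrap_int32(2**31))       # -2147483648 (wrapped)
print(to_float32(0.1))         # ~0.10000000149 - only ~7 digits survive
```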

i hope that's been of interest.

It certainly has.
 
64-bit by definition must be faster, surely? A 32-bit CPU processes 32 bits per cycle as opposed to 64 bits per cycle, so per clock cycle a 32-bit quad core can process 128 bits where a 64-bit quad core would process 256 bits?

I'm no expert, I'm just asking. I know that 64-bit can address more RAM though.

many professional apps are slower when the extra RAM isn't used. 32-bit is quicker.
 
The engine is more than likely C/C++ and may use languages like Python or Lua for particular aspects that the engine interprets. These are usually useful when scripts drive particular parts of the game that you don't want to have to change in the C/C++ code and recompile.
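The pattern, sketched in pure Python (the function and key names here are invented for illustration):

```python
# the "engine" side: compiled once, exposes hooks to scripts
def run_engine_frame(hooks, state):
    state['tick'] += 1
    if 'on_tick' in hooks:          # hand control to the script, if present
        hooks['on_tick'](state)
    return state

# the "script" side: tweakable without touching or recompiling engine code
def on_tick(state):
    state['message'] = 'tick %d' % state['tick']

state = run_engine_frame({'on_tick': on_tick}, {'tick': 0})
print(state['message'])  # tick 1
```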
 
Another reason that I've not seen mentioned is third-party libraries. Sometimes these are only available in 32-bit or 64-bit forms. If a particular, critical library is only available from its vendor in 32-bit form, that often forces your entire application to be 32-bit too, even if it otherwise doesn't need to be. There are ways around this, but they are rarely simple, clean and cost-free (in terms of performance).
 
32 vs 64 bit performance is a bit complicated on Intel compatible architectures. The ia32 instruction set is a mess, it's register starved, many instructions only work with particular registers, some of the registers are special purpose. When AMD created the 64 bit extensions (now usually called ia64, but often people call it amd64) to this they added some new instructions but importantly a whole heap of general purpose registers (8 new 64 bit general purpose registers).

This also caused the ABI to be different (well, certainly in Linux, I don't know about Windows). Function calls on ia32 generally use the stack to pass arguments (except FASTCALL, when a single argument can be passed in a register). Function calls on amd64 generally use registers to pass arguments. Not having to use the stack makes function calls a bit cheaper on amd64, and having the extra registers means that more work can be done without having to load/store values from memory.

But on the flip side pointers are now all 64 bits and will take up more valuable space in the CPU's cache.
 
(now usually called ia64, but often people call it amd64)

Just for the sake of correctness, it is never referred to as ia64 - always either AMD64 or its generic name, x86-64. The reason for that is that the 64-bit instruction set extension to the x86 architecture employed on modern consumer processors was created by AMD and was branded as AMD64. Intel 64 is what Intel calls their implementation of AMD64.

ia64 was an architecture created by Intel for its failed Itanium processors; its instruction set is not compatible with x86 at all.
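You can even see the naming from a running interpreter - platform.machine() reports the AMD64/x86_64 branding directly (output obviously depends on your machine):

```python
import platform
import struct

print(platform.machine())        # e.g. 'AMD64' on Windows, 'x86_64' on Linux
print(struct.calcsize('P') * 8)  # whether this interpreter is 32- or 64-bit
```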

This thread has so much misinformation in it :eek:
 
Why do you ask? What are you interested in?

OO should lead to less bug-prone software; once a feature is written and free of bugs (e.g. the Milky Way and stars) it should not be affected by new code, as it was here (the Milky Way disappeared at one point and then reappeared in a later build), unless there is a serious lack of discipline in the programming regime. I cite the station flips, which we are led to believe were due to new players entering the instance; the player should be asking for the conditions of the instance, not imposing conditions.
 
Just for the sake of correctness, it is never referred to as ia64 - always either AMD64 or its generic name, x86-64. The reason for that is that the 64-bit instruction set extension to the x86 architecture employed on modern consumer processors was created by AMD and was branded as AMD64. Intel 64 is what Intel calls their implementation of AMD64.

ia64 was an architecture created by Intel for its failed Itanium processors; its instruction set is not compatible with x86 at all.

This thread has so much misinformation in it :eek:

IA-64 (also called Intel Itanium architecture) is the architecture of the Itanium family of 64-bit Intel microprocessors. The architecture originated at Hewlett-Packard (HP), and was later jointly developed by HP and Intel.

The Itanium architecture is based on explicit instruction-level parallelism, in which the compiler decides which instructions to execute in parallel. This contrasts with other superscalar architectures, which depend on the processor to manage instruction dependencies at runtime. In all Itanium models, up to and including Tukwila, cores execute up to six instructions per clock cycle. The first Itanium processor, codenamed Merced, was released in 2001. As of 2008, Itanium was the fourth-most deployed microprocessor architecture for enterprise-class systems, behind x86-64, Power Architecture, and SPARC.[1] --- http://en.wikipedia.org/wiki/IA-64

:)
 
See, of course it's gonna fail. They should've used <OTHER LANGUAGE>


Seriously though, LUA? *shudder*
Embed Python! Much nicer, easy to embed, and metric tons of excellent libraries ready to use.
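A toy version of what embedding buys you - game logic lives in a string (or file) the engine can load and reload at runtime; the function name is invented:

```python
# "engine" loads behaviour from source text it could re-read at any time
script_source = """
def damage(base, multiplier):
    return base * multiplier
"""

namespace = {}
exec(script_source, namespace)     # compile and run the script
print(namespace['damage'](10, 3))  # 30
```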
 

Robert Maynard

Volunteer Moderator
Nope, not true. 64-bit is better when you need extreme mathematical accuracy, but that's not the same as faster.

Also, 32-bit apps run on 64-bit systems, and the reverse is not true.

A 32-bit x86 derived processor can use 64-bit (and probably 80-bit) floating point with ease - whether the code is compiled for 32-bit or 64-bit does not change the accuracy of floating point.

A mathematically heavy DSP program I wrote runs about 10% faster on a 64-bit PC when compiled as a 64-bit executable rather than a 32-bit executable on the same machine.
 