That's not really the same, though. On mobile you are usually not limited by RAM, and you are not limited by slow storage media like CDs/DVDs as was (and is) the case with most consoles.
The reality is that hardly anyone on mobile writes their own engine (apart from some of the big players), because it's just too much of an investment, and Unity (or one of the other popular engines) is well optimized and allows much faster development.
All iOS devices except the iPad Air 2 have less than 2 GB of RAM (many have 512 MB). Android 1-4 devices often have less than 1 GB. It's common that only 1-3 apps can stay in RAM, depending on the platform and the apps' memory usage (foreground apps, not background services).
Applications/games in the Win95/PS1/N64 era were coded a lot more efficiently. Back then, a common Win95a PC had 4-8 MB of RAM (high end was 32 MB).
Win95 machines with 4 MB of RAM were the exception, not the rule. Such a machine was very painful to use, as it spent all its time swapping and otherwise did nothing.
Any realistic setup had 8 MB RAM or more.
Applications at that time also didn't support i18n, didn't anti-alias fonts, and had low-res, low-color assets that were enough at 320x200(240)/640x480 resolutions.
Windows 95 was painful to use even with 8 MB. 12 MB was the minimum that didn't cause it to swap all the time when you were actually doing something. 16 MB was nice.
A Pentium 133 with 8 MB and 800x600 (32-bit color) ran fine with Win95a and several applications open. Try that with Android, even with 2 GB of RAM and a quad-core CPU - the Java-based system on top of Linux is quite resource hungry. Flagship Android 5 phones have at least twice the hardware spec of the iPhone 6 (twice as much RAM and CPU) yet are only comparable in performance and user experience (latency), not faster. That's the difference between Objective-C and Java. And old applications like Microsoft Office are all coded in C/C++, with parts of older versions in assembler.
One icon in Win95 took 512 bytes (32x32, 16 colors, 4 bitplanes). One icon in Android takes 256 kB (256x256, truecolor). The 800x600, 16-bit hicolor framebuffer (what I used at the time) was a bit under 940 kB. A 1920x1200 truecolor framebuffer is 8.8 MB, not counting the texture backing stores used by modern display servers.
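The sizes above are easy to verify with back-of-the-envelope arithmetic (assuming uncompressed bitmaps with no row padding):

```python
def bitmap_bytes(width, height, bits_per_pixel):
    """Size of an uncompressed, unpadded bitmap in bytes."""
    return width * height * bits_per_pixel // 8

win95_icon = bitmap_bytes(32, 32, 4)         # 16 colors = 4 bpp
android_icon = bitmap_bytes(256, 256, 32)    # truecolor + alpha
fb_800x600 = bitmap_bytes(800, 600, 16)      # 16-bit hicolor
fb_1920x1200 = bitmap_bytes(1920, 1200, 32)  # truecolor

print(win95_icon)                 # 512 bytes
print(android_icon // 1024)       # 256 kB
print(fb_800x600 / 1024)          # 937.5 kB ("a bit under 940 kB")
print(fb_1920x1200 / 2**20)       # ~8.8 MB
```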
The amount of RAM needed has to do with the assets used by the code, not the code itself. The code itself is minuscule.
And no, Android phones do not have 4 GB of RAM. The low end has 512 MB, many phones are in the 1-1.5 GB range, and the 2015 flagships have 3 GB (the Nexus 5 and 7 have 2 GB; the Nexus 6 has 3 GB). All without swap (where would you swap to? Flash?). And while most modern 32-bit ARM CPUs do come with LPAE, Android does not support it, so going above 4 GB will have to wait for ARMv8.
Android doesn't support LPAE? That's pretty surprising - any sources for that? LPAE doesn't need any usermode support to function. What specifically does Android do to prevent using LPAE in the underlying Linux kernel?
Well, one thing is what the Linux kernel supports by itself; another is what the board support package for your chipset supports. So maybe there is an LPAE Android device out there somewhere, where the SoC vendor bothered, but in general nobody does.
Your phone is also Intel based, not ARM. That opens another question - would Intel be able to make phone SoCs if the Android SDK compiled to native ARM code, as some advocates prefer?
That shows how little you understand about the Android build tools. Dalvik and ART are not compilers.
Prior to this year, javac compiled the Java source to .class files, and then dx translated the Java bytecode in the .class files into Dalvik bytecode in a .dex file, with some simple dedupe optimizations.
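That classic pipeline can be sketched by hand on the command line. The platform level, build-tools version, and paths below are placeholders - the exact flags vary by SDK release:

```shell
# Classic (pre-Jack) Android build pipeline, sketched by hand.
# Assumes the Android SDK build-tools and a platform android.jar are installed.

# 1. javac compiles Java source to ordinary JVM .class files.
javac -source 1.7 -target 1.7 \
      -bootclasspath "$ANDROID_HOME/platforms/android-21/android.jar" \
      -d obj/ src/com/example/Main.java

# 2. dx translates the JVM bytecode into Dalvik bytecode (classes.dex),
#    deduplicating strings and constants across classes along the way.
"$ANDROID_HOME/build-tools/21.1.2/dx" --dex --output=classes.dex obj/

# The .dex file is what gets packaged into the APK, and what Dalvik
# interprets/JITs or ART compiles ahead of time on the device.
```

Neither dx nor the on-device runtime ever sees Java source; javac's output is just the intermediate handed to the next stage.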
Only this year did the Android build system switch to Google's own compiler.
Go take a degree in computer science; learn about intermediate code representations, compiler frontends, compiler backends, CPU instructions, JIT compilers, AOT compilers, and register selection.
Then make little drawings about which piece of Android is converting intermediate code representation into native CPU instructions.
For brownie points, compare the quality of the generated Assembly code between Hotspot, Dalvik and ART for the same unmodified jar file.
Already done, and I've written a non-optimizing Lisp compiler and an optimizing toy compiler with common subexpression elimination and fancy register allocation.
I gather from your response that you've realized you were wrong about Android not using javac but were too proud to admit it. Don't worry, we can fix your pride problem with these tasks below:
1. Dalvik and ART don't take jar files as input, so it is impossible to get your brownie points. Learn why.
2. Oracle's Hotspot targets x86 and x86-64, and Dalvik and ART are mostly focused on ARM. Learn the difference between ISAs.
3. Hotspot and Dalvik make different tradeoffs between CPU and memory both in their choices of garbage collectors and in their JIT strategies. Think about why that would be.
4. The word "compiler" by itself refers to a program that translates source code into object code. Notably, an assembler is not usually considered to be a compiler, and JIT "compilers" were originally called dynamic translators for three decades, with JIT compiler only appearing in the 90s. Given that terminology background, figure out why most people would call javac a compiler but not Hotspot or Apple's Rosetta.
> Already done, and I've written a non-optimizing Lisp compiler and an optimizing toy compiler with common subexpression elimination and fancy register allocation.
And yet failed to grasp the difference between frontend, backend and intermediate execution format.
> I gather from your response that you've realized you were wrong about Android not using javac but were too proud to admit it. Don't worry, we can fix your pride problem with these tasks below:
I don't have to acknowledge anything. Anyone knows that javac does not execute code on the Android platform. As such, talking about whatever influence it might have on runtime performance - beyond peephole optimizations, constant folding and similar AOT optimizations - only reveals ignorance about the Android stack.
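Constant folding, mentioned above as a typical compile-time (AOT) optimization, is the same trick in any bytecode compiler. As an illustration of the technique only (not the Android toolchain), CPython's compiler folds constant expressions before emitting bytecode, and the result is easy to inspect:

```python
import dis

# CPython folds constant expressions at compile time - the same class
# of optimization a Java compiler can apply before any JIT ever runs.
code = compile("x = 60 * 60 * 24", "<demo>", "exec")

# The bytecode carries the folded constant 86400, not 60, 60 and 24.
print(86400 in code.co_consts)  # True
dis.dis(code)                   # the disassembly loads 86400 directly
```

The multiplication never happens at runtime; the interpreter (or JIT) only ever sees the precomputed value.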
> 1. Dalvik and ART don't take jar files as input, so it is impossible to get your brownie points. Learn why.
Yes, they do. Jar files get converted into dex files, which means the same file can serve as the canonical input for both platforms.
Then again we are learning about Android aren't we?
> 2. Oracle's Hotspot targets x86 and x86-64, and Dalvik and ART are mostly focused on ARM. Learn the difference between ISAs.
Maybe you are the one who should inform yourself about Oracle's and its certified partners' Java JIT and AOT compilers for ARM platforms.
Learn about the Java eco-system.
> 3. Hotspot and Dalvik make different tradeoffs between CPU and memory both in their choices of garbage collectors and in their JIT strategies. Think about why that would be.
Of course they make different tradeoffs. The ones made by Dalvik and ART are worse than the approaches taken by other Java vendors, which is why they generate worse code, which leads to bad performance.
Learn about commercial embedded JVMs.
>4. The word "compiler" by itself refers to a program that translates source code into object code. Notably, an assembler is not usually considered to be a compiler, and JIT "compilers" were originally called dynamic translators for three decades, with JIT compiler only appearing in the 90s. Given that terminology background, figure out why most people would call javac a compiler but not Hotspot or Apple's Rosetta.
Learn about the Xerox PARC documentation and its references to JIT compilers.
Or better yet, feel free to dive into the OS/400 documentation about its kernel-level JIT compiler.
All of which go back a little earlier than the '90s.
I remember a 386DX (40 MHz) with 4 MB being essentially unusable (yes, it was possible to install Win95, but that's about all), and a Pentium 120 with 16 MB and an S3 card running 800x600 hicolor being great. In 1996.
I actually worked on a game that ran on the iPad 2 (some gameplay: https://www.youtube.com/watch?v=uaq0Sfp3_5Q ), and while you had to optimize quite a bit to hit a certain number of draw calls and stay within the memory budget, it was still developed in Unity and far from what devs did in the PS1/PS2 era.
"Instead, if the amount of free memory drops below a certain threshold, the system asks the running applications to free up memory voluntarily to make room for new data. Applications that fail to free up enough memory are terminated."
That has nothing to do with what the OP is referring to. Even on iOS you have a virtual memory system, like a normal PC does. If your app runs out of RAM, it will page to storage and the OS handles this for you. The PS1/N64 had no storage beyond RAM and no virtual memory management, so you had to write all the paging from CD/cartridge to RAM yourself. Quite a difference.
Sort of. iOS does have a virtual memory system, but it's not as forgiving as a full OS would be; applications that fail to free up memory when asked to (i.e. in a low memory situation) are killed by the OS. See also: https://developer.apple.com/library/mac/documentation/Perfor...
That should be much easier to optimize against than having no paging at all, though. Are apps running in the background asked to free up memory (or be killed) before the active app has to do the same, or are they referring to the active app?