As the JVM gets closer to being the de-facto VM for running any language (Jython, JRuby, Scala, Java, etc.), the possibility of higher performance by running the VM "closer to the metal" is quite exciting. I commend Sun for pushing the JVM beyond just Java.
Except that LLVM isn't actually a high-level VM. You have to bolt on quite a bit to even begin to run Python, Java, Scala, etc, and achieving interoperability between those has yet to even be approached.
LLVM is the best thing since sliced bread -- and there's even work to provide an alternative to Java's c1/c2/interpreter backends using LLVM -- but it's not really a direct replacement for the JVM or any other high-level virtual machine.
The x86 ISA is a truly lousy standard. It lacks registers, and it's asymmetrical and confusing. It's a Core Whatever wrapped around a Pentium 4 wrapped around a Pentium III wrapped around a Pentium II wrapped around a Pentium Pro wrapped around a Pentium wrapped around a 486 wrapped around a 386 wrapped around a 286 wrapped around an 8086, which was a hastily developed substitute for the failed iAPX 432, built on the 8085, which was an upgrade of the 8080, which was an 8-bit version of the 4004.
I can think of two main benefits you can get from a VM that you can't get from bare metal:
1) (and this is also true for most, if not all, dynamic languages) you can distribute your app to anyone on any machine and have it run. This does limit your access to the full capabilities of the hardware, but write once, run anywhere was pretty close to being true (at least at the JVM level).
2) you can perform optimizations at runtime that you may not have known about at compile time.
As I see it, those are the main theoretical benefits.
There's no reason you can't get #2 in hardware. Now that'd be an interesting direction for processor manufacturers to go: architectures and instruction sets designed like those of a VM, with built-in garbage collection, runtime optimisation, and dynamic dispatch features.
As for #1, write once, run anywhere still doesn't seem to be quite there, imho.
Still, what I meant was that the processor would contain logic for some virtual-machine-style operations, e.g. keeping track of objects on the heap on-chip (or in a dedicated segment of RAM / virtual address space) and having a hardware garbage collector. The main instruction set would still be RISC, but it would contain a small number of CISC instructions for allocating memory, which the garbage collector would then manage.
As for dynamic dispatch, I guess that's not really needed, since it can be constructed from simpler instructions, but having a Python-style method-dictionary lookup for member functions performed in hardware could be interesting.
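For concreteness, here's a minimal software sketch (in Java; all the names are mine, not from any real runtime) of the per-call method-dictionary lookup a Python interpreter does today, which is the operation the hypothetical hardware would be taking over:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Illustrative only: each object carries a mutable name -> method table,
// and every call is a hash-table probe followed by an indirect invoke.
class DynamicObject {
    private final Map<String, Function<Object[], Object>> methods = new HashMap<>();

    void define(String name, Function<Object[], Object> body) {
        methods.put(name, body);
    }

    // This lookup is what a hardware method-dictionary unit would replace.
    Object invoke(String name, Object... args) {
        Function<Object[], Object> m = methods.get(name);
        if (m == null) throw new RuntimeException("AttributeError: " + name);
        return m.apply(args);
    }
}
```

The point of the sketch is that the lookup happens on every call, which is exactly why doing it in silicon (or caching it there) is tempting.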
Finally, a processor could have reflection and introspection features: e.g. data is annotated in memory so that the structure and state of a program can be determined programmatically at runtime (and modified).
Actually, while writing this, I realise that this probably doesn't really make much sense... :-/
What I like most about this idea is that system administration via init scripts, et al, can just go away -- presumably one will have APIs for interacting with the available hardware, necessary 'OS' services, etc.
It'd make automated deployment of systems even more manageable -- turtles all the way down.
I have to say that I find Java generally distasteful, but I feel the same way about UNIX. An "OS" for my applications that is just some OO code I interact with would be much better than random shell scripts that talk to other random shell scripts via unstructured one-way text pipes. Turtles all the way down, indeed.
Of course, UNIX is trying to fix that too... so it will be interesting to see how this evolves.
You'll need some mechanism to re-deploy major updates that cannot be accomplished via custom mechanisms, and that's most likely to still be ...
Why wouldn't the virtual machine bootstrap itself from network loaded code -- then it wouldn't be necessary to update the root Xen image.
Why couldn't I write a network service (also on the VM) that runs on other Xen instances and serves up the netboot code?
Why would updating the root VM require SSH? Couldn't I have a nice web management UI to which I can just upload a new bootstrap JVM image (on the rare occasions that I need to)? Perhaps systems could directly self-update the root Xen image on reboot?
I don't know why I'd have turtles-all-the-way down and then require some ugly update system that involves editing RC scripts.
You can achieve that now, just by running java as the main process.
Erlang can do that (more or less). People run dedicated machines where the Erlang VM is the init process. With the ability to distribute code across a cluster and update live code, it's a perfect environment to manage :) Running different services is already implemented through the standard "supervisor" processes.
Java (or Erlang) as the main process can't (easily) configure the network adapter, add routes to the routing table, access/configure the IPMI module, modify firewall/IP forwarding rules, etc, etc.
There's still a lot of shell-script automation that has to be done via tools like chef/puppet/cfengine.
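To illustrate the gap: plain Java has no standard API for routes or firewall rules, so the usual workaround is shelling out to OS tools. A hedged sketch (the ip invocation below is just an example; it is Linux-specific, needs root, and the class/method names are mine):

```java
import java.io.IOException;

// Sketch: Java as PID 1 still ends up exec'ing external OS tools for
// network configuration, because the JDK exposes no such API itself.
class NetConfig {
    static int run(String... cmd) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        return p.waitFor();  // exit status of the external tool
    }

    public static void main(String[] args) throws Exception {
        // Illustrative route/addr commands; requires root and iproute2.
        run("ip", "route", "add", "10.0.0.0/24", "via", "192.168.1.1");
    }
}
```

So even a JVM-as-init setup is really just relocating the shell scripting into ProcessBuilder calls, which is the point being made above.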
The article says that the JVM sits on top of a microkernel, so I would assume that the microkernel provides a thin layer of abstraction for hardware and I/O support, such as file systems. It's an interesting concept if you couple it with something like EC2, where you could throw out a bunch of Java VMs to do massive parallel processing of a data set.
There are also many Java programs that do not use files or sockets; they use "the database", "the classloader", "the configuration class", etc. With the details abstracted away, this JVM can replace those classes with custom implementations that have the same interface but are implemented without any OS support.
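A sketch of what that swap could look like (the interface and class names here are made up for illustration, not from the project):

```java
import java.util.HashMap;
import java.util.Map;

// Application code depends only on this interface; it never sees
// whether the backing implementation uses the OS or not.
interface KeyValueStore {
    void put(String key, String value);
    String get(String key);
}

// Today this might be backed by files or sockets through the OS.
// On a bare-metal JVM, the same interface could be backed by, say,
// a region of memory the microkernel hands directly to the runtime.
class InMemoryStore implements KeyValueStore {
    private final Map<String, String> data = new HashMap<>();
    public void put(String key, String value) { data.put(key, value); }
    public String get(String key) { return data.get(key); }
}
```

As long as the interface holds, the JVM (or the classloader) can substitute implementations without the application noticing, which is the scenario described above.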
Note: I haven't watched the video or read the source code yet.