Here's what I've realized about both the universe AND intelligence.
Silicon cannot touch either of them. I didn't always think this, though. I used to believe that intelligent beings, preceded by smaller structures, could evolve in a large digital universe. I made a blog post:
http://scrollto.com/blog/2017/04/11/life-a-universe-simulati...
It took a long time, but I eventually built what I set out to build. Here is ScatterLife: https://github.com/churchofthought/ScatterLife/
But the result was far from what I wanted. Even with a Titan V, only a 4096×4096 world could be handled at a reasonable update rate. What I basically had was spacetime foam built on the ideas of doubly special relativity, i.e. a Feynman checkerboard universe:
https://en.m.wikipedia.org/wiki/Doubly_special_relativity
https://en.m.wikipedia.org/wiki/Feynman_checkerboard
If our own universe is digital, then it updates at c/planck_length times per second, over 10^43 Hz. Not only that, but the observable universe consists of over 10^185 Planck volumes.
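Those two numbers can be sanity-checked with a few lines of arithmetic. This is a rough sketch; the physical constants and the ~8.8e26 m diameter of the observable universe are my own assumed inputs, not figures from the post:

```python
# Back-of-the-envelope check of the Planck-scale numbers above.
import math

c = 2.998e8           # speed of light, m/s
l_planck = 1.616e-35  # Planck length, m

# If the universe "ticks" once per Planck time, the update rate is c / l_planck.
update_rate_hz = c / l_planck
print(f"update rate ~ 10^{math.log10(update_rate_hz):.0f} Hz")  # ~10^43 Hz

# Treat the observable universe (diameter ~8.8e26 m) as a sphere of Planck-volume cells.
d_universe = 8.8e26
volume = (4 / 3) * math.pi * (d_universe / 2) ** 3
planck_volumes = volume / l_planck ** 3
print(f"cells ~ 10^{math.log10(planck_volumes):.0f}")  # ~10^185
```

So a faithful digital universe would need roughly 10^185 cells updating 10^43 times per second, which is the magnitude problem in a nutshell.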
Nothing interesting is really viewable, or even extrapolatable, from the smallest of truths or fundamentals. That's why many predictions of string theory need higher energies to be tested: the smaller the scale you probe, the higher the energy required.
In any case, I realized that evolution itself has been fueled by orders of magnitude. The symmetries of matter and energy; planets, stars, and solar systems: the same magnitudes are required.
Even the best silicon isn't going to have 10^20 transistors on it. We would have to somehow go analog: use chemicals or matter in a way that doesn't require slow refining and construction. Chemical-based computing.
Now, the same issue is present in ML and AI. Brains have about 100 billion neurons; the best GPUs have maybe 5,000 cores.
The only way forward is to maximize what each core does, as much as possible.
I learned this with cellular automata. Black-and-white two-state automata are neat, but they are a huge waste of processing power: each cell holds very little information. Integers are better; floats better still. Why stop there, though? Let's use complex numbers:
https://github.com/churchofthought/HexagonalComplexAutomata
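To make the idea concrete, here is a toy complex-valued cellular automaton on a square grid. The linked repo uses a hexagonal grid and its own rules; this diffusion-plus-phase-rotation rule is purely illustrative and my own invention:

```python
# Toy complex-valued cellular automaton: each cell is a complex number,
# so a single cell carries ~128 bits of state vs. 1 bit for a B/W automaton.
import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    """One update: each cell mixes with its 4-neighbor mean, then rotates in phase."""
    neighbors = (np.roll(grid, 1, 0) + np.roll(grid, -1, 0)
                 + np.roll(grid, 1, 1) + np.roll(grid, -1, 1)) / 4
    mixed = 0.5 * grid + 0.5 * neighbors       # diffuse toward neighbors
    return mixed * np.exp(1j * np.abs(mixed))  # amplitude-dependent phase rotation

rng = np.random.default_rng(0)
world = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
for _ in range(10):
    world = step(world)
print(world.shape, world.dtype)
```

Because the update is just rolls and elementwise arithmetic, the same rule maps directly onto a GPU, which is the point: richer state per cell, full use of each core's floating-point hardware.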
Magnitude is stacked against carefully crafted silicon. If we want to achieve with silicon what sheer magnitude can, we need to make sure our fundamental neuron units aren't unnecessarily sparse. Nature's magnitudes can afford to use simple units; silicon can't. We have 5,000 cores, maybe 100,000 once advances in fabrication accelerate, but still not 100 billion. So we still need to use the advanced abilities of each core to their full extent. A single neuron cannot compute arbitrary functions, but a GPU core can.
I am almost failing to mention that I don't believe we have anything to worry about. Unless more research is devoted to special-purpose hardware (consider that an i7 has 731 million transistors), the software side will have a really hard time compensating for low magnitude.
Let's see what happens. It's going to be exciting nonetheless. I am doing ML work myself on Boltzmann machines, RNNs, and Hopfield networks. This is a promising field, and it's emerging at breakneck pace. Cheers!
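For readers unfamiliar with the last of those model families, here is a minimal Hopfield network sketch (Hebbian storage plus iterated recall). This is a textbook illustration, not code from the work described above:

```python
# Minimal Hopfield network: store a ±1 pattern, then recover it from a noisy copy.
import numpy as np

def train(patterns: np.ndarray) -> np.ndarray:
    """Hebbian outer-product rule; patterns are rows of ±1, self-connections zeroed."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def recall(w: np.ndarray, state: np.ndarray, steps: int = 5) -> np.ndarray:
    """Synchronous sign updates; converges to a stored attractor at low memory load."""
    for _ in range(steps):
        state = np.where(w @ state >= 0, 1, -1)
    return state

rng = np.random.default_rng(1)
pattern = np.where(rng.random(100) < 0.5, 1, -1)
w = train(pattern[None, :])
noisy = pattern.copy()
flip = rng.choice(100, size=15, replace=False)  # corrupt 15% of the bits
noisy[flip] *= -1
print(np.array_equal(recall(w, noisy), pattern))  # True: the pattern is recovered
```

Each unit here is the kind of "simple, sparse" neuron the argument above is about; the interesting work all happens in the dense matrix-vector products, which is exactly what GPU cores are good at.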
This is interesting. I too am interested in simulations and fields like Artificial Life, where we start with some basic building blocks and hope to allow something capable of evolution to form.
My question is always "where do we start?" because, as you found out, if we start with the most fundamental physics simulation we can conceive of, we are unlikely to generate much of interest for some time, if ever.
But I do have a feeling that in order to get something truly novel and interesting it's going to have to "evolve" in an "environment" and the challenge will be in identifying whether a particular instance of primordial soup is on its way to developing more complex structures.
I strongly believe that the importance of the multi-billion year process of evolution is seriously underestimated by the AI community and that it's pure hubris to think we can short-circuit that entirely and simply reverse engineer the brain with fancy algorithms.
> But I do have a feeling that in order to get something truly novel and interesting it's going to have to "evolve" in an "environment" and the challenge will be in identifying whether a particular instance of primordial soup is on its way to developing more complex structures.
I thought that way for a long time too, which is why I chased the cellular automata ideas and eventually implemented many varieties. But nowadays I think it's all mostly torched by magnitudes.
Our best chance comes from not relying on the magnitudes. Instead of trying to evolve a universe, or a variety of entities, we need to focus on a single entity.
I used to think AI had a flawed premise, like you mention: attempting to develop a single individual is directly antithetical to how life normally develops. Nonetheless, my mind has changed since my automata dabblings. Engineering a single individual is the only method that has the slimmest chance in hell.