Revolutionizing Artificial General Intelligence by adding real-world understanding.
The AI world is keen on machine learning because it appears to work, but it has no genuine intelligence; as a result, there are many things it cannot do, does not do well, and will never do.
Future AI’s radical software creates connections on its own between different types of real-world sensory input like sight, sound, and touch, in the same way that a human brain interprets everything it knows in the context of everything else it knows.
Sallie emulates the processes of human thought, beginning with perception.
Sallie uses mobile sensory pods so the system can learn from a real-world environment and gain a fundamental understanding of physical objects, cause and effect, and the passage of time.
The knowledge Sallie acquires can then be used for other applications, including personal assistants like Alexa and Siri, automated customer service systems, and robots like Boston Dynamics' Spot.
Future AI’s approach vs. Machine Learning
The machine-learning fantasy is that if you simply add more computing power, the machine will become intelligent. It will not. Consider the development of expert systems decades ago: progress stalled because one cannot program all the possibilities of real-world interaction without any underlying understanding.
While machine learning has tremendous usefulness as a powerful statistical method, it lacks genuine intelligence, thereby limiting its abilities and applications now and in the future.
Future AI is overcoming the limitations of machine learning with unique graph algorithms and structures that are self-adaptive.
Sallie's "mind" runs on a powerful desktop computer while she interacts with the real world via "mobile sensory pods" that supply vision, hearing, speech, and touch. All this input is integrated into her "Universal Knowledge Store" (UKS), which leads to powerful general intelligence and underlying understanding.
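The UKS itself is proprietary, but the core idea of a knowledge graph that cross-links percepts from different senses to a shared concept can be sketched in a few lines. The following is a minimal illustration only, with hypothetical names (`KnowledgeStore`, `link`, `related`); it is not Future AI's actual implementation.

```python
from collections import defaultdict

class KnowledgeStore:
    """Illustrative sketch: nodes are concepts or percepts,
    edges are labeled relationships. Not the actual UKS."""

    def __init__(self):
        # node -> set of (relation, neighbor) pairs
        self.edges = defaultdict(set)

    def link(self, a, relation, b):
        # Store the relationship in both directions so either node
        # can serve as a starting point for retrieval.
        self.edges[a].add((relation, b))
        self.edges[b].add(("inverse-" + relation, a))

    def related(self, node, relation=None):
        """Return nodes connected to `node`, optionally filtered by relation."""
        return {b for (rel, b) in self.edges[node]
                if relation is None or rel == relation}

# Associating percepts from several senses with one concept:
ks = KnowledgeStore()
ks.link("ball", "looks-like", "round-red-shape")
ks.link("ball", "sounds-like", "bounce-noise")
ks.link("ball", "feels-like", "smooth-rubber")

print(ks.related("ball"))          # every percept tied to the concept
print(ks.related("bounce-noise"))  # a percept leads back to the concept
```

Because edges are stored in both directions, hearing a bounce can retrieve the concept "ball", which in turn retrieves what a ball looks and feels like, mirroring the cross-modal associations described above.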