Approaching Artificial General Intelligence by adding common sense.
Machine learning looks like it works, but it has no genuine intelligence. As a result, there are many things it can't do, doesn't do well, and never will do.
Future AI’s radical self-adaptive hypergraph software creates its own connections between different types of real-world sensory input, such as sight, sound, and touch, much as a human brain interprets everything it knows in the context of everything else it knows.
Sallie’s “mind” emulates the processes of human thought, beginning with perception.
Sallie can learn from a real-world environment, gaining a fundamental understanding of physical objects, cause and effect, and the passage of time. Her mobile sensory pods help her learn about the real world, including:
- 3D objects in a physical environment
- Object persistence
- Passage of time
The knowledge Sallie acquires can then be used for other applications, including:
- Robotics, particularly autonomous robots
- Digital assistants like Alexa, Siri, Bixby, and others
- Language translation
“Future AI is a company that anyone interested in the AI landscape should be paying close attention to over the coming years.”
Future AI’s approach vs. Machine Learning
The machine learning fantasy is that if you simply add more computing power, the machine will become intelligent. It will not. While machine learning is tremendously useful as a powerful statistical method, its abilities and applications are limited, both now and in the future.
Instead, Future AI is introducing new data structures and algorithms that take a completely different approach to intelligence.
Future AI is overcoming the limitations of machine learning with unique graph algorithms and structures that are self-adaptive.
Sallie’s “mind” runs on a powerful desktop computer while she interacts with the real world via “mobile sensory pods” that supply vision, hearing, speech, and touch. All of this input is integrated into her “Universal Knowledge Store” (the UKS algorithm), which provides the foundation for general intelligence and underlying understanding.
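To make the idea of cross-modal integration concrete, here is a minimal sketch of a knowledge store that links percepts from different modalities into one graph. All names and the API here are illustrative assumptions; the actual UKS data structures and algorithms are Future AI's own and are not described in this article.

```python
from collections import defaultdict

class KnowledgeStore:
    """Illustrative sketch only: a tiny graph that associates percepts
    across modalities. Not the real UKS implementation."""

    def __init__(self):
        # each node is a (modality, label) pair; edges are associations
        self.edges = defaultdict(set)

    def add_percept(self, modality, label):
        node = (modality, label)
        self.edges.setdefault(node, set())
        return node

    def link(self, a, b):
        # create a bidirectional association between two percepts
        self.edges[a].add(b)
        self.edges[b].add(a)

    def associations(self, node):
        # everything this percept is connected to, across all modalities
        return self.edges[node]

# a toy example: one object experienced through three senses
uks = KnowledgeStore()
ball_sight = uks.add_percept("vision", "red ball")
ball_touch = uks.add_percept("touch", "smooth sphere")
ball_sound = uks.add_percept("hearing", "bounce")
uks.link(ball_sight, ball_touch)
uks.link(ball_sight, ball_sound)
```

After linking, querying the visual percept recalls its tactile and auditory associations, a simple stand-in for interpreting one piece of knowledge in the context of the rest.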