An AI that can be understood
One of the most popular AI frameworks of recent years is the deep neural network, whose structure is based on a simplified model of the network of neurons in the brain. Indeed, Oceanit has used deep networks in many applications and demonstrated state-of-the-art performance. However, just as the human brain is difficult to understand, understanding why a deep network makes the predictions it does can be difficult or even impossible. Our goal with NoME is to create an AI system that overcomes this limitation, one whose decisions can be easily understood by and explained to the user.
Linguistics-based architecture
Rather than drawing inspiration from the physical workings of the brain, our approach is anthronoetic: it is based on how humans reason abstractly. Humans do not reason at the level of individual neurons. Instead, we form complex combinations of a richer set of concepts. To describe what these concepts are and how they can interact, NoME uses ideas from Chomskyan linguistics, which provide both a theory of the structure of human cognition and a means of conveying those thoughts to the user.
Explanations, not predictions
The difference between anthronoetic and statistical AI is best seen in their outputs. Statistical approaches such as deep networks simply generate predictions, answering only “What?” questions. By contrast, our anthronoetic approach creates explanations, answering the more fundamental “Why?” questions, from which predictions follow. This explanatory level of output enables NoME to show how it derived each conclusion and thus to work in conjunction with experts, who need to be able to trust their AI collaborator, particularly in safety-critical applications where a human must remain in the loop to correct costly mistakes. Importantly, such error-correction works both ways: humans checking AI and AI checking humans, so as to eliminate error, converge on truth, and make unlimited progress.
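To make the contrast concrete, here is a minimal illustrative sketch, not NoME's implementation: all names, rules, and thresholds are hypothetical, chosen only to show the difference between an output that is a bare prediction and one that is a conclusion together with the reasons it follows from, which a human collaborator can check step by step.

```python
# Illustrative sketch only (hypothetical names and rules; not NoME's method):
# contrasting a prediction-only output with an explanation-carrying output.
from dataclasses import dataclass


@dataclass
class Explanation:
    conclusion: str
    reasons: list[str]  # human-readable statements the conclusion follows from


def black_box_predict(features: list[float]) -> float:
    """Statistical-style output: a bare score answering only 'What?'."""
    # Stand-in for an opaque model; the weights carry no human meaning.
    weights = [0.4, -0.2, 0.7]
    return sum(w * x for w, x in zip(weights, features))


def rule_based_explain(temperature_c: float, pressure_kpa: float) -> Explanation:
    """Explanatory-style output: a conclusion plus the reasons behind it,
    answering 'Why?' so that each step can be checked by a person."""
    reasons = []
    if temperature_c > 90:
        reasons.append(f"temperature {temperature_c} C exceeds the 90 C limit")
    if pressure_kpa > 500:
        reasons.append(f"pressure {pressure_kpa} kPa exceeds the 500 kPa limit")
    conclusion = "unsafe" if reasons else "safe"
    return Explanation(conclusion, reasons or ["all readings are within limits"])


if __name__ == "__main__":
    print(black_box_predict([0.9, 0.1, 0.8]))   # a number, with no account of why
    result = rule_based_explain(95.0, 480.0)
    print(result.conclusion, "because", "; ".join(result.reasons))
```

In the second style, an expert can accept, reject, or correct each stated reason, which is the kind of two-way error-correction described above.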