Theory of Theories

Engineering has been invaluable not only for building intelligence but also for understanding it. The success of Deep Learning, Reinforcement Learning and powerful computers has enabled machines to solve some virtual sensory-motor tasks like Atari and abstract reasoning tasks like Go and Poker. However, the solutions obtained by these methods are tailored to the tasks they are trained on. This is the biggest strength of machine learning -- if you can frame the problem as a loss function, then you can optimize it to make useful predictions. However, the most common problems faced by human or animal minds are of a very different kind.

Minds have to reproduce the structure of the environment via learning. Only then can they efficiently make various predictions about the past, present and future. Evolution must have discovered a large collection of useful predictive questions for animals to solve during their lifetimes. Answering these questions leads to the formation and verification of knowledge.

Reinforcement and Deep Learning clearly fit the bill for optimizing future predictions. However, the real mystery in AI is the nature and origin of these predictive questions. This is where we need good theories of the mind, as existing theories are narrow and offer knowledge-free methodologies for answering specifically framed questions. To construct minds that acquire a vast variety of knowledge representations, we need a theory over many such narrow theories. However, the majority of recent work focuses too much on producing behaviors through brute-force exploration. This is mainly a cultural problem and is one of the main reasons why we are stuck, or soon will be.

Most of the currently dominant ideas in AI come solely from the ML community. ML offers simple mathematical objects like matrices, folded in space and time, to map inputs to outputs in datasets. This paradigm is being pushed in all conceivable directions and will continue to produce valuable things. However, it says very little about which types of predictive questions to frame, or about ways to represent the rich structure of the environment. It might be a very comfortable and somewhat productive local minimum.

So what is missing? I believe we are stuck because most of the conceptual and theoretical work in AI stopped decades ago. The 'Society of Mind' hypothesis was one of the last distinct attempts to sketch a theory of the mind. Perhaps it is time to combine ideas from many scientific disciplines to come up with a theory of theories -- hierarchical and modular reinforcement learning, deep learning, core knowledge in developmental psychology, programming language research, probabilistic methods, designing representation modules for perception all the way from raw sensations to primal sketches to 3D frames of reference, motor primitive discovery, and multiagent frameworks in economics.

Tejas Kulkarni, London
