Looking ahead to 2031, 2040, and beyond, there are outstanding questions to consider in pursuit of significant advances in learning and reasoning.
Join host Dr. Benji Maruyama, AFRL Materials and Manufacturing Directorate, and co-host Doug Riecken, AFOSR program officer for the Science of Information, Computation, Learning, and Fusion, on February 2, 2022, from 2-4pm EST for a lively discussion with AI leaders Katerina Fragkiadaki, Carla Gomes, Rauf Izmailov, Bart Selman, and Vladimir Vapnik as they debate the next big question in the science of artificial intelligence.
This event is part of an ongoing series of 2-hour sessions with thought leaders on the subject.
Agenda
INTRO SECTION
2:00-2:10 EST
Welcome from AFOSR
Dr. Benji Maruyama, AFRL Materials and Manufacturing Directorate
Dr. Doug Riecken, AFOSR
THINKER/SPEAKER SECTION
2:10-3:25 EST
Remarks and Panel Discussion
Each speaker will present their question(s) in ~7-8 minutes, with a couple of comments to communicate the key ideas; then at least two of the other four speakers will comment on the question(s) for ~7-8 minutes to explore more details. Speaking order:
Rauf Izmailov, Peraton Labs
Vladimir Vapnik, Columbia U.
Katerina Fragkiadaki, Carnegie Mellon U.
Carla Gomes, Cornell U.
Bart Selman, Cornell U.
OPEN DISCUSSION BY SPEAKERS WITH ALL ATTENDING
3:25-4:00 EST
Interactive Discussion
We invite all attendees to pose questions/topics for the panel speakers.
Panel Bios
Katerina Fragkiadaki, Carnegie Mellon University
Twenty years from now, I believe the problems on which we are stuck today (in visual learning and reasoning) will have been resolved in the following ways:
i) Hierarchical entity decompositions: We will be able to learn effectively by visual observation and build not the hierarchies of feature grids we have today, but hierarchies of symbol-like entities of sub-parts, parts, and objects, via appropriately structured and modular deep architectures, that is, by re-designing the connectionist framework (not by marrying existing neural and symbolic methods).
ii) Feedback: We will have on-the-fly feedback and adaptation within generative models, with varying amounts of compute time, to infer entity decompositions in novel scenes: the more unfamiliar a scene, the longer the inference time.
iii) Abstraction and analogical reasoning: Analogical reasoning will be carried out by binding detected entities and selected features of them across contexts, tasks, skills, and models, abstracting away irrelevant information depending on the context.
iv) Continual learning with two modes of learning: model-based and reactive. Results of (slow) planning or deliberate search will be continually distilled into fast reactive processes (a minimal sketch of such a distillation loop follows this bio).
Full bio
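One concrete reading of point iv is AlphaZero-style policy distillation: a slow planner produces target action distributions, and a fast reactive network is continually trained to imitate them. The sketch below is a hypothetical minimal illustration, not Fragkiadaki's implementation; the "planner" is a stand-in fixed linear scorer.

```python
# Hypothetical sketch: distilling a slow planner into a fast reactive policy.
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 8, 4
torch.manual_seed(0)
W_PLAN = torch.randn(STATE_DIM, N_ACTIONS)     # fixed scorer standing in for search

def slow_planner(states: torch.Tensor) -> torch.Tensor:
    """Stand-in for slow, deliberate planning: returns a target action
    distribution per state (in reality: tree search, MPC, etc.)."""
    return torch.softmax(states @ W_PLAN, dim=-1)

# Fast reactive policy: answers in a single cheap forward pass.
policy = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(), nn.Linear(32, N_ACTIONS))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

for step in range(500):                        # continual distillation loop
    states = torch.randn(64, STATE_DIM)        # states encountered online
    with torch.no_grad():
        targets = slow_planner(states)         # expensive call, run off the fast path
    log_probs = policy(states).log_softmax(dim=-1)
    loss = nn.functional.kl_div(log_probs, targets, reduction="batchmean")
    opt.zero_grad(); loss.backward(); opt.step()
```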
Carla Gomes, Cornell University
Artificial Intelligence (AI) is a rapidly advancing field inspired by human intelligence. AI systems are now performing at human and even superhuman levels on various tasks, such as image identification and face and speech recognition. Can AI also dramatically accelerate scientific discovery and perhaps even win a Nobel Prize in science? Hiroaki Kitano first posed this question in 2016. We further discussed this question at a Turing Institute workshop in 2020, chaired by Gil, Kitano, and King, and formulated the AI Scientist Grand Challenge. In my own research, I am interested in accelerating scientific discovery for a sustainable future, particularly materials discovery for clean energy. The tremendous AI progress that we have witnessed in the last decade has been largely driven by deep learning advances and heavily hinges on the availability of large annotated datasets to supervise model training. However, scientists generally only have access to small datasets and incomplete data: Scientists amplify a few data examples with human intuitions and detailed reasoning from first principles for discovery. Our AI systems need to encapsulate the scientific process for scientific discovery: We need AI systems that combine learning with reasoning about scientific knowledge and find suitable problem representations for scalable solutions. Our AI systems need to be able to predict far outside the training distributions for scientific discovery, while current machine learning systems primarily perform data interpolation. Furthermore, our AI systems need to interpret results and understand causation beyond correlation to discover new scientific concepts and knowledge. Can we automate such a hybrid scientific discovery strategy? Full Bio
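A toy illustration of the interpolation-versus-extrapolation point (a hypothetical sketch, not drawn from Gomes's research): a flexible model fit on a narrow range of data predicts well inside that range and fails badly outside it.

```python
# A flexible model interpolates well inside its training range [0, 1]
# but extrapolates badly far outside it.
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, 200)          # data only covers [0, 1]
y_train = np.sin(2 * np.pi * x_train)         # underlying "physical law"

model = np.poly1d(np.polyfit(x_train, y_train, deg=9))  # stand-in learned model

for x in (0.5, 3.0):                          # inside vs. far outside the data
    print(f"x={x}: model={model(x):+.3e}  truth={np.sin(2 * np.pi * x):+.3f}")
# The x=3.0 prediction diverges wildly: data interpolation alone does not
# substitute for reasoning from first principles.
```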
Rauf Izmailov, Peraton Labs, Chief Scientist and Fellow
"Making AI decisions more -- or less -- confident"
During the past decade, it became painfully clear that purely data-driven AI/ML, while becoming superior to humans in a number of well-publicized tasks, can be easily fooled into producing wrong, over-confident decisions in situations that would rarely, if ever, confuse a human. Although humans also learn based on available data, that learning "bakes in" both inborn and acquired intuitive rules and common sense. The implications of this are already leveraged in studies of human decision making, where Kahneman's arguments about rational decisions have been juxtaposed with Gigerenzer's work on the role of uncertainty. Current AI/ML systems do not handle uncertainty well: they over-extend their putative knowledge into areas where it would be better to refrain from decisions, and they do not subject their own model and data to introspection. Intelligent AI/ML systems have to learn to be skeptical, realistic, and intuitive about themselves, about their data, and about their inputs. Full Bio
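A toy rendering of the "refrain from decisions" point: a classifier with a reject option that abstains when its confidence falls below a threshold. This is a minimal hypothetical sketch, not Izmailov's method; in practice raw softmax scores are themselves often over-confident, so the threshold would be calibrated on held-out data.

```python
# Minimal reject-option classifier: predict only when confident enough.
import numpy as np

def predict_with_abstention(probs: np.ndarray, threshold: float = 0.9):
    """Return the top class index, or None (abstain) when the model's
    top-class probability falls below the confidence threshold."""
    top = int(np.argmax(probs))
    return top if probs[top] >= threshold else None

print(predict_with_abstention(np.array([0.97, 0.02, 0.01])))  # 0: confident
print(predict_with_abstention(np.array([0.40, 0.35, 0.25])))  # None: abstain
```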
Bart Selman, Cornell University
Knowledge-centric AI
Data-driven deep learning has been transformative for AI. Deep learning is remarkably effective, and I expect much more progress to come with applications in all areas of AI. However, the data-centric approach to AI (let's call it AI 2.0) also forces us into a specific paradigm. In particular, we formulate AI challenges by emphasizing quantitative measures using training and test data. This paradigm has brought us significant progress. However, it will ultimately limit us. To reach the next level of more general and robust AI systems (AI 3.0), I will argue we need to move to a knowledge-centric approach. The difference between knowledge and data is somewhat subtle but practically important. More general AI capabilities will require developing formalisms that can incrementally grow their knowledge base in a process that is very different from training on data alone. More specifically, the idea that simply having enough data or generating enough data can compensate for a lack of knowledge is misguided when facing the inherent combinatorial nature of our conceptual world. I will illustrate this issue by considering AI for mathematical discovery. Full Bio
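A back-of-the-envelope illustration of the combinatorial point (a hypothetical example, not Selman's): even a trillion-example dataset covers a vanishing fraction of the situations expressible with a modest number of binary features.

```python
# Why data alone cannot cover a combinatorial concept space: with n binary
# features there are 2**n distinct situations, so even web-scale datasets
# observe almost none of them.
n_features = 100
n_situations = 2 ** n_features            # about 1.27e30 distinct situations
dataset_size = 10 ** 12                   # a trillion training examples
coverage = dataset_size / n_situations
print(f"fraction of situations covered: {coverage:.2e}")  # ~7.89e-19
```

Knowledge (rules, theories, invariants) compresses this space in a way that no feasible amount of training data can.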
Vladimir Vapnik
New Mathematical, Algorithmic, and Philosophical Development of Learning Theory
1. The main content of learning theory is the analysis of two selection problems, which arise from the two different modes of convergence of functions in Hilbert space: the weak mode and the strong mode of convergence.
2. Using the mechanisms of weak convergence, the learning machine solves the first selection problem: selecting a set of admissible functions from the functions of Hilbert space.
3. The second selection problem constitutes the existing understanding of the learning problem: selecting the desired function from the set of admissible functions.
4. To realize the weak mode of convergence, one needs a finite set of functions belonging to Hilbert space (the predicates). Formally, any function of Hilbert space can serve as a predicate. Selection of the appropriate predicates is the informal part of the learning model; it reflects prior knowledge of the problems of interest existing in Nature.
5. Selection of predicates forms the philosophical part of the learning problem, which has a direct connection both to the concept of what intellect is and to classical models of the philosophy of Nature.
6. In the framework of this model, there exists a complete solution of learning problems in the RKHS of parametric families of kernels, which allows learning algorithms to be constructed effectively.
Full Bio
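For readers unfamiliar with the two modes of convergence the abstract refers to, the standard Hilbert-space definitions are sketched below in LaTeX. The last display, tying weak convergence over a finite predicate set to the admissible-function constraints, is our gloss on points 2 and 4, not part of the abstract.

```latex
% Strong convergence of f_n to f in a Hilbert space H: convergence in norm.
\lim_{n \to \infty} \| f_n - f \|_{H} = 0
% Weak convergence: convergence of inner products against every test function.
\lim_{n \to \infty} \langle f_n - f, \psi \rangle_{H} = 0
  \quad \text{for all } \psi \in H
% Gloss: enforcing weak convergence only over a finite set of chosen
% predicates \psi_1, \dots, \psi_m yields constraints that carve out the
% set of admissible functions:
\langle f, \psi_k \rangle_{H} = \langle f^{*}, \psi_k \rangle_{H},
  \qquad k = 1, \dots, m
```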
Dr. Benji Maruyama, AFRL Materials and Manufacturing Directorate
Prof. Hiroaki Kitano has issued the “Nobel-Turing Challenge” for an AI Scientist to win a Nobel Prize by 2050.
He notes that scientific research today remains at a pre-industrial-revolution level.
Materials scientists are pioneering the use of AI and autonomy to build research robots that perform cognitive and manual labor orders of magnitude faster than humans alone, leading to a Moore's-Law-like exponential increase in the speed of research. To enable this, closed-loop autonomous research systems need advanced AI methods to understand complex image and spectral data and to make rapid, reasoned decisions in high-dimensional parameter space (a minimal sketch of such a loop appears after the citation below).
Together, teams of human and AI scientists can revolutionize the research process.
Stach, Eric, et al. "Autonomous experimentation systems for materials development: A community perspective." Matter (2021).
Full Bio
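A hypothetical skeleton of the closed loop described above (not AFRL's actual system): an AI planner proposes the next experimental condition, a robot runs it, and the measured result feeds back into the next decision. A real system would replace the epsilon-greedy chooser with Bayesian optimization over a high-dimensional parameter space, and the stand-in response surface with instrument data.

```python
# Closed-loop autonomous research: plan -> experiment -> learn -> repeat.
import numpy as np

rng = np.random.default_rng(1)

def run_experiment(x: float) -> float:
    """Stand-in for a robotic experiment: an unknown, noisy response surface."""
    return -(x - 0.7) ** 2 + 0.05 * rng.standard_normal()

X, y = [], []
for trial in range(30):
    if len(X) < 5 or rng.random() < 0.2:       # explore a new condition
        x_next = rng.uniform(0.0, 1.0)
    else:                                      # exploit: perturb the best so far
        best_x = X[int(np.argmax(y))]
        x_next = float(np.clip(best_x + 0.05 * rng.standard_normal(), 0.0, 1.0))
    X.append(x_next)                           # planner commits to a condition
    y.append(run_experiment(x_next))           # robot executes, instrument measures
best = int(np.argmax(y))
print(f"best condition found: x={X[best]:.3f}, response={y[best]:.3f}")
```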
Doug Riecken, AFOSR program officer for the Science of Information, Computation, Learning, and Fusion
Riecken is a trained concert pianist with a B.A. from the Manhattan School of Music and further studies at the Juilliard School of Music. He spent many years performing classical, jazz, and rock styles on international concert tours with world-renowned artists before he switched to a career in cognitive and computing science. He received his PhD from Rutgers University under thesis advisor Dr. Marvin Minsky of MIT, a founding father of artificial intelligence. Riecken and Minsky spent 30+ years in friendship researching learning and the mind. Riecken is a thought leader in the areas of big data analytics and machine learning, human-computer interaction and design, knowledge discovery and data mining, global cloud enterprise architectures, and privacy management. He joined the Air Force Office of Scientific Research as a program officer in 2014 and is a senior member of the AFRL ACT3 team. Full Bio