Looking ahead to 2031, and to 2040 and beyond, there are outstanding questions to consider in pursuit of significant advances in learning and reasoning.
Join host Dr. Benji Maruyama, AFRL Materials and Manufacturing Directorate, and co-host Doug Riecken, AFOSR program officer for the Science of Information, Computation, Learning, and Fusion, on February 2, 2021, from 2-4 p.m. EDT for a lively discussion with A.I. leaders Katerina Fragkiadaki, Carla Gomes, Rauf Izmailov, Bart Selman, and Vladimir Vapnik as they debate the next big questions in the science of artificial intelligence.
This is an ongoing series of 2-hour sessions with thought leaders on the subject.
Guests are welcome to join a live stream on AFOSR's Facebook page.
Agenda
INTRO SECTION
2:00-2:05 EDT
Welcome from Dr. Benji Maruyama, AFRL Materials and Manufacturing Directorate, and Dr. Doug Riecken, AFOSR
THINKER/SPEAKER SECTION
2:08-3:20 EDT
Remarks and Panel Discussion
Each speaker will present their question(s) in roughly 7-8 minutes, along with a few comments to communicate the key ideas; at least two of the other four speakers will then comment on the question(s) for roughly 7-8 minutes to explore the details further. Speaking order:
Katerina Fragkiadaki
Carla Gomes
Rauf Izmailov
Bart Selman
Vladimir Vapnik
Dr. Benji Maruyama, AFRL Materials and Manufacturing Directorate
Doug Riecken, AFOSR
OPEN DISCUSSION BY SPEAKERS WITH ALL ATTENDING
3:20-4:00 EDT
Interactive Discussion
We invite all attendees to pose questions and topics for the panel speakers.
Panel Bios
Twenty years from now, I believe the problems on which we are stuck today (in visual learning and reasoning) will have been resolved in the following ways:
i) Hierarchical entity decompositions: We will be able to learn effectively by visual observation and build not hierarchies of feature grids, as we have today, but hierarchies of symbol-like entities of sub-parts, parts, and objects, via appropriately structured and modular deep architectures, that is, by re-designing the connectionist framework (not by marrying existing neural and symbolic methods).
ii) Feedback: We will have on-the-fly feedback and adaptation within generative models with varying amounts of compute time, to infer entity decompositions in novel scenes: the more unfamiliar a scene, the longer the inference time.
iii) Abstraction and analogical reasoning: Analogical reasoning will be carried out by binding detected entities and selected features of them across contexts, tasks, skills, and models, abstracting away irrelevant information depending on the context.
iv) Continual learning with two modes of learning: model-based and reactive. Results of (slow) planning or deliberate search would be continually distilled to fast reactive processes.
Full Bio
Yann LeCun, NYU Center for Data Science and Facebook
LeCun's current interests include AI, machine learning, computer perception, mobile robotics, and computational neuroscience. He has published over 180 technical papers and book chapters on these topics as well as on neural networks, handwriting recognition, image processing and compression, and dedicated circuits and architectures for computer perception. The character recognition technology he developed at Bell Labs is used by several banks around the world to read checks and was reading between 10 and 20% of all the checks in the US in the early 2000s. His image compression technology, called DjVu, is used by hundreds of web sites and publishers and millions of users to access scanned documents on the Web. Since the late '80s he has been working on deep learning methods, particularly the convolutional network model, which is the basis of many products and services deployed by companies such as Facebook, Google, Microsoft, Baidu, IBM, NEC, AT&T and others for image and video understanding, document recognition, human-computer interaction, and speech recognition. Full Bio
Tom Mitchell, School of Computer Science at Carnegie Mellon University
Mitchell's research lies in machine learning, artificial intelligence, and cognitive neuroscience. His current research includes developing machine learning approaches to natural language understanding by computers, as well as brain imaging studies of natural language understanding by humans. A pioneer in artificial intelligence and machine learning, Mitchell’s research focuses on statistical learning algorithms for discovering how the human brain represents information and for enabling computers to understand the meaning of what humans say and write. His work with colleagues in the Psychology Department produced the first computational model to predict brain activation patterns associated with virtually any concrete noun, work that has since been extended to other word types, word sequences and emotions. His Never Ending Language Learner is a computer program that searches through web pages 24/7 as it teaches itself to read. Full Bio
Doug Riecken, AFOSR program officer for the Science of Information, Computation, Learning, and Fusion
Riecken is a trained concert pianist with a B.A. from the Manhattan School of Music and studies at the Juilliard School. He spent many years performing classical, jazz, and rock styles on international concert tours with world-renowned artists before switching to a career in cognitive and computing science. He received his PhD from Rutgers University under thesis advisor Dr. Marvin Minsky of MIT, a founding father of artificial intelligence; Riecken and Minsky spent more than 30 years in friendship researching learning and the mind. Riecken is a thought leader in big data analytics and machine learning, human-computer interaction and design, knowledge discovery and data mining, global cloud enterprise architectures, and privacy management. He joined the Air Force Office of Scientific Research as a program officer in 2014 and is a senior member of the AFRL ACT3 team. Full Bio
Stephen "Cap" Rogers, AFRL Automatic Target Recognition and Sensor Fusion
Cap serves as the principal scientific authority and independent researcher in the field of multi-sensor automatic target recognition (ATR) and sensor fusion. He initiates, technically plans, coordinates, evaluates, and conducts research and development to advance the knowledge of interdisciplinary ATR and sensor fusion systems for all Air Force aircraft, missile, and space systems. Rogers leads collaboration across AFRL in object detection, tracking, geo-location, identification, and supporting technologies. He also conducts research and development across the broad area of ATR and sensor-fusion technology, including phenomenology modeling, model-based and learning algorithms, evaluation, and tracking, as well as in image and signal processing, synthetic target and scene modeling, resource allocation, and evidence accrual, aimed at decreasing the cost and improving the performance of Air Force and Department of Defense systems. Full Bio