Looking ahead to 2031, 2040, and beyond, there are outstanding questions to consider in exploring significant advances in learning and reasoning.
Join host X and co-host Doug Riecken, AFOSR program officer for the Science of Information, Computation, Learning, and Fusion, on November 8, 2022, from 2-4pm EDT for a lively discussion with A.I. leaders: X as they debate the next big question in the science of artificial intelligence.
This is an ongoing series of 2-hour sessions with thought leaders on the subject.
Agenda
INTRO SECTION
2:00-2:10 EDT
Welcome from AFOSR: Dr. Doug Riecken, AFOSR
THINKER/SPEAKER SECTION
2:10-3:25 EDT
Remarks and Panel Discussion
Each speaker will present their question(s) in ~7-8 minutes, along with a few comments to communicate the key ideas; at least two of the other four speakers will then comment on the question(s) for ~7-8 minutes to explore further details. Speaking order:
X
OPEN DISCUSSION BY SPEAKERS WITH ALL ATTENDING
3:25-4:00 EDT
Interactive Discussion
We invite all attendees to pose questions and topics for the panel speakers.
Panel Bios
Christine Schubert Kabban, Air Force Institute of Technology (AFIT)
Question: One of the main purposes of artificial intelligence (AI) is to ingest information in order to provide a decision, or at least inform one. Although we may imagine what AI looks like in 2040 or beyond, a leading question should be: “What do you want AI to look like in 2040 or beyond?” In particular, what do you need AI to do, and how do you get there? Under the sense-learn-adapt-infer paradigm for AI, current and planned systems can be examined to determine what is missing and how the system can be improved. For example, does the system need more advanced sensor technology, greater learning capability, better adaptation rules for new environments, or better prediction and inference? These features can be explored and addressed through mathematical constructs that act on the data inputs; rules and means can be built to enhance the system’s adaptability, efficiency, and data security and integrity. Finally, systems need to be dynamic and self-assessing in order to assure that system performance is not degraded and, in particular, that inference and prediction remain reliable over time. This discussion element 1) addresses the proposed leading question and considerations toward solutions, drawing on recent experiences that inspire advances in sensing technology, adaptable algorithms, and inferential methods, and 2) ends with two parting challenges as drivers for all AI applications and technology.
Link to bio:
Dr. Ram M. Narayanan, Penn State University
Question: Technological advances in data processors and storage devices have resulted in sensors grappling with the problem of big data, namely too much data. Sensors frequently accumulate vast stores of data they have no idea how to process, and may not be able to learn anything useful from. To add to the problem, a lot of data has a lifespan. At some point in time, data become irrelevant, inaccurate, or outdated. But often they are held onto anyway in the mistaken belief that they might come in useful at some point.
In addition, data acquisition costs money – requiring storage, electricity to power it, and attention to security and data compliance. The problem becomes even larger when we consider the predicted growth in the data acquired by sensors by the year 2040. A sensor’s ability to be effective will increasingly be driven by how well it can leverage data, apply analytics, and implement innovative technologies. If sensors are to avoid drowning in data while thirsting for insights, they must develop a smart data strategy that focuses on the few things they really need.
Rather than worrying about “big data,” sensors should instead focus on “smart data” – in other words, defining the questions they need answered, and then collecting and analyzing only the data that will serve them in answering those questions. The information elasticity formulation permits the selection of the proper amount of data for optimum decision effectiveness in typical sensor applications. The next big question in sensor information processing by 2040 is: “How can we learn to avoid information glut from BIG DATA?”
Some leading questions to answer the big question are:
How much of data is just right to perform the task at hand efficiently and economically in a timely manner using the information elasticity framework?
How can we make sensors understand simple spoken and written language to understand our needs and requirements as humans would?
How can we intelligently blend diverse types of data, namely, multisensor, multiscale, multitemporal, and multidomain so as to capitalize on cross domain data dependencies and minimize data redundancies?
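The “smart data” idea above (define the question first, then ingest only the data that improves the answer) can be sketched as a stopping rule on data collection. This is a toy illustration only: the function names `marginal_gain` and `collect_smart`, and the running-mean proxy for decision value, are assumptions for the sketch, not the actual information elasticity formulation referenced by the speaker.

```python
# Toy sketch of a "smart data" stopping rule. We keep ingesting sensor
# samples only while the marginal improvement in a decision statistic
# (here, the running mean, used as a simple proxy) justifies the
# per-sample cost of acquiring and storing one more reading.

def marginal_gain(values, new_value):
    """How much the running-mean estimate shifts when one more sample is
    added -- a stand-in for the decision value of the new data."""
    if not values:
        return abs(new_value)
    old_mean = sum(values) / len(values)
    new_mean = (sum(values) + new_value) / (len(values) + 1)
    return abs(new_mean - old_mean)

def collect_smart(stream, cost_per_sample):
    """Greedy elasticity-style rule: stop collecting once the marginal
    gain from one more sample drops below its cost."""
    kept = []
    for x in stream:
        if marginal_gain(kept, x) < cost_per_sample:
            break
        kept.append(x)
    return kept
```

For example, `collect_smart([10.0, 10.5, 9.8, 10.1], 0.2)` keeps only the first two readings, because the third barely moves the estimate relative to its cost. Real systems would replace the running-mean proxy with a task-specific measure of decision effectiveness.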
Speaker, Organization
Question:
Doug Riecken, AFOSR program officer for the Science of Information, Computation, Learning, and Fusion
Riecken is a trained concert pianist with a B.A. from the Manhattan School of Music and studies at the Juilliard School of Music. He spent many years performing classical, jazz, and rock styles on international concert tours with world-renowned artists before he switched to a career in cognitive and computing science. He received his PhD from Rutgers University under thesis advisor Dr. Marvin Minsky of MIT, a founding father of artificial intelligence. Riecken and Minsky spent 30+ years in friendship researching learning and the mind. Riecken is a thought leader in the areas of big data analytics and machine learning, human-computer interaction and design, knowledge discovery and data mining, global cloud enterprise architectures, and privacy management. He joined the Air Force Office of Scientific Research as a program officer in 2014 and is a senior member of the AFRL ACT3 team. Full Bio