Looking toward 2031, and to 2040 and beyond, there are outstanding questions to consider in exploring significant advances in learning and reasoning.
Join host Dr. Scott Clouse, Autonomy Capability Team 3 (ACT3), and co-host Dr. Doug Riecken, AFOSR program officer for the Science of Information, Computation, Learning, and Fusion, on November 8, 2022, from 2-4pm EDT for a lively discussion with AI leaders Dr. Christine Schubert Kabban (Air Force Institute of Technology, AFIT), Dr. Ram M. Narayanan (Penn State University), Dr. Muralidhar Rangaswamy (Air Force Research Laboratory, AFRL), Dr. Antonia Papandreou-Suppappola (Arizona State University), Dr. Mark E. Oxley (AFIT), and Dr. Scott Clouse (ACT3) as they debate the next big question in the science of artificial intelligence.
This is an ongoing series of 2-hour sessions with thought leaders on the subject.
Agenda
INTRO SECTION
2:00-2:10 EDT
Welcome from AFOSR
Dr. Scott Clouse, Autonomy Capability Team 3 (ACT3)
Dr. Doug Riecken, AFOSR
THINKER/SPEAKER SECTION
2:10-3:25 EDT
Remarks and Panel Discussion
Each thinker will present their ideas in roughly 6-7 minutes; after each presentation, all the thinkers will interact. Speaking order:
Dr. Christine Schubert Kabban, Air Force Institute of Technology (AFIT)
Dr. Ram M. Narayanan, Penn State University
Dr. Muralidhar Rangaswamy, Air Force Research Laboratory (AFRL)
Dr. Antonia Papandreou-Suppappola, Arizona State University
Dr. Mark E. Oxley, Air Force Institute of Technology (AFIT)
Dr. Scott Clouse, Autonomy Capability Team 3 (ACT3)
OPEN DISCUSSION BY SPEAKERS WITH ALL ATTENDING
3:25-4:00 EDT
Interactive Discussion
We invite all attendees to pose questions and topics for the panel speakers.
Panel Bios
Dr. Christine Schubert Kabban, Air Force Institute of Technology (AFIT)
Question: One of the main purposes of artificial intelligence (AI) is to ingest information in order to provide, or at least inform, a decision. Although we may imagine what AI looks like in 2040 or beyond, a leading question should be: “What do you want AI to look like in 2040 or beyond?” In particular, what do you need AI to do, and how do you get there? Under the sense-learn-adapt-infer paradigm for AI, current and planned systems can be examined to determine what is missing and how the system can be improved. For example, does the system need more advanced sensor technology, better learning capability, better adaptation rules for new environments, or better prediction and inference? These features can be explored and addressed through mathematical constructs that act on the data inputs; rules and means can be built to enhance the system’s adaptability, efficiency, and data security and integrity. Finally, systems need to be dynamic and self-assessing in order to ensure that system performance is not degraded and, in particular, that inference and prediction remain reliable over time. This discussion element 1) addresses the proposed leading question and considerations toward solutions through recent experiences that inspire advances in sensing technology, adaptable algorithms, and inferential methods, and 2) ends with two parting challenges as drivers for all AI applications and technology.
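To make the self-assessment idea concrete, here is a minimal sketch in Python. It assumes an sklearn-style model exposing predict_proba; the class and threshold are illustrative assumptions, not a description of any fielded system. The wrapper monitors its own inference confidence over a rolling window and flags possible degradation, the cue to re-sense, re-learn, or adapt:

```python
from collections import deque

class SelfAssessingClassifier:
    """Wraps a trained model with a rolling self-assessment of its own
    reliability, in the spirit of the sense-learn-adapt-infer paradigm.
    (Hypothetical sketch; assumes an sklearn-style predict_proba.)"""

    def __init__(self, model, window=500, min_confidence=0.8):
        self.model = model                    # any object with predict_proba()
        self.recent_conf = deque(maxlen=window)
        self.min_confidence = min_confidence

    def infer(self, x):
        probs = self.model.predict_proba([x])[0]
        confidence = max(probs)
        self.recent_conf.append(confidence)   # record evidence about ourselves
        return probs.argmax(), confidence

    def is_degraded(self):
        """Flag degradation when average confidence over the recent window
        drops below the threshold -- a signal that inference may no longer
        be reliable and the system should adapt."""
        if len(self.recent_conf) < self.recent_conf.maxlen:
            return False                      # not enough evidence yet
        return sum(self.recent_conf) / len(self.recent_conf) < self.min_confidence
```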
Dr. Ram M. Narayanan, Penn State University
Question: Technological advances in data processors and storage devices have left sensors grappling with the problem of big data, namely too much data. Sensors frequently accumulate vast stores of data they have no idea how to process and may not be able to learn anything useful from. To add to the problem, much data has a lifespan: at some point, data become irrelevant, inaccurate, or outdated. Yet they are often held onto anyway in the mistaken belief that they might come in useful at some point.
In addition, data acquisition costs money, requiring storage, electricity, and attention to security and data compliance. The problem becomes even larger when we consider the predicted growth in the data acquired by sensors by the year 2040. A sensor’s ability to be effective will increasingly be driven by how well it can leverage data, apply analytics, and implement innovative technologies. If sensors are to avoid drowning in data while thirsting for insights, they must follow a smart data strategy that focuses on the few things they really need.
Rather than worrying about “big data,” sensors should instead focus on “smart data”: defining the questions that need answering, and then collecting and analyzing only the data that serve in answering those questions. The information elasticity formulation permits the selection of the proper amount of data for optimum decision effectiveness in typical sensor applications. The next big question in sensor information processing by 2040 is, “How can we learn to avoid information glut from BIG DATA?”
Some leading questions to answer the big question are:
How much data is just right to perform the task at hand efficiently and economically in a timely manner, using the information elasticity framework? (A rough sketch of this idea follows the list.)
How can we make sensors understand simple spoken and written language, so that they grasp our needs and requirements as humans would?
How can we intelligently blend diverse types of data, namely multisensor, multiscale, multitemporal, and multidomain data, so as to capitalize on cross-domain data dependencies and minimize data redundancies?
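As a rough illustration of the first question, the sketch below treats information elasticity the way price elasticity is treated in economics: the fractional change in decision performance per fractional change in data volume, with collection stopping once performance becomes inelastic to more data. This is one plausible reading of the idea, not necessarily the exact formulation in Dr. Narayanan's framework; all names and numbers are illustrative.

```python
def information_elasticity(perf, data):
    """Elasticity of decision performance with respect to data volume:
    fractional change in performance per fractional change in data.
    perf and data are equal-length sequences of measurements."""
    elas = []
    for i in range(1, len(data)):
        d_perf = (perf[i] - perf[i - 1]) / perf[i - 1]
        d_data = (data[i] - data[i - 1]) / data[i - 1]
        elas.append(d_perf / d_data if d_data else float("inf"))
    return elas

def enough_data(perf, data, threshold=0.05):
    """Stop collecting once performance is inelastic to more data:
    additional volume no longer buys meaningful decision effectiveness."""
    elas = information_elasticity(perf, data)
    return bool(elas) and elas[-1] < threshold

# Toy example: detection probability vs. number of samples collected.
data = [1000, 2000, 4000, 8000]
perf = [0.80, 0.88, 0.91, 0.915]
print(enough_data(perf, data))   # True -- the last doubling added ~0.5%
```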
Dr. Muralidhar Rangaswamy, Air Force Research Laboratory (AFRL)
Question: What are the enablers for concurrent sensing, communications, and countermeasures come 2040?
This calls for a multitude of sensors on multiple platforms, operating in different modes and performing a variety of functions. Due to the interdependent nature of related technologies, advances in any one domain (advanced mathematical models, enabling components and devices, improved understanding of phenomenology, circuit design, flash memories, computer hardware and software, dedicated processors, and the like) tend to have a synergistic impact on overall system performance. However, the challenges correspondingly increase in complexity. Open-loop processing will simply not be able to address these demands in a meaningful manner. Thus, it becomes imperative to close the loop on sensor operation at multiple levels and scales, leading to the use of meta-cognition. In this context, it becomes important to pay attention to the meta-cognition anchors of:
Reliably gaining knowledge about the environment
Monitoring the multiple cognition cycles (sense-learn-adapt) that are invoked to gain this knowledge
Strategies for governing the multiple cognition cycle interactions
Transferring the previously gained environment knowledge to a new environment.
Due to the massive volumes of high-dimensional data arriving from heterogeneous sources, there will be a pressing need to go from high-dimensional data, to reduced-dimension data, to relevant data in order to meet latency and throughput requirements. The key challenge involves developing mathematical principles and algorithms that enable rapid selection of the cognition cycles for a given scenario to meet a performance specification, guided by the four meta-cognition anchors above. A rough sketch of such a closed loop follows.
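The sketch below is a hypothetical closed-loop controller that monitors several sense-learn-adapt cycles and governs which one to invoke next under a latency budget. The classes, the gain-per-second scoring rule, and the budget penalty are illustrative assumptions, not an established algorithm:

```python
import time

class CognitionCycle:
    """One sense-learn-adapt loop; run() returns (knowledge_gain, latency)."""
    def __init__(self, name, sense, learn, adapt):
        self.name, self.sense, self.learn, self.adapt = name, sense, learn, adapt

    def run(self, environment):
        start = time.perf_counter()
        raw = self.sense(environment)          # gain knowledge about the environment
        model = self.learn(raw)                # high-dimensional -> relevant data
        gain = self.adapt(model, environment)  # knowledge gained on this pass
        return gain, time.perf_counter() - start

class MetaCognitionController:
    """Monitors competing cycles and governs which one to invoke next
    (anchors 2 and 3). Transfer (anchor 4) could be modeled by seeding
    self.history with records from a previous environment."""
    def __init__(self, cycles, latency_budget):
        self.cycles = cycles
        self.latency_budget = latency_budget
        self.history = {c.name: [] for c in cycles}   # (gain, latency) records

    def step(self, environment):
        # Govern: prefer the cycle with the best recent gain per second.
        def score(cycle):
            recent = self.history[cycle.name][-5:]
            if not recent:
                return float("inf")            # try unexplored cycles first
            return sum(g / max(t, 1e-9) for g, t in recent) / len(recent)

        cycle = max(self.cycles, key=score)
        gain, latency = cycle.run(environment)
        if latency > self.latency_budget:      # monitor: penalize slow cycles
            gain = 0.0
        self.history[cycle.name].append((gain, latency))
        return cycle.name, gain
```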
Dr. Antonia Papandreou-Suppappola, Arizona State University
Question: Object tracking is becoming increasingly challenging with advances in autonomous system technology and big data science, and with demands for space exploration and long-range multiple-system coexistence. A big question to examine is what intelligence is required to track any object, in any domain, under any conditions by 2040 and beyond. There have been a multitude of recent developments in many fields, including multimodal sensing, signal processing and probabilistic modeling, dynamic decision making, multiobjective optimization, machine learning, artificial intelligence, and cognitive computing. However, given a vast amount of heterogeneous data, it is unlikely that a single field can provide the real-time knowledge to enable, for example, the autonomous tracking of multiple hypersonic spacecraft in deep space in the presence of massive debris, meteor-storm interference, and rising temperatures from solar flares. Instead, it becomes imperative to develop a sense-learn-adapt paradigm that can select and process smart data, learn varying environmental conditions, adapt resources, adopt relevant inference methods, and operate in a closed loop to achieve reasoning at all levels.
Some considerations toward answering the big question include the following; a minimal sketch of the model-selection idea appears after the list.
Integration of physics-based models, data-based models and intuition-based models in dynamic decision making
Use of learning methods to sequentially track dynamically varying environments and optimize model selection
Intelligent processing of meta-level cognitive tasks to achieve resource adaptation and autonomous tracking
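As a minimal illustration of learning-driven model selection in tracking, the sketch below re-weights a bank of candidate motion models by how well each one predicted the latest measurement, a much-simplified cousin of interacting-multiple-model filtering. The models, data, and weighting rule are toy assumptions, not Dr. Papandreou-Suppappola's method:

```python
import math

def model_weights(measurements, models, temperature=1.0):
    """Sense-learn-adapt sketch: maintain a bank of candidate motion models
    and re-weight them each step by prediction accuracy on the latest
    measurement, so model selection adapts as the environment varies."""
    weights = [1.0 / len(models)] * len(models)
    for k in range(1, len(measurements)):
        history, z = measurements[:k], measurements[k]
        errs = [(z - m(history)) ** 2 for m in models]
        # Bayesian-flavored update: better predictors gain weight.
        likes = [w * math.exp(-e / temperature) for w, e in zip(weights, errs)]
        total = sum(likes) or 1.0
        weights = [l / total for l in likes]
    return weights

# Two hypothetical motion models over a 1-D track history:
constant_position = lambda h: h[-1]
constant_velocity = lambda h: 2 * h[-1] - h[-2] if len(h) > 1 else h[-1]

zs = [0.0, 1.0, 2.1, 2.9, 4.0, 5.1]   # roughly constant-velocity motion
print(model_weights(zs, [constant_position, constant_velocity]))
# -> the constant-velocity model carries nearly all the weight
```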
Dr. Mark E. Oxley, Air Force Institute of Technology (AFIT)
Question: An important ongoing research question is how to combine multiple detection systems such that a collection of specialized detection systems yields the best collection with respect to an objective function. There is a relationship between a detection system and its Receiver Operating Characteristic (ROC) curve, and a one-to-one relationship between the ROC curve and the ROC function: the ROC curve is the graph of the ROC function. For example, there are two operations that act on a pair of detection systems. Given two detection systems, say A and B, the two binary operations A ∨ B (OR) and A ∧ B (AND) yield a Boolean algebra. This is a significant area of research to advance the state of the art.
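To make the two operations concrete, here is a small sketch of how an operating point (detection probability, false-alarm probability) from each system's ROC curve combines under the OR and AND rules. It uses the simplifying assumption that the two detectors decide independently; the numbers are illustrative:

```python
def fuse_or(pd_a, pf_a, pd_b, pf_b):
    """OR rule (A v B): declare a target if either detector declares.
    Assumes the two detectors' decisions are statistically independent."""
    return 1 - (1 - pd_a) * (1 - pd_b), 1 - (1 - pf_a) * (1 - pf_b)

def fuse_and(pd_a, pf_a, pd_b, pf_b):
    """AND rule (A ^ B): declare a target only if both detectors declare."""
    return pd_a * pd_b, pf_a * pf_b

# One operating point taken from each system's ROC curve:
pd_a, pf_a = 0.90, 0.10   # detector A
pd_b, pf_b = 0.80, 0.05   # detector B

print(fuse_or(pd_a, pf_a, pd_b, pf_b))   # (0.98, 0.145): more detections, more false alarms
print(fuse_and(pd_a, pf_a, pd_b, pf_b))  # (0.72, 0.005): fewer false alarms, fewer detections
```

Sweeping such fused operating points over both ROC curves traces the ROC curve of the combined system, which is what makes an objective-function comparison across collections possible.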
I am excited about the opportunity to combine my research with that of engineers and statisticians, and I look forward to generating new concepts using AI.
Dr. Scott Clouse, Autonomy Capability Team 3 (ACT3)
Question: Language evolves. New words are constantly being created and added, and words likewise constantly fall into disuse. Grammatical structures grow as one language comes into contact with others and with new uses. Novel languages are even sometimes developed by twins or close siblings. Such adaptability has allowed us to work with people from different backgrounds and to communicate increasingly complex ideas.
While some technology enables interaction through so-called natural language, most technological agents (i.e., software and/or hardware) interact with people (users) and with other technological agents through pre-defined, rigid protocols (e.g., phone trees in support centers, or browsers connecting to web servers).
Given the rate of increase and prevalence of technological agents (e.g., the Internet of Things), the need for flexible interfaces is not going to diminish or be solved with a single protocol. We will eventually need agent languages that can grow and evolve alongside those of humans. Work on increasing the flexibility of current protocols is underway; a toy sketch of the idea appears below.
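As a toy illustration of a protocol that can grow, the sketch below has two hypothetical agents settle on the message types they share and then extend their vocabularies over time. Everything here is an illustrative assumption, not a description of any existing protocol:

```python
def negotiate(agent_a_vocab, agent_b_vocab):
    """Toy handshake: two agents exchange the message types they understand
    and settle on the shared subset, rather than assuming a fixed protocol."""
    shared = sorted(set(agent_a_vocab) & set(agent_b_vocab))
    if not shared:
        raise ValueError("no common vocabulary; agents must learn or extend one")
    return shared

def extend_vocab(vocab, new_term, definition):
    """A hook for growth: an agent adopts a new term along with a machine-
    readable definition, letting its 'language' evolve over time."""
    vocab[new_term] = definition
    return vocab

a = {"ping": {}, "query": {"fields": "list"}, "report": {"body": "text"}}
b = {"ping": {}, "report": {"body": "text"}, "subscribe": {"topic": "str"}}

print(negotiate(a, b))                # ['ping', 'report']
extend_vocab(b, "query", a["query"])  # B learns A's 'query' message type
print(negotiate(a, b))                # ['ping', 'query', 'report']
```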
The question that will need to be addressed more broadly is: what capabilities will agents need in order to interact flexibly with one another and with people?
Dr. Doug Riecken, AFOSR program officer for the Science of Information, Computation, Learning, and Fusion
Riecken is a trained concert pianist with a B.A. from the Manhattan School of Music and studies at the Juilliard School of Music. He spent many years performing classical, jazz, and rock styles on international concert tours with world-renowned artists before switching to a career in cognitive and computing science. He received his PhD from Rutgers University under thesis advisor Dr. Marvin Minsky of MIT, a founding father of artificial intelligence; Riecken and Minsky spent more than 30 years in friendship researching learning and the mind. Riecken is a thought leader in the areas of big data analytics and machine learning, human-computer interaction and design, knowledge discovery and data mining, global cloud enterprise architectures, and privacy management. He joined the Air Force Office of Scientific Research as a program officer in 2014 and is a senior member of the AFRL ACT3 team.