Verifiable, Control-Oriented Learning On The Fly

2018 AFOSR MURI
PO: Dr. Frederick Leve, Dynamics and Control (dycontrol@us.af.mil)
PI: Dr. Ufuk Topcu, The University of Texas at Austin
MURI Website

The proposed effort will develop a theoretical and algorithmic foundation for run-time learning and control for physical, autonomous systems. The resulting algorithms will (i) adapt to unforeseen, possibly abrupt changes in the system and its environment; (ii) establish verifiable guarantees---in a sense to be precisely defined---with respect to high-level safety and performance specifications; and (iii) obey and leverage the laws of physics and contextual knowledge. They will also provide quantitative trade-offs between the strengths of their guarantees, the amount of run-time data and a priori side information necessary to establish such guarantees, and the computational requirements.

The proposed approach treats the (limited) data that the system generates and the existing side information as the first-class objects of control-oriented learning. Such side information includes the physical laws that the system obeys, the context in which it serves, and the structure in its mathematical representation. The research plan builds on a coherent composition of learning, verification and synthesis:

Thrust I, on learning, will merge the side information with run-time data to derive bounds on model uncertainty with both finite-dimensional, parametric and infinite-dimensional, functional components.
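
As a rough illustration of the kind of computation Thrust I points to, the sketch below performs set-membership identification of a single scalar parameter, combining simulated run-time data with an assumed physics-based prior bound as side information. The model, noise bound, and numbers are hypothetical and are not taken from the project.

```python
# Illustrative sketch only, not the project's algorithms: set-membership learning
# of a scalar parameter theta in x_{t+1} = theta * x_t + w_t with |w_t| <= w_max.
# A physics-based prior bound 0 <= theta <= 1 (an assumed piece of side
# information) is intersected with the intervals consistent with run-time data.
import random

rng = random.Random(0)
theta_true, w_max = 0.8, 0.05
x = [1.0]
for _ in range(30):                                  # simulated run-time data
    x.append(theta_true * x[-1] + rng.uniform(-w_max, w_max))

lo, hi = 0.0, 1.0                                    # prior bounds from side information
for xt, xt1 in zip(x[:-1], x[1:]):
    if abs(xt) > 1e-9:
        a, b = sorted(((xt1 - w_max) / xt, (xt1 + w_max) / xt))
        lo, hi = max(lo, a), min(hi, b)              # intersect data-consistent intervals

print(f"uncertainty set for theta: [{lo:.3f}, {hi:.3f}]  (true value {theta_true})")
```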

Thrust II, on verification, will determine---using the learned models---whether a given control strategy retains the possibility of viable operation (e.g., making progress toward the mission objectives without jeopardizing future safety) and assess the likelihood of failure over practically relevant time horizons.
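
The sketch below gives a minimal, hypothetical flavor of such a verification step: it samples models from a learned uncertainty interval and estimates, by Monte Carlo simulation, the probability that a fixed feedback law leaves an assumed safe set within a finite horizon. The uncertainty interval, safe set, and gain are illustrative assumptions.

```python
# Illustrative sketch only: Monte Carlo assessment of a fixed control strategy
# u = K * x against a learned uncertainty set theta in [0.70, 0.90] (values
# assumed for illustration). "Failure" means leaving a hypothetical safe set
# |x| <= 2 within a finite horizon.
import random

rng = random.Random(1)
lo, hi, w_max, K = 0.70, 0.90, 0.05, -0.5
trials, horizon, failures = 10_000, 50, 0
for _ in range(trials):
    theta = rng.uniform(lo, hi)          # model drawn from the uncertainty set
    xk = 1.5                             # fixed initial condition near the boundary
    for _ in range(horizon):
        xk = theta * xk + K * xk + rng.uniform(-w_max, w_max)
        if abs(xk) > 2.0:                # left the safe set
            failures += 1
            break

print(f"estimated failure probability over {horizon} steps: {failures / trials:.4f}")
```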

Thrust III, on joint learning and control, will develop strategies that effectively prioritize the dual tasks of learning the dynamics and minimizing the likelihood of mission failure.
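
A toy version of this learning-versus-control trade-off might look like the following sketch, in which an assumed probing input is applied only while the parameter uncertainty remains wide; the heuristic and all constants are illustrative, not the project's strategy.

```python
# Illustrative sketch only: a crude dual-control heuristic. While the parameter
# interval is wide, a small probing input is injected to keep learning; once the
# interval is tight enough, the controller falls back to pure stabilizing
# feedback. Thresholds and gains are hypothetical.
import random

rng = random.Random(2)
theta_true, w_max, K = 0.8, 0.05, -0.5
lo, hi = 0.0, 1.0                                # physics-based prior bounds (assumed)
xk = 0.0
for t in range(40):
    probe = 0.2 if (hi - lo) > 0.1 else 0.0      # explore only while uncertain
    u = K * xk + probe
    x_next = theta_true * xk + u + rng.uniform(-w_max, w_max)
    if abs(xk) > 1e-9:                           # same set-membership update as in Thrust I
        a, b = sorted((((x_next - u) - w_max) / xk, ((x_next - u) + w_max) / xk))
        lo, hi = max(lo, a), min(hi, b)
    xk = x_next

print(f"final uncertainty width: {hi - lo:.3f}, final state: {xk:.3f}")
```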

The back-end computation engine will provide cross-cutting, efficient optimization algorithms addressing the needs of all thrusts. All three thrusts build on a common working principle supported by the front-end constraint engine: Data and side information must be jointly utilized for control-oriented learning.

The approach also embraces the fact that developing truly autonomous systems---and, in particular, control-oriented learning on the fly---is beyond the reach of any single discipline. It distills ideas---and in many cases discovers novel extensions---from a number of conventionally disparate disciplines into a unified foundation. These disciplines include analysis, control theory, dynamical systems, learning theory, optimization and formal methods. Many of these interdisciplinary connections are pursued for the first time in the proposed effort.

The fundamental research outcomes of the proposed work have the potential to pave the road for the DoD to develop truly autonomous systems that are aware of their high-level tasks, low-level physical capabilities, and computational resources; operate in contested environments; and survive disruptions or recognize when survival is impossible.

The team's background and tight integration will contribute toward creating a new field of on-the-fly learning that is receptive to the needs and opportunities in control for autonomous systems. The foundation established by the proposed effort will set the stage for further research by a broader community in the years to come.
