Safe Autonomy

We are in the midst of a foundational shift. From self-driving cars and voice assistants to smart thermostats and recommendation engines, Artificial Intelligence (AI) and Machine Learning (ML) are becoming an integral part of our daily lives. The emergence of these technologies opens up countless opportunities to transform industries and to revolutionize traditional ways of thinking, operating, and solving problems. But...

How do you know if you can trust AI?

While AI- and ML-based technologies have undoubtedly enhanced our daily lives, they rely on massive amounts of historical data to work effectively and are by no means perfect. For many modern uses of AI and ML, 99% reliability may be sufficient. For mission-critical or life-dependent applications, however, 99% is not good enough. What about that 1% of uncertainty? Hospitals and doctors will remember the time an autonomous system failed and someone died. Everyone remembers when a self-driving car or an airplane crashes. It may not happen often, but it does happen. That 1% is magnified when it is the difference between life and death.

Addressing this 1% of uncertainty is where PRECISE's Center for Safe AI is making significant inroads. Our multidisciplinary team of experts is developing highly scalable tools and technologies that help companies and organizations verify the safety of autonomous systems in the edge cases where failure is unacceptable. The Center focuses on working with existing AI designs and systems, making them safer, and providing formal verification of their safety. We are building run-time monitoring for anomaly detection and taking a real-time systems approach to autonomy across multiple domains (e.g., healthcare, transportation, buildings, infrastructure).
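
As a rough illustration of the run-time monitoring approach, the sketch below flags sensor readings that drift too far from recent history. This is a minimal example under simple assumptions, not one of the Center's tools; the class name, window size, and alarm threshold are all illustrative.

    from collections import deque
    import math

    class AnomalyMonitor:
        """Sliding-window z-score monitor for a scalar sensor stream.

        Flags a reading as anomalous when it deviates from the mean of the
        recent window by more than `threshold` standard deviations.
        """

        def __init__(self, window_size=50, threshold=4.0):
            self.window = deque(maxlen=window_size)
            self.threshold = threshold

        def observe(self, value):
            """Return True if `value` looks anomalous relative to the window."""
            if len(self.window) >= 2:
                mean = sum(self.window) / len(self.window)
                var = sum((x - mean) ** 2 for x in self.window) / (len(self.window) - 1)
                std = math.sqrt(var) or 1e-9  # guard against a constant window
                if abs(value - mean) / std > self.threshold:
                    return True  # raise an alarm; keep the outlier out of the window
            self.window.append(value)
            return False

    # Example: a well-behaved temperature stream with one injected fault.
    monitor = AnomalyMonitor()
    readings = [20.0 + 0.1 * (i % 5) for i in range(100)] + [45.0]
    alarms = [i for i, r in enumerate(readings) if monitor.observe(r)]
    print("anomalies at indices:", alarms)  # -> [100]

A production monitor would use model-based residuals and hard real-time scheduling guarantees rather than a simple z-score, but the shape is the same: observe, compare against an expectation, and alarm before a fault propagates.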

The Center's tools, technologies, and expertise are helping industries answer the hard questions and giving them full confidence in the safety of their autonomous systems in the following areas:

  • Formally verified ML models (see the sketch after this list)

  • Robustness to adversarial attacks (via data and systems)

  • Importance of simulation and the open question about its reliability
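
To make the first two items above more concrete, here is a minimal sketch of interval bound propagation, one standard technique for certifying that a small ReLU network's prediction cannot be changed by any bounded input perturbation. The weights, input, and perturbation radius are illustrative assumptions, not the Center's actual models or tooling.

    import numpy as np

    def interval_affine(lo, hi, W, b):
        """Propagate the box [lo, hi] through x -> W @ x + b exactly."""
        center = (lo + hi) / 2.0
        radius = (hi - lo) / 2.0
        new_center = W @ center + b
        new_radius = np.abs(W) @ radius
        return new_center - new_radius, new_center + new_radius

    def verify_robust(x, eps, W1, b1, W2, b2, label):
        """Certify that every input within L-infinity distance `eps` of `x`
        keeps `label` as the top class of a two-layer ReLU network."""
        lo, hi = x - eps, x + eps
        lo, hi = interval_affine(lo, hi, W1, b1)
        lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
        lo, hi = interval_affine(lo, hi, W2, b2)
        # Robust iff the worst case for the true logit still beats the
        # best case for every other logit.
        others = np.delete(hi, label)
        return bool(lo[label] > others.max())

    # Toy network with illustrative random weights.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
    W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)
    x = rng.normal(size=4)
    label = int(np.argmax(W2 @ np.maximum(W1 @ x + b1, 0.0)))
    print("certified at eps = 0.01:", verify_robust(x, 0.01, W1, b1, W2, b2, label))

Interval bounds grow loose as networks deepen, which is why practical verifiers tighten them with linear relaxations or solver-based methods.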

In addition to the ongoing research mentioned above, the Center has also launched a new master's course, "CIS 700-002: Topics in Safe Autonomy," this spring at the University of Pennsylvania's School of Engineering and Applied Science. The course, led by PRECISE faculty members Insup Lee, PhD, and James Weimer, PhD, together with Justin Gottschlich, PhD (Lead Artificial Intelligence Researcher at Intel Labs), will explore selected topics in Safe Autonomy, beginning with Anomaly Detection.

With this combination of research and curriculum, driven by the vision and expertise of PRECISE's faculty, the new Center aims to be the world's de facto leader in Safe Autonomy.

Contact the PRECISE Center for Safe AI today to see how we can help ensure the safety of your autonomous systems.