About

We are in the midst of a foundational shift. From self-driving cars and voice assistants to smart thermostats and recommendation engines, Artificial Intelligence (AI) and Machine Learning (ML) are becoming an integral part of our daily lives. The emergence of these technologies opens up countless opportunities to transform any industry and to revolutionize traditional ways of thinking, operating, and solving problems. But...

How do you know if you can trust AI?

While AI+ML-based technologies have undoubtedly enhanced our daily lives, they are by no means perfect: they require and rely on massive amounts of historical data to work effectively. For many of these modern uses of AI+ML, 99% reliability may be sufficient. However, for mission-critical or life-dependent applications, 99% is not good enough. What about that 1% of uncertainty? Hospitals and doctors remember the time an autonomous system failed and someone died. Everyone remembers when a self-driving car or an airplane crashes. It may not happen often, but it does happen, and that 1% is magnified when it is the difference between life and death.

Addressing this 1% of uncertainty is where PRECISE's Center for Safe AI is making significant inroads. We have a team of multidisciplinary experts developing highly scalable tools and technologies that help companies and organizations verify the safety of autonomous systems in the edge cases where failure is unacceptable. The PRECISE Center for Safe AI focuses on working with existing AI designs and systems, making them safer, and providing formal verification of their safety. We are building run-time monitoring for anomaly detection and taking a real-time systems approach to autonomy across multiple domains (e.g., healthcare, transportation, buildings, infrastructure). Through advanced automated techniques (using machine programming), we hope to accelerate programmer productivity and improve software correctness, performance, and security.
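To make the idea of run-time monitoring concrete, here is a minimal Python sketch of one common anomaly-detection pattern: a sliding-window deviation check on a sensor signal. The `RuntimeMonitor` class, its window size, and its threshold are hypothetical choices made purely for illustration; they are not the Center's actual tooling.

```python
# A minimal, illustrative run-time safety monitor. All names and
# thresholds here are assumptions for the sketch, not the Center's
# actual implementation.

from collections import deque


class RuntimeMonitor:
    """Flags a reading as anomalous when it drifts too far from the
    mean of a sliding window of recent history."""

    def __init__(self, window_size=50, tolerance=3.0):
        self.window = deque(maxlen=window_size)
        self.tolerance = tolerance  # allowed standard deviations from the mean

    def observe(self, value):
        """Record a new reading; return True if it looks anomalous."""
        if len(self.window) >= 2:
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = var ** 0.5
            if std > 0 and abs(value - mean) > self.tolerance * std:
                self.window.append(value)  # a real monitor might quarantine outliers
                return True
        self.window.append(value)
        return False


if __name__ == "__main__":
    monitor = RuntimeMonitor()
    readings = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 9.0]  # last reading is a fault
    for t, r in enumerate(readings):
        if monitor.observe(r):
            print(f"t={t}: anomaly detected (reading={r})")
```

In a deployed autonomous system, a detected anomaly would typically hand control to a verified fallback controller rather than simply print a warning.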

The Center's tools, technologies, and expertise help industries answer the hard questions and give them full confidence in the safety of their autonomous systems in areas such as:

  • Formally verified ML models

  • Robustness to adversarial attacks (via data and systems)

  • Importance of simulation and its reliability

Current/Past Projects:

  1. Robust Concept Learning and Lifelong Adaptation Against Adversarial Attacks (ARO MURI)
  2. Symbiotic Design for Cyber Physical Systems
  3. Verifying Safety of Neural Network Controllers
  4. Model Repair for Hybrid Systems
  5. Confidence Metrics for Neural Network Classifiers
  6. Human-in-the-Loop Autonomous System
  7. Data-Driven Operator Behavior Modeling
  8. Closed-Loop Clinical Care / Autonomous Systems in Medicine  
  9. Robustness Analysis of Neural Networks
  10. Bridging Machine Learning and Controls
  11. Computer-Aided Clinical Trials
  12. Autonomous Vehicle Plan Verification and Execution
  13. Learning Verifiable Control Policies
  14. Sample Complexity of Reinforcement Learning
  15. Interpretable Machine Learning for Precision Medicine
  16. Leveraging Programmatic Structure in Machine Learning
  17. Software Bodycams for Dynamic Safety Analysis
  18. Neural Learning for Program Synthesis and Verification
  19. Automated Software Debloating for Security Hardening
  20. AI-based Software Analysis for Vulnerability Detection
  21. Predictable Real-Time Decision Making over Streaming Data
  22. Composable Tasks in Reinforcement Learning
  23. Resilient Execution with Bounded-Time Recovery
  24. Diagnosing CPS with Quantitative Provenance
  25. Predictable Platforms for Safe Adaptability

In addition to the ongoing research mentioned above, the Center has also launched a new Master's course, entitled "CIS 700-002: Topics in Safe Autonomy," this spring at the Engineering School of the University of Pennsylvania. The course, led by Insup Lee, PhD, and James Weimer, PhD (both PRECISE faculty members), and Justin Gottschlich, PhD (Lead Artificial Intelligence Researcher at Intel Labs), will explore selected topics in Safe Autonomy, beginning with anomaly detection.

With this combination of research and curriculum, pioneered by the vision and expertise of PRECISE's faculty, the new Center aims to be the world's de facto leader in Safe Autonomy.

Contact the PRECISE Center for Safe AI today to see how we can help ensure the safety of your autonomous systems.