Research

We study a wide range of problems in artificial intelligence: automated planning and learning, autonomous systems, reasoning under uncertainty, multi-agent systems, and resource-bounded reasoning. We are particularly interested in the implications of uncertainty and limited computational resources for the design of autonomous agents. In most practical settings, finding the optimal action is neither feasible nor desirable, so some form of approximate reasoning is necessary. This raises a fundamental question: what does it mean for an agent to be “rational” when it does not have enough knowledge or computational power to derive the best course of action? Our overall approach to this problem involves meta-level control mechanisms that reason explicitly about the cost of decision-making and can optimize the amount of deliberation (or “thinking”) an agent performs before taking action. We have also developed new planning techniques for settings that involve multiple decision makers operating in collaborative or adversarial domains.
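
As a rough illustration of meta-level control, the sketch below runs a toy anytime solver step by step and stops deliberating once the observed improvement per step no longer exceeds an assumed cost of computation. The solver, the quality proxy, and the cost rate are illustrative stand-ins, not one of our published algorithms.

```python
# A minimal sketch of myopic meta-level control over an anytime solver.
# The "solver" is a stand-in that refines a numeric estimate each step.

def anytime_sqrt(x):
    """Toy anytime solver: Newton iterations for sqrt(x), yielding the
    current estimate after every step so it can be interrupted."""
    guess = x
    while True:
        guess = 0.5 * (guess + x / guess)
        yield guess

def deliberate(x, cost_per_step=1e-6):
    """Myopic stopping rule: keep thinking while the last step improved
    the answer by more than the (assumed) cost of one more step."""
    solver = anytime_sqrt(x)
    prev = next(solver)
    for estimate in solver:
        improvement = abs(prev - estimate)  # crude proxy for quality gain
        if improvement <= cost_per_step:    # thinking no longer pays off
            return estimate
        prev = estimate

print(deliberate(2.0))  # ~1.41421; stops once refinement stalls
```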

Human Compatible AI

How can we design AI systems that are compatible with human needs: accountable, explainable, equitable, ethical, and mindful of human cognitive biases and shortcomings?

Anytime Algorithms

How can we design “well-behaved” algorithms that can be interrupted at any time and still return useful results, and how can we use such algorithms as components of a complex AI system?
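
To make the interruptibility property concrete, here is a minimal sketch of an anytime algorithm, using a wall-clock deadline as the interruption mechanism and random 2-opt moves on a toy tour-improvement task; the task and the deadline handling are illustrative assumptions.

```python
import math, random, time

def tour_length(points, tour):
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anytime_2opt(points, deadline):
    """Maintains a valid best-so-far tour at all times, so an
    interruption at any point still yields a usable answer."""
    tour = list(range(len(points)))
    random.shuffle(tour)
    best = tour_length(points, tour)
    while time.monotonic() < deadline:
        i, j = sorted(random.sample(range(len(points)), 2))
        candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        length = tour_length(points, candidate)
        if length < best:                   # keep only improving moves,
            tour, best = candidate, length  # so quality never degrades
    return tour, best

points = [(random.random(), random.random()) for _ in range(50)]
tour, length = anytime_2opt(points, time.monotonic() + 0.5)
print(f"tour length after 0.5 seconds: {length:.3f}")
```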

Models of Bounded Rationality

What does it mean for an agent to be “rational” when it does not have enough knowledge or computational power to derive the best course of action?

Scalable Algorithms for Probabilistic Reasoning

How can AI systems cope with uncertainty in large sequential decision problems, and how can we leverage heuristic search and reachability analysis to solve complex probabilistic planning problems?
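
One concrete instance of this idea is real-time dynamic programming (RTDP), which restricts value backups to states actually reached in simulated trials from the start state. The sketch below applies it to a toy stochastic grid problem with a Manhattan-distance heuristic; the problem and parameters are illustrative.

```python
import random

N = 10                                    # N x N grid, start at (0, 0)
GOAL = (N - 1, N - 1)
ACTIONS = {'R': (0, 1), 'D': (1, 0), 'L': (0, -1), 'U': (-1, 0)}
V = {}                                    # values only for states ever touched

def successors(state, action):
    """Intended move succeeds with prob 0.8; otherwise the agent stays put."""
    r, c = state
    dr, dc = ACTIONS[action]
    moved = (min(max(r + dr, 0), N - 1), min(max(c + dc, 0), N - 1))
    return [(0.8, moved), (0.2, state)]

def heuristic(state):
    """Admissible estimate of cost-to-go: Manhattan distance to the goal."""
    return abs(GOAL[0] - state[0]) + abs(GOAL[1] - state[1])

def q_value(state, action):
    return 1.0 + sum(p * V.setdefault(nxt, heuristic(nxt))
                     for p, nxt in successors(state, action) if nxt != GOAL)

def rtdp(trials=200):
    for _ in range(trials):
        state = (0, 0)
        while state != GOAL:
            action = min(ACTIONS, key=lambda a: q_value(state, a))
            V[state] = q_value(state, action)          # Bellman backup
            probs, nexts = zip(*successors(state, action))
            state = random.choices(nexts, weights=probs)[0]

rtdp()
print(f"V(start) = {V[(0, 0)]:.2f}; states ever evaluated: {len(V)}")
```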

Belief-Space Planning and POMDPs

How can an agent select actions based on partial and imprecise information about the environment, and how can we design efficient algorithms for planning in belief space?
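
The core operation behind planning in belief space is Bayesian belief updating: the agent maintains a distribution over hidden states and revises it after each action and observation. A minimal sketch, using tiger-problem-style numbers that are purely illustrative:

```python
STATES = ['tiger-left', 'tiger-right']

def transition(state, action):
    """P(next_state | state, action): listening does not move the tiger."""
    return {state: 1.0}

def observation_prob(obs, next_state, action):
    """P(obs | next_state, action): listening is right 85% of the time."""
    correct = 'hear-left' if next_state == 'tiger-left' else 'hear-right'
    return 0.85 if obs == correct else 0.15

def belief_update(belief, action, obs):
    """b'(s') proportional to O(o | s', a) * sum_s T(s' | s, a) * b(s)."""
    new_belief = {}
    for s_next in STATES:
        prior = sum(belief[s] * transition(s, action).get(s_next, 0.0)
                    for s in STATES)
        new_belief[s_next] = observation_prob(obs, s_next, action) * prior
    norm = sum(new_belief.values())
    return {s: p / norm for s, p in new_belief.items()}

b = {'tiger-left': 0.5, 'tiger-right': 0.5}
b = belief_update(b, 'listen', 'hear-left')
print(b)  # belief shifts toward tiger-left: {'tiger-left': 0.85, ...}
```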

Multiagent Planning and DEC-POMDPs

How can a group of intelligent agents coordinate their decisions in spite of stochasticity and limited information, and how can we extend decision-theoretic models to such complex multiagent settings?
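
To see why decentralized settings are harder, the sketch below exhaustively evaluates joint policies for a tiny one-step problem in which each agent must act on its own noisy observation of a hidden state. The payoffs and observation model are illustrative assumptions; DEC-POMDP algorithms must tame exactly this joint policy space over longer horizons, where it grows doubly exponentially.

```python
from itertools import product

STATES = ['left', 'right']
OBS = ['hear-left', 'hear-right']
ACTS = ['go-left', 'go-right']
P_STATE = {'left': 0.5, 'right': 0.5}

def p_obs(obs, state):         # each agent hears correctly 85% of the time
    return 0.85 if obs == f'hear-{state}' else 0.15

def reward(state, a1, a2):     # both agents must commit to the true side
    return 10 if a1 == a2 == f'go-{state}' else -5

def value(pol1, pol2):
    """Exact expected reward of a joint policy; each local policy maps a
    private observation to an action, never the full state."""
    return sum(P_STATE[s] * p_obs(o1, s) * p_obs(o2, s)
               * reward(s, pol1[o1], pol2[o2])
               for s in STATES for o1 in OBS for o2 in OBS)

local_policies = [dict(zip(OBS, acts)) for acts in product(ACTS, repeat=len(OBS))]
best = max(product(local_policies, local_policies), key=lambda jp: value(*jp))
print(value(*best))  # best joint policy: each agent trusts its own observation
```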

Generalized Planning

How can agents create generalized plans, which are algorithm-like plans that include loops and branches, can handle unknown quantities of objects, and work for large classes of problem instances?
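
As a toy illustration (the domain is an assumption, not one of our benchmarks), a generalized plan is closer to a small program than to a fixed action sequence: the single loop below clears a table regardless of how many blocks it holds.

```python
# A minimal sketch of a generalized plan: a loop, not the plan's length,
# absorbs the unknown quantity of objects, so the same plan solves every
# instance of the problem class.

def generalized_clear_table(table, box):
    while table:             # loop on an observed condition
        block = table.pop()  # pick up some block from the table
        box.append(block)    # put it in the box
    return box

print(generalized_clear_table(['a', 'b', 'c'], []))         # 3 blocks
print(len(generalized_clear_table(list(range(3000)), [])))  # 3000 blocks
```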

Introspective Autonomy

How can autonomous AI systems acquire a model of their own capabilities and limitations, seek human assistance when needed, and become progressively independent?
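
One simple way to picture this is an agent that tracks its own success rate per task type and defers to a human whenever its estimated competence falls below a threshold. The sketch below is a heavily simplified assumption on our part (the threshold, smoothing, and human_check stub are all hypothetical), not a specific published model.

```python
class IntrospectiveAgent:
    """Asks for human verification while unsure; acts alone once its
    self-model of competence clears the threshold."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.stats = {}                  # task type -> [successes, attempts]

    def competence(self, task):
        wins, tries = self.stats.get(task, [0, 0])
        return (wins + 1) / (tries + 2)  # Laplace-smoothed success rate

    def act(self, task, propose, human_check):
        action = propose(task)
        if self.competence(task) < self.threshold:
            ok = human_check(task, action)             # supervised while unsure
            wins, tries = self.stats.get(task, [0, 0])
            self.stats[task] = [wins + ok, tries + 1]  # refine the self-model
            if not ok:
                return None                            # human veto: do not act
        return action                                  # autonomous once trusted
```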

Building Safe AI Systems

How can we create AI systems that are safe, transparent, and ethical?

Plan and Activity Recognition

How can agents recognize the plans, activities, and intents of other agents and use that information to plan their response?
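
One common way to frame this problem is as Bayesian inference over a library of possible goals, with P(goal | observations) proportional to P(observations | goal) times P(goal). The sketch below uses a made-up two-goal library and naive-Bayes action likelihoods purely for illustration.

```python
PRIOR = {'make-coffee': 0.5, 'make-tea': 0.5}

# P(action | goal): how likely each observed action is under each goal.
LIKELIHOOD = {
    'make-coffee': {'boil-water': 0.9, 'grind-beans': 0.9, 'get-teabag': 0.01},
    'make-tea':    {'boil-water': 0.9, 'grind-beans': 0.01, 'get-teabag': 0.9},
}

def recognize(observed_actions):
    """Posterior over goals after each observed action (naive Bayes)."""
    posterior = dict(PRIOR)
    for action in observed_actions:
        for goal in posterior:
            posterior[goal] *= LIKELIHOOD[goal].get(action, 0.05)
        norm = sum(posterior.values())
        posterior = {g: p / norm for g, p in posterior.items()}
    return posterior

print(recognize(['boil-water']))                 # still ambiguous: 0.5 / 0.5
print(recognize(['boil-water', 'grind-beans']))  # strongly make-coffee
```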

Stochastic Network Design and Optimization

How can we develop scalable algorithms to optimize diffusion processes, and use them to control the spread of phenomena such as information over a social network or species over a fragmented landscape?
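
The kind of diffusion model behind this line of work can be sketched as an independent-cascade process on a graph, with expected spread estimated by Monte Carlo simulation; deciding which edges or seed nodes to invest in under a budget is then an optimization on top of estimates like this. The graph and activation probabilities below are illustrative assumptions.

```python
import random

def simulate_cascade(edges, seeds):
    """Independent cascade: edges maps node -> [(neighbor, prob), ...].
    Each newly active node gets one chance to activate each neighbor."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        node = frontier.pop()
        for neighbor, p in edges.get(node, []):
            if neighbor not in active and random.random() < p:
                active.add(neighbor)
                frontier.append(neighbor)
    return len(active)

def expected_spread(edges, seeds, runs=10000):
    """Monte Carlo estimate of the expected number of nodes reached."""
    return sum(simulate_cascade(edges, seeds) for _ in range(runs)) / runs

edges = {
    'a': [('b', 0.5), ('c', 0.2)],
    'b': [('d', 0.5)],
    'c': [('d', 0.8)],
}
print(expected_spread(edges, {'a'}))  # average cascade size from seed 'a'
```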