Research
We study a wide range of problems in artificial intelligence: automated planning and learning, autonomous systems, reasoning under uncertainty, multi-agent systems, and resource-bounded reasoning. We are particularly interested in the implications of uncertainty and limited computational resources for the design of autonomous agents. In most practical settings it is neither feasible nor desirable to compute the optimal action, making some form of approximate reasoning necessary. This raises a fundamental question: what does it mean for an agent to be "rational" when it lacks the knowledge or computational power to derive the best course of action? Our overall approach to this problem involves meta-level control mechanisms that reason explicitly about the cost of decision making and can optimize the amount of deliberation (or "thinking") an agent performs before taking action. We have also developed new planning techniques for settings involving multiple decision makers operating in collaborative or adversarial domains.
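The core idea of meta-level control can be illustrated with a small sketch: keep deliberating only while the marginal improvement in solution quality outweighs the cost of the extra thinking time. The function names and the myopic stopping rule below are illustrative assumptions for the sketch, not the lab's actual algorithms; real meta-level controllers typically rely on learned performance profiles rather than the last observed gain.

```python
def meta_level_control(anytime_step, quality_of, time_cost, max_steps=100):
    """Run an anytime algorithm, stopping once the observed marginal
    improvement in solution quality no longer exceeds the cost of the
    extra deliberation time (a simple myopic monitoring policy)."""
    solution = anytime_step(None)          # initial (cheap) solution
    prev_q = quality_of(solution)
    for step in range(1, max_steps):
        candidate = anytime_step(solution) # think a little longer
        q = quality_of(candidate)
        gain = q - prev_q
        if gain <= time_cost:              # value of thinking <= its cost
            return solution, step
        solution, prev_q = candidate, q
    return solution, max_steps

# toy anytime task: iteratively refine an estimate of sqrt(2)
def newton_step(est):
    x = 1.0 if est is None else est
    return 0.5 * (x + 2.0 / x)             # one Newton iteration

quality = lambda x: -abs(x * x - 2.0)      # higher is better
sol, steps = meta_level_control(newton_step, quality, time_cost=1e-6)
```

Here the controller stops after a handful of iterations, once an extra Newton step buys less quality than the (stipulated) cost of the time it takes.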
Human Compatible AI
Anytime Algorithms
Models of Bounded Rationality
Scalable Algorithms for Probabilistic Reasoning
Belief-Space Planning and POMDPs
Multiagent Planning and DEC-POMDPs
Becker, Raphen; Lesser, Victor; Zilberstein, Shlomo: Analyzing Myopic Approaches for Multi-Agent Communication. Proceedings of the IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT), Compiègne, France, 2005. (Best Paper Award)
Abstract: Choosing when to communicate is a fundamental problem in multi-agent systems. This problem becomes particularly hard when communication is constrained and each agent has different partial information about the overall situation. Although computing the exact value of communication is intractable, it has been estimated using a standard myopic assumption. However, this assumption, that communication is only possible at the present time, introduces error that can lead to poor agent behavior. We examine specific situations in which the myopic approach performs poorly and demonstrate an alternate approach that relaxes the assumption to improve performance. The results provide an effective method for value-driven communication policies in multi-agent systems.
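The myopic assumption critiqued above can be made concrete with a small sketch: under it, the value of communicating is estimated as a one-shot value-of-information computation, comparing acting on the local belief alone against acting after the true state is revealed. All names below are illustrative, and the paper's point is precisely that this one-shot estimate can mislead agents when communication is also possible later.

```python
def value_of_communication(belief, utility, comm_cost):
    """Myopic (one-shot) estimate of the value of communicating now:
    the expected gain from learning the true state before acting,
    minus the communication cost. Illustrative only: this is the
    standard value-of-information calculation under the assumption
    that communication is possible only at the present time."""
    actions = {a for (a, _) in utility}
    # best fixed action using only the local belief (stay silent)
    eu_silent = max(
        sum(p * utility[(a, s)] for s, p in belief.items()) for a in actions
    )
    # expected utility if communicating reveals the true state
    eu_comm = sum(
        p * max(utility[(a, s)] for a in actions) for s, p in belief.items()
    )
    return eu_comm - eu_silent - comm_cost

# two equally likely states; matching the action to the state pays 1
belief = {"A": 0.5, "B": 0.5}
utility = {("left", "A"): 1, ("left", "B"): 0,
           ("right", "A"): 0, ("right", "B"): 1}
voc = value_of_communication(belief, utility, comm_cost=0.2)
```

A positive result (here 0.3) tells the myopic agent to communicate now; the paper's relaxed approach additionally accounts for the option of communicating at future steps.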
Becker, Raphen; Zilberstein, Shlomo; Lesser, Victor; Goldman, Claudia V.: Solving Transition Independent Decentralized Markov Decision Processes. Journal of Artificial Intelligence Research (JAIR), vol. 22, pp. 423–455, 2004.
Abstract: Formal treatment of collaborative multi-agent systems has been lagging behind the rapid progress in sequential decision making by individual agents. Recent work in the area of decentralized Markov Decision Processes (MDPs) has contributed to closing this gap, but the computational complexity of these models remains a serious obstacle. To overcome this complexity barrier, we identify a specific class of decentralized MDPs in which the agents' transitions are independent. The class consists of independent collaborating agents that are tied together through a structured global reward function that depends on all of their histories of states and actions. We present a novel algorithm for solving this class of problems and examine its properties, both as an optimal algorithm and as an anytime algorithm. To the best of our knowledge, this is the first algorithm to optimally solve a non-trivial subclass of decentralized MDPs. It lays the foundation for further work in this area on both exact and approximate algorithms.
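The structure this class exploits can be sketched in a few lines: with independent transitions, evaluating a joint policy reduces to summing each agent's local value plus interaction rewards over pairs of events whose occurrence probabilities factor across agents. The simplified form below is a hypothetical illustration of that decomposition, not the paper's algorithm.

```python
def joint_value(local_v1, local_v2, p1, p2, joint_reward):
    """Value of a joint policy in a transition-independent Dec-MDP
    (hypothetical simplified form). Because each agent's dynamics
    are decoupled, the joint value decomposes into the agents'
    local values plus extra reward for pairs of events whose
    occurrence probabilities, one per agent, simply multiply."""
    value = local_v1 + local_v2
    for (e1, e2), r in joint_reward.items():
        value += p1[e1] * p2[e2] * r
    return value

# one shared-reward interaction: both rovers visit the same site "s"
v = joint_value(local_v1=1.0, local_v2=2.0,
                p1={"visit_s": 0.5}, p2={"visit_s": 0.4},
                joint_reward={("visit_s", "visit_s"): 10.0})
```

This factorization is why the joint problem, though still coupled through the reward, avoids reasoning over the full joint state space when evaluating candidate policy pairs.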
Goldman, Claudia V.; Zilberstein, Shlomo: Decentralized Control of Cooperative Systems: Categorization and Complexity Analysis. Journal of Artificial Intelligence Research (JAIR), vol. 22, pp. 143–174, 2004.
Abstract: Decentralized control of cooperative systems captures the operation of a group of decision makers that share a single global objective. The difficulty in solving such problems optimally arises when the agents lack full observability of the global state of the system as they operate. The general problem has been shown to be NEXP-complete. In this paper, we identify classes of decentralized control problems whose complexity ranges between NEXP and P. In particular, we study problems characterized by independent transitions, independent observations, and goal-oriented objective functions. Two algorithms are shown to optimally solve useful classes of goal-oriented decentralized processes in polynomial time. This paper also studies information sharing among the decision makers, which can improve their performance. We distinguish between three ways in which agents can exchange information: indirect communication, direct communication, and sharing state features that are not controlled by the agents. Our analysis shows that for every class of problems we consider, introducing direct or indirect communication does not change the worst-case complexity. The results provide a better understanding of the complexity of decentralized control problems that arise in practice and facilitate the development of planning algorithms for these problems.
Becker, Raphen; Zilberstein, Shlomo; Lesser, Victor: Decentralized Markov Decision Processes with Event-Driven Interactions. Proceedings of the 3rd International Conference on Autonomous Agents and Multiagent Systems (AAMAS), New York, NY, 2004.
Abstract: Decentralized MDPs provide a powerful formal framework for planning in multi-agent systems, but the complexity of the model limits its usefulness. We study in this paper a class of DEC-MDPs that restricts the interactions between the agents to a structured, event-driven dependency. These dependencies can model locking a shared resource or temporal enabling constraints, both of which arise frequently in practice. The complexity of this class of problems is shown to be no harder than exponential in the number of states and doubly exponential in the number of dependencies. Since the number of dependencies is much smaller than the number of states for many problems, this is significantly better than the doubly exponential (in the state space) complexity of DEC-MDPs. We also demonstrate how an algorithm we previously developed can be used to solve problems in this class both optimally and approximately. Experimental work indicates that this solution technique is significantly faster than a naive policy search approach.
Goldman, Claudia V.; Allen, Martin; Zilberstein, Shlomo: Decentralized Language Learning Through Acting. Proceedings of the 3rd International Conference on Autonomous Agents and Multiagent Systems (AAMAS), New York, NY, 2004.
Abstract: This paper presents an algorithm for learning the meaning of messages communicated between agents that interact while acting optimally towards a cooperative goal. Our reinforcement-learning method is based on Bayesian filtering and has been adapted for a decentralized control process. Empirical results shed light on the complexity of the learning problem, and on factors affecting the speed of convergence. Designing intelligent agents able to adapt their mutual interpretation of messages exchanged, in order to improve overall task-oriented performance, introduces an essential cognitive capability that can upgrade the current state of the art in multi-agent and human-machine systems to the next level. Learning to communicate while acting will add to the robustness and flexibility of these systems and hence lead to more efficient and productive performance.
Hansen, Eric A.; Bernstein, Daniel S.; Zilberstein, Shlomo: Dynamic Programming for Partially Observable Stochastic Games. Proceedings of the 19th National Conference on Artificial Intelligence (AAAI), San Jose, California, 2004.
Abstract: We develop an exact dynamic programming algorithm for partially observable stochastic games (POSGs). The algorithm is a synthesis of dynamic programming for partially observable Markov decision processes (POMDPs) and iterated elimination of dominated strategies in normal form games. We prove that when applied to finite-horizon POSGs, the algorithm iteratively eliminates very weakly dominated strategies without first forming a normal form representation of the game. For the special case in which agents share the same payoffs, the algorithm can be used to find an optimal solution. We present preliminary empirical results and discuss ways to further exploit POMDP theory in solving POSGs.
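One building block of that algorithm, iterated elimination of dominated strategies, can be sketched for ordinary normal-form games. The paper prunes very weakly dominated policy trees using linear programs over the agents' belief space; the sketch below uses strict pure-strategy dominance only so that it stays short and self-contained.

```python
def iterated_elimination(u1, u2):
    """Iteratively remove pure strategies that are strictly dominated
    by another pure strategy, for a two-player normal-form game given
    by payoff matrices u1 (row player) and u2 (column player).
    Returns the surviving row and column strategy indices."""
    rows = list(range(len(u1)))
    cols = list(range(len(u1[0])))
    changed = True
    while changed:
        changed = False
        for r in rows[:]:  # row r is dominated if some r2 beats it everywhere
            if any(all(u1[r2][c] > u1[r][c] for c in cols)
                   for r2 in rows if r2 != r):
                rows.remove(r)
                changed = True
        for c in cols[:]:  # column c is dominated symmetrically
            if any(all(u2[r][c2] > u2[r][c] for r in rows)
                   for c2 in cols if c2 != c):
                cols.remove(c)
                changed = True
    return rows, cols

# prisoner's dilemma: strategy 1 ("defect") survives for both players
u1 = [[-1, -3], [0, -2]]
u2 = [[-1, 0], [-3, -2]]
surviving = iterated_elimination(u1, u2)
```

The POSG algorithm applies the same pruning idea per horizon step, interleaved with the POMDP-style dynamic programming backup, so the normal-form game is never built explicitly.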
Becker, Raphen; Zilberstein, Shlomo; Lesser, Victor; Goldman, Claudia V.: Transition-Independent Decentralized Markov Decision Processes. Proceedings of the 2nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS), Melbourne, Australia, 2003. (Best Paper Award)
Abstract: There has been substantial progress with formal models for sequential decision making by individual agents using the Markov decision process (MDP). However, similar treatment of multi-agent systems is lacking. A recent complexity result, showing that solving decentralized MDPs is NEXP-hard, provides a partial explanation. To overcome this complexity barrier, we identify a general class of transition-independent decentralized MDPs that is widely applicable. The class consists of independent collaborating agents that are tied together through a global reward function that depends upon both of their histories. We present a novel algorithm for solving this class of problems and examine its properties. The result is the first effective technique to optimally solve a class of decentralized MDPs. This lays the foundation for further work in this area on both exact and approximate solutions.
Goldman, Claudia V.; Zilberstein, Shlomo: Optimizing Information Exchange in Cooperative Multi-agent Systems. Proceedings of the 2nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS), Melbourne, Australia, 2003.
Abstract: Decentralized control of a cooperative multi-agent system is the problem faced by multiple decision makers that share a common set of objectives. The decision makers may be robots placed at separate geographical locations or computational processes distributed in an information space. It may be impossible or undesirable for these decision makers to share all their knowledge all the time. Furthermore, exchanging information may incur a cost associated with the required bandwidth or with the risk of revealing it to competing agents. Assuming that communication may not be reliable adds another dimension of complexity to the problem. This paper develops a decision-theoretic solution to this problem, treating both standard actions and communication as explicit choices that the decision maker must consider. The goal is to derive both action policies and communication policies that together optimize a global value function. We present an analytical model to evaluate the trade-off between the cost of communication and the value of the information received. Finally, to address the complexity of this hard optimization problem, we develop a practical approximation technique based on myopic meta-level control of communication.
Bernstein, Daniel S.; Givan, Robert; Immerman, Neil; Zilberstein, Shlomo: The Complexity of Decentralized Control of Markov Decision Processes. Mathematics of Operations Research (MOR), vol. 27, no. 4, pp. 819–840, 2002. (IFAAMAS Influential Paper Award)
Abstract: We consider decentralized control of Markov decision processes and give complexity bounds on the worst-case running time for algorithms that find optimal solutions. Generalizations of both the fully-observable case and the partially-observable case that allow for decentralized control are described. For even two agents, the finite-horizon problems corresponding to both of these models are hard for non-deterministic exponential time. These complexity results illustrate a fundamental difference between centralized and decentralized control of Markov decision processes. In contrast to the problems involving centralized control, the problems we consider provably do not admit polynomial-time algorithms. Furthermore, assuming EXP ≠ NEXP, the problems require super-exponential time to solve in the worst case.
Bernstein, Daniel S.; Zilberstein, Shlomo: Reinforcement Learning for Weakly-Coupled MDPs and an Application to Planetary Rover Control. Proceedings of the 6th European Conference on Planning (ECP), Toledo, Spain, 2001.
Abstract: The progressive processing model allows a system to trade off resource consumption against the quality of the outcome by mapping each activity to a graph of potential solution methods. In the past, only semi-linear graphs have been used. We examine the application of the model to control the operation of an autonomous rover which operates under tight resource constraints. The task structure is generalized to directed acyclic graphs for which the optimal schedule can be computed by solving a corresponding Markov decision problem. We evaluate the complexity of the solution analytically and experimentally and show that it provides a practical approach to building an adaptive controller for this application.
Xuan, Ping; Lesser, Victor; Zilberstein, Shlomo: Communication Decisions in Multiagent Cooperation: Model and Experiments. Proceedings of the 5th International Conference on Autonomous Agents (AGENTS), Montreal, Canada, 2001.
Abstract: In multi-agent cooperation, agents share a common goal, which is evaluated through a global utility function. However, an agent typically cannot observe the global state of an uncertain environment, so agents must communicate with each other in order to share the information needed for deciding which actions to take. We argue that when communication incurs a cost (due to resource consumption, for example), whether to communicate or not also becomes a decision to make. Hence, the communication decision becomes part of the overall agent decision problem. In order to address this problem explicitly, we present a multi-agent extension to Markov decision processes in which communication can be modeled as an explicit action that incurs a cost. This framework provides a foundation for a quantified study of agent coordination policies and offers both motivation and insight for the design of heuristic approaches. An example problem is studied under this framework, showing the impact communication policies have on the overall agent policies and the implications for the design of agent coordination policies.
Bernstein, Daniel S.; Zilberstein, Shlomo; Immerman, Neil: The Complexity of Decentralized Control of Markov Decision Processes. Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence (UAI), Stanford, California, 2000. (IFAAMAS Influential Paper Award)
Abstract: Planning for distributed agents with partial state information is considered from a decision-theoretic perspective. We describe generalizations of both the MDP and POMDP models that allow for decentralized control. For even a small number of agents, the finite-horizon problems corresponding to both of our models are complete for nondeterministic exponential time. These complexity results illustrate a fundamental difference between centralized and decentralized control of Markov processes. In contrast to the MDP and POMDP problems, the problems we consider provably do not admit polynomial-time algorithms and most likely require doubly exponential time to solve in the worst case. We have thus provided mathematical evidence corresponding to the intuition that decentralized planning problems cannot easily be reduced to centralized problems and solved exactly using established techniques.
Xuan, Ping; Lesser, Victor R.; Zilberstein, Shlomo: Communication in Multi-Agent Markov Decision Processes. Proceedings of the 4th International Conference on Multi-Agent Systems (ICMAS), Boston, Massachusetts, 2000.
Abstract: In this paper, we formulate an agent's decision process within the framework of Markov decision processes; in particular, a multi-agent extension that includes agent communication decisions. We model communication as the way for each agent to obtain local state information from other agents by paying a certain communication cost. Thus, agents must decide not only which local action to perform, but also whether it is worthwhile to perform a communication action before deciding on the local action. We believe this provides a foundation for the formal study of coordination activities and may lead to insights into the design of agent coordination policies, and heuristic approaches in particular. An example problem is studied under this framework and its implications for coordination are discussed.
Generalized Planning
Introspective Autonomy
Building Safe AI systems
Plan and Activity Recognition
Stochastic Network Design and Optimization