Analog Computation and Dynamical Systems
Discrete state spaces and discrete-time updates are not necessarily natural features, and they may be relaxed when considering computation in natural systems. Many biological systems, neuromorphically designed devices, and some control systems are better described in this analog manner.
Together with physicist Fishman and student Ben-Hur, we developed a unified computational theory for dissipative dynamical systems, updating either continuously by ODEs or in discrete time. Prior to this work, no tool existed for analyzing continuous-time algorithms and analog VLSI systems; ODE-based algorithms could only be analyzed by means of time discretization, which can destroy the main characteristics of these systems. By substituting the number-of-steps complexity measure with a new measure that reflects the convergence time of a physical realization of the continuous flow, we made it possible to compare continuous-time algorithms with discrete ones and related the notion of complexity to the true relaxation time. As part of this line of work, we showed that while fixed points can be computed efficiently, chaotic attractors can be computed efficiently only by means of non-determinism. This inherent difference between fixed points and chaotic attractors led us to propose that, in the realm of dynamical systems, efficient deterministic computation differs from efficient non-deterministic computation: P_d ≠ NP_d.
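As a minimal sketch of the convergence-time measure (the flow and function names here are ours, for illustration only, not the definitions from the papers), one can integrate a contracting flow and report the physical time needed to reach an ε-neighborhood of the attractor, instead of counting discrete steps:

```python
import numpy as np

def convergence_time(flow, x0, attractor, eps=1e-6, dt=1e-3, t_max=100.0):
    """Integrate dx/dt = flow(x) with forward Euler and report the first
    time t at which ||x(t) - attractor|| < eps: the physical relaxation
    time, used here as the complexity measure instead of a step count."""
    x, t = np.asarray(x0, dtype=float), 0.0
    while t < t_max:
        if np.linalg.norm(x - attractor) < eps:
            return t
        x = x + dt * flow(x)   # Euler integration only simulates the flow
        t += dt
    return float('inf')

# The linear dissipative flow dx/dt = -x contracts to the fixed point 0;
# tightening eps by a factor of 10 adds only ~ln(10) to the relaxation time.
for eps in (1e-3, 1e-6, 1e-9):
    print(eps, convergence_time(lambda x: -x, x0=[1.0], attractor=0.0, eps=eps))
```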
We illustrated the theory on familiar computer science problems: a continuous algorithm for the Maximum Network Flow problem with linear time complexity, an algorithm for the MAX problem that converges in logarithmic time, and a probabilistic continuous algorithm for the Linear Programming problem that, under a Gaussian distribution over LP instances, converges in linear time. We also provided an extensive probabilistic analysis of a differential equation describing an efficient analog algorithm for linear programming.
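To convey the flavor of such algorithms, the toy flow below, a replicator-style dynamic of our own choosing and not the exact vector field from the papers, drives all mass in the simplex toward the coordinate holding the maximum, with relaxation time growing only logarithmically in the required precision:

```python
import numpy as np

def analog_max(c, dt=1e-3, eps=1e-6, t_max=1e4):
    """Replicator-style flow dx_i/dt = x_i * (c_i - <c, x>) on the simplex.
    The flow concentrates all mass on the coordinate with the largest c_i;
    an illustrative stand-in for a continuous-time MAX algorithm."""
    c = np.asarray(c, dtype=float)
    x = np.full(len(c), 1.0 / len(c))   # uniform start inside the simplex
    t = 0.0
    while x.max() < 1.0 - eps and t < t_max:
        x += dt * x * (c - c @ x)       # stays on the simplex (sum dx = 0)
        t += dt
    return int(np.argmax(x)), t         # winning index, physical relaxation time

print(analog_max([0.3, 0.9, 0.5]))      # -> (1, ...): the argmax of c
```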
We defined a model of computation with continuous state space: the analog recurrent neural network (ARNN). The surprising finding was that when analog networks assume real weights, their power encompasses and transcends that of digital computers. The computational power of this model is not tautologically strong; it is well constrained and well defined. We then showed that many dynamical systems and idealized chaotic systems that cannot be described by the Turing machine are well captured within the framework of analog neural networks. This work supports the proposition that, in parallel to the Turing machine, which describes all digital computation, the ARNN can serve as a standard model for the field of analog computation. Stated analogously to the Church-Turing thesis, we proposed: no possible abstract analog device can have more computational capabilities (up to polynomial time) than the analog recurrent network.
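A small sketch of the key mechanism, in the spirit of the ARNN construction though with helper names of our own, shows how a single real weight can store an unbounded bit string that saturated-linear updates can consult bit by bit. The string is written as a Cantor-style base-4 number whose digits are 1 or 3, so extraction never lands near a decision boundary:

```python
def sat(x):
    """Saturated-linear activation: 0 below 0, identity on [0, 1], 1 above 1."""
    return min(1.0, max(0.0, x))

def encode(bits):
    """Cantor-style base-4 encoding of a binary string into one real weight:
    w = sum_i (2*b_i + 1) / 4**i, so every base-4 digit is 1 or 3."""
    w = 0.0
    for i, b in enumerate(bits, start=1):
        w += (2 * b + 1) / 4 ** i
    return w

def decode(w, n):
    """Recover the first n bits using only saturated-linear updates:
    read b = sat(4w - 2), then shift out the leading base-4 digit."""
    out = []
    for _ in range(n):
        b = sat(4 * w - 2)            # exactly 0.0 or 1.0 by construction
        out.append(int(b))
        w = sat(4 * w - 2 * b - 1)    # drop the digit just read
    return out

bits = [1, 0, 1, 1, 0, 0, 1]
assert decode(encode(bits), len(bits)) == bits
```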
Analog neural networks have a weak robustness property with respect to noise and implementation error: in a computation lasting T steps, only the first O(T) bits in the representation of the neurons' values and of their weights can affect the result; flipping any bit beyond the first O(T) bits leaves the outcome of the T-step computation unchanged. This weak robustness property suggested interleaving learning steps and computation steps to achieve the best computational power.
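The effect can be seen in a self-contained toy, using the tent map, which is itself expressible with saturated-linear neurons, and exact rational arithmetic to isolate the precision effect: zeroing all bits beyond the first 30 leaves roughly the first 30 steps of the trajectory unchanged:

```python
from fractions import Fraction

def tent(x):
    """Tent map; each step consumes roughly one bit of the state's
    binary expansion, so precision is traded for computation time."""
    return 2 * x if x < Fraction(1, 2) else 2 * (1 - x)

def truncate(x, bits):
    """Keep only the first `bits` binary digits of x in [0, 1]."""
    return Fraction(int(x * 2 ** bits), 2 ** bits)

x_full = Fraction(3141592653589793, 2 ** 53)   # a 'generic' initial state
x_pert = truncate(x_full, 30)                  # perturb bits beyond bit 30

steps = 0
while abs(x_full - x_pert) < Fraction(1, 2) and steps < 100:
    x_full, x_pert = tent(x_full), tent(x_pert)
    steps += 1
print(steps)   # ~30: the perturbation is invisible for about 30 steps
```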
With students Roitershtein and Ben-Hur, we developed a theory of noisy versions of analog models, focusing on their strong robustness properties. The theory covers any kind of continuous or analog computational system with arbitrary state space and with any kind of transition probability function. We generalized the concept of weak ergodicity and showed that the computational power of abstract weakly ergodic systems is limited to the definite languages, a subset of the regular languages, and that the resulting computational systems are stable with respect to small perturbations. These properties follow from the asymptotic dynamical behavior of the system. Conditions for weak ergodicity were given in terms of Dobrushin's ergodicity coefficient. Our results include noisy analog neural networks as a special case but go far beyond them, covering general stochastic analog computation devices.
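A toy illustration of the mechanism, with hypothetical transition matrices of our own choosing: when every input symbol's transition matrix has Dobrushin coefficient below 1, the state distribution after a long word depends, up to a geometrically small error, only on a bounded suffix of the word, which is precisely the defining property of a definite language:

```python
import numpy as np

def dobrushin(P):
    """Dobrushin's ergodicity coefficient: half the largest L1 distance
    between two rows of the stochastic matrix P. A value < 1 means the
    chain contracts any two initial distributions toward each other."""
    return 0.5 * max(np.abs(P[i] - P[j]).sum()
                     for i in range(len(P)) for j in range(len(P)))

# Hypothetical noisy transition matrices for input symbols 'a' and 'b';
# every entry is bounded away from 0, so both coefficients are < 1.
P = {'a': np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]]),
     'b': np.array([[0.1, 0.8, 0.1], [0.1, 0.1, 0.8], [0.8, 0.1, 0.1]])}
print(dobrushin(P['a']), dobrushin(P['b']))    # 0.7 0.7

def state_dist(word, init):
    """Distribution over states after reading `word` starting from `init`."""
    d = np.asarray(init, dtype=float)
    for sym in word:
        d = d @ P[sym]
    return d

# Different prefixes, same long suffix: the final distributions agree up to
# ~0.7**len(suffix), so acceptance can depend only on a bounded suffix.
suffix = 'abba' * 10
d1 = state_dist('aaaa' + suffix, [1, 0, 0])
d2 = state_dist('bbbb' + suffix, [0, 0, 1])
print(np.abs(d1 - d2).max())   # ~1e-6: the prefix has been forgotten
```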