MODEL OF INTELLIGENT AGENT IN THE ONE-DIMENSIONAL DISCRETE WORLD

Authors

  • Vyacheslav Mykolaiovych Osaulenko, NTUU I. Sikorsky KPI

Keywords:

intelligent agent, neural networks, Kolmogorov complexity, entropy

Abstract

Modern computing is mostly performed on sequential architectures with a separate processor and memory. Living organisms use parallel computing, which is believed to give them the ability to solve complex tasks such as pattern recognition and, more generally, to exhibit intelligent behavior. There is a need for more biologically grounded approaches to information processing. The aim of this study is to propose a new way of measuring complexity, using the example of a one-dimensional binary vector. This helps to better formalize the agent's purpose and to impose upper bounds on its ability to recognize and predict patterns in the environment. As a compromise between entropy and Kolmogorov complexity, it is proposed to use the joint entropy of n-grams, which avoids the deficiencies of the former and, unlike the latter, can be computed exactly. It is shown that by varying this measure one can move from a completely random world to a fully organized one, with a gradual change in complexity. When the agent's goal is taken to be making the best predictions, the essence of the new measure manifests itself: it indicates the resources needed to achieve that goal in a specific environment. A brief overview of unresolved problems in modeling agent intelligence, and of possible ways to solve them, is also presented. Overall, a new complexity measure is proposed, the joint entropy of n-grams, which can be computed exactly from input data. From it, a measure is derived for an agent that predicts patterns in the data, which can be used to assess the required complexity of the agent model. The disadvantage of the new measure is its high computational complexity for high-dimensional data; further research is needed to simplify it and to broaden its application.
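
The abstract does not spell out the formula, but the measure it describes, the joint entropy of n-grams, is naturally read as the Shannon entropy H = -sum over w of p(w) * log2 p(w), taken over the empirical distribution of length-n windows w of the binary vector. A minimal Python sketch under that assumption (the function name ngram_entropy and the toy sequences are illustrative, not from the paper):

import math
import random
from collections import Counter

def ngram_entropy(bits, n):
    """Shannon entropy (in bits) of the empirical distribution of
    length-n windows (n-grams) in a binary sequence."""
    windows = [tuple(bits[i:i + n]) for i in range(len(bits) - n + 1)]
    counts = Counter(windows)
    total = len(windows)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A fully organized world: an alternating pattern produces only two
# distinct 3-grams, (0,1,0) and (1,0,1), so the joint entropy is 1 bit.
ordered = [0, 1] * 500

# A completely random world: for i.i.d. fair bits the n-gram entropy
# approaches n bits as the sequence grows.
random.seed(0)
noisy = [random.randint(0, 1) for _ in range(1000)]

print(ngram_entropy(ordered, 3))  # 1.0
print(ngram_entropy(noisy, 3))    # close to 3.0

Sweeping between these extremes (for example, flipping each bit of the periodic pattern with some probability) changes the measure gradually, matching the transition from a fully organized to a completely random world described in the abstract. The enumeration of all 2^n possible n-grams also makes visible the high computational cost for high-dimensional data that the abstract notes.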

Author Biography

Vyacheslav Mykolaiovych Osaulenko, NTUU I. Sikorsky KPI

Master's degree; postgraduate student at NTUU KPI

Published

2018-12-21

How to Cite

[1] V. M. Osaulenko, “MODEL OF INTELLIGENT AGENT IN THE ONE-DIMENSIONAL DISCRETE WORLD”, ІТКІ, vol. 43, no. 3, pp. 30–36, Dec. 2018.

Section

Information technology and coding theory
