“Some of the most celebrated results in decision theory address, to some extent, these challenges. They consist in showing what conditions on preferences over “real world options” suffice for the existence of a pair of utility and probability functions relative to which the agent can be represented as maximising expected utility.”
The agent's subjective beliefs are representable by a coherent probability function over states of the world
The agent's choices maximize expected utility given those subjective beliefs and utilities (see the sketch below)
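As a minimal sketch of what such a representation looks like (in Python, with purely hypothetical states, probabilities, and utilities that are not from the source):

    # Expected-utility representation, illustrated with made-up numbers.
    # States, beliefs, utilities and acts below are hypothetical examples.

    probability = {"rain": 0.3, "sun": 0.7}   # subjective degrees of belief

    utility = {"wet": -10, "dry_with_umbrella": 2, "dry_hands_free": 5}

    # Each act maps every state of the world to an outcome.
    acts = {
        "take umbrella":  {"rain": "dry_with_umbrella", "sun": "dry_with_umbrella"},
        "leave umbrella": {"rain": "wet",                "sun": "dry_hands_free"},
    }

    def expected_utility(act):
        return sum(probability[s] * utility[act[s]] for s in probability)

    # The agent is "represented as maximising expected utility" if its actual
    # choice coincides with the act that this computation ranks highest.
    best = max(acts, key=lambda name: expected_utility(acts[name]))
    print({name: expected_utility(acts[name]) for name in acts}, "->", best)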
References:
Steele, Katie and H. Orri Stefánsson, "Decision Theory", The Stanford Encyclopedia of Philosophy (Winter 2020 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2020/entries/decision-theory/>.
Bounded Rationality
“Herbert Simon introduced the term ‘bounded rationality’ (Simon 1957b: 198; see also Klaes & Sent 2005) as a shorthand for his brief against neoclassical economics and his call to replace the perfect rationality assumptions of homo economicus with a conception of rationality tailored to cognitively limited agents.”
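One concrete way Simon cashed out rationality for cognitively limited agents is satisficing: search until an option is good enough rather than evaluating every option to maximise. A minimal sketch, with hypothetical options and an invented aspiration level:

    # Satisficing search: stop at the first option whose payoff meets an
    # aspiration level, instead of inspecting every option to find the maximum.
    # Option names and numbers are made up for illustration.

    options = [("apartment A", 6), ("apartment B", 8), ("apartment C", 9)]
    aspiration_level = 7

    def satisfice(options, aspiration_level):
        evaluated = 0
        for name, payoff in options:        # options inspected in the order encountered
            evaluated += 1
            if payoff >= aspiration_level:  # "good enough" -> stop searching
                return name, evaluated
        # If nothing clears the bar, fall back to the best option seen.
        best = max(options, key=lambda o: o[1])
        return best[0], evaluated

    choice, cost = satisfice(options, aspiration_level)
    print(choice, "found after evaluating", cost, "options")   # apartment B, 2

The point of the sketch is the stopping rule: the satisficer pays a search cost of 2 evaluations and forgoes the maximum (apartment C), which is exactly the trade-off a cognitively limited agent accepts.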
References:
Wheeler, Gregory, "Bounded Rationality", The Stanford Encyclopedia of Philosophy (Fall 2020 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/fall2020/entries/bounded-rationality/>.
Formal Learning Theory
“Formal learning theory is the mathematical embodiment of a normative epistemology. It deals with the question of how an agent should use observations about her environment to arrive at correct and informative conclusions. Philosophers such as Putnam, Glymour and Kelly have developed learning theory as a normative framework for scientific reasoning and inductive inference.”
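A toy illustration of the learning-theoretic idea of converging to a correct and informative conclusion in the limit; the raven scenario and the conjecture rule below are my own illustrative assumptions, not from the source:

    # Identification in the limit, toy version.
    # The learner sees ravens one at a time and must converge on a correct
    # answer to "are all ravens black?". It conjectures the informative
    # universal hypothesis until refuted, then switches permanently, so after
    # finitely many errors it stabilises on the truth.

    def learner(observations_so_far):
        if all(colour == "black" for colour in observations_so_far):
            return "all ravens are black"
        return "not all ravens are black"

    # Example data stream (hypothetical): a counterexample eventually appears.
    stream = ["black", "black", "white", "black"]
    for t in range(1, len(stream) + 1):
        print(t, learner(stream[:t]))
    # Conjectures: "all ...", "all ...", "not all ...", "not all ..."
    # Once the conjecture changes it never changes back on this stream, so the
    # learner has converged to the correct hypothesis.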
References:
Schulte, Oliver, "Formal Learning Theory", The Stanford Encyclopedia of Philosophy (Summer 2022 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/sum2022/entries/learning-formal/>.
Machine Learning
“A huge part of AI’s growth in applications has been made possible through invention of new algorithms in the subfield of machine learning. Machine learning is concerned with building systems that improve their performance on a task when given examples of ideal performance on the task, or improve their performance with repeated experience on the task.”
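A minimal sketch of the first sense of this definition (improving from examples of ideal performance), using a nearest-neighbour rule; the method and the data points are illustrative assumptions, not something the entry prescribes:

    # Supervised learning in miniature: the "examples of ideal performance" are
    # labelled points, and performance on new inputs improves as more examples
    # are supplied. Data and labels below are invented for illustration.

    def nearest_neighbour(train, x):
        # Predict the label of the closest training example (1-NN).
        closest = min(train, key=lambda pair: abs(pair[0] - x))
        return closest[1]

    train = [(1.0, "small"), (2.0, "small"), (9.0, "large")]
    print(nearest_neighbour(train, 8.0))    # "large"

    # With more examples the decision boundary gets finer:
    train += [(5.0, "medium"), (6.0, "medium")]
    print(nearest_neighbour(train, 5.5))    # "medium"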
References:
Bringsjord, Selmer and Naveen Sundar Govindarajulu, "Artificial Intelligence", The Stanford Encyclopedia of Philosophy (Fall 2022 Edition), Edward N. Zalta & Uri Nodelman (eds.), URL = <https://plato.stanford.edu/archives/fall2022/entries/artificial-intelligence/>.
“Reinforcement Learning: Here a machine is set loose in an environment where it constantly acts and perceives (similar to the Russell/Hutter view above) and only occasionally receives feedback on its behavior in the form of rewards or punishments. The machine has to learn to behave rationally from this feedback. One use of reinforcement learning has been in building agents to play computer games. The objective here is to build agents that map sensory data from the game at every time instant to an action that would help win in the game or maximize a human player’s enjoyment of the game. In most games, we know how well we are playing only at the end of the game or only at infrequent intervals throughout the game (e.g., a chess game that we feel we are winning could quickly turn against us at the end). The field of Reinforcement Learning tries to tackle this problem through a variety of methods. Though a bit dated, Sutton and Barto (1998) provide a comprehensive introduction to the field.”
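A minimal tabular Q-learning sketch of the quoted setup: reward arrives only at the end of an episode (here, a short corridor the agent must walk to a goal state), and the agent must learn how to act from that sparse feedback. The environment, parameters, and reward scheme are illustrative assumptions, not from the source:

    import random

    # Corridor environment: states 0..4, actions 0 = left, 1 = right.
    # Reward is 1 only on reaching state 4, 0 everywhere else, so feedback
    # arrives only at the end of the episode. All numbers are illustrative.
    N_STATES, GOAL = 5, 4

    def step(state, action):
        next_state = min(max(state + (1 if action == 1 else -1), 0), GOAL)
        reward = 1.0 if next_state == GOAL else 0.0
        return next_state, reward, next_state == GOAL

    Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action]
    alpha, gamma, epsilon = 0.5, 0.9, 0.1

    for episode in range(200):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the current estimates, occasionally explore.
            if random.random() < epsilon:
                action = random.randint(0, 1)
            else:
                action = 0 if Q[state][0] > Q[state][1] else 1
            next_state, reward, done = step(state, action)
            # Q-learning update: bootstrap from the best action in the next state.
            target = reward + (0.0 if done else gamma * max(Q[next_state]))
            Q[state][action] += alpha * (target - Q[state][action])
            state = next_state

    print([max(range(2), key=lambda a: Q[s][a]) for s in range(N_STATES)])
    # Greedy policy after training: "right" (1) in every non-goal state, even
    # though no individual step short of the goal ever produced a reward.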
References:
Bringsjord, Selmer and Naveen Sundar Govindarajulu, "Artificial Intelligence", The Stanford Encyclopedia of Philosophy (Fall 2022 Edition), Edward N. Zalta & Uri Nodelman (eds.), URL = <https://plato.stanford.edu/archives/fall2022/entries/artificial-intelligence/>.
The dichotomy of reinforcement learning (made explicit in the sketch below):
The decision environment: the cycle of states, actions, and rewards
The iterative learning algorithm, which reinforces decisions toward the optimum based on the observed history of trials and errors
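That dichotomy can be written down as two interfaces: an environment that only maps (state, action) to (next state, reward), and a learner that only improves its decisions from observed trials and errors. A structural sketch; the names and signatures are my own, not from the cited papers:

    from typing import Protocol, Tuple

    class Environment(Protocol):
        # The decision environment: the state/action/reward cycle.
        def reset(self) -> int: ...
        def step(self, action: int) -> Tuple[int, float, bool]: ...

    class Learner(Protocol):
        # The iterative algorithm: picks actions and reinforces good decisions
        # using the history of trials and errors.
        def act(self, state: int) -> int: ...
        def update(self, state: int, action: int, reward: float, next_state: int) -> None: ...

    def run_episode(env: Environment, learner: Learner) -> float:
        state, total, done = env.reset(), 0.0, False
        while not done:
            action = learner.act(state)
            next_state, reward, done = env.step(action)
            learner.update(state, action, reward, next_state)  # trial-and-error feedback
            state, total = next_state, total + reward
        return total

Keeping the two halves behind separate interfaces is what lets the same learning algorithm be reused across environments, including the multi-agent settings surveyed in the reference below.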
Zhang, Kaiqing, Zhuoran Yang, and Tamer Başar. "Multi-agent reinforcement learning: A selective overview of theories and algorithms." Handbook of Reinforcement Learning and Control (2021): 321-384. https://arxiv.org/abs/1911.10635
Prof. Jakob Foerster and the Foerster Lab for AI Research at the University of Oxford:
https://www.jakobfoerster.com/
https://foersterlab.com/
Tang, Pingzhong. "Reinforcement mechanism design." IJCAI. 2017. https://www.ijcai.org/proceedings/2017/739