Learning

Learning refers to the ability of agents to acquire knowledge, modify their behavior, or improve their performance over time through experience or feedback. Learning mechanisms can include simple rule-based learning, reinforcement learning, or more complex cognitive processes such as neural networks. Learning allows agents to adapt and optimize their strategies based on their interactions and experiences within the model.

Introduction

Do individuals or agents (but also organizations and institutions) change their adaptive traits over time as a consequence of their experience? If so, how?

Explanation

How do agents learn?

Reinforcement learning: agents repeat a certain task and are penalized when they do something wrong, then adjust their behavior to improve their score.
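The penalty-and-improve loop described above can be sketched as a minimal action-value learner. This is an illustrative example, not a model from the text: the function name, the exploration rate of 0.1, and the reward function are all assumptions.

```python
import random

def train(actions, reward_fn, episodes=100, rate=0.1, seed=0):
    """Minimal action-value reinforcement learning (illustrative sketch).

    The agent repeats a task, receives a negative reward (penalty) for
    wrong actions, and gradually prefers actions that score better.
    """
    rng = random.Random(seed)
    values = {a: 0.0 for a in actions}  # running score per action
    for _ in range(episodes):
        # Explore occasionally; otherwise exploit the best-scoring action.
        if rng.random() < 0.1:
            a = rng.choice(actions)
        else:
            a = max(values, key=values.get)
        # Nudge the stored score toward the observed reward.
        values[a] += rate * (reward_fn(a) - values[a])
    return values
```

For example, with a hypothetical task where "right" is rewarded and "left" is penalized, `train(['left', 'right'], lambda a: 1.0 if a == 'right' else -1.0)` ends with a much higher score for "right": the agent has learned from its penalties.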

This concept refers to agents that change how they produce adaptive behavior over time as a consequence of their experience. Learning does not refer to how adaptation depends on state variables that change over time; instead, it refers to how agents change their decision-making methods (the algorithms, or perhaps only the parameters of those algorithms) as a consequence of their experience. While memory can be essential to learning, not all adaptive behaviors that use memory also use learning. Few ABMs so far have included learning, even though a great deal of research and theory addresses how humans, organizations, and other organisms learn. Describe:

  • Which adaptive behaviors of agents are modeled in a way that includes learning.
  • How learning is represented, especially the extent to which the representation is based on existing learning theory.
  • The rationale for including (or, if relevant, excluding) learning in the adaptive behavior, and the rationale for how learning is modeled.

Examples

  • The adaptive behavior of land managers—deciding what land use to select—is modeled using an approach that includes learning. This submodel (fully explained below at “Land use selection submodel”) is based on “case-based reasoning” theory (Aamodt and Plaza 1994). This approach assumes decisions are based on memory of previous decisions in similar cases and their outcomes. The land manager agents use their own memories of previous decisions, but if their memory contains no similar cases they can use the memory of neighboring land managers. As a land manager executes more land use decisions it therefore develops a base of information that affects future decisions.
  • The reinforcement learning mechanism implemented in the model is an adaptation of Bush and Mosteller’s model of reinforcement learning (Bush and Mosteller 1955). At the end of a generation (learning-generation constant), each agent considers the times without shortage (n-non-shortage-ticks variable) and compares it with the aspiration-threshold (Th). Taking as a reference the most frequent action she undertook in the previous generation, the agent updates her strategy, i.e. her probability of cooperating (Pi), following a simple reinforcement learning rule: perform that action more often if it led to steady satisfaction (i.e. fulfilled the aspiration), otherwise try the alternative action more often. The strategy update takes place in three steps: …
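The core of the rule in the second example, "reinforce the most frequent action if it satisfied the aspiration, otherwise shift toward the alternative", can be sketched as a single probability update. This is a hedged sketch of a Bush–Mosteller-style rule, not the model's actual three-step procedure (which is elided above); the function name and the learning rate are assumptions.

```python
def update_cooperation_prob(p_coop, most_frequent_action, satisfied, rate=0.1):
    """Bush–Mosteller-style update of the probability to cooperate (sketch).

    p_coop: current probability of cooperating (Pi).
    most_frequent_action: 'cooperate' or 'defect' in the past generation.
    satisfied: True if times without shortage met the aspiration threshold.
    rate: hypothetical learning-rate parameter (not from the source).
    """
    # Reinforce cooperation when cooperating was frequent and satisfying,
    # or when defecting was frequent but did not satisfy the aspiration;
    # otherwise shift probability toward defection.
    if (most_frequent_action == 'cooperate') == satisfied:
        return p_coop + rate * (1.0 - p_coop)  # move toward cooperating
    return p_coop - rate * p_coop              # move toward defecting
```

For instance, an agent who mostly cooperated and met her aspiration raises Pi, while one who mostly cooperated without meeting it lowers Pi and so tries defection more often.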

Outgoing relations