Agent-agent interactions: include sharing information, adapting behavior based on others' actions, and influencing each other's decision-making processes.
If the interaction is information sharing, the information is simply stored; so interactions do not always lead to a behaviour change in the agent.
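A minimal sketch of this idea (the `Agent` class, attribute names, and message content are illustrative assumptions, not from the source): the interaction only stores information in the receiver; any behavior change would be a separate, later step.

```python
# Agent-agent interaction that only stores information: receiving a
# message updates the agent's knowledge, not (yet) its behavior.
class Agent:
    def __init__(self, name):
        self.name = name
        self.knowledge = {}            # stored information from other agents

    def share(self, other, key, value):
        # agent-agent interaction: pass a piece of information along
        other.knowledge[key] = value

a, b = Agent("a"), Agent("b")
a.share(b, "exit_blocked", True)       # b stores the info...
print(b.knowledge)                     # {'exit_blocked': True}
# ...b may (or may not) later adapt its behavior based on this knowledge.
```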
Agent-environment interaction: the environment affects the agent's behavior
- static env: the agent retrieves the distance-to-exit value from the raster environment to decide which cells to move to
- dynamic env: smoke develops and traps the agents, so the agents need to change their behavior
The agent also affects the environment:
- sheep eat grass, agents build houses
Environment-environment interaction:
- fire develops, and smoke develops with it
- grass and groundwater (e.g. grass growth depends on groundwater levels)
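The evacuation example above can be sketched as follows; the grid values, the `next_cell` helper, and the smoke layer are illustrative assumptions. The agent reads a static distance-to-exit raster to pick its next cell, and a dynamic smoke layer forces it to adapt.

```python
import numpy as np

# Static environment layer: distance-to-exit per raster cell.
dist_to_exit = np.array([[3, 2, 1],
                         [4, 3, 2],
                         [5, 4, 3]])
smoke = np.zeros_like(dist_to_exit, dtype=bool)   # dynamic layer, starts empty

def next_cell(pos, dist, smoke):
    """Move to the neighbouring cell with the smallest distance-to-exit
    that is not blocked by smoke (agent-environment interaction)."""
    r, c = pos
    candidates = []
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        nr, nc = r + dr, c + dc
        if 0 <= nr < dist.shape[0] and 0 <= nc < dist.shape[1] and not smoke[nr, nc]:
            candidates.append(((nr, nc), dist[nr, nc]))
    return min(candidates, key=lambda x: x[1])[0] if candidates else pos

pos = next_cell((2, 0), dist_to_exit, smoke)   # moves toward lower distance
print(pos)                                     # (1, 0)

smoke[1, 0] = True                             # dynamic env: smoke spreads
pos2 = next_cell((2, 0), dist_to_exit, smoke)  # agent adapts, avoids smoke
print(pos2)                                    # (2, 1)
```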
Centroid initialization -> different centroid initializations can lead to different results because K-Means relies on distance calculations to assign data points to clusters and update centroids.
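A minimal NumPy implementation of Lloyd's algorithm (the data points and starting centroids are made up for illustration) shows this: the same data converges to different clusterings, with different inertia, depending on where the centroids start.

```python
import numpy as np

def kmeans(X, init, n_iter=20):
    """Plain k-means (Lloyd's algorithm) from a given centroid init."""
    centroids = init.astype(float).copy()
    for _ in range(n_iter):
        # assign each point to the nearest centroid (distance calculation)
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # update each centroid to the mean of its assigned points
        for k in range(len(centroids)):
            if np.any(labels == k):
                centroids[k] = X[labels == k].mean(axis=0)
    inertia = ((X - centroids[labels]) ** 2).sum()
    return labels, inertia

# Four points forming two natural pairs.
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 0.0], [10.0, 1.0]])

# Good init: one centroid per pair -> the natural clustering.
labels_a, inertia_a = kmeans(X, np.array([[0.0, 0.5], [10.0, 0.5]]))
# Poor init: both centroids on the left -> a worse local optimum.
labels_b, inertia_b = kmeans(X, np.array([[0.0, 0.0], [0.0, 1.0]]))

print(labels_a, inertia_a)   # [0 0 1 1] 1.0
print(labels_b, inertia_b)   # [0 1 0 1] 100.0
```

This is why libraries typically run k-means several times from different random initializations (or use k-means++) and keep the run with the lowest inertia.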
Curse of dimensionality
risk of overfitting (the model fits the training data too well and generalizes poorly)
differences in distance between observations shrink, so it is harder for the model to distinguish/cluster the observations
increased computational cost
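The distance-concentration effect can be demonstrated with a short experiment (the sample sizes and the relative-contrast measure `(max - min) / min` are illustrative choices): as dimensionality grows, the gap between the nearest and farthest neighbour shrinks relative to the distances themselves.

```python
import numpy as np

rng = np.random.default_rng(42)

def relative_contrast(dim, n=500):
    """Relative gap between nearest and farthest neighbour of a query."""
    X = rng.random((n, dim))            # n points in the unit hypercube
    q = rng.random(dim)                 # a random query point
    d = np.linalg.norm(X - q, axis=1)
    return (d.max() - d.min()) / d.min()

low = relative_contrast(2)       # 2-D: large contrast, distances informative
high = relative_contrast(1000)   # 1000-D: contrast collapses
print(low, high)
```

With distances nearly equal in high dimensions, distance-based methods like k-means and k-NN lose their ability to discriminate between observations.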
Solving the curse of dimensionality -> reduce the dimensionality (number of features)
feature selection - select a subset of the original features
feature extraction - create a new set of features by transforming the original features
Principal Component Analysis (PCA) - reduces the number of dimensions (features) while retaining most of the important information
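A minimal PCA sketch via SVD (the synthetic 3-D dataset, which really lies close to a 2-D plane, is an illustrative assumption): centre the data, take the top principal components, and check how much variance they retain.

```python
import numpy as np

rng = np.random.default_rng(0)

# 3-D data whose third axis is only low-amplitude noise.
base = rng.normal(size=(200, 2))
X = np.column_stack([base[:, 0], base[:, 1], 0.01 * rng.normal(size=200)])

Xc = X - X.mean(axis=0)                  # centre the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / (S**2).sum()          # variance ratio per component

X_reduced = Xc @ Vt[:2].T                # project onto the top 2 components
print(explained)          # first two components carry nearly all variance
print(X_reduced.shape)    # (200, 2)
```

The reduced data keeps almost all of the variance with one fewer feature, which is the sense in which PCA "retains most of the important information".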
Main difference: deep learning can automatically learn and extract features from raw data, without manual feature engineering.
SOM vs K-Means
SOM -> considers the neighbourhood: not only the Best Matching Unit (BMU) but also its neighbouring neurons are updated during training
- shows smooth transitions between clusters
- particularly useful for visualizing high-dimensional data while preserving topological relationships
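The neighbourhood update can be sketched with a tiny 1-D SOM (the grid size, learning-rate and neighbourhood schedules are illustrative assumptions): unlike k-means, updating the BMU also drags its grid neighbours, which is what produces the smooth, topology-preserving ordering.

```python
import numpy as np

rng = np.random.default_rng(0)

data = rng.random((500, 1))            # 1-D inputs in [0, 1]
weights = rng.random((10, 1))          # 10 units on a 1-D grid
grid = np.arange(10)

for t, x in enumerate(data):
    lr = 0.5 * (1 - t / len(data))              # decaying learning rate
    sigma = 3.0 * (1 - t / len(data)) + 0.5     # shrinking neighbourhood
    bmu = np.argmin(np.abs(weights - x).sum(axis=1))
    # Gaussian neighbourhood: BMU moves most, grid neighbours move less.
    h = np.exp(-((grid - bmu) ** 2) / (2 * sigma**2))
    weights += lr * h[:, None] * (x - weights)

print(weights.ravel())   # unit weights; typically ordered along the grid
```

Setting `h` to 1 for the BMU and 0 elsewhere would recover an online k-means update; the Gaussian neighbourhood is what distinguishes the SOM.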