I am creating a NetLogo model of a zoo. I need my zoo guests (multiple turtles) to follow a circular pathway that starts at the entrance of the zoo every 24 ticks (1 tick is 1 hour in my model). The path has to go around the cages that hold the animals, because I cannot have my guests enter the animal areas. The path doesn't have to be fast or the shortest; I just need the turtles not to stray from it. I would prefer not to use GIS to create the pathway.
My world runs from -30 to 30 in both directions and does not wrap around.
The whereabouts of the cages are described below:
patches-own [ tigerhabitat?
flamingohabitat?
monkeyhabitat?
hippohabitat?
giraffehabitat?
]
to create-habitats
  ask patches with [ pxcor < -12 and pycor > 23 ] [
    set tigerhabitat? true
    set pcolor green
  ]
  ask patches with [ pxcor > 20 and pycor > 20 ] [
    set hippohabitat? true
    set pcolor blue
  ]
  ask patches with [ pxcor > 18 and pycor < 15 and -1 < pycor ] [
    set flamingohabitat? true
    set pcolor 96
  ]
  ask patches with [ pxcor > -10 and pxcor < 10 and pycor < 10 and -10 < pycor ] [
    set monkeyhabitat? true
    set pcolor green
  ]
  ask patches with [ pxcor < -12 and pycor < -20 ] [
    set giraffehabitat? true
    set pcolor 67
  ]
end
Paula, from your comment I think I understand a little better, thanks. One simple way to control where turtles can move is to use logical operators to exclude patches from the set they "consider" as they walk along. For a basic (non-path, yet) version of what you want, you could tell turtles that they may only move onto patches that are not cages. You could set up a patch variable that explicitly says whether a patch is caged or not, but in your example above all non-cage patches are black, so you can use that instead: tell turtles that they should only step onto a patch if it is black. For example, you could add the procedures below to your code as above:
to setup
  ca
  create-habitats
  crt 10 [
    setxy -25 0   ; start all guests at the entrance
  ]
  reset-ticks     ; reset-ticks last, once the world is fully set up
end

to go
  exclude-cage-walk
  tick
end

to exclude-cage-walk
  ask turtles [
    rt random 30 - 15   ; random wiggle
    ; consider only nearby non-cage (black) patches
    let target one-of patches in-cone 1.5 180 with [ pcolor = black ]
    if target != nobody [
      face target
      move-to target
    ]
  ]
end
You can see that before moving, each turtle checks whether the patch it is about to move-to is black; if no black patch is available, the turtle does not move. Of course, you would have to modify this to suit your needs and have the turtles walk a one-directional circuit, but it is a simple way to constrain turtle movement.
Related
I'm working on the theoretical framework for my own simulation environment. I want to simulate evolutionary algorithms on populations, but I don't know how to handle conflicting actions between multiple individuals.
The simulation has discrete time steps and takes place on a board (or tile grid) with a random population of different individuals. Each individual has some inputs, internal state, and a list of possible actions to take each step. For example, an individual can read its position on the grid (input) and move one tile in a certain direction (action). Now, let's say I have two individuals A and B. Both perform a certain action during the same simulation step which would result in both individuals ending up on the same tile. This, however, is forbidden by the rules of the environment.
In more abstract terms: my simulation is in a valid state S1. Due to independent actions taken by multiple individuals, the next state S2 (after one simulation step) would be an invalid state.
How does one resolve these conflicts / collisions so I never end up in an invalid state?
I want the simulation to be replayable so the behavior should be deterministic.
Another question is fairness. Let's say I resolve conflicts by letting whoever comes first pass. Because, in theory, all actions happen at the same time (discrete time steps), "whoever comes first" isn't a matter of time but of data layout. Individuals that are processed earlier have an advantage simply because they happen to sit in favorable locations in the internal data structures (i.e. at a lower index in the array).
Is there a way to guarantee fairness? If not, how can I reduce unfairness?
I know these are very broad questions but since I haven't worked out all the constraints and rules of the simulation I wanted to get an overview of what's even possible, or perhaps common practice in these systems. I'm happy about any pointers for further research.
The question is overwhelmingly broad, but here is my answer for the following case:
Agents move on a square (grid) board with cyclic boundary conditions
Possible moves are: a) stay where you are, b) move to one of the 8 adjacent positions
Each possible move is assigned a probability
Conflicts will happen for every cell targeted by two agents, since we assume no two agents can occupy the same cell (exclusion). We solve a conflict by re-rolling the dice until we obtain no conflict.
The general idea:
1) Roll the dice for every agent i --> cell targeted by agent i
2) Are any two agents targeting the same cell? If yes: for every pair of conflicting agents, re-roll the dice for BOTH agents
3) Did that solve the problem? If not, go back to 2)
4) Move the agents to their new positions and go back to 1)
Notes:
a) Conflicts are detected when an agent is missing from the proposed board because it has been "crushed" by another agent (a consequence of the relative positions in the agents' list).
b) Here, I assume that re-rolling the dice for BOTH agents is a "fair" treatment, since no arbitrary decision has to be taken.
I provide a Python program below. No fancy graphics; it runs in a terminal.
The default parameters are:
Board_size = 4
Nb_of_agents = 8 (50% occupation)
If you want to see how it scales with problem size, set Verbose=False, otherwise you'll be flooded with output. Note: -1 means an empty cell.
EXAMPLES OF OUTPUT:
Note: I used pretty high occupancies (50% and 25%) for examples 1 and 2. Much lower occupancies will result in no conflicts most of the time.
##################################################################
EXAMPLE 1:
Verbose=True
Board = 4 x 4
Nb. of agents: 8 (occupation 50%)
==============================================================
Turn: 0
Old board:
[[-1. 7. 3. 0.]
[ 6. -1. 4. 2.]
[-1. -1. 5. -1.]
[-1. 1. -1. -1.]]
Proposed new board:
[[ 1. -1. -1. -1.]
[-1. 4. -1. -1.]
[-1. 6. -1. 2.]
[-1. 7. 5. -1.]]
# of conflicts to solve: 2
Conflicts to solve: [agent_a, agent_b, targeted cell]:
[0, 1, array([0, 0])]
[3, 5, array([2, 3])]
Proposed new board:
[[-1. -1. 1. 3.]
[-1. 4. -1. 5.]
[-1. 6. -1. 2.]
[-1. 7. -1. 0.]]
No conflicts
<<< OUTPUT >>>
Old board:
[[-1. 7. 3. 0.]
[ 6. -1. 4. 2.]
[-1. -1. 5. -1.]
[-1. 1. -1. -1.]]
Definitive new board:
[[-1. -1. 1. 3.]
[-1. 4. -1. 5.]
[-1. 6. -1. 2.]
[-1. 7. -1. 0.]]
==============================================================
Turn: 1
Old board:
[[-1. -1. 1. 3.]
[-1. 4. -1. 5.]
[-1. 6. -1. 2.]
[-1. 7. -1. 0.]]
Proposed new board:
[[ 3. -1. -1. -1.]
[ 5. -1. 4. -1.]
[ 7. -1. -1. -1.]
[ 6. 1. -1. 2.]]
# of conflicts to solve: 1
Conflicts to solve: [agent_a, agent_b, targeted cell]:
[0, 6, array([0, 3])]
Proposed new board:
[[ 3. -1. -1. -1.]
[ 5. -1. 4. -1.]
[ 7. -1. -1. -1.]
[-1. 6. -1. 2.]]
# of conflicts to solve: 2
Conflicts to solve: [agent_a, agent_b, targeted cell]:
[0, 7, array([0, 2])]
[1, 6, array([1, 3])]
Proposed new board:
[[ 3. 1. -1. -1.]
[ 5. -1. 4. -1.]
[ 0. 7. -1. -1.]
[ 6. -1. -1. 2.]]
No conflicts
<<< OUTPUT >>>
Old board:
[[-1. -1. 1. 3.]
[-1. 4. -1. 5.]
[-1. 6. -1. 2.]
[-1. 7. -1. 0.]]
Definitive new board:
[[ 3. 1. -1. -1.]
[ 5. -1. 4. -1.]
[ 0. 7. -1. -1.]
[ 6. -1. -1. 2.]]
==============================================================
##################################################################
EXAMPLE 2:
Verbose=False
Board = 200 x 200
Nb. of agents: 10000 (occupation 25%)
==============================================================
Turn: 0
# of conflicts to solve: 994
# of conflicts to solve: 347
# of conflicts to solve: 137
# of conflicts to solve: 63
# of conflicts to solve: 24
# of conflicts to solve: 10
# of conflicts to solve: 6
# of conflicts to solve: 4
# of conflicts to solve: 2
No conflicts
==============================================================
Turn: 1
# of conflicts to solve: 1002
# of conflicts to solve: 379
# of conflicts to solve: 150
# of conflicts to solve: 62
# of conflicts to solve: 27
# of conflicts to solve: 9
# of conflicts to solve: 2
No conflicts
==============================================================
The program (in Python):
#!/usr/bin/env python
# coding: utf-8
import numpy as np

np.random.seed(1)  # will reproduce the examples

# Verbose: if True: show the boards (ok for small boards)
Verbose = True
# max nb of turns
MaxTurns = 2

Board_size = 4
Nb_of_cells = Board_size**2
Nb_of_agents = 8  # should be < Board_size**2

agent_health = np.ones(Nb_of_agents)  # Example 1: all agents move (see function choose_move)
# agent_health = np.random.rand(Nb_of_agents)   # With this: the probability of moving is given by health
# agent_health = 0.8*np.ones(Nb_of_agents)      # With this: 80% of the time they move, 20% they stay in place

possible_moves = np.array([[0, 0],
                           [-1, -1], [-1, 0], [-1, +1],
                           [0, -1],           [0, +1],
                           [+1, -1], [+1, 0], [+1, +1]])
Nb_of_possible_moves = len(possible_moves)

def choose_move(agent, health):
    # Each agent chooses a move at random among the possible moves,
    # with a mobility proportional to health.
    prob = np.zeros(Nb_of_possible_moves)
    prob[0] = 1 - health         # low health --> low mobility (high chance to stay)
    prob[1:9] = (1 - prob[0])/8  # the 8 actual moves share the rest equally
    return int(np.random.choice(Nb_of_possible_moves, p=prob))

def identify_conflicts_to_solve(missing_agents, Nb_of_agents, Old_X, Old_Y):
    # 1) Identify conflicts to solve (uses the global arrays move / possible_moves):
    # target_A: cells targeted by the overwritten agents,
    # target_B: cells targeted by all the other agents.
    target_A = np.array([[a,
                          (Old_X[a] + possible_moves[move[a]][0]) % Board_size,
                          (Old_Y[a] + possible_moves[move[a]][1]) % Board_size]
                         for a in missing_agents])
    target_B = np.array([[a,
                          (Old_X[a] + possible_moves[move[a]][0]) % Board_size,
                          (Old_Y[a] + possible_moves[move[a]][1]) % Board_size]
                         for a in range(Nb_of_agents) if a not in missing_agents])
    conflicts_to_solve = []
    for m in range(len(target_A)):
        for opponent in range(len(target_B)):
            if all(target_A[m, 1:3] == target_B[opponent, 1:3]):  # they target the same cell
                conflicts_to_solve.append([target_A[m, 0], target_B[opponent, 0], target_A[m, 1:3]])
    return conflicts_to_solve

# Fill the board with -1 (-1 meaning: empty cell)
Old_Board = -np.ones(Nb_of_cells)
# Choose a cell on the board for each agent:
# position = index of the occupied cell
Old_indices = np.random.choice(Nb_of_cells, size=Nb_of_agents, replace=False)
# We populate the board
for i in range(Nb_of_agents):
    Old_Board[Old_indices[i]] = i
New_Board = Old_Board

# Coordinates: we assume a cyclic board
Old_X = np.array([Old_indices[i] % Board_size for i in range(len(Old_indices))])   # X position of agent i
Old_Y = np.array([Old_indices[i] // Board_size for i in range(len(Old_indices))])  # Y position of agent i

# Moves chosen by the agents for the current turn
move = np.zeros(Nb_of_agents, dtype=int)

print('==============================================================')
for turn in range(MaxTurns):
    print("Turn: ", turn)
    if Verbose:
        print('Old board:')
        print(New_Board.reshape(Board_size, Board_size))
    Nb_of_occupied_cells_before_the_move = len(Old_Board[Old_Board > -1])
    Legal_move = False
    while not Legal_move:
        for i in range(Nb_of_agents):
            move[i] = choose_move(agent=i, health=agent_health[i])
        conflicts_to_solve = None
        while conflicts_to_solve != []:
            # New coordinates (with cyclic boundary conditions):
            New_X = np.array([(Old_X[i] + possible_moves[move[i]][0]) % Board_size for i in range(Nb_of_agents)])
            New_Y = np.array([(Old_Y[i] + possible_moves[move[i]][1]) % Board_size for i in range(Nb_of_agents)])
            # New board
            New_indices = New_Y*Board_size + New_X
            New_Board = -np.ones(Nb_of_cells)  # fill the board with -1 (-1 meaning: empty cell)
            for i in range(Nb_of_agents):      # populate the new board
                New_Board[New_indices[i]] = i
            # Look for missing agents: an agent is missing if it has been "overwritten"
            # by another agent, indicating a conflict over a particular cell
            missing_agents = [agent for agent in range(Nb_of_agents) if agent not in New_Board]
            # 1) identify conflicts to solve:
            conflicts_to_solve = identify_conflicts_to_solve(missing_agents, Nb_of_agents, Old_X, Old_Y)
            if Verbose:
                print('Proposed new board:')
                print(New_Board.reshape(Board_size, Board_size))
            if len(conflicts_to_solve) > 0:
                print("# of conflicts to solve: ", len(conflicts_to_solve))
                if Verbose:
                    print('Conflicts to solve: [agent_a, agent_b, targeted cell]: ')
                    for c in conflicts_to_solve:
                        print(c)
            else:
                print("No conflicts")
            # 2) Solve conflicts
            # The way we treat conflicting agents is "fair" since we re-roll
            # the dice for all of them, without making arbitrary decisions
            for c in conflicts_to_solve:
                move[c[0]] = choose_move(c[0], agent_health[c[0]])  # re-choose a move for "a"
                move[c[1]] = choose_move(c[1], agent_health[c[1]])  # re-choose a move for "b"
        Nb_of_occupied_cells_after_the_move = len(New_Board[New_Board > -1])
        Legal_move = Nb_of_occupied_cells_before_the_move == Nb_of_occupied_cells_after_the_move
        if not Legal_move:
            # In principle this should never happen, but better to check than be sorry...
            print("Problem: Illegal move")
            raise SystemExit  # we stop there
    if Verbose:
        print("<<< OUTPUT >>>")
        print("Old board:")
        print(Old_Board.reshape(Board_size, Board_size))
        print()
        print("Definitive new board:")
        print(New_Board.reshape(Board_size, Board_size))
    print('==============================================================')
    Old_X = New_X
    Old_Y = New_Y
    Old_indices = New_indices
    Old_Board = New_Board
Due to the "independent actions taken by multiple individuals" I suppose there is no way to avoid potential collisions, and hence you need some mechanism for resolving them.
A fair version of your "whoever comes first" approach could involve shuffling the individuals randomly at the beginning of each time step, i.e. choosing a new, random processing order for your individuals in each time step.
If you fix the random seed, the simulation results will still be deterministic.
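For illustration, a minimal Python sketch of that shuffled-order scheme, with my own assumptions filled in: individuals are objects with a position attribute and a hypothetical propose_move() method, occupied is a set of taken tiles, and a move is simply rejected if its target tile is already taken.

import random

rng = random.Random(42)  # fixed seed: replays stay deterministic

def step(individuals, occupied):
    order = list(individuals)
    rng.shuffle(order)  # fresh random processing order every time step
    for ind in order:
        target = ind.propose_move()  # propose_move() is a stand-in for your action selection
        if target not in occupied:   # first mover (in shuffled order) wins
            occupied.discard(ind.position)
            occupied.add(target)
            ind.position = target
        # otherwise the move is rejected and the individual stays put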
If the individuals acquire some type of score / fitness during the simulation, this could also be used to resolve conflicts, e.g. a conflict is always won by whoever has the highest fitness (you would then need an additional rule for ties).
Or choose a random winner with winning probability proportional to fitness: if individuals 1 and 2 have fitness f1 and f2, then the probability of 1 winning would be f1/(f1+f2) and the probability of 2 winning would be f2/(f1+f2). Ties (f1 = f2) would then be resolved automatically.
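That last rule is a one-liner; a minimal Python sketch, assuming non-negative fitness values that are not both zero (ind1 and ind2 are hypothetical individual objects with a fitness attribute):

import random

rng = random.Random(7)  # seeded for reproducible replays

def resolve_conflict(ind1, ind2):
    # ind1 wins with probability f1 / (f1 + f2)
    f1, f2 = ind1.fitness, ind2.fitness
    return ind1 if rng.random() < f1 / (f1 + f2) else ind2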
I guess those fitness-based rules could be called fair, as long as
Every individual has the same starting fitness (or starting fitness is also set randomly)
Every individual has the same chance of acquiring a high fitness, e.g. all starting positions have the same outlook, or starting positions are set randomly
Hello generous people,
I am writing a model of farmers' decision making based on the previous period's crop production. Initially, the land parcel (small or large) makes a farmer use either ground or surface water. In later ticks the farmer decides between groundwater and surface water based on crop production. A high level of crop production gives a farmer a memory above some number X, for instance, and if memory is higher than X the farmer will keep following the strategy he used to obtain the higher crop. What I cannot work out is how a farmer's memory should be built up so that it can be used as an input in the same loop / code block that I have written for the initial yield. Experts on board, please extend your help.
globals [ surface-water groundwater maximum-yield water-demand water-used ]
turtles-own [ yield memory land groundwater-use surfacewater-use ]

to setup
  clear-all
  set surface-water 10
  set groundwater 20
  set maximum-yield 60
  set water-demand 17
  create-turtles 5 [
    set yield 0
    set memory 0
    set land random 5 + 3
  ]
  reset-ticks
end

to go
  ask turtles with [ land >= 4 ] [
    ifelse random 2 = 0 [
      set groundwater-use groundwater - water-demand
      set yield 0.8 * maximum-yield
      set memory 100 * yield / maximum-yield  ; "% of yield": yield as a percentage of the maximum
    ]
    [
      set groundwater-use 0.5 * water-demand
      set surfacewater-use 0.5 * water-demand
      set yield 0.85 * maximum-yield
      set memory 100 * yield / maximum-yield
    ]
  ]
  ask turtles with [ land < 4 ] [
    set groundwater-use 0.5 * water-demand
    set surfacewater-use 0.5 * water-demand
    set yield 0.85 * maximum-yield
    set memory 100 * yield / maximum-yield
  ]
  tick
end
I would like to use a clustering algorithm to find a clustering for a big digraph, and I would also like to remove noise from this graph. So I was thinking of the DBSCAN approach, because I saw that we can give the algorithm a distance function for determining the distance/similarity between two different nodes.
My question is: how can I define a distance function which increases the similarity between two nodes that are close in terms of hops, and decreases it when a node is isolated?
I don't have coordinates or node attributes, so I cannot use those. I only have the topology of the graph.
The expected output would be something like this: (figure omitted)
I'm really concerned about the complexity of the solution. How can I approximate a clustering with linear complexity?
What is wrong with the obvious?
Distance(a,b) = length of shortest path, or infinity if there is none.
You probably should take directions into account, so a0 to a3 is 1.
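If you want to hand this metric to a clustering library, a minimal sketch of the distance in Python with networkx (my choice of library, not something the question requires; any graph library with shortest paths would do):

import math
import networkx as nx

def hop_distance(G, a, b):
    # Length of the shortest directed path, or infinity if there is none
    try:
        return nx.shortest_path_length(G, source=a, target=b)
    except nx.NetworkXNoPath:
        return math.inf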
The distance metric suggested by @Anony-Mousse is a good and natural one, but I question the use of dbscan. Using the proposed
distance = length of shortest path, or infinity if there is none
any two nodes that are directly linked would be at distance 1. If you used dbscan with epsilon < 1, all points would be noise points, so you will want epsilon > 1. From your example, it looks like if there is even one point at distance 1 you want them in the same component, so it looks like you want minNumPts = 2. This will give the result that if two points are connected by a path of any length, they will be in the same cluster.
It looks to me like what you are after has nothing to do with density and clustering; rather, I think what you want is connected components. If two nodes are connected by a path of any length, they are in the same component. Finding this via dbscan or some other clustering method may be possible, but that is probably the wrong way to think about it. You have a graph and a graph-theoretic problem. You should probably use methods from graph theory.
I will illustrate using R and igraph. There are other tools if you don't care for these. Most of the work is simply setting up your problem.
library(igraph)
to = c("a1", "a2", "a3", "a0", "b1", "b2", "b3", "b0")
from = c("a0", "a1", "a2", "a3", "b0", "b1", "b2", "b3")
EL = data.frame(from, to)
Vert = c("a0", "a1", "a2", "a3", "b0", "b1", "b2", "b3", "c0", "d0")
Vdf = data.frame(Vert)
g = graph_from_data_frame(d = EL, vertices=Vdf)
LO = matrix(c(1.2,1,1,1.2, 2.2,2,2,2.2, 0, 3, 4,3,2,1,4,3,2,1,4,4),
ncol=2)
plot(g, layout=LO)
Now we can use a one-liner to get everything that we need about the components.
Comp = components(g, mode="weak")
Comp
$membership
a0 a1 a2 a3 b0 b1 b2 b3 c0 d0
1 1 1 1 2 2 2 2 3 4
$csize
[1] 4 4 1 1
$no
[1] 4
This is telling us the component membership of the nodes, the number of nodes per component, and the number of components. Since you wanted to call the single-node components "noise" in the style of dbscan, you can see that components 3 and 4 have one node each. They are the noise. The others are "real" components.
To show how to use this, and to come to closure with a pretty picture, I will plot the graph, coloring the components and using light gray for the "noise".
ColorMap = rainbow(Comp$no)
ColorMap[Comp$csize == 1] = "lightgray"
plot(g, layout=LO, vertex.color=ColorMap[Comp$membership])
I encourage you to think about your graph problem as a graph.
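If you would rather stay in Python, the same idea takes a few lines with networkx; a sketch using the node names from the example above (weakly connected components ignore edge direction, matching mode="weak"):

import networkx as nx

# Directed edges from the example above
edges = [("a0", "a1"), ("a1", "a2"), ("a2", "a3"), ("a3", "a0"),
         ("b0", "b1"), ("b1", "b2"), ("b2", "b3"), ("b3", "b0")]
G = nx.DiGraph(edges)
G.add_nodes_from(["c0", "d0"])  # the isolated nodes

components = list(nx.weakly_connected_components(G))
# Singleton components play the role of dbscan's noise points
clusters = [c for c in components if len(c) > 1]
noise = [c for c in components if len(c) == 1]
print(clusters)  # the two 4-node components (a0..a3 and b0..b3)
print(noise)     # [{'c0'}, {'d0'}]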
I'm trying to generate a small-world type of network (https://en.wikipedia.org/wiki/Small-world_network) in my NetLogo model, where the network is created while the model itself runs; people get to know one another as the model goes along.
I know how to generate a small-world network in NetLogo during setup. But how do you generate a small-world network on the go?
My code for generating a small world during setup is as follows.
undirected-link-breed [ interlinks interlink ]  ; links between different breeds
undirected-link-breed [ intralinks intralink ]  ; links between same breeds

to set_sw_network
  let max-who 1 + max [who] of turtles
  let sorted sort [who] of turtles
  ; ring lattice: each turtle links to its nearest neighbors by who number
  foreach sorted [ x ->
    ask turtle x [
      let i 1
      repeat same_degree + dif_degree [
        ifelse breed = [breed] of turtle ((x + i) mod max-who)
          [ create-intralink-with turtle ((x + i) mod max-who) ]
          [ create-interlink-with turtle ((x + i) mod max-who) ]
        set i i + 1
      ]
    ]
  ]
  ; random rewiring
  repeat round (rewire_prop * number_of_members) [  ; rewire_prop is a slider 0 - 1 with steps of 0.1
    ask one-of turtles [
      ask one-of my-links [ die ]
      create-intralink-with one-of other turtles with [ link-with myself = nobody ]
    ]
  ]
end
But I am not interested in creating a small world at the beginning; I'm interested in creating a network with small-world properties throughout the run. Currently, I do have this on-the-go create-link feature in my model, but I'm not sure how to tweak it so that it results in a small-world type of network:
to select_interaction
  ; omitted code: sorts pre-existing links and interacts with them
  if count my-links < my_degree [
    repeat number_of_interactions_per_meeting [
      let a select_turtle  ; reports a turtle with no link to self, or nobody
      if a != nobody [
        ifelse [breed] of a = breed
        [
          create-intralink-with a [
            set color cyan
            interact
          ]
        ]
        [
          create-interlink-with a [
            set color orange + 2
            interact
          ]
        ]
      ]
    ]
  ]
end
At the moment, my strategy is to give every turtle a my_degree variable drawn from the degree distribution of the given social network. But the question remains: if this is a good strategy at all, what is the correct degree distribution for a small-world network?
Pseudo-code for this strategy:
to setup-turtles
If preferential attachment: set my_degree random-poisson 'mean'
If small world: set my_degree ????? 'mean'
end
Any insight would be wonderful.
I am using the Kalman filter from the OpenCV library for its estimator capabilities.
My program does not enforce real-time recursion. My question is: when the transition matrix has elements that depend on the time step, do I have to update the transition matrix every time I use it (in predict or correct) to reflect the time passed since the last recursion?
Edit: The reason I ask is that the filter works well when I leave the transition matrix untouched, but not when I update it with the time steps.
Many descriptions of the Kalman filter write the transition matrix as F, as if it were a constant. As you have discovered, in some cases you have to update it (along with Q) on each update, such as with a variable timestep.
Consider a simple system of position and velocity, with
F = [ 1  1 ]        state = [ x ]
    [ 0  1 ]                [ v ]
So at each step the new x = x + v (position updates according to velocity) and the new v = v (no change in velocity).
This is fine, as long as your velocity is in units of length / timestep. If your timestep varies, or if you express your velocity in a more typical unit like length / s, you will need to write F like this:
F = [ 1  dt ]       state = [ x ]
    [ 0   1 ]               [ v ]
This means you must compute a new value for F whenever your timestep changes (or on every step, if there is no set schedule).
Keep in mind that you are also adding in the process noise Q on each update, so it likely needs to be scaled by the timestep as well.
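As a concrete illustration, a minimal sketch with OpenCV's Python bindings for a 1-D position/velocity state. The white-noise-acceleration form of Q below is one common choice and an assumption on my part, not something your filter must use:

import numpy as np
import cv2

kf = cv2.KalmanFilter(2, 1)  # state = [x, v], measurement = [x]
kf.measurementMatrix = np.array([[1., 0.]], dtype=np.float32)
kf.measurementNoiseCov = np.array([[0.1]], dtype=np.float32)
kf.errorCovPost = np.eye(2, dtype=np.float32)  # initial state uncertainty

def predict_with_dt(kf, dt, q=0.01):
    # Rebuild F (and a dt-scaled Q) before every predict, since dt varies
    kf.transitionMatrix = np.array([[1., dt],
                                    [0., 1.]], dtype=np.float32)
    # White-noise-acceleration process noise (assumed model), scaled by dt
    kf.processNoiseCov = q * np.array([[dt**4 / 4, dt**3 / 2],
                                       [dt**3 / 2, dt**2]], dtype=np.float32)
    return kf.predict()

# Usage with irregular timesteps between measurements
for dt, z in [(0.1, 1.0), (0.25, 1.3), (0.05, 1.4)]:
    predict_with_dt(kf, dt)
    kf.correct(np.array([[z]], dtype=np.float32))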