I'm working on the theoretical framework for my own simulation environment. I want to simulate evolutionary algorithms on populations, but I don't know how to handle conflicting actions between multiple individuals.
The simulation has discrete time steps and takes place on a board (or tile grid) with a random population of different individuals. Each individual has some inputs, internal state and a list of possible actions to take each step. For example, an individual can read its position on the grid (input) and move one tile in a certain direction (action). Now, let's say I have two individuals A and B. Both perform a certain action during the same simulation step which would result in both individuals ending up on the same tile. This, however, is forbidden by the rules of the environment.
In more abstract terms: my simulation is in a valid state S1. Due to independent actions taken by multiple individuals, the next state S2 (after one simulation step) would be an invalid state.
How does one resolve these conflicts / collisions so I never end up in an invalid state?
I want the simulation to be replayable so the behavior should be deterministic.
Another question is fairness. Let's say I resolve conflicts by "whoever comes first passes". Because, in theory, all actions happen at the same time (discrete time steps), "whoever comes first" isn't a measure of time but of data layout. Individuals that are processed earlier now have an advantage simply because they happen to sit in favorable locations in the internal data structures (i.e. at a lower index in the array).
Is there a way to guarantee fairness? If not, how can I reduce unfairness?
I know these are very broad questions, but since I haven't worked out all the constraints and rules of the simulation, I wanted to get an overview of what's even possible, or perhaps common practice in these systems. I would appreciate any pointers for further research.
The question is overwhelmingly broad, but here is my answer for the following case:
Agents move on a square (grid) board with cyclic boundary conditions
Possible moves are: a) stay where you are, b) move to one of the 8 adjacent cells
Each possible move is assigned a probability
A conflict occurs for every cell targeted by two agents, since we assume no two agents can occupy the same cell (exclusion). We solve a conflict by re-rolling the dice until no conflict remains.
The general idea:
1) Roll the dice for every agent i --> cell targeted by agent i
2) Any two agents targeting the same cell? If yes: for every pair of conflicting agents, re-roll the dice for BOTH agents
3) Did that solve the problem? If not, go back to 2)
4) Move the agents to their new positions and go back to 1)
Notes:
a) Conflicts are detected when an agent is missing from the proposed board because it has been "crushed" by another agent (due to the relative positions in the agents' list).
b) I assume that re-rolling the dice for both agents is a "fair" treatment, since no arbitrary decision has to be taken.
I provide a Python program below. No fancy graphics; it runs in a terminal.
The default parameters are:
Board_size = 4
Nb_of_agents = 8 (50% occupation)
If you want to see how it scales with problem size, set Verbose=False,
otherwise you'll be flooded with output. Note: -1 means an empty cell.
EXAMPLES OF OUTPUT:
Note: I used pretty high occupancies (50% and 25%) for
examples 1 and 2. Much lower occupancies will result in
no conflicts most of the time.
##################################################################
EXAMPLE 1:
Verbose=True
Board = 4 x 4
Nb. of agents: 8 (occupation 50%)
==============================================================
Turn: 0
Old board:
[[-1. 7. 3. 0.]
[ 6. -1. 4. 2.]
[-1. -1. 5. -1.]
[-1. 1. -1. -1.]]
Proposed new board:
[[ 1. -1. -1. -1.]
[-1. 4. -1. -1.]
[-1. 6. -1. 2.]
[-1. 7. 5. -1.]]
# of conflicts to solve: 2
Conflicts to solve: [agent_a, agent_b, targeted cell]:
[0, 1, array([0, 0])]
[3, 5, array([2, 3])]
Proposed new board:
[[-1. -1. 1. 3.]
[-1. 4. -1. 5.]
[-1. 6. -1. 2.]
[-1. 7. -1. 0.]]
No conflicts
<<< OUTPUT >>>
Old board:
[[-1. 7. 3. 0.]
[ 6. -1. 4. 2.]
[-1. -1. 5. -1.]
[-1. 1. -1. -1.]]
Definitive new board:
[[-1. -1. 1. 3.]
[-1. 4. -1. 5.]
[-1. 6. -1. 2.]
[-1. 7. -1. 0.]]
==============================================================
Turn: 1
Old board:
[[-1. -1. 1. 3.]
[-1. 4. -1. 5.]
[-1. 6. -1. 2.]
[-1. 7. -1. 0.]]
Proposed new board:
[[ 3. -1. -1. -1.]
[ 5. -1. 4. -1.]
[ 7. -1. -1. -1.]
[ 6. 1. -1. 2.]]
# of conflicts to solve: 1
Conflicts to solve: [agent_a, agent_b, targeted cell]:
[0, 6, array([0, 3])]
Proposed new board:
[[ 3. -1. -1. -1.]
[ 5. -1. 4. -1.]
[ 7. -1. -1. -1.]
[-1. 6. -1. 2.]]
# of conflicts to solve: 2
Conflicts to solve: [agent_a, agent_b, targeted cell]:
[0, 7, array([0, 2])]
[1, 6, array([1, 3])]
Proposed new board:
[[ 3. 1. -1. -1.]
[ 5. -1. 4. -1.]
[ 0. 7. -1. -1.]
[ 6. -1. -1. 2.]]
No conflicts
<<< OUTPUT >>>
Old board:
[[-1. -1. 1. 3.]
[-1. 4. -1. 5.]
[-1. 6. -1. 2.]
[-1. 7. -1. 0.]]
Definitive new board:
[[ 3. 1. -1. -1.]
[ 5. -1. 4. -1.]
[ 0. 7. -1. -1.]
[ 6. -1. -1. 2.]]
==============================================================
##################################################################
EXAMPLE 2:
Verbose=False
Board = 200 x 200
Nb. of agents: 10000 (occupation 25%)
==============================================================
Turn: 0
# of conflicts to solve: 994
# of conflicts to solve: 347
# of conflicts to solve: 137
# of conflicts to solve: 63
# of conflicts to solve: 24
# of conflicts to solve: 10
# of conflicts to solve: 6
# of conflicts to solve: 4
# of conflicts to solve: 2
No conflicts
==============================================================
Turn: 1
# of conflicts to solve: 1002
# of conflicts to solve: 379
# of conflicts to solve: 150
# of conflicts to solve: 62
# of conflicts to solve: 27
# of conflicts to solve: 9
# of conflicts to solve: 2
No conflicts
==============================================================
The program (in Python):
#!/usr/bin/env python
# coding: utf-8
import numpy as np

np.random.seed(1)  # fixed seed: reproduces the examples above

# Verbose: if True, show the boards (fine for small boards)
Verbose = True
# Max number of turns
MaxTurns = 2

Board_size = 4
Nb_of_cells = Board_size**2
Nb_of_agents = 8  # should be < Board_size**2

agent_health = np.ones(Nb_of_agents)           # Example 1: all agents move (see choose_move)
# agent_health = np.random.rand(Nb_of_agents)  # With this: the probability of moving is given by health
# agent_health = 0.8*np.ones(Nb_of_agents)     # With this: 80% of the time they move, 20% they stay in place

possible_moves = np.array([[ 0,  0],
                           [-1, -1], [-1,  0], [-1, +1],
                           [ 0, -1],           [ 0, +1],
                           [+1, -1], [+1,  0], [+1, +1]])
Nb_of_possible_moves = len(possible_moves)

def choose_move(agent, health):
    # Each agent randomly chooses a move among the possible moves,
    # with a mobility proportional to its health.
    prob = np.empty(Nb_of_possible_moves)
    prob[0] = 1 - health              # low health --> low mobility
    prob[1:9] = (1 - prob[0]) / 8
    return np.random.choice(Nb_of_possible_moves, 1, p=prob)[0]

def identify_conflicts_to_solve(missing_agents, Nb_of_agents, Old_X, Old_Y):
    # 1) Identify the conflicts to solve: compare the cells targeted by the
    # "crushed" agents (target_A) with the cells targeted by everyone else (target_B).
    target_A = np.array([[a,
                          (Old_X[a] + possible_moves[move[a]][0]) % Board_size,
                          (Old_Y[a] + possible_moves[move[a]][1]) % Board_size]
                         for a in missing_agents])
    target_B = np.array([[a,
                          (Old_X[a] + possible_moves[move[a]][0]) % Board_size,
                          (Old_Y[a] + possible_moves[move[a]][1]) % Board_size]
                         for a in range(Nb_of_agents) if a not in missing_agents])
    conflicts_to_solve = []
    for m in range(len(target_A)):
        for opponent in range(len(target_B)):
            if all(target_A[m, 1:3] == target_B[opponent, 1:3]):  # they target the same cell
                conflicts_to_solve.append([target_A[m, 0], target_B[opponent, 0], target_A[m, 1:3]])
    return conflicts_to_solve

# Fill the board with -1 (-1 meaning: empty cell)
Old_Board = -np.ones(Nb_of_cells)
# Choose a cell on the board for each agent (position = index of the occupied cell)
Old_indices = np.random.choice(Nb_of_cells, size=Nb_of_agents, replace=False)
# Populate the board
for i in range(Nb_of_agents):
    Old_Board[Old_indices[i]] = i
New_Board = Old_Board

# Coordinates: we assume a cyclic board
Old_X = np.array([Old_indices[i] % Board_size for i in range(len(Old_indices))])   # X position of agent i
Old_Y = np.array([Old_indices[i] // Board_size for i in range(len(Old_indices))])  # Y position of agent i

# Moves chosen for the current turn
move = np.zeros(Nb_of_agents, dtype=int)

print('==============================================================')
for turn in range(MaxTurns):
    print("Turn: ", turn)
    if Verbose:
        print('Old board:')
        print(New_Board.reshape(Board_size, Board_size))
    Nb_of_occupied_cells_before_the_move = len(Old_Board[Old_Board > -1])
    Legal_move = False
    while not Legal_move:
        for i in range(Nb_of_agents):
            move[i] = choose_move(agent=i, health=agent_health[i])
        conflicts_to_solve = -1
        while conflicts_to_solve != []:
            # New coordinates (with cyclic boundary conditions):
            New_X = np.array([(Old_X[i] + possible_moves[move[i]][0]) % Board_size for i in range(Nb_of_agents)])
            New_Y = np.array([(Old_Y[i] + possible_moves[move[i]][1]) % Board_size for i in range(Nb_of_agents)])
            # New board
            New_indices = New_Y*Board_size + New_X
            New_Board = -np.ones(Nb_of_cells)   # fill the board with -1 (empty cells)
            for i in range(Nb_of_agents):       # populate the new board
                New_Board[New_indices[i]] = i
            # Look for missing agents: an agent is missing if it has been
            # "overwritten" by another agent, indicating a conflict over a cell
            missing_agents = [agent for agent in range(Nb_of_agents) if agent not in New_Board]
            # 1) Identify the conflicts to solve:
            conflicts_to_solve = identify_conflicts_to_solve(missing_agents, Nb_of_agents, Old_X, Old_Y)
            if Verbose:
                print('Proposed new board:')
                print(New_Board.reshape(Board_size, Board_size))
            if len(conflicts_to_solve) > 0:
                print("# of conflicts to solve: ", len(conflicts_to_solve))
                if Verbose:
                    print('Conflicts to solve: [agent_a, agent_b, targeted cell]: ')
                    for c in conflicts_to_solve:
                        print(c)
            else:
                print("No conflicts")
            # 2) Solve the conflicts. Re-rolling the dice for BOTH conflicting
            # agents is "fair": no arbitrary decision is taken.
            for c in conflicts_to_solve:
                move[c[0]] = choose_move(c[0], agent_health[c[0]])  # re-choose a move for "a"
                move[c[1]] = choose_move(c[1], agent_health[c[1]])  # re-choose a move for "b"
        Nb_of_occupied_cells_after_the_move = len(New_Board[New_Board > -1])
        Legal_move = Nb_of_occupied_cells_before_the_move == Nb_of_occupied_cells_after_the_move
        if not Legal_move:
            # In principle this should never happen, but better to check than be sorry...
            raise SystemExit("Problem: Illegal move")
    if Verbose:
        print("<<< OUTPUT >>>")
        print("Old board:")
        print(Old_Board.reshape(Board_size, Board_size))
        print()
        print("Definitive new board:")
        print(New_Board.reshape(Board_size, Board_size))
    print('==============================================================')
    Old_X = New_X
    Old_Y = New_Y
    Old_indices = New_indices
    Old_Board = New_Board
Due to the "independent actions taken by multiple individuals", I suppose there is no way to avoid potential collisions, so you need some mechanism for resolving them.
A fair version of your "whoever comes first" approach could involve shuffling the individuals randomly at the beginning of each time step, i.e. choosing a new, random processing order for your individuals in each time step.
If you fix the random seed the simulation results would still be deterministic.
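A minimal sketch of that idea (the helper name and the `seed` parameter are my own invention). Seeding a dedicated RNG from the global seed and the turn number makes the shuffle reproducible across runs while being independent of how the individuals are laid out in memory:

```python
import random

def processing_order(n_individuals, turn, seed=42):
    # One RNG seeded from (seed, turn) gives a reproducible shuffle
    # per time step, independent of array layout.
    rng = random.Random(f"{seed}-{turn}")
    order = list(range(n_individuals))
    rng.shuffle(order)
    return order  # process the individuals in this order

# Two runs with the same seed replay identically:
assert processing_order(8, turn=0) == processing_order(8, turn=0)
```

Each turn gets its own ordering, so no index is systematically favored over a long simulation.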
If the individuals acquire some type of score / fitness during the simulation, this could also be used to resolve conflicts, e.g. a conflict is always won by whoever has the highest fitness (you would then need an additional rule for ties).
Or choose a random winner with winning probability proportional to fitness: If individuals 1 and 2 have fitness f1 and f2, then the probability of 1 winning would be f1/(f1+f2) and the probability of 2 winning would be f2/(f1+f2). Ties (f1 = f2) would also be resolved automatically.
I guess those fitness-based rules could be called fair, as long as
Every individual has the same starting fitness (or starting fitness is also set randomly)
Every individual has the same chance of acquiring a high fitness, e.g. all starting positions have the same outlook, or starting positions are set randomly
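The fitness-proportional rule for a two-way conflict can be sketched like this (hypothetical helper, assuming positive fitness values; pass a seeded `random.Random` to keep the simulation replayable):

```python
import random

def resolve_conflict(f1, f2, rng):
    # Individual 1 wins with probability f1/(f1+f2), otherwise 2 wins.
    # A tie (f1 == f2) automatically becomes a fair 50/50 coin flip.
    return 1 if rng.random() < f1 / (f1 + f2) else 2

# With f1=3 and f2=1, individual 1 should win about 75% of the time:
rng = random.Random(0)
wins = sum(resolve_conflict(3.0, 1.0, rng) == 1 for _ in range(100_000))
```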
Here is my code:
set.seed(1)
# Boruta on the HouseVotes84 data from mlbench
library(mlbench)  # has the HouseVotes84 data
library(h2o)      # has random forest
# spin up h2o
myh20 <- h2o.init(nthreads = -1)
# read in data, throw some away
data(HouseVotes84)
hvo <- na.omit(HouseVotes84)
# move from R to h2o
mydata <- as.h2o(x = hvo,
                 destination_frame = "mydata")
# RF columns (input vs. output)
idxy <- 1
idxx <- 2:ncol(hvo)
# split data
splits <- h2o.splitFrame(mydata,
                         c(0.8, 0.1))
train <- h2o.assign(splits[[1]], key = "train")
valid <- h2o.assign(splits[[2]], key = "valid")
# make random forest
my_imp.rf <- h2o.randomForest(y = idxy, x = idxx,
                              training_frame = train,
                              validation_frame = valid,
                              model_id = "my_imp.rf",
                              ntrees = 200)
# find importance
my_varimp <- h2o.varimp(my_imp.rf)
my_varimp
The output that I am getting is "variable importance".
The classic measures are "mean decrease in accuracy" and "mean decrease in gini coefficient".
My results are:
> my_varimp
Variable Importances:
variable relative_importance scaled_importance percentage
1 V4 3255.193604 1.000000 0.410574
2 V5 1131.646484 0.347643 0.142733
3 V3 921.106567 0.282965 0.116178
4 V12 759.443176 0.233302 0.095788
5 V14 492.264954 0.151224 0.062089
6 V8 342.811554 0.105312 0.043238
7 V11 205.392654 0.063097 0.025906
8 V9 191.110046 0.058709 0.024105
9 V7 169.117676 0.051953 0.021331
10 V15 135.097076 0.041502 0.017040
11 V13 114.906586 0.035299 0.014493
12 V2 51.939777 0.015956 0.006551
13 V10 46.716656 0.014351 0.005892
14 V6 44.336708 0.013620 0.005592
15 V16 34.779987 0.010684 0.004387
16 V1 32.528778 0.009993 0.004103
From this, the relative importance of "Vote #4", aka V4, is ~3255.2.
Questions:
What units is that in?
How is it derived?
I tried looking in the documentation but am not finding the answer. I tried the help documentation, and I used Flow to look at the parameters to see if anything there indicated it. In none of them do I find "gini" or "decrease accuracy". Where should I look?
The answer is in the docs.
[ In the left pane, click on "Algorithms", then "Supervised", then "DRF". The FAQ section answers this question. ]
For convenience, the answer is also copied and pasted here:
"How is variable importance calculated for DRF? Variable importance is determined by calculating the relative influence of each variable: whether that variable was selected during splitting in the tree building process and how much the squared error (over all trees) improved as a result."
I have trained a cascade classifier following a tutorial. When training has finished, stage files and a cascade file are created. I have knowledge of the algorithm, but I don't know the meaning of the information inside these files.
<internalNodes>
0 -1 13569 2.8149113059043884e-003</internalNodes>
<leafValues>
9.8837211728096008e-002 -8.5897433757781982e-001</leafValues></_>
and
<rects>
<_>
0 0 3 1 -1.</_>
<_>
1 0 1 1 3.</_></rects>
<tilted>0</tilted></_>
What are the meanings of these values?
Let's start with the first block:
<internalNodes>
0 -1 13569 2.8149113059043884e-003</internalNodes>
<leafValues>
9.8837211728096008e-002 -8.5897433757781982e-001</leafValues></_>
It describes one of the weak classifiers. In this case it is stump-based, i.e. a tree with max depth equal to 1. The 0 and -1 are the indexes of the left and right children of the root node. An index less than or equal to zero indicates a leaf node; note that to get the leaf index you need to negate it. The next number (13569) is the index of a feature in the <features> section, and the last number (2.8149113059043884e-003) is the node threshold. The <leafValues> section holds the weights of the leaves of the cascade tree.
For example, to apply this weak classifier you compute the value of feature 13569 and compare it with the threshold (2.8149113059043884e-003). If it is less than the threshold, you add the first leaf value (9.8837211728096008e-002) to the stage sum; otherwise you add the second leaf value (-8.5897433757781982e-001).
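That evaluation rule can be sketched as follows (the `feature_value` input is assumed to come from evaluating Haar feature 13569 on the integral image, which is not shown here):

```python
def eval_stump(feature_value, threshold, left_leaf, right_leaf):
    # A depth-1 tree from the cascade XML: below-threshold contributes
    # the first leaf value to the stage sum, otherwise the second.
    return left_leaf if feature_value < threshold else right_leaf

# Values taken from the XML snippet above; 0.001 is a made-up feature value:
contribution = eval_stump(feature_value=0.001,
                          threshold=2.8149113059043884e-03,
                          left_leaf=9.8837211728096008e-02,
                          right_leaf=-8.5897433757781982e-01)
```

The stage then sums these contributions over all of its weak classifiers and compares the total with the stage threshold.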
The next section describes one of the Haar features:
<rects>
<_>
0 0 3 1 -1.</_>
<_>
1 0 1 1 3.</_></rects>
<tilted>0</tilted></_>
Each inner element describes the parameters of a rectangle (x, y, width, height) followed by the weight of that rectangle. The feature may also be tilted, which is indicated by the <tilted>0</tilted> flag.
I hope this helps.
I do regression analysis with multiple features. The number of features is 20-23. For now, I check each feature's correlation with the output variable. Some features show a correlation coefficient close to 1 or -1 (highly correlated). Other features show a correlation coefficient near 0. My question is: do I have to remove a feature if its correlation coefficient is close to 0? Or can I keep it, the only downside being that it will have little or no noticeable effect on the regression model? Or is removing such features obligatory?
In short
A high (absolute) correlation between a feature and the output implies that this feature should be valuable as a predictor
A lack of correlation between a feature and the output implies nothing
More details
Pair-wise correlation only shows you how one variable relates to another in isolation; it says nothing about how well a feature works in combination with the others. So if your model is not trivial, you should not drop variables just because they are not correlated with the output. I will give you an example which should show you why.
Consider the following sample: we have two features (X, Y) and one binary output value Z (1 or 0):
X Y Z
1 1 1
1 2 0
1 3 0
2 1 0
2 2 1
2 3 0
3 1 0
3 2 0
3 3 1
Let us compute the correlations:
CORREL(X, Z) = 0
CORREL(Y, Z) = 0
So... should we drop all the features? One of them? If we drop either variable, our problem becomes completely impossible to model! The "magic" lies in the fact that there is actually a "hidden" relation in the data:
|X-Y|
0
1
2
1
0
1
2
1
0
And
CORREL(|X-Y|, Z) = -0.8528028654
Now this is a good predictor!
You can actually get a perfect regressor (interpolator) through
Z = 1 - sign(|X-Y|)
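You can verify these numbers with a short NumPy snippet (my own reconstruction of the table above):

```python
import numpy as np

# The 9-row sample from above; Z is 1 exactly on the diagonal (X == Y)
X = np.repeat([1, 2, 3], 3)   # 1 1 1 2 2 2 3 3 3
Y = np.tile([1, 2, 3], 3)     # 1 2 3 1 2 3 1 2 3
Z = (X == Y).astype(float)    # 1 0 0 0 1 0 0 0 1

c_x = np.corrcoef(X, Z)[0, 1]               # ~0: X alone predicts nothing
c_y = np.corrcoef(Y, Z)[0, 1]               # ~0: neither does Y
c_d = np.corrcoef(np.abs(X - Y), Z)[0, 1]   # ~ -0.853: the hidden feature

# The perfect regressor from above: Z = 1 - sign(|X - Y|)
perfect = np.array_equal(Z, 1 - np.sign(np.abs(X - Y)))
```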
I am going to do some work on transition-based dependency parsing using LIBLINEAR, but I am confused about how to use it, as follows:
I set up 3 feature templates for the training & testing processes of transition-based dependency parsing:
1. the word in the top of the stack
2. the word in the front of the queue
3. information from the current tree formed with the steps
And the feature defined in LIBLINEAR is:
FeatureNode(int index, double value)
Some examples like:
LABEL ATTR1 ATTR2 ATTR3 ATTR4 ATTR5
----- ----- ----- ----- ----- -----
1 0 0.1 0.2 0 0
2 0 0.1 0.3 -1.2 0
1 0.4 0 0 0 0
2 0 0.1 0 1.4 0.5
3 -0.1 -0.2 0.1 1.1 0.1
But I want to define my features like this (for the sentence "I love you") at some stage:
feature template 1: the word is 'love'
feature template 2: the word is 'you'
feature template 3: the information is - the left son of 'love' is 'I'
Does it mean I must define features with LIBLINEAR like: -------FORMAT 1
(indexes in vocabulary: 0-I, 1-love, 2-you)
LABEL ATTR1(template1) ATTR2(template2) ATTR3(template3)
----- ----- ----- -----
SHIFT 1 2 0
(or LEFT-arc,
RIGHT-arc)
But I have gone through some statements by others, and it seems I have to define the features in binary form, so I have to define a word vector like:
('I', 'love', 'you'); when 'you' appears, for example, the vector will be (0, 0, 1)
So the features in LIBLINEAR may be: -------FORMAT 2
LABEL ATTR1('I') ATTR2('love') ATTR3('you')
----- ----- ----- -----
SHIFT 0 1 0 ->denoting the feature template 1
(or LEFT-arc,
RIGHT-arc)
SHIFT 0 0 1 ->denoting the feature template 2
(or LEFT-arc,
RIGHT-arc)
SHIFT 1 0 0 ->denoting the feature template 3
(or LEFT-arc,
RIGHT-arc)
Which of FORMAT 1 and FORMAT 2 is correct?
Is there something I have misunderstood?
Basically you have a feature vector of the form:
LABEL RESULT_OF_FEATURE_TEMPLATE_1 RESULT_OF_FEATURE_TEMPLATE_2 RESULT_OF_FEATURE_TEMPLATE_3
Liblinear or LibSVM expects you to translate it into an integer representation:
1 1:1 2:1 3:1
Nowadays, depending on the language you use, there are lots of packages/libraries which will translate the string vector into libsvm format automatically, without you having to know the details.
However, if for whatever reason you want to do it yourself, the easiest approach is to maintain two mappings: one for the labels ('shift' -> 1, 'left-arc' -> 2, 'right-arc' -> 3, 'reduce' -> 4) and one for your feature-template results ('f1=I' -> 1, 'f2=love' -> 2, 'f3=you' -> 3). Basically, every time your algorithm applies a feature template, you check whether the result is already in the mapping, and if not you add it with a new index.
Remember that Liblinear and Libsvm expect the feature indexes as a sorted list in ascending order.
During processing you would first apply your feature templates to the current state of your stacks and then translate the strings to the libsvm/liblinear integer representation and sort the indexes in ascending order.
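A minimal sketch of such a translation step (the function and mapping names are hypothetical, not part of Liblinear's API):

```python
def to_liblinear(label, feature_strings, label_map, feature_map):
    # Assign the next free integer on first sight of a label or feature
    # string, then emit one training line in liblinear/libsvm format
    # with the feature indexes sorted in ascending order.
    y = label_map.setdefault(label, len(label_map) + 1)
    idxs = sorted(feature_map.setdefault(f, len(feature_map) + 1)
                  for f in feature_strings)
    return f"{y} " + " ".join(f"{i}:1" for i in idxs)

label_map, feature_map = {}, {}
line = to_liblinear("shift", ["f1=love", "f2=you", "f3=left-child(love)=I"],
                    label_map, feature_map)
# -> "1 1:1 2:1 3:1"
```

The mappings grow as parsing proceeds, so unseen feature strings automatically get fresh indexes.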
I thought of implementing a simple check digit using a weighted sum of the digits modulo 10. In addition to serving as a check digit, I want to "abuse" it to detect which of two pools (for example Article Numbers and Customer Numbers) a number belongs to.
According to Wikipedia it is recommended to use 1, 3, 7 and 9 as weight, so for example I could choose:
Article Numbers: Weights 1, 3, 7, 1, 3, 7, ...
Customer Numbers: Weights 7, 9, 1, 7, 9, 1, ...
Number 1234 as an Article Number (1*1+2*3+3*7+4*1 mod 10 = 2): 12342
Number 1234 as a Customer Number (1*7+2*9+3*1+4*7 mod 10 = 6): 12346
The problem is that this sometimes gives the same check digit for both weight settings:
Number 1098 as an Article Number (1*1+0*3+9*7+8*1 mod 10 = 2): 10982
Number 1098 as a Customer Number (1*7+0*9+9*1+8*7 mod 10 = 2): 10982
Can I choose the weights of the number pools in a way that for any given original number it is ensured that the check digit is never the same for both pools?
I doubt it's possible, although I'd have to run an exhaustive check to be sure.
Have you thought about reserving even check digits for Article Numbers and odd check digits for Customer Numbers, or something like that?
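If you want to run that exhaustive check yourself, a brute-force sketch over all 4-digit strings (helper names are my own) could look like this:

```python
from itertools import product

def check_digit(digits, weights):
    # Weighted digit sum modulo 10, as in the question
    return sum(d * w for d, w in zip(digits, weights)) % 10

def collisions(weights_a, weights_b, n_digits=4):
    # All n-digit strings whose check digit is identical under both
    # weight vectors, i.e. numbers the check digit cannot classify
    return [d for d in product(range(10), repeat=n_digits)
            if check_digit(d, weights_a) == check_digit(d, weights_b)]

same = collisions((1, 3, 7, 1), (7, 9, 1, 7))
# 1098 from the question is among the collisions, and the all-zero
# number always collides (both sums are 0), so no choice of weights
# alone can fully separate the two pools.
```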