Adding Bias to Preferential Attachment Network in NetLogo

I'm very new to NetLogo, and I am attempting to incorporate bias into the Preferential Attachment model by making the probability of attachment depend on a node's color as well as its degree: nodes have a % bias (set by a slider) toward linking with a node of their own color.
So far, I've made the nodes heterogeneous, either blue or red, with blue being ~70% of the population:
to make-node [old-node]
  create-turtles 1
  [
    set color red
    if random 100 < 70
      [ set color blue ]
    if old-node != nobody
      [ create-link-with old-node [ set color green ]
        ;; position the new node near its partner
        move-to old-node
        fd 8 ]
  ]
end
I understand the main preferential attachment code, which has the new node select a partner with probability proportional to its link count:
to-report find-partner
report [one-of both-ends] of one-of links
end
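The reason this works can be sketched in Python (not NetLogo; the edge list below is hypothetical): each node appears among the link endpoints exactly as many times as its degree, so choosing a uniform link and then a uniform endpoint selects a node with probability proportional to its degree.

```python
from collections import Counter

# Hypothetical edge list: node 0 has degree 3, nodes 1 and 2 degree 2, node 3 degree 1.
links = [(0, 1), (0, 2), (0, 3), (1, 2)]

# Count how often each node appears as a link endpoint.
endpoint_counts = Counter(node for link in links for node in link)

# Every node's endpoint count equals its degree, so uniform sampling
# over (link, endpoint) pairs is degree-proportional sampling.
assert dict(endpoint_counts) == {0: 3, 1: 2, 2: 2, 3: 1}
```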
Where I encounter issues is in the following procedure. Sometimes it works as expected, but sometimes I get the error message "OF expected input to be an agent or agentset but got NOBODY instead", pointing at the statement [one-of both-ends] of one-of links inside my while loop.
to make-node [old-node]
  create-turtles 1
  [
    set color red
    set characteristic 0
    if random 100 < 70
      [ set color blue
        set characteristic 1 ]
    if (old-node != nobody) and (random 100 < bias-percent) and (any? turtles with [color = red]) and (any? turtles with [color = blue])
      [
        while [characteristic != [characteristic] of old-node]
          [ set old-node [one-of both-ends] of one-of links ]
      ]
    if old-node != nobody
      [ create-link-with old-node [ set color green ]
        ;; position the new node near its partner
        move-to old-node
        fd 8 ]
  ]
end
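The re-draw loop in the question can be sketched in Python (not NetLogo; function and variable names here are hypothetical). It also shows the failure mode: one-of links reports nobody when no links exist yet, and the loop never terminates if no same-color node has any links, so a guard and a retry cap are included in this sketch.

```python
import random

def find_biased_partner(links, colors, my_color, max_tries=1000):
    """Re-sample a random link endpoint until its color matches my_color."""
    if not links:
        return None  # mirrors NetLogo's one-of links reporting nobody
    for _ in range(max_tries):
        partner = random.choice(random.choice(links))  # random link, random end
        if colors[partner] == my_color:
            return partner
    return None  # give up instead of looping forever

# Hypothetical three-node network: node 0 is red, nodes 1 and 2 are blue.
colors = {0: "red", 1: "blue", 2: "blue"}
links = [(0, 1), (1, 2)]

partner = find_biased_partner(links, colors, "blue")
assert partner in (1, 2)                                   # a blue partner was found
assert find_biased_partner([], colors, "red") is None      # no links: no partner
```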
Any help is appreciated!!!

Related

Netlogo ecosystem model die command

I am trying to simulate what happens to vultures if they randomly stumble upon a carcass that has been poisoned by poachers. The poisoned carcass needs to be random. I also need to plot the deaths, so do I need to set up a dead/poisoned state in order to plot them, or do I need to code a to die procedure? I'm not sure. TIA
to go
  ; repeat 10800 [
  ask vultures
  [
    if state = "searching" [ search-carcasses ]
    if state = "following" [ follow-leaders-find-carcasses ]
    if state = "searching"
      [ if random-float 1 < ( 1 / 360 )
        [ ifelse random 2 = 0
          [ rt 45 ]
          [ lt 45 ] ] ]
    if state != "feeding"
      [ fd 0.009 ]
    if state = "leader" [ set time-descending time-descending + 1 ]
    if mycarcass != 0
      [ if distance mycarcass <= 0.009
        [ set state "feeding"
          ask mycarcass
            [ set occupied? "yes" ] ] ]
    if state = "feeding"
      [ ask mycarcass
        [ if poisoned? = "yes"
          [ set state "poisoned" ] ] ]
    if state = "poisoned" [ die ]
  ]
  tick
  ; ]
end

Matrix Transformation for image

I am working on an image processing project in Python in which I am required to change the coordinate system.
I thought it was analogous to a matrix transformation and tried that, but it is not working. I have taken the coordinates of the red dots.
Simply subtract 256 and divide by 512. The connection is that values of 256 get mapped to 0: therefore, 0 gets mapped to -256, 256 gets mapped to 0, and 512 gets mapped to 256. However, you further need the values to be in the range [-0.5, 0.5]; dividing everything by 512 finishes this off.
Therefore the relationship is:
out = (in - 256) / 512 = (in / 512) - 0.5
Try some values from your example input above to convince yourself that this is the correct relationship.
If you want to form this as a matrix multiplication, this can be interpreted as an affine transform with scale and translation, but no rotation:
[ 1/512 0 -0.5 ]
K = [ 0 1/512 -0.5 ]
[ 0 0 1 ]
Take note that you will need to use homogeneous coordinates to achieve the desired result.
For example:
(x, y) = (384, 256)
[X] [ 1/512 0 -0.5 ][384]
[Y] = [ 0 1/512 -0.5 ][256]
[1] [ 0 0 1 ][ 1 ]
[X] [384/512 - 0.5] [ 0.25 ]
[Y] = [256/512 - 0.5] = [ 0 ]
[1] [ 1 ] [ 1 ]
Simply remove the last coordinate to get the final answer of (0.25, 0).
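The worked example above can be reproduced with NumPy (a sketch of the same affine transform, using the matrix K from the answer):

```python
import numpy as np

# Affine transform from the answer: scale by 1/512, translate by -0.5.
K = np.array([[1/512, 0,     -0.5],
              [0,     1/512, -0.5],
              [0,     0,      1  ]])

p = np.array([384, 256, 1])  # input pixel (x, y) in homogeneous coordinates
X, Y, _ = K @ p              # drop the last coordinate for the final answer

print((X, Y))  # (0.25, 0.0)
```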

Find the Borders from a position array Python 3

I'm finding the borders in an image with Python. First I create neighborhoods from a grayscale image, then I have to find the points that connect the neighborhoods I created (edges).
Here is an example of my neighborhoods array. Each [x, y] value is the position of a pixel that forms part of the neighborhood. This is the diagram of the neighborhood (vecinos) array:
[ [ [0,0], [0,1],[0,2] ], [ [1,0], [1,1],[1,2] ] ]
A google drive link to the full neighborhood array in a txt form
And this is the function that I use to detect borders:
def getPoints(vecinos, img):
    print('puntos')
    length = len(vecinos)
    bol = np.zeros(img.shape, img.dtype)
    for i in range(length):
        for j in range(length):
            if not i == j:
                for val2 in vecinos[i]:
                    for val1 in vecinos[j]:
                        res1 = abs(val1[0] - val2[0])
                        res2 = abs(val1[1] - val2[1])
                        if (res1 == 0 or res1 == 1) and (res2 == 1 or res2 == 0):
                            print('borde')
                            bol[val1[0], val1[1]] = 1
                            bol[val2[0], val2[1]] = 1
    return bol
This function returns an array with the same height and width as the image; entries equal to 1 are identified as borders and the rest are 0. Here is an example of the resulting NumPy array: this is a txt of a result I loaded to my Google Drive.
I want to make this faster. It works fine, but it takes long when the image is bigger than 100 px.
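For reference, the function can be exercised on the tiny two-neighborhood example from the question. This is a self-contained sketch (assuming a 2x3 image, with the prints omitted): every pixel in each row is 8-connected to a pixel in the other row, so the whole 2x3 grid is marked as border.

```python
import numpy as np

def get_points(vecinos, img):
    # Mark pixels from different neighborhoods that are 8-connected
    # (row and column differences both at most 1).
    bol = np.zeros(img.shape, img.dtype)
    n = len(vecinos)
    for i in range(n):
        for j in range(n):
            if i != j:
                for val2 in vecinos[i]:
                    for val1 in vecinos[j]:
                        if abs(val1[0] - val2[0]) <= 1 and abs(val1[1] - val2[1]) <= 1:
                            bol[val1[0], val1[1]] = 1
                            bol[val2[0], val2[1]] = 1
    return bol

# The two 1x3 neighborhoods from the question, on a 2x3 image.
vecinos = [[[0, 0], [0, 1], [0, 2]], [[1, 0], [1, 1], [1, 2]]]
img = np.zeros((2, 3), dtype=np.uint8)

borders = get_points(vecinos, img)
print(borders)  # all six pixels touch the other neighborhood here
```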

How does TensorFlow expect flattened images?

When performing 2D convolutions in TensorFlow using the conv_2d layer, does it expect the pixels to be lined up as
[
[img[i].red, img[i].green, img[i].blue],
[img[i+1].red, etc.],
]
Or
[
[img[i].red, img[i+1].red, etc.],
[img[i].green, img[i+1].green, etc.],
]
or some other way?
2D convolutions expect a 4-d tensor as input with the following shape:
[batch_size, image_height, image_width, channel_size]
In the case of RGB images, the channels are the three colors. Therefore the pixels should be lined up as:
[
[
[img[i,j].red, img[i,j].green, img[i,j].blue],
[img[i, j+1].red, img[i, j+1].green, img[i, j+1].blue],
etc
],
[
[img[i+1,j].red, img[i+1,j].green, img[i+1,j].blue],
[img[i+1, j+1].red, img[i+1, j+1].green, img[i+1, j+1].blue],
etc
],
etc
]
(with img[y_coordinate, x_coordinate] and img[i,j] = img[i*image_width + j])
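The layout above can be sketched with NumPy alone (no TensorFlow needed): row-major order over a [batch, height, width, channels] array interleaves the channels per pixel, exactly as described.

```python
import numpy as np

# NHWC layout: [batch_size, image_height, image_width, channel_size].
batch_size, height, width, channels = 1, 2, 2, 3
images = np.arange(batch_size * height * width * channels).reshape(
    batch_size, height, width, channels)

# images[0, i, j] is one pixel: a length-3 [red, green, blue] vector.
pixel = images[0, 0, 1]
assert pixel.shape == (channels,)

# Flattening row-major keeps whole pixels together, so
# flat[i * image_width + j] is the pixel at img[i, j].
flat = images.reshape(-1, channels)
assert (flat[0 * width + 1] == pixel).all()
```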

NetLogo: histogram relative frequency

I'm still having problems with histogram.
I have a global variable (age-sick) that stores the age of the turtles when they got sick, and I want to plot the distribution: histogram age-sick
However, I do not want the absolute number of turtles who got sick at every age, but rather the relative one.
Is there a way to do so?
I have tried to overcome the problem in the following way:
let age-freq (list)
let i 0
while [ i <= (max age-sick) ] [
  let a filter [? = i] age-sick
  repeat (length a / length age-sick * 1000) [ set age-freq lput i age-freq ]
  set i i + 1 ]
histogram age-freq
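The normalization the question is after can be sketched in Python (not NetLogo; the age list is hypothetical): divide each age's count by the total number of sick turtles so the bar heights sum to 1.

```python
from collections import Counter

# Hypothetical ages at which turtles got sick.
age_sick = [2, 2, 3, 5, 5, 5, 7]

counts = Counter(age_sick)
total = len(age_sick)

# Relative frequency: count per age divided by the total.
rel_freq = {age: n / total for age, n in counts.items()}

assert rel_freq[5] == 3 / 7
assert abs(sum(rel_freq.values()) - 1.0) < 1e-9  # frequencies sum to 1
```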
