I'm trying to generate a small-world network (https://en.wikipedia.org/wiki/Small-world_network) in my NetLogo model that is built up while the model runs: people get to know one another over time.
I know how to generate a small-world network in NetLogo during setup. But how do you generate a small-world network on the go?
My code for generating a small world during the setup is as follows.
breed [interlinks interlink] ; links between different breeds
breed [intralinks intralink] ; links between same breeds

to set_sw_network
  let max-who 1 + max [who] of turtles
  let sorted sort [who] of turtles
  ; ring step: link each turtle to the next same_degree + dif_degree turtles by who number
  foreach sorted [ x ->
    ask turtle x [
      let i 1
      repeat same_degree + dif_degree [
        ifelse breed = [breed] of turtle ((x + i) mod max-who)
          [ create-intralink-with turtle ((x + i) mod max-who) ]
          [ create-interlink-with turtle ((x + i) mod max-who) ]
        set i i + 1
      ]
    ]
  ]
  ; rewiring step: replace a proportion of the links with random ones
  repeat round (rewire_prop * number_of_members) [ ; rewire_prop is a slider, 0 to 1 in steps of 0.1
    ask one-of turtles [
      ask one-of my-links [ die ]
      create-intralink-with one-of other turtles with [link-with myself = nobody]
    ]
  ]
end
But I am not interested in creating a small world at the beginning; I'm interested in growing a network with small-world properties while the model runs. Currently I have this on-the-go link-creation procedure in my model, but I'm not sure how to tweak it so that it results in a small-world type of network:
to select_interaction
  ; omitted code: sorts pre-existing links and interacts with them
  if count my-links < my_degree [
    repeat number_of_interactions_per_meeting [
      let a select_turtle ; reports a turtle not yet linked with this one, or nobody
      if a != nobody [
        ifelse [breed] of a = breed
        [ create-intralink-with a [
            set color cyan
            interact
          ]
        ]
        [ create-interlink-with a [
            set color orange + 2
            interact
          ]
        ]
      ]
    ]
  ]
end
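One way to tweak this toward a small world is to mimic Watts-Strogatz on the fly: make most new links "local" (friends of friends, which drives clustering up) and a small fraction "long-range" (uniformly random, which keeps the average path length short). Below is a hedged sketch of a select_turtle reporter along those lines; since the body of the real select_turtle is not shown, its shape here is an assumption, and it reuses the rewire_prop slider as the shortcut probability.

to-report select_turtle
  ; sketch: bias partner choice toward friends of friends
  let me self
  let candidates other turtles with [ link-with me = nobody ]
  if not any? candidates [ report nobody ]
  if random-float 1 > rewire_prop [
    ; most of the time, prefer a friend of a friend (raises clustering)
    let fof candidates with [ any? link-neighbors with [ link-neighbor? me ] ]
    if any? fof [ report one-of fof ]
  ]
  ; otherwise, or as a fallback, link to anyone: these are the shortcuts
  report one-of candidates
end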
At the moment, my strategy is to give every turtle a my_degree variable drawn from the degree distribution of the target social network. But the question remains: if this is a good strategy at all, what is the correct degree distribution for a small-world network?
pseudo-code for this strategy:
to setup-turtles
  ; if preferential attachment: set my_degree random-poisson 'mean'
  ; if small world: set my_degree ????? 'mean'
end
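On that last line: a Watts-Strogatz small world has a narrow degree distribution concentrated around the mean degree, not a heavy tail. Each node keeps at least half of its original ring links, and in the fully rewired (p = 1) limit its degree is that kept half plus a Poisson-distributed number of rewired links landing on it. A sketch under that assumption, with a hypothetical mean_degree slider standing in for 'mean':

to setup-turtles
  ; small world: degrees cluster tightly around mean_degree;
  ; the minimum is mean_degree / 2 (the kept half of the ring links),
  ; and a shifted Poisson matches the fully rewired Watts-Strogatz limit
  set my_degree (mean_degree / 2) + random-poisson (mean_degree / 2)
end

Bear in mind, though, that matching the degree distribution alone will not produce small-world properties; the high clustering and short path lengths come from most links being local and a few being long-range shortcuts, as in the select_turtle sketch above.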
Any insight would be wonderful.
Hello generous people,
I am writing a model of farmers' decision making based on the previous period's crop production. Initially, parcel size (small or large) determines whether a farmer uses groundwater or surface water. In later ticks the farmer decides which type of water to use based on crop production: a high level of production builds the farmer's memory above some threshold X, and if memory exceeds X the farmer keeps following the strategy that produced the higher crop. What I cannot work out is how a farmer's memory should be built up and then used as an input in the same loop/code block I wrote for the initial yield. Any help would be appreciated.
globals [ surface-water groundwater maximum-yield water-demand ]
turtles-own [ yield memory land groundwater-use surfacewater-use ]

to setup
  clear-all
  set surface-water 10
  set groundwater 20
  set maximum-yield 60
  set water-demand 17
  create-turtles 5 [
    set yield 0
    set memory 0
    set land random 5 + 3
  ]
  reset-ticks
end

to go
  ask turtles with [ land >= 4 ] [
    ifelse random 2 = 0 [ ; note: random 1 always reports 0, so use random 2 for a coin flip
      set groundwater-use groundwater - water-demand
      set yield 0.8 * maximum-yield
      set memory yield / maximum-yield ; placeholder for "% of yield"
    ] [
      set groundwater-use 0.5 * water-demand
      set surfacewater-use 0.5 * water-demand
      set yield 0.85 * maximum-yield
      set memory yield / maximum-yield
    ]
  ]
  ask turtles with [ land < 4 ] [
    set groundwater-use 0.5 * water-demand
    set surfacewater-use 0.5 * water-demand
    set yield 0.85 * maximum-yield
    set memory yield / maximum-yield
  ]
  tick
end
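As for building memory across ticks, a common pattern is to keep a rolling record and update it once per tick. Here is a sketch under assumptions, since the exact memory rule is not specified: memory is taken as the mean of the last five normalized yields, and memory-list must be initialized to [] in setup.

turtles-own [ memory-list ] ; rolling record of recent normalized yields

to update-memory ; call at the end of go, after yield has been computed
  set memory-list lput (yield / maximum-yield) memory-list
  if length memory-list > 5 [ set memory-list but-first memory-list ] ; keep 5 periods
  set memory mean memory-list
end

Each farmer can then test if memory > X at the top of go and repeat its last strategy instead of drawing a new random one.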
I would like to use a clustering algorithm to cluster a big digraph, and I would also like to remove noise from this graph. I was thinking of using the DBSCAN approach, because the algorithm accepts a custom distance function for determining the distance/similarity between two nodes.
My question is: how can I define a distance function that increases the similarity between two nodes that are close in terms of hops, and decreases it when a node is isolated?
I don't have coordinates or node attributes, so I cannot use those. I only have the topology of the graph.
The expected output would be something like this: (figure omitted; the linked a* and b* chains form clusters, while the isolated nodes c0 and d0 are noise)
I'm really concerned about the complexity of the solution. How can I approximate a clustering with linear complexity?
What is wrong with the obvious?
Distance(a,b) = length of shortest path, or infinity if there is none.
You probably should take directions into account, so a0 to a3 is 1.
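In igraph terms this distance is a single call (a sketch, assuming the graph g built in the answer below; mode = "out" respects edge direction):

library(igraph)
D = distances(g, mode = "out") # shortest-path lengths; Inf where no directed path exists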
The distance metric suggested by @Anony-Mousse is a good and natural one, but I question the use of DBSCAN. Using the proposed

distance = length of shortest path, or infinity if there is none

any two nodes that are directly linked would be at distance 1. If you used DBSCAN with epsilon < 1, all points would be noise points, so you will want epsilon >= 1. From your example, it looks like if there is even one point at distance 1 you want them in the same component, so you want minPts = 2. This will give the result that if two points are connected by a path of any length, they are in the same cluster. It looks to me like what you are after has nothing to do with density and clustering; rather, I think what you want is connected components. If two nodes are connected by a path of any length, they are in the same component. Finding this via DBSCAN or some other clustering method may be possible, but it is probably the wrong way to think about the problem. You have a graph and a graph-theoretic problem, so you should probably use methods from graph theory.

I will illustrate using R and igraph. There are other tools if you don't care for these.

Most of the work is simply setting up your problem.
library(igraph)

## directed edges: a0 -> a1 -> a2 -> a3 -> a0 and b0 -> b1 -> b2 -> b3 -> b0
from = c("a0", "a1", "a2", "a3", "b0", "b1", "b2", "b3")
to   = c("a1", "a2", "a3", "a0", "b1", "b2", "b3", "b0")
EL = data.frame(from, to)

## all vertices, including the isolated nodes c0 and d0
Vert = c("a0", "a1", "a2", "a3", "b0", "b1", "b2", "b3", "c0", "d0")
Vdf = data.frame(Vert)

g = graph_from_data_frame(d = EL, vertices = Vdf)

## fixed layout so the plot is reproducible
LO = matrix(c(1.2,1,1,1.2, 2.2,2,2,2.2, 0,3, 4,3,2,1,4,3,2,1,4,4), ncol = 2)
plot(g, layout = LO)
Now we can use a one-liner to get everything that we need about the components.
Comp = components(g, mode="weak")
Comp
$membership
a0 a1 a2 a3 b0 b1 b2 b3 c0 d0
1 1 1 1 2 2 2 2 3 4
$csize
[1] 4 4 1 1
$no
[1] 4
This tells us the component membership of each node, the number of nodes per component, and the number of components. Since you wanted to call the single-node components "noise" in the style of DBSCAN, you can see that components 3 and 4 have one node each: they are the noise. The others are "real" components.

To show how to use this, and to come to closure with a pretty picture, I will plot the graph coloring the components, using light gray for the "noise".
ColorMap = rainbow(Comp$no)
ColorMap[Comp$csize == 1] = "lightgray"
plot(g, layout=LO, vertex.color=ColorMap[Comp$membership])
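If you want to actually drop the noise rather than just gray it out, here is a short follow-up sketch; note that finding connected components is linear in the number of vertices plus edges, which also answers the concern about linear complexity.

## keep only vertices whose component has more than one node
keep = Comp$membership %in% which(Comp$csize > 1)
g2 = induced_subgraph(g, V(g)[keep])
plot(g2, layout = LO[keep, ])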
I encourage you to think about your graph problem as a graph.
I am creating a NetLogo model of a zoo. I need my zoo guests (multiple turtles) to follow a circular pathway that starts at the entrance of the zoo every 24 ticks (1 tick is 1 hour in my model). The path has to go around the cages that hold the animals, because guests cannot enter the animal areas. The path doesn't have to be fast or the shortest; I just need the turtles not to stray from it. I would prefer not to use GIS to create the pathway.
My world runs from -30 to 30 in both directions and does not wrap around.
The locations of the cages are described below:
patches-own [ tigerhabitat?
flamingohabitat?
monkeyhabitat?
hippohabitat?
giraffehabitat?
]
to create-habitats
ask patches with [ pxcor < -12 and pycor > 23 ]
[ set tigerhabitat? true
set pcolor green ]
ask patches with [ pxcor > 20 and pycor > 20 ]
[ set hippohabitat? true
set pcolor blue ]
ask patches with [ pxcor > 18 and pycor < 15 and -1 < pycor ]
[ set flamingohabitat? true
set pcolor 96 ]
ask patches with [ pxcor > -10 and pxcor < 10 and pycor < 10 and -10 < pycor ]
[ set monkeyhabitat? true
set pcolor green ]
ask patches with [ pxcor < -12 and pycor < -20 ]
[ set giraffehabitat? true
set pcolor 67 ]
end
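One caveat about the flags above: NetLogo initializes patches-own variables to 0, not false, so on patches outside a habitat a test like if tigerhabitat? [...] will throw an error. A small guard procedure, run at the start of create-habitats, makes the flags safe to test everywhere:

to init-habitat-flags
  ask patches [
    set tigerhabitat? false
    set flamingohabitat? false
    set monkeyhabitat? false
    set hippohabitat? false
    set giraffehabitat? false
  ]
end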
Paula, from your comment I think I understand a little better, thanks. One simple way to control where turtles can move is to use logical operators to exclude patches from the set they "consider" as they walk along. For a basic (not yet path-based) version of what you want, you could tell turtles that they may only move onto patches that are not cages. You could set up a patches-own variable that explicitly says whether a patch is caged or not, but in your example above all non-cage patches are black, so you can use that instead: tell turtles to step onto a patch only if it is black. For example, you could add the procedures below to your code as above:
to setup
ca
reset-ticks
crt 10 [
setxy -25 0
]
create-habitats
end
to go
exclude-cage-walk
tick
end
to exclude-cage-walk
  ask turtles [
    rt random 30 - 15 ; wander: turn up to 15 degrees either way
    ; consider only nearby patches ahead that are not cages (i.e. still black)
    let target one-of patches in-cone 1.5 180 with [ pcolor = black ]
    if target != nobody [
      face target
      move-to target
    ]
  ]
end
You can see that before moving forward, each turtle assesses whether or not the patch it has chosen to move-to is black, and if it is not black, the turtle will not move there. Of course, you would have to modify this to suit your needs and have the turtles walk in a one-directional circuit, but it is a simple way to constrain turtle movement.
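For the one-directional circuit itself, one simple option is to give each turtle an ordered list of waypoints to cycle through. This is a sketch, assuming you are happy to hard-code corner coordinates that skirt the cages (the coordinates below are placeholders):

turtles-own [ waypoints ] ; ordered list of patches marking the circuit

to setup-path
  ask turtles [
    ; hypothetical corners of a loop around the cages; adjust to your layout
    set waypoints map [ p -> patch (first p) (last p) ]
                      [[-25 15] [15 15] [15 -15] [-25 -15]]
  ]
end

to follow-path
  ask turtles [
    let target first waypoints
    face target
    fd 1
    if distance target < 1 [
      ; rotate the reached waypoint to the back so the loop repeats
      set waypoints lput target but-first waypoints
    ]
  ]
end

Combining this with the black-patch check above keeps guests both on the circuit and out of the cages.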
Hi, I am new to Python, scikit-learn, and ML in general. I'm encountering a MemoryError when using MultinomialNB's partial_fit. I'm trying to do multi-label classification on the DMOZ directory data.
My questions:
What am I doing wrong? Is it my lack of memory, or is the data wrong?
Am I using the right approach?
Anything I can do to improve my approach?
Approach:
Store DMOZ DB directories into MongoDB/TokuMX
{
"_id": {
"$oid": "54e758c91d41c804d8ace196"
},
"docs": [
{
"url": "http://www.awn.com/",
"description": "Provides information resources to the international animation community. Features include searchable database archives, monthly magazine, web animation guide, the Animation Village, discussion forums and other useful resources.",
"title": "Animation World Network"
}
],
"labels": [
"Top",
"Arts",
"Animation"
]
}
Iterate over the docs array and pass its elements into my classifier function.
Vectorizer and Classifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import HashingVectorizer

classifier = MultinomialNB()
vectorizer = HashingVectorizer(
    stop_words='english',
    strip_accents='unicode',
    norm='l2'
)
My classifier function
import lxml.html
import nltk
import requests

def classify(doc, labels, classifier, vectorizer, *args):
    r = requests.get(doc['url'], verify=False)
    print "Retrieving URL = {0}\n".format(doc['url'])
    if r.status_code == 200:
        html = lxml.html.fromstring(r.text)
        doc['content'] = []
        tags = ['font', 'td', 'h1', 'h2', 'h3', 'p', 'title']
        for tag in tags:
            for x in html.xpath('//' + tag):
                try:
                    # keep only the nouns from each element's text
                    bag_of_words = nltk.word_tokenize(x.text_content())
                    pos_tagged = nltk.pos_tag(bag_of_words)
                    for word, pos in pos_tagged:
                        if pos[:2] == 'NN':
                            doc['content'].append(word)
                except AttributeError as e:
                    print e
        x_train = vectorizer.transform(doc['content'])  # HashingVectorizer is stateless, so transform suffices
        # if we are the first one to run partial_fit, pass all classes
        if len(args) == 1:
            classifier.partial_fit(x_train, labels, classes=args[0])
        else:
            classifier.partial_fit(x_train, labels)
    return doc
X: doc['content'] is an array of NOUNS (about 600).
Y: labels is an array of the labels from the Mongo document shown above (3).
classes (args[0]) is an array of all the unique labels in the database (17490).
Running inside VirtualBox on a quad-core laptop with 4 GB of RAM assigned to the VM.
What are the 17490 unique labels? There will be one coefficient for each label and each feature, which is likely where your memory error comes from.
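To make the scale concrete, here is a back-of-the-envelope sketch (assuming HashingVectorizer's default of 2**20 features; MultinomialNB keeps dense per-class, per-feature matrices):

n_classes = 17490
n_features = 2 ** 20   # HashingVectorizer default
bytes_per_float = 8    # float64

# one dense (n_classes, n_features) matrix of float64
total_gib = n_classes * n_features * bytes_per_float / 1024.0 ** 3
print total_gib        # roughly 137 GiB -- far beyond 4 GB of RAM

Passing a smaller n_features to HashingVectorizer (it is a constructor parameter) or pruning the label set are the usual ways to shrink this.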
I am using the OpenCV Kalman filter library for its Kalman estimator capabilities.
My program does not enforce real-time recursion. My question is: when the transition matrix has elements that depend on the time step, do I have to update the transition matrix every time I use it (in predict or correct) to reflect the time passed since the last recursion?
Edit: The reason I ask is that the filter works well when I leave the transition matrix alone, but not when I update it with the time steps.
Many descriptions of the Kalman filter write the transition matrix F as if it were a constant. As you have discovered, in some cases, such as a variable timestep, you have to update it (along with Q) on each update.
Consider a simple system of position and velocity, with
F = [ 1 1 ] [ x ]
[ 0 1 ] [ v ]
So at each step x = x + v (position updates according to velocity) and v = v (no change in velocity).
This is fine, as long as your velocity is in units of length / timestep. If your timestep varies, or if you express your velocity in a more typical unit like length / s, you will need to write F like this:
F = [ 1 dt ] [ x ]
[ 0 1 ] [ v ]
This means you must compute a new value for F whenever your timestep changes (or every time, if there is no set schedule).
Keep in mind that you are also adding in the process noise Q on each update, so it likely needs to be scaled by time as well.
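For example, with OpenCV's Python bindings, a minimal sketch (assuming a 1-D position/velocity state, a measured position, and a continuous white-noise-acceleration form for Q with an arbitrary intensity q):

import cv2
import numpy as np

kf = cv2.KalmanFilter(2, 1)  # state [x, v], measurement [x]
kf.measurementMatrix = np.array([[1., 0.]], np.float32)

def predict_with_dt(kf, dt, q=1e-2):
    # rebuild F, and rescale Q, for the time actually elapsed, then predict
    kf.transitionMatrix = np.array([[1., dt],
                                    [0., 1.]], np.float32)
    kf.processNoiseCov = q * np.array([[dt ** 3 / 3, dt ** 2 / 2],
                                       [dt ** 2 / 2, dt]], np.float32)
    return kf.predict()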