How do you find loops in BGP routing - network-programming

I'm trying to program a simplified version of an eBGP speaker. For the import policy, I want to make sure that any new route I import doesn't introduce a loop. The routing table that I build looks something like this:
10.0.0.0/8 3 8 11
10.8.0.0/8 4
192.168.0.0/16 3 5 6
192.168.43.0/24 (local)
My question is, how do you check for a routing loop?
Now, I've tried multiple things, and I think the right way to see whether there would be any loops is to construct a DAG using all the routes in the routing table, then check whether the new route creates a cycle in the graph. But I can't figure out whether I should only look at certain prefixes when creating the DAG.

In eBGP, loop detection is based on the AS_PATH: you must NOT accept routes whose AS_PATH contains your own ASN.
This is defined in RFC4271:
If the AS_PATH attribute of a BGP route contains an AS loop, the BGP route should be excluded from the Phase 2 decision function. AS loop detection is done by scanning the full AS path (as specified in the AS_PATH attribute), and checking that the autonomous system number of the local system does not appear in the AS path. Operations of a BGP speaker that is configured to accept routes with its own autonomous system number in the AS path are outside the scope of this document.
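A minimal sketch of that import check in Python (the local ASN and the table layout are just illustrative, matching the AS-path lists in the question):

MY_ASN = 7   # illustrative local ASN

def has_as_loop(as_path, local_asn=MY_ASN):
    # RFC 4271: scan the full AS_PATH for our own ASN.
    return local_asn in as_path

def import_route(prefix, as_path, table):
    if has_as_loop(as_path):
        return False               # reject: accepting it would create an AS loop
    table[prefix] = as_path
    return True

table = {}
import_route('10.0.0.0/8', [3, 8, 11], table)   # accepted
import_route('10.9.0.0/16', [3, 7, 5], table)   # rejected: 7 is our own ASN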
Hint: If you want to implement a BGP speaker, I suggest you meticulously read all the related RFCs, starting with RFC4271.
As for iBGP, there is no need for loop detection, because an iBGP speaker will NOT forward iBGP-learned routes to another iBGP speaker. You need to have all your iBGP speakers connected in a full mesh, or use a route reflector or confederations.
Article regarding those principles (for iBGP): http://www.rogerperkin.co.uk/routing-protocols/bgp/bgp-confederation-vs-route-reflector/

Related

Distributed Hash Tables. Do I need shortcuts (even in a large system)?

I am learning about DHTs and I came upon shortcuts. It seems they are used to make routing faster and skip hops along the chain for better performance. What I don't understand is this: suppose we have a circular DHT made out of 100 servers/nodes/HTs. Some key data arrives at server/node/HT 10 and must be sent to server/node/HT 76. When the destination is reached and the value is retrieved, couldn't I just provide the IP of the requester (server 10) so that the value is sent directly back to 10? That seems to make shortcuts useless.
Thank you in advance.
Edit: Useless for returning the value. Not getting to it.
You're assuming a circular network layout and forwarding-based routing, both of which only apply to a subset of DHTs.
Anyway, the forward path would still go through all the nodes, any of which might be down or have transient network problems. As the number of hops goes up, so does the cumulative error probability. It also increases latency, which matters on a global scale, because at least the simple DHT routing algorithms don't account for physical proximity.
For the return path it can also matter if reachability is asymmetric, e.g. due to firewalls.
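To put a number on the cumulative-error point, a quick back-of-the-envelope in Python (the per-hop success rate is an assumed, illustrative figure):

# If each hop succeeds with probability p, a k-hop forward path
# succeeds with probability p**k.
p = 0.99                          # assumed per-hop success rate
for k in (1, 5, 10, 50):
    print(k, round(p ** k, 3))    # 50 hops: ~0.605, i.e. ~40% of lookups fail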

Juniper: How to see OSPF routes that didn't make the routing table

I have a specific scenario:
There are 2 paths leading to a given OSPF prefix:
-Both of the paths are learnt via OSPF
-Prefix CIDRs are the same size
-Both paths' OSPF "preference" values are unchanged (so both are the default: 10)
My problem is that only a single path is put into my routing table (I would assume because one of the OSPF paths' costs was lower than the other's).
I have 2 questions:
-What command do I need to put into the Juniper CLI in order to see the "not-so-best" OSPF paths (the ones that didn't actually make it into the routing table)?
-How do I see the metric of each individual path toward a given prefix?
Thanks in advance.
OSPF does not advertise routes, it advertises LSAs. You need to look at the OSPF "database" to see the different LSAs, and you need to understand the LSA types and the Dijkstra algorithm.
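That said, on Junos the usual starting points are these commands (quoting from memory, so double-check them against your release's documentation):
show ospf database detail          # the raw LSAs that the SPF run works from
show ospf route detail             # OSPF's own view of each prefix, with metrics
show route <prefix> extensive      # all paths, active and inactive, with the "Inactive reason"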

How to use namespace in ROS?

I cannot understand how namespaces work in ROS: http://wiki.ros.org/Names
Can you give a couple of real examples of how this works?
And the same question about parameters: http://wiki.ros.org/Parameter%20Server
What do these names mean?
Are they the names of the package, the node, the parameter, or what?
Namespaces are the best option for dealing with name collisions, which are quite common in robotics, especially as the system gets bigger and more complex...
Imagine you have a robot with 2 distance sensors, one at the front and one at the back. You might think: then I have 2 topics with the same info,
distance=10 and distance=10
Now what? How can a 3rd node know which distance is which?
Using namespaces, you can avoid that issue by just doing
back/distance=10 and front/distance=10
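A minimal rospy sketch of exactly that setup (ROS 1; the package and node names are made up for the example):

#!/usr/bin/env python
# distance_node.py -- publishes on the *relative* topic name "distance".
import rospy
from std_msgs.msg import Float32

rospy.init_node('distance_sensor')
pub = rospy.Publisher('distance', Float32, queue_size=10)
rate = rospy.Rate(10)
while not rospy.is_shutdown():
    pub.publish(Float32(10.0))    # dummy reading
    rate.sleep()

Launch the same unchanged code twice, each under its own namespace, and the relative name resolves differently:
rosrun my_pkg distance_node.py __ns:=front    -> publishes on /front/distance
rosrun my_pkg distance_node.py __ns:=back     -> publishes on /back/distance
Parameters resolve the same way: rospy.get_param('distance_offset') in the front node looks up /front/distance_offset, so each sensor can have its own value.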

using AUGraph vs direct connections between Audio Units by means of a GCD queue

So I've gone through the Audio Unit Hosting Guide for iOS, and it hints all along that one should generally use an AUGraph instead of direct connections between AUs. Among the most notable reasons, it mentions a high-priority thread, being able to reconfigure the graph while it is running, and general thread safety.
My problem is that I'm trying to get as close to "making my own custom DSP effects" as possible, given that iOS does not really let you dynamically load custom code. So my approach is to create a generic output unit and write the DSP code in its render callback. The problem with this approach is chaining two of these units with custom callbacks in a graph. Since a graph must have a single output AU (for the head), trying to add any more output units won't fly. That is, I can't have 2 Generic I/O units and a Remote I/O unit in sequence in a graph.
If I really wanted to use AUGraphs, I can think of one solution along the lines of:
A graph interface that internally keeps an AUGraph with a single Output unit plus, for each connected node in the graph, a list of "custom callback" generic output nodes that are in theory connected to such node. Maybe it could be a class / interface over AUNode instead, but hopefully you get the idea.
If I add non-output units to this graph, it essentially connects them in the usual way to the backing AUGraph.
If, however, I add a generic output node, this really means adding the node's AU to the list, and whichever node I am connecting to in the graph actually gets its input scope / element 0 callback set to something like:
for each unit in the connected list:
    AudioUnitRender(unit, actionFlags, timeStamp, /* bus */ 0, frameCount, scratchBuffer);
    mix scratchBuffer into ioData;
So the node that is "connected" to any number of those "custom" nodes basically pulls the processed output from them and passes it on to the next non-custom node.
Sure there might be some loose ends in the idea above, but I think this would work with a bit more thought.
Now my question is: what if I instead make direct connections between AUs without an AUGraph whatsoever? Without an AUGraph in the picture, I can definitely connect many generic output units with callbacks to one final Remote I/O output, and this seems to work just fine. The thing is, kAudioUnitProperty_MakeConnection is a property, so I'm not sure that I can re-set properties once an AU is initialized. I believe it's OK if I uninitialize first. If so, couldn't I just get GCD's high-priority queue and dispatch any changes in async blocks that uninitialize, reconnect and initialize again?

How does "DHT search engine" work?

I'm interested in Btdigg.org, which is called a "DHT search engine". According to this article, it doesn't store any content and even has no database. Then how does it work? Doesn't it need to gather meta info and store it in a database like other normal search engines? After a user submits a query, does it scan the DHT network and return the results in "real time"? Is this possible?
I don't have specific insight into BTDigg, but I believe the claim that there is no database (or something that acts like a database) is false. The author of that article might have been referring to something more specific that you might encounter on a traditional torrent site, for instance, where actual .torrent files are stored.
This is how a BTDigg-like site works:
You run a bunch of DHT nodes, specifically with the purpose of "eavesdropping" on DHT traffic, to be introduced to info-hashes that people talk about.
Join those swarms and download the metadata (.torrent file) using the ut_metadata extension.
Index the information you find in there, mapped to the info-hash.
Provide a front-end for that index.
If you want to luxury it up a bit, you can also periodically scrape the info-hashes you know about to gather stats over time and figure out when swarms die out and should be removed from the index.
So the claim that it doesn't store .torrent files or any other content is true.
It is not realistic to search the DHT in real time, because the DHT is not organized around keyword searches; you need to build and maintain the index continuously, "in the background".
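A toy version of the indexing part in Python, assuming the crawler described above is feeding in (info-hash, filename list) pairs (the function and variable names are mine, not BTDigg's):

# Inverted index: keyword -> set of info-hashes whose filenames contain it.
index = {}

def add_torrent(infohash, filenames):
    for name in filenames:
        for keyword in name.lower().replace('.', ' ').split():
            index.setdefault(keyword, set()).add(infohash)

def search(query):
    # Intersect the posting lists of all query keywords.
    results = None
    for keyword in query.lower().split():
        posting = index.get(keyword, set())
        results = posting if results is None else results & posting
    return results or set()

add_torrent('aabb1122', ['Ubuntu 20.04 desktop amd64.iso'])
print(search('ubuntu desktop'))   # -> {'aabb1122'}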
EDIT:
Since this answer, an optimization (BEP 51) has been implemented in some DHT clients that lets you query which info-hashes they are hosting, significantly reducing the cost of indexing.
For a deep understanding of DHT and its applications, see Scott Wolchok's paper and presentation "Crawling BitTorrent DHTs for Fun and Profit". He presents the autonomous search engine idea as a sidenote to his study of DHT's security:
PDF of his paper:
https://www.usenix.org/legacy/event/woot10/tech/full_papers/Wolchok.pdf
His presentation at DEFCON 18 (parts 1 & 2)
http://www.youtube.com/watch?v=v4Q_F4XmNEc
http://www.youtube.com/watch?v=mO3DfLtKPGs
The method used in Section 3 seems to require a database to store all the torrent data. While its performance is better, it may not be a true DHT search engine.
Section 8, while less efficient, describes what seems to be a true DHT search engine, as long as the keywords are the stored values.
From Section 3, Bootstrapping BitTorrent Search:
"The system handles user queries by treating the
concatenation of each torrent's filenames and description as a
document in the typical information retrieval model and using an
inverted index to match keywords to torrents. This has the advantage
of being well supported by popular open-source relational DBMSs. We
rank the search results according to the popularity of the torrent,
which we can infer from the number of peers listed in the DHT"
From Section 8, Related Work:
"The usual approach to distributing search using a DHT is with an inverted index, by storing each (keyword, list of matching documents) pair as a key-value pair in the DHT. Joung et al. [17] describe this approach and point out its performance problems: the Zipf distribution of keywords among files results in very skewed load balance, document information is replicated once for each keyword in the document, and it is difficult to rank documents in a distributed environment."
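A toy model of that scheme, with a plain dict standing in for the DHT's distributed key-value store, makes the replication problem easy to see:

# Each keyword is a DHT key; the value is the list of matching documents,
# so a document is stored once per keyword it contains (hence the skew).
dht = {}   # stand-in for the distributed key-value store

def publish(infohash, keywords):
    for kw in keywords:                        # one DHT put per keyword
        dht.setdefault(kw, []).append(infohash)

publish('abc123', ['ubuntu', 'desktop', 'iso'])
publish('def456', ['ubuntu', 'server', 'iso'])
print(dht['ubuntu'])   # -> ['abc123', 'def456']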
It is divided into two steps.
Step 1: use the bep_0005 protocol to harvest info-hashes. You do not need to implement everything the protocol requires; find_node (request), get_peers (response) and announce_peer (response) are enough. Here's my open-source dhtspider.
Step 2: use the bep_0009 protocol to fetch the metainfo and index it. Here is my own BitTorrent search engine; it collects over 3 million (300w+) unique info-hashes and 500,000 (50w+) valid metainfo records per day.
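For a taste of step 1, this is roughly what a single get_peers query looks like on the wire under bep_0005: a bencoded KRPC message over UDP (the node ID and info-hash below are random dummies; the bootstrap host is a well-known public one):

import os, socket

def bencode(x):
    # Just enough bencoding for this one message.
    if isinstance(x, bytes):
        return str(len(x)).encode() + b':' + x
    if isinstance(x, int):
        return b'i' + str(x).encode() + b'e'
    if isinstance(x, dict):
        return b'd' + b''.join(bencode(k) + bencode(v)
                               for k, v in sorted(x.items())) + b'e'
    raise TypeError(x)

query = {b't': b'aa', b'y': b'q', b'q': b'get_peers',
         b'a': {b'id': os.urandom(20), b'info_hash': os.urandom(20)}}
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(bencode(query), ('router.bittorrent.com', 6881))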
