I'm working on my own PXE server (so I can easily install new OSes I want to test, without needing to find and format USB sticks). I started by examining the psychomario/PyPXE project, but quickly implemented my own TFTP server. I'm testing it against the Intel UNDI PXE-2.1 client I have on my laptop.
One of the things psychomario doesn't support is sending large files (>32M). The RFCs (1350, 2347) don't discuss how it should be done, but apparently I have two options. The first option, increasing the block size, didn't work, since the PXE client apparently ignores fragmented IP packets.
The second option is using a rolling block counter, i.e. restarting the count from the beginning when it reaches the end. The client ACKs the data, but when the data ends, the client keeps sending ACKs for block 0xffff (even if that's not the last block).
I tried closing the connection, and I tried sending an empty data packet for that block. The first resulted in an error on the PXE client; the second resulted in an infinite loop.
What packet do I need to send in response to the ACK of block 0xffff in order to end the session?
1) Your TFTP server should really implement the block size option; otherwise you will be limited to 512-byte blocks. Please see RFC 2348. Fragmentation can always be avoided by negotiating a blksize such that the whole packet never gets bigger than the minimum MTU (1500 in a typical Ethernet environment).
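To make that concrete, here is a rough Python sketch of the RFC 2348 negotiation; the packet layout follows the RFC, but the function names are illustrative and not taken from PyPXE:

    # Sketch of blksize negotiation: if the read request (RRQ) carries a
    # "blksize" option, confirm a capped value with an OACK packet
    # (opcode 6) before sending the first DATA block.
    import struct

    OP_OACK = 6

    def parse_rrq(payload):
        # RRQ payload after the opcode: filename \0 mode \0 (option \0 value \0)*
        fields = payload.split(b"\0")[:-1]
        filename, mode = fields[0], fields[1]
        options = dict(zip(fields[2::2], fields[3::2]))
        return filename, mode, options

    def build_oack(requested_blksize):
        # 1468 = 1500 (Ethernet MTU) - 20 (IP) - 8 (UDP) - 4 (TFTP header),
        # so DATA packets are never fragmented on a standard Ethernet link.
        blksize = min(requested_blksize, 1468)
        return struct.pack("!H", OP_OACK) + b"blksize\0" + str(blksize).encode() + b"\0"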
2) You have to implement a TFTP block number "roll over": after sending and getting an ACK for block # 0xFFFF, you should send the next block as block # 0x0000, and so on until you finish your transfer. When you test this feature, be sure to use a TFTP client able to deal with TFTP block roll over; virtually all the PXE clients available today handle this very well.
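A minimal sketch of the corresponding send loop, assuming sock is a UDP socket already connected to the client's transfer TID (the ACK handling is elided):

    # DATA send loop with 16-bit block-number roll over.
    import struct

    OP_DATA = 3

    def send_file(sock, fileobj, blksize=512):
        block = 1
        while True:
            chunk = fileobj.read(blksize)
            sock.send(struct.pack("!HH", OP_DATA, block) + chunk)
            # ... block here until the matching ACK arrives; retransmit on timeout ...
            if len(chunk) < blksize:      # a short (or empty) block ends the transfer
                return
            block = (block + 1) & 0xFFFF  # after 0xFFFF the next block is 0x0000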
Besides your learning experience coding your own PXE server, please consider that you will run into countless issues down the road. If you need quick results, just google "pxe server" for a list of ready-to-use PXE server options.
So, I have been reading up on NAT punch-through. I seem to be getting the idea, but I am having a hard time implementing it, and I feel that I am missing a step somewhere.
Testing this functionality is hard because I have little control over the environment when it comes to an internet-based connection.
I have an SQL server that acts as my "facilitator": it keeps the external address of both server and client, and their ports as seen from the outside.
Here are the steps so far:
- I connect to my SQL server through a web request (a PHP script) that stores the server/client IP/port
- When both are known, both client and server attempt to connect (the server hosts on a set port, the client connects over a set port)
- Nothing significant happens
There are 2 unknowns here, and I would like to check one of them with you.
Is it true that NAT punch-through requires that I do the first step with the exact (internal/LAN) port I plan to connect with in the step after that?
If so, I don't know exactly how my server works under the hood, so it might need more ports than my initial static port to connect over, but that at least gives me a hint.
If anyone has more documentation on this than I do, please let me know.
Sources:
Programming P2P application
http://www.mindcontrol.org/~hplus/nat-punch.html
NAT punch-through works on the principle of educated guesswork. It is usually used to create connections with devices that do IP masquerading. This is the technology used in most home internet modems, to the point that "NAT" has come to be used interchangeably with IP masquerading.
When you connect out from a device behind a NAT system such as a home modem, you have no control over the port that will be used for the outbound connection to the Internet. However, many of these devices allocate ports using specific patterns, for example incremental numbers.
NAT punch-through involves trying to directly connect two source systems that are both behind independent NAT devices. A third system, your "facilitator", acts as a detector for the origin port numbers currently being assigned by both NAT devices on outbound connections. The origin port number, along with the IP address, is then sent to the other party.
So now the clever bit, to answer your question. Both systems that want to directly connect start trying to communicate with each other. They try connecting to a range of ports around the known port number detected by the facilitator. This is the guesswork.
It is important that both source systems start trying to connect, as this establishes NAT sessions in the local devices that allow traffic in from the Internet. If either source device correctly guesses one of those NAT session port numbers, a connection is established.
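As a rough illustration (not a recipe for any particular NAT), here is a Python sketch of that guessing loop; the peer's IP and the port detected by the facilitator are assumed inputs:

    # Spray UDP packets at ports around the peer's detected origin port.
    # Each outbound packet also creates a NAT session on our side, which
    # is what lets the peer's own guesses come back in.
    import socket

    def punch(peer_ip, seen_port, spread=10, rounds=20):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", 0))         # our outbound port; the peer guesses around it too
        sock.settimeout(0.5)
        for _ in range(rounds):
            for port in range(seen_port, seen_port + spread):
                sock.sendto(b"punch", (peer_ip, port))
            try:
                data, addr = sock.recvfrom(64)
                return sock, addr  # any reply means both NATs now pass our traffic
            except socket.timeout:
                continue
        return None, None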
In reality, engineers at organisations that have a use for NAT punch-through have probably spent some time examining the more popular NAT port allocation algorithms and tuning their software. If you have control of connections through your NAT devices, it would be fairly easy to set up some tests and see how the port numbers change between connections to different servers.
I captured some HTTP POST requests and want to send them again. How do I do that? Googling didn't turn up any easy way; everything involved complex machinery resulting in a script that can send only that one specific request, with no flexibility.
You might look into tcpreplay.
It's great for replaying entire streams of traffic captured by Wireshark or tcpdump in libpcap format.
PlayCap is a very easy-to-use solution for replaying network captures. All you need to do is point it at a PCAP file and press play.
If the HTTP requests are being sent from a browser, you can take advantage of the developer tools available in most modern browsers: go to the 'Network' section, right-click a particular GET/POST request, and you can optionally modify and resend the selected request and/or copy it as a curl command (e.g. see Firefox, Chrome).
It's not straightforward to just resend HTTP interactions that have been captured by Wireshark, because HTTP is transported over TCP, which needs to set up a new connection for each interaction, so things like the TCP sequence numbers would have to change. One approach is to extract the HTTP content from the packet trace and resend it over a new TCP connection; Wireshark does allow HTTP traces to be extracted, and those could be resent. However, the latest version of the tcpreplay suite from AppNeta now provides a tool, tcpliveplay, that claims to replay TCP streams, so that seems like it could be the best option.
Otherwise, for more programmatic control of packet replay, one could use scapy as suggested in this answer, though one would need to extract the HTTP content and resend it on new connection(s).
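For completeness, here is a hedged Python sketch of the extract-and-resend approach mentioned above: once the request bytes have been pulled from the capture (e.g. via Wireshark's "Follow TCP Stream"), they can be replayed over a fresh TCP connection, which sidesteps the sequence-number problem entirely. The request and host below are placeholders:

    # Replay an extracted HTTP POST over a new TCP connection.
    import socket

    raw_request = (  # substitute the bytes pulled out of your own capture
        b"POST /submit HTTP/1.1\r\n"
        b"Host: example.com\r\n"
        b"Content-Type: application/x-www-form-urlencoded\r\n"
        b"Content-Length: 7\r\n"
        b"Connection: close\r\n"
        b"\r\n"
        b"a=1&b=2"
    )

    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(raw_request)        # new connection, so TCP sequencing is handled for us
        response = b""
        while chunk := sock.recv(4096):  # read until the server closes ("Connection: close")
            response += chunk

    print(response.decode(errors="replace"))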
I need some suggestions for an Erlang in-memory cache system.
The cache item is key-value based storage.
The key is usually an ASCII string; the value can be any of Erlang's types, including numbers / lists / tuples / etc.
A cache item can be set by any of the nodes.
A cache item can be read by any of the nodes.
Cache items are shared across all nodes, even on different servers.
Dirty reads are permitted; I don't want any lock or transaction to reduce the performance.
Totally distributed, with no centralized machine or service.
Good performance.
Easy installation, deployment, configuration, and maintenance.
My first choice seems to be mnesia, but I have no experience with it.
Does it meet my requirements?
What performance can I expect?
Another option is memcached, but I am afraid its performance will be lower than mnesia's, because extra serialization/deserialization is performed since the memcached daemon lives in another OS process.
Yes, Mnesia meets your requirements. However, as you said, a tool is only as good as the depth to which the one using it understands it. We have used mnesia in a distributed authentication system and have not experienced any problems so far. Used as a cache, it is better off than memcached, for one reason: "Memcached cannot guarantee that what you write, you can read at any time, due to memory swap out issues and stuff" (follow here). This does mean, however, that your distributed system has to be built on Erlang.
Indeed, mnesia in your case beats most NoSQL cache solutions, because their systems are eventually consistent. Mnesia is consistent as long as network availability can be ensured across the cluster. For a distributed cache system you don't want a situation where you read different values for the same key from different nodes, so mnesia's consistency comes in handy here.
Something you should think about is that it is possible to have a centralised memory cache for a distributed system. It works like this: you have a RabbitMQ server running and accessible by AMQP clients on each cluster node, and the systems interact over the AMQP interface. Because the cache is centralised, consistency is ensured by the process/system responsible for writing to and reading from the cache. The other systems just place a request for a key onto the AMQP message bus, and the system responsible for the cache receives this message and replies with the value.
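A sketch of that request/reply pattern, written here in Python with the pika AMQP client purely for illustration (the systems can be in any language with an AMQP client; the queue name "cache" and the plain-bytes encoding are assumptions):

    # Publish a key to the cache owner's queue with a reply-to queue and
    # correlation id, then wait for the owner to send back the value.
    import uuid
    import pika

    def cache_get(key, host="localhost"):
        conn = pika.BlockingConnection(pika.ConnectionParameters(host))
        channel = conn.channel()
        # Exclusive auto-named queue on which the cache owner replies to us.
        reply_queue = channel.queue_declare(queue="", exclusive=True).method.queue
        corr_id = str(uuid.uuid4())
        channel.basic_publish(
            exchange="",
            routing_key="cache",  # the cache owner's request queue
            properties=pika.BasicProperties(reply_to=reply_queue,
                                            correlation_id=corr_id),
            body=key.encode(),
        )
        value = None
        for _, props, body in channel.consume(reply_queue, auto_ack=True,
                                              inactivity_timeout=5):
            if props is None:     # timed out waiting for a reply
                break
            if props.correlation_id == corr_id:
                value = body
                break
        conn.close()
        return value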
We used this message bus architecture with RabbitMQ for a recent system that involved integration with banking systems, an ERP system, and a public online service. What we built was responsible for fusing all these together, and we are glad we used RabbitMQ. The details are many, but what we did was come up with a message format and a system identification mechanism. All systems must have a RabbitMQ client for writing to and reading from the message bus. You then create a read queue for each system, so that other systems write their requests into that queue, whose name inside RabbitMQ is the same as the system owning it. Later, you should encrypt the messages passing over the bus. In the end you have systems bound together over large distances/across states, but with an efficient network you won't believe how fast RabbitMQ binds them. Anyhow, RabbitMQ can also be clustered, and I should tell you that it is Mnesia which powers RabbitMQ (that tells you how good mnesia can be).
One other thing: you should do some reading and write many programs until you are comfortable with it.
Imagine a situation where there are some smartphones and computers around with their WiFi (wireless) adapters on, but not necessarily connected to a network.
Is there a way to see their MAC addresses from a Linux machine?
Any insights are appreciated.
Disconnected clients aren't always silent. In fact, more often than not, clients send out directed and broadcast probe requests searching for access points they have connected to previously, thus revealing their MAC addresses, which can be displayed with airodump-ng or by filtering captured packets in Wireshark to show probe requests.
This is the suitable Wireshark filter:
wlan.fc.type_subtype eq 4
Old question, but I'll have a go anyway.
WiFi-enabled devices usually send probe requests to try to find access points they have previously connected to, even when they are nowhere near them.
If you're using BackTrack/Kali Linux, try this:
Create a wireless adapter alias running in monitor mode (assuming your adapter name is wlan0):
airmon-ng start wlan0
Start scanning for devices and access points:
airodump-ng mon0
The access points will be listed first with their MAC addresses under "BSSID", followed by the devices, which will have their MAC addresses listed under "STATION" and a "not associated" flag under "BSSID" if they aren't connected to an access point.
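If you would rather script this than watch airodump-ng, here is a small Python/scapy sketch along the same lines (assumptions: the mon0 monitor-mode interface created above, and root privileges):

    # Print the source MAC of every probe request seen on the interface.
    from scapy.all import sniff
    from scapy.layers.dot11 import Dot11Elt, Dot11ProbeReq

    def show_probe(pkt):
        if pkt.haslayer(Dot11ProbeReq):
            # addr2 is the transmitter, i.e. the probing device's MAC;
            # the SSID element (ID 0) may be empty for broadcast probes.
            elt = pkt.getlayer(Dot11Elt)
            ssid = elt.info.decode(errors="replace") if elt is not None and elt.ID == 0 else ""
            print(f"{pkt.addr2}  probing for {ssid or '<broadcast>'}")

    sniff(iface="mon0", prn=show_probe, store=False)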
I am working on an MPI application which uses threaded MPI calls between processes. Threads are added and removed as per the load requirements. Now, I have a question to which I could not find an answer in the Open MPI forum.
If a set of MPI processes ("ranks") already has a connection, i.e., they are already making send/receive calls, and then a new thread comes up (in either process) which also makes send/receive calls between the same MPI peers, will MPI open up a new set of sockets?
I know that the details are implementation-dependent, so there may not be a general answer. But is there a way to find out?
There are questions about the scalability of this technique, which was chosen for other reasons. It would be great to get some stats on the number of new sockets per connection.
Does anyone know how to do this? For instance, how to query which socket a particular instance of MPI_Send is writing to?
I already tried adding --mca btl self,sm,tcp --mca btl_base_verbose 30 -v -report-pid -display-map -report-bindings -leave-session-attached to mpirun.
Thanks a lot.
To answer my own question, here is what I learnt from the brilliant folks at Open MPI:
On Jan 24, 2012, at 5:34 PM, devendra rai wrote:
I am trying to find out how many separate connections are opened by MPI as messages are sent. Basically, I have threaded MPI calls to a bunch of different MPI processes (which, in turn, have threaded MPI calls).
The point is, with every thread added, are new ports opened (even if the sender-receiver pairs already have a connection between them)?
In Open MPI: no. The underlying connections are independent of how many threads you have.
Is there any way to find out? I went through the MPI APIs, and the closest thing I found was related to Cartesian topology information. This is not sufficient, since that only tells me the logical connections (or does it?).
MPI does not have a user-level concept of a connection. You send a message, a miracle occurs, and the message is received on the other side. MPI doesn't say anything about how it got there (e.g., it may have even been routed through some other process).
Reading the Open MPI FAQ, I thought adding "--mca btl self,sm,tcp --mca btl_base_verbose 30 -display-map" to mpirun would help. But I am not getting what I need. Basically, I want to know how many ports each process is accessing (reading as well as writing).
For Open MPI's TCP implementation, it's basically one TCP socket per peer (plus a few other utility fd's). But TCP sockets are only opened lazily, meaning that we won't open the socket until you actually send to a peer.
--
Jeff Squyres
Credits to Jeff Squyres.
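One way to watch the lazy connection setup Jeff describes is to count a rank's open sockets before and after its first send to a new peer. A rough, Linux-only Python sketch (it counts sockets of any kind, not just TCP; the PID is whatever -report-pid printed):

    # Count a process's open sockets by listing /proc/<pid>/fd.
    import os

    def count_sockets(pid):
        fd_dir = f"/proc/{pid}/fd"
        count = 0
        for fd in os.listdir(fd_dir):
            try:
                target = os.readlink(os.path.join(fd_dir, fd))
            except OSError:
                continue                 # fd closed while we were scanning
            if target.startswith("socket:"):
                count += 1
        return count

    print(count_sockets(12345))          # 12345: the PID printed by -report-pid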