Is it possible to modify users' HTTP requests so that they go to
www.example.com/options
instead of
www.example.com/options_and_params?
My scenario is that about 30000 users connect to my company's network backbone, and I want to add one or more servers (running the code I'm currently working on) between the backbone switches and the Radware LoadProof to achieve this.
After googling all night, I have no leads, only more questions:
I don't need to intercept every packet on the network. With tools like iptables I can filter out the packets I want; I have done that before with iptables. However, a packet is not the same as an HTTP stream. Do I need to reconstruct the HTTP stream?
If I do find a way to modify the HTTP request URL, I still have to put the packet back into the network stream. As far as I know, TCP packets carry a checksum, and after I modify the content it will no longer be valid. How do I calculate a new checksum and put the packet back on the network?
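For reference, the TCP checksum is the standard Internet checksum (RFC 1071): a one's-complement sum over a pseudo-header (source IP, destination IP, protocol, TCP length) plus the TCP header and payload, computed with the checksum field zeroed. A minimal sketch, assuming IPv4 and that you already have the raw segment bytes:

    import struct

    def internet_checksum(data: bytes) -> int:
        # One's-complement sum of 16-bit words (RFC 1071).
        if len(data) % 2:
            data += b"\x00"                          # pad odd-length data
        total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
        while total >> 16:                           # fold carries back in
            total = (total & 0xFFFF) + (total >> 16)
        return ~total & 0xFFFF

    def tcp_checksum(src_ip: bytes, dst_ip: bytes, tcp_segment: bytes) -> int:
        # The TCP checksum also covers a pseudo-header: source address,
        # destination address, a zero byte, protocol number 6 and TCP length.
        # tcp_segment must have its checksum field zeroed beforehand.
        pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 6, len(tcp_segment))
        return internet_checksum(pseudo + tcp_segment)

The IP header checksum has to be recomputed the same way if you touch any IP header field, and if the payload length changes, the IP total-length field and the TCP sequence/acknowledgement numbers of everything that follows on that connection would need to be adjusted as well.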
This is my first time doing network programming or packet-processing development. Any suggestion is appreciated.
This depends on whether you are doing HTTP/1.0 or HTTP/1.1, and on whether it's only the initial request you need to modify or every request in a single HTTP/1.1 session.
If you have the packet and can modify it before it is sent on, and you only need to modify the initial request, then given the length of a typical packet, the position of the URL in an HTTP request (very near the beginning) and the fact that the request line is the first thing sent on the TCP stream, you can fairly safely assume the URL will be present in the first N bytes of the first packet and therefore won't be split over multiple packets.
However, if this is an HTTP/1.1 stream, then multiple requests will be sent over the same TCP connection, in which case the URL of a later request may well be split over two TCP packets.
If you can force HTTP/1.0, or if you rewrite the initial request (or all requests) to HTTP/1.0, then you can be fairly sure that the first HTTP request lines up with the start of the TCP stream and that the URL is very unlikely to be split over multiple packets, meaning no reconstruction is needed and you can just do a replace (see the sketch below).
However, this comes at the cost of a new TCP connection per request, which is pretty inefficient.
If you leave it as HTTP/1.1, then the URL of any later request could sit at an arbitrary point in the stream and therefore be split over multiple TCP packets (realistically two, given the length of a URL).
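Under that assumption (the whole request line sits in the first segment of the connection), the rewrite itself is just a string operation on the packet's TCP payload; something along these lines, where payload is the raw payload bytes and the two paths are the ones from the question:

    def rewrite_request_line(payload: bytes) -> bytes:
        # Rewrite "GET /options_and_params HTTP/1.x" to "GET /options HTTP/1.x"
        # if (and only if) the full request line is present in this payload.
        line_end = payload.find(b"\r\n")
        if line_end == -1:
            return payload                  # request line not complete yet
        parts = payload[:line_end].split(b" ")
        if len(parts) == 3 and parts[1] == b"/options_and_params":
            parts[1] = b"/options"
        return b" ".join(parts) + payload[line_end:]

Note that shortening the payload changes the segment length, so the IP total length, both checksums (see the sketch above) and the sequence/acknowledgement numbers for the rest of the connection would all need to be patched up; that is the bookkeeping a reverse proxy avoids by terminating the TCP connection itself.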
If I understood your question correctly, this could probably be done with a fast reverse proxy such as nginx.
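For comparison, here is a toy rewriting reverse proxy built on Python's standard library, just to illustrate the idea; the upstream address, the port and the GET-only, no-error-handling behaviour are assumptions, and a production setup would use nginx (or similar) rather than anything like this:

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    UPSTREAM = "http://www.example.com"        # assumed backend address

    class RewritingProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            path = self.path
            if path.startswith("/options_and_params"):
                path = "/options"              # the rewrite from the question
            with urlopen(UPSTREAM + path) as upstream:
                body = upstream.read()
                status = upstream.status
            self.send_response(status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("", 8080), RewritingProxy).serve_forever()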
When I use the bep_0005 get_peers method to look up an infohash like "1111111111111111111111111111111111111111", I receive a response with a "values" key. But when I use bep_0003 to send a BitTorrent protocol handshake to each peer in "values", the peers always close the TCP connection. In fact, it seems the peers don't support ut_metadata.
Why do nodes send me fake data?
There are several possible causes for this:
Old uTorrent versions returned values stored for the nearest target key if they did not have an exact match. This was fixed a while ago, but many people are still running old clients.
Various dubious implementations monitoring the DHT try to harvest data by responding to any and all get_peers requests with values and then recording connection attempts, for whatever reason.
Malicious entities use BitTorrent clients as DDoS amplifiers by inducing them to spam targets with TCP connections.
But there are various measures a node can implement to sanitize that data.
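For illustration only, here are a few of the cheap sanity checks a client might apply to the peers in a "values" list before dialling any of them; the exact policy is implementation-specific and these particular checks are just examples:

    import ipaddress

    def plausible_peers(peers, max_per_response=50):
        # peers is an iterable of (ip_string, port) tuples decoded from "values".
        kept = []
        for ip, port in peers:
            addr = ipaddress.ip_address(ip)
            if not (0 < port <= 65535):
                continue                           # impossible port
            if addr.is_private or addr.is_loopback or addr.is_multicast:
                continue                           # cannot be a public peer
            kept.append((ip, port))
            if len(kept) >= max_per_response:      # a node flooding values is suspect
                break
        return kept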
I'm playing around with libpcap, trying to send a ping, but the requests I send are never responded to. No errors are given, and the packet looks identical to a regular ping sent with the ping utility.
The packet on the left was sent with ping from the terminal and the one on the right by my app. As far as I can tell the data field is optional, so I don't include it, and the identifier/sequence numbers can be arbitrary, so they are randomised.
Am I missing something obvious here?
I notice you haven't validated your IP header checksum. Are you sure it is in fact correct? If it isn't, the next router will silently drop the packet, which is consistent with what you've seen. Wireshark can validate the IP header checksum for you if you switch that option on.
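If you would rather check it in your own tooling, the IPv4 header checksum is the RFC 1071 one's-complement sum over the header with the checksum field included; a correct header folds to 0xFFFF. A small sketch, assuming the header bytes come from your own capture:

    import struct

    def ipv4_header_checksum_ok(header: bytes) -> bool:
        # header = the raw IPv4 header (IHL * 4 bytes), checksum field included.
        if len(header) < 20 or len(header) % 2:
            return False
        total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
        while total >> 16:                       # fold carries back in
            total = (total & 0xFFFF) + (total >> 16)
        return total == 0xFFFF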
I want to design a Ruby / Rails solution that sends a message to several listening sockets on a local LAN at the same time. I want the receiving servers to get the message at the exact same moment, or at least within the same millisecond.
What is the best strategy that will effectively allow the receiving sockets to get it at the same time? Naturally, my requirements are extremely time-sensitive.
I'm basing some of my research / design on the two following articles:
http://onestepback.org/index.cgi/Tech/Ruby/MulticastingInRuby.red
http://www.tutorialspoint.com/ruby/ruby_socket_programming.htm
Currently I'm working on a TCP solution rather than UDP because of TCP's guaranteed delivery. I was also going to stand up pre-established connections to all the outbound ports, then iterate over each connection and send a minimal packet of data.
More recently I've been looking at multicasting, and possibly going back to a UDP approach in which the receivers send back a passive confirmation over either UDP or TCP so I know the message arrived.
Note - I'm the new guy here when it comes to sockets.
Is it better to just use UDP and broadcast packets to the entire subnet, without guaranteed immediate delivery?
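If multicast is the direction you go, the core of it is only a couple of socket options. A minimal sketch, shown in Python for brevity (Ruby's Socket class exposes the same setsockopt calls); the group address and port are arbitrary choices:

    import socket
    import struct

    GROUP, PORT = "224.1.1.1", 5007              # arbitrary multicast group

    def send(message: bytes):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        # A TTL of 1 keeps the datagram on the local LAN.
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, struct.pack("b", 1))
        sock.sendto(message, (GROUP, PORT))

    def receive():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))
        # Join the multicast group on all interfaces.
        mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        return sock.recvfrom(1024)

A single send is handed to the network once and delivered to every subscribed receiver, which is about as close to "at the same time" as you can get from user space; the trade-off is that delivery is not guaranteed, so any acknowledgement scheme has to be layered on top.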
Is this really such a time-sensitive component? If it truly comes down to the microsecond level, then you may want to make sure it's implemented close to the hardware, using native functions. That said, a TCP ACK should be faster than a UDP send followed by a software-level response.
I am using the UDP network protocol to send messages from various clients to a root server.
The message from a client to the server may not be sent directly; it may be relayed via other clients.
I want to know which clients the message was relayed through by looking at the message received at the root server. How can I do this?
UDP does not include this information. You'll need to include something in your protocol if you want to keep track of servers through which the message has passed.
The traceroute program uses a trick to get bounced packets by setting the TTL to an increasing number. It starts with a TTL of 1 so that the first bounce comes from the closest server to the source. It then tries a TTL of 2 to get a bounce from the second server on the path, and so on.
traceroute is client-side and heuristic, i.e. it only works for stable routes. Since you are essentially constructing an overlay network, the only ways to get information about the route are reconstructing it from your routing algorithm (hard, and probably infeasible in a distributed network) or having each relay add a note (typically the relay's name and the previous hop's IP address) to the message.
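A minimal sketch of the "each relay adds a note" approach, assuming the payload is small enough to carry a JSON envelope; the field names are purely illustrative:

    import json
    import socket

    def forward(datagram: bytes, my_name: str, came_from: str, next_hop: tuple) -> None:
        # Append a record-route entry before relaying the message onwards.
        msg = json.loads(datagram)
        msg.setdefault("route", []).append({"relay": my_name, "from": came_from})
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(json.dumps(msg).encode(), next_hop)

    # At the root server, msg["route"] lists every client the message passed through.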
How does a server know what kind of packets it is receiving (struct, array, ...)?
PS: I know, dumb question.
It doesn't - it's just a bunch of bits/bytes. The application, using application-level protocols, decides how to interpret those bits/bytes.
It's just like memory being a bunch of bits/bytes - a pointer to a struct can be forced to point anywhere, and the struct can be used to read that memory, but the data may be nonsensical. Your application logic ensures (hopefully) that the struct pointer is only applied to memory that contains valid struct data. Similarly, your network application must decide how to interpret what the bits in the packets or TCP stream mean.
An application may use a well-known protocol to decide how to interpret those bytes. For example, the HTTP protocol specifies what the client should transmit, and the server knows to interpret the data from the client according to the HTTP spec. Regardless of what the client actually sends (e.g. if a gaming client accidentally sent a binary stream to the HTTP server), the HTTP server will nevertheless try to interpret the bits as an HTTP request.
TCP servers receive a stream of bytes. Any interpretation beyond this is up to the application logic.
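As a concrete illustration of "the application decides", here is a made-up two-field framing (message type plus payload length); the bytes only become meaningful because both ends agreed on this layout beforehand:

    import struct

    HEADER = struct.Struct("!HI")        # 2-byte message type, 4-byte payload length

    def recv_exactly(sock, n: int) -> bytes:
        # TCP is a byte stream, so keep reading until n bytes have arrived.
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed the connection")
            buf += chunk
        return buf

    def read_message(sock):
        msg_type, length = HEADER.unpack(recv_exactly(sock, HEADER.size))
        return msg_type, recv_exactly(sock, length)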