A website usually consists of multiple objects (e.g. an HTML file, a few PNG files, etc.). I would like to know if there is a tool that can capture the individual requests/responses in different pcap files?
So for example if I browse to http://somewebsite.com , and http://somewebsite.com consists of, say, {index.html, image1.png, somestylefile.css, image2.png}, the tool would capture the entire load of http://somewebsite.com but generate {index.html.pcap, image1.png.pcap, somestylefile.css.pcap, image2.png.pcap}.
I don't know of any tool that can do this. Is it possible using scapy or something similar?
An HTTP connection can carry multiple requests inside the same TCP connection, and browsers make heavy use of this (HTTP keep-alive). With HTTP pipelining the requests/responses don't even need to be fully separated in time, i.e. a client can send another request even though the response to the previous one has not arrived yet. And with HTTP/2 the data can also be interleaved, i.e. several responses transferred at the same time inside the same connection.
Because of this it is not always possible to capture the data as separate pcap files, since they might not be separable at the packet level. But if you don't need the original packet boundaries, it is possible to create separate pcap files for each request which do not necessarily reflect the original packets but which do reflect the application layer, i.e. the response matching the request.
One tool which does this is httpflow.pl, which can extract HTTP/1.x request/response pairs from an existing pcap (or sniff directly) and write each request/response into a separate pcap file, as if it had been a separate TCP connection. It can also clean up the data for easier analysis, i.e. unchunk and uncompress the HTTP body.
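If you want to experiment with this yourself in scapy, a crude first step is to split a capture into one file per TCP connection. This is only an approximation of what httpflow.pl does: it assumes HTTP/1.x without keep-alive, where each connection carries exactly one request/response pair, and the file names here are made up:

    # Minimal sketch: split a capture into one pcap per TCP connection.
    # This only approximates httpflow.pl for HTTP/1.x without keep-alive,
    # where one connection carries exactly one request/response pair.
    from collections import defaultdict
    from scapy.all import rdpcap, wrpcap, IP, TCP

    packets = rdpcap("capture.pcap")        # hypothetical input file
    flows = defaultdict(list)

    for pkt in packets:
        if IP in pkt and TCP in pkt:
            a = (pkt[IP].src, pkt[TCP].sport)
            b = (pkt[IP].dst, pkt[TCP].dport)
            key = tuple(sorted([a, b]))     # same key for both directions
            flows[key].append(pkt)

    for i, pkts in enumerate(flows.values()):
        wrpcap(f"flow_{i}.pcap", pkts)      # one file per TCP connection

Handling keep-alive or pipelined connections would additionally require reassembling the TCP streams and splitting them at HTTP message boundaries, which is exactly the part httpflow.pl does for you.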
Related
I sometimes need to use Wireshark to analyze communication issues with a particular protocol that my application uses. Wireshark already comes with a dissector for the protocol, and displays the communication in the best possible way I can imagine.
But I also need to view the actual communication together with events happening inside my application. The application is capable of generating various logs and traces. The information in them is actually more structured, but for simplicity, let's say it is just a sequence of entries where each entry has a timestamp and a textual message.
Currently, I have to place Wireshark and the logs side by side on the screen and painfully correlate the timestamps in order to figure out how they belong together. To make my analysis much easier, I would like to view the information from my logs merged with the communication protocol messages in Wireshark, properly sorted by their timestamps.
I found that Wireshark has a Merge capability, so this is where I am directing my investigation. I think that with some effort, I might be able to do the following:
1) Design my own "protocol" and generate a PCAPNG file from my application, with the event timestamps and messages, and
2) Develop a Wireshark dissector for the above, so that I can view the events in Wireshark.
The first part of my question is whether my approach is the right one.
But I also wonder whether I could achieve what I want in some simpler way. Ideally, I would like to reuse something that already exists and, specifically, avoid developing a specialized dissector. Isn't there a protocol with identical features (just timestamps and textual messages), with a dissector that Wireshark already has, that I could use?
Maybe you could make use of syslog along with syslogd or rsyslogd?
One way to inject arbitrary messages into trace files without even having a syslog server is to make use of nc (netcat). For example:
    echo -n "Hello World" | nc -w 0 -u 1.1.1.1 514
Wireshark will also dissect this message as syslog traffic. This can be useful when trying to insert "markers" into capture files near where an event of interest occurs.
In any case, making use of syslog facilities would save you from having to write your own protocol and dissector.
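You can take the netcat trick one step further and generate a whole capture file from your application's log, without putting anything on the wire at all. A minimal sketch with scapy, assuming a hypothetical list of (timestamp, message) entries and made-up addresses; Wireshark will dissect the UDP/514 payloads as syslog and sort them by packet time:

    # Minimal sketch: turn application log entries into a pcap of fake
    # syslog packets, which Wireshark can then merge with a real capture.
    # The entries and addresses below are made up for illustration.
    from scapy.all import Ether, IP, UDP, Raw, wrpcap

    log_entries = [                         # hypothetical (timestamp, message)
        (1700000000.123, "connection pool exhausted"),
        (1700000001.456, "retrying request 42"),
    ]

    packets = []
    for ts, msg in log_entries:
        pkt = (Ether() / IP(src="10.0.0.1", dst="10.0.0.2")
               / UDP(sport=514, dport=514)
               / Raw(load=("<14>myapp: " + msg).encode()))
        pkt.time = ts                       # Wireshark sorts on this timestamp
        packets.append(pkt)

    wrpcap("app_events.pcap", packets)

Merging app_events.pcap with your real capture (File > Merge, or mergecap) then gives you a single timeline of protocol messages and application events.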
Is it possible to modify users' HTTP request to
www.example.com/options
instead of
www.example.com/options_and_params
My scenario is that about 30000 users connect to my company's network backbone, and I want to add one or more servers (running the code I'm currently working on) between the backbone switches and the Radware LoadProof to achieve this.
After googling all night, I have no leads, but I do have some more questions:
I don't need to intercept every packet going through the network. With help from tools like iptables, I can filter out the packets I want; I have done that before. However, a packet is not the same as an HTTP stream. Do I need to reconstruct the HTTP stream?
If I successfully find a way to modify the HTTP request URL, I still have to put the packet back into the network stream. As I understand it, TCP packets have a checksum, and after I modify the content it will be wrong. How do I calculate a new checksum and put the packet back on the network?
It's my first time doing network programming or packet-processing development. Any suggestion is appreciated.
This depends on whether you are doing HTTP/1.0 or HTTP/1.1, and whether it's only the initial request you need to modify or all requests in a single HTTP/1.1 session.
If you have the packet and can modify it before it is sent on, and you are trying to modify just the initial request, then given the length of a typical packet, the location of the URL in the HTTP request stream (very near the beginning), and the fact that it will be the first thing sent in the TCP stream, I think you can fairly safely assume that it will be present in the first N bytes of the first packet sent and therefore won't be split over multiple packets.
However, if this is an HTTP/1.1 stream, then multiple requests will be sent over the same TCP connection, in which case the URL in later requests may well be split across two TCP packets.
If you can force HTTP/1.0, or if you modify the initial (or all) requests to be HTTP/1.0, then each request will arrive at the start of a fresh TCP stream, so you are very unlikely to see the URL split over multiple packets, meaning no reconstruction and the ability to just do a replace.
However, this comes at the cost of a new TCP connection per request, which is pretty inefficient.
If you don't, and you leave it as HTTP/1.1, then the URL could be at any point in the stream for any later request and therefore split over multiple TCP packets (realistically two, given the size of a URL).
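On the checksum part of the question: you don't have to compute it by hand. With scapy, for example, deleting the stored checksum fields makes them recalculate on serialization. A minimal sketch, assuming the whole request line sits in one packet as discussed above (the URL paths are the ones from the question):

    # Minimal sketch: rewrite the URL in a captured HTTP request packet
    # and let scapy recompute the checksums. Assumes the whole request
    # line sits inside this one packet, as discussed above.
    from scapy.all import IP, TCP, Raw

    def rewrite_url(pkt):
        if Raw in pkt and pkt[Raw].load.startswith(b"GET /options_and_params"):
            pkt[Raw].load = pkt[Raw].load.replace(
                b"/options_and_params", b"/options", 1)
            # Deleting the stored values forces scapy to recalculate
            # them when the packet is serialized for sending.
            del pkt[IP].len
            del pkt[IP].chksum
            del pkt[TCP].chksum
        return pkt

Note that changing the payload length also shifts every subsequent sequence/ACK number on that connection, which you would have to fix up in both directions; this is the main reason a proxy-based approach is much simpler.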
If I understood your question correctly, this could probably be done with a fast reverse proxy like nginx.
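With a proxy in the path, the rewrite itself becomes trivial, because the proxy terminates TCP and hands you whole HTTP messages. As an illustration of the same idea (not the nginx way), a minimal mitmproxy addon sketch using the paths from the question:

    # Minimal mitmproxy addon sketch: rewrite the path at the proxy
    # instead of patching raw packets. Paths are from the question.
    from mitmproxy import http

    def request(flow: http.HTTPFlow) -> None:
        if flow.request.path.startswith("/options_and_params"):
            flow.request.path = "/options"

Run with mitmdump -s rewrite.py; in nginx the equivalent would be a rewrite rule in the server block.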
I want to write a program which controls all the web-browsing activity on a PC.
I.e. checking all the websites users go to, filtering some of them, etc.
But I have no idea how to capture all the packets, process them, or even act on some of them (think of filtering unwanted sites).
Any help, sample code, open source programs...?
There are different levels you can put yourself in the middle of the communication:
By implementing a proxy and having the browser connect to the proxy
By implementing a firewall/snooper and handling the raw packets
By implementing a network driver and handling the raw packets
IMHO, number 1 is easiest; look at Squid for an example. Number 2 is doable too; take a look at Fiddler. You could take a look at the Click Modular Router for option number 3.
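To give a feel for option 2, here is a minimal passive-snooping sketch with scapy that logs the Host header of every plain-HTTP request it sees. It needs raw-socket privileges, and it only covers HTTP; for HTTPS you would have to parse the TLS SNI field instead, which this sketch does not do:

    # Minimal sketch of option 2: passively sniff plain-HTTP traffic and
    # log the Host header of each request. Needs raw-socket privileges.
    from scapy.all import sniff, Raw

    def log_host(pkt):
        if Raw in pkt:
            for line in pkt[Raw].load.split(b"\r\n"):
                if line.lower().startswith(b"host:"):
                    print("visited:", line.split(b":", 1)[1].strip().decode())

    sniff(filter="tcp port 80", prn=log_host, store=False)

Acting on the traffic (blocking rather than just logging) is where this level gets hard, and is exactly what the proxy in option 1 gives you for free.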
Depending on the browser, maybe a simple browser plugin could do?
Does anybody know whether VNC (RFB) supports virtual channels and add-ins to them like RDP (Microsoft Terminal Services) does? I just want to transfer my own data across a VNC connection...
VNC/RFB does not have virtual channels unfortunately.
Here is the best reference I've found to the RFB protocol: http://tigervnc.org/cgi-bin/rfbproto
Without knowing more about what you are trying to send and in which direction(s), there are a few options that come to mind:
The tight encoding has file-transfer support. There is a poorly formatted specification for the full tight encoding here: http://vnc-tight.svn.sourceforge.net/viewvc/vnc-tight/trunk/doc/rfbtight.odt?revision=3619
If you have control of both client and server, then you could define a custom encoding that allows you to send your data. The client would advertise that it supports the encoding and if the server supports it then it will start using it.
You could use the clipboard messages (ClientCutText and ServerCutText), and if you need to send binary data, encode it as ISO 8859-1 (Latin-1) text (see the sketch after this list). The downside is that if the server doesn't support your scheme and the client sends the data, it will get pasted on the server.
If you just need to send from the server to the client, then you could use a FramebufferUpdate message that sends data outside the current viewport (e.g. 123 pixels beyond the right side of the viewport). Clients without support may not handle this well, though.
Another option, if you just need to send from the server to the client, is to send a FramebufferUpdate within the viewport with a special marker and then immediately send another FramebufferUpdate (even in the same packet) with the real visible data to replace it. This would work with existing clients (at a bit more overhead), though clients might see a brief flicker.
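For the clipboard option, the framing is simple enough to build by hand. A minimal sketch per the RFB spec (ClientCutText is message type 6); the "MYDATA:" marker and base64 wrapping are my own convention, chosen so the payload stays within Latin-1 and the receiver can tell it apart from real clipboard text:

    # Minimal sketch of the clipboard option: frame arbitrary bytes as an
    # RFB ClientCutText message (message type 6 in the RFB spec). The
    # "MYDATA:" marker and base64 wrapping are a made-up convention.
    import base64
    import struct

    def client_cut_text(data: bytes) -> bytes:
        text = b"MYDATA:" + base64.b64encode(data)
        # u8 message-type, 3 padding bytes, u32 text length, then the text
        return struct.pack("!BxxxI", 6, len(text)) + text

    msg = client_cut_text(b"\x00\x01 some binary payload")
    # msg would then be written to the established VNC client socket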
A client has a system which reads large files (up to 1 GB) of multiple video images. Access is via an indexing file which "points" into the larger file. This works well on a LAN. Does anyone have any suggestions as to how I can access these files through the internet if they are held on a remote server. The key constraint is that we cannot afford the time necessary to download the whole file before accessing individual images within it.
You could put your big file behind an HTTP server like Apache, then have your client side use HTTP Range headers to fetch the chunk it needs.
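For example, a minimal client-side sketch; the URL is made up, and in practice the byte offsets would come straight from your existing index file:

    # Minimal sketch: fetch one image out of the big file with an HTTP
    # Range request. The URL is made up; the byte offsets would come
    # from the client's existing index file.
    import urllib.request

    req = urllib.request.Request(
        "http://example.com/videos/big_file.bin",
        headers={"Range": "bytes=1048576-2097151"},
    )
    with urllib.request.urlopen(req) as resp:
        assert resp.status == 206           # 206 Partial Content: range honored
        image_bytes = resp.read()

The server advertises support via the Accept-Ranges: bytes response header and answers a satisfiable range request with 206 Partial Content.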
Another alternative would be to write a simple script in PHP, Perl or server-language-of-your-choice which takes the required offsets as input and returns the chunk of data you need, again over HTTP.
If I understand the question correctly, it depends entirely on the format chosen to contain the images as a video. If the container has been designed in such a way that the information about each image is accessible just before or just after the image, rather than at the end of the container, you could extract the images and their metadata from the video container and start working on what you have downloaded so far. You will need to know the binary format used.
FTP does let you use 'paged files', where sections of the file can be transferred independently:

    To transmit files that are discontinuous, FTP defines a page
    structure. Files of this type are sometimes known as
    "random access files" or even as "holey files". In FTP, the
    sections of the file are called pages. -- RFC 959
I've never used it myself though.