Wireshark: Dump client-server dialogue

If you use "Follow TCP Stream" in Wireshark, you get a very nice display of the client-server dialogue: one color is the client, the other color is the server.
Is there a way to dump this to ASCII without losing who said what?
For example:
server> 220 "Welcome to FTP service for foo-server."
client> USER baruser
server> 331 Please specify the password.
client> supersecret
I want to avoid screenshots, and adding "server>" and "client>" to the lines by hand is error-prone.

It may not be possible with the GUI version, but it's achievable with the console version, tshark:
tshark -r capture.pcap -qz follow,tcp,ascii,<stream_id> > stream.txt
Replace <stream_id> with an actual stream ID (e.g. 1):
tshark -r capture.pcap -qz follow,tcp,ascii,1 > stream.txt
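If you don't know which stream IDs are present in the capture, you can enumerate them first; this prints the tcp.stream index of every packet, deduplicated:
tshark -r capture.pcap -T fields -e tcp.stream | sort -nu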
The follow command writes an ASCII file. How is that better than saving the stream directly from the GUI? Well:
The data sent by the second node is prefixed with a tab to differentiate it from the data sent by the first node.
Since the output in ascii mode may contain newlines, the length of each section of output plus a newline precedes each section of output.
This makes the file easily parsable. Example output:
===================================================================
Follow: tcp,ascii
Filter: tcp.stream eq 1
Node 0: xxx.xxx.xxx.xxx:51343
Node 1: yyy.yyy.yyy.yyy:80
786
GET ...
Host: ...
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
Accept: */*
User-Agent: ...
Referer: ...
Accept-Encoding: ...
Accept-Language: ...
Cookie: ...
235
HTTP/1.1 200 OK
Cache-Control: no-cache, no-store
Pragma: no-cache
Content-Type: ...
Expires: -1
X-Request-Guid: ...
Date: Mon, 31 Aug 2015 10:55:46 GMT
Content-Length: 0
===================================================================
786\n is the length of the first output section, from Node 0.
\t235\n is the length of the response section from Node 1, and so on.
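Because the second node's data is tab-indented and the length lines contain only digits, a short awk script can turn the dump into exactly the client>/server> format asked for. This is a minimal sketch that trusts the tab/digits convention rather than the byte counts (a payload line consisting solely of digits would confuse it) and assumes Node 0 is the client, i.e. the side that opened the connection:
awk '
  /^=+$/ || /^Follow:/ || /^Filter:/ || /^Node [01]:/ { next }  # skip header and trailer lines
  /^\t?[0-9]+$/ { from = (/^\t/ ? "server" : "client"); next }  # length line: switches direction
  { sub(/^\t/, ""); print from "> " $0 }                        # data line: strip tab, prefix, print
' stream.txt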

Related

Why does Slack's files.list endpoint return an empty files array?

I'd like to use the Slack API to delete old files from my workspace, as we are running out of space to upload new ones.
I registered an application, installed it on the target workspace, granted the application the files:read and files:write permissions, and then generated a bot token for the application that has the prefix xoxb.
With this token, I made a GET request to the files.list endpoint using Postman:
GET https://slack.com/api/files.list?token=xoxb-xxxxxxxxxxxx-xxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxx&count=100&page=1&show_files_hidden_by_limit=true HTTP/1.1
User-Agent: PostmanRuntime/7.24.1
Accept: */*
Cache-Control: no-cache
Postman-Token: 29879ef1-6da3-4863-8bb9-da8b4d5f740c
Host: slack.com
Accept-Encoding: gzip, deflate, br
Connection: keep-alive

HTTP/1.1 200 OK
date: Sat, 09 Jan 2021 18:18:42 GMT
server: Apache
x-xss-protection: 0
pragma: no-cache
cache-control: private, no-cache, no-store, must-revalidate
access-control-allow-origin: *
strict-transport-security: max-age=31536000; includeSubDomains; preload
x-slack-req-id: d972daab31ff4fe7a477eb3149ad5260
x-content-type-options: nosniff
referrer-policy: no-referrer
access-control-expose-headers: x-slack-req-id, retry-after
x-slack-backend: r
x-oauth-scopes: files:read,files:write
x-accepted-oauth-scopes: files:read
expires: Mon, 26 Jul 1997 05:00:00 GMT
vary: Accept-Encoding
access-control-allow-headers: slack-route, x-slack-version-ts, x-b3-traceid, x-b3-spanid, x-b3-parentspanid, x-b3-sampled, x-b3-flags
content-encoding: gzip
content-length: 87
content-type: application/json; charset=utf-8
x-envoy-upstream-service-time: 444
x-backend: files_normal files_canary_with_overflow files_control_with_overflow
x-server: slack-www-hhvm-files-iad-jc2v
x-via: envoy-www-iad-4782, haproxy-edge-iad-y1wa
x-slack-shared-secret-outcome: shared-secret
via: envoy-www-iad-4782
{
  "ok": true,
  "files": [],
  "paging": {
    "count": 100,
    "total": 1803,
    "page": 1,
    "pages": 19
  }
}
As you can see, the response contains an empty files[] array, but the paging attribute tells me that there are 1803 files in the workspace.
Why can't I see any of the files in the workspace, even though I have the appropriate scopes and got an HTTP 200 in response?
I don't know if this is the answer, but it's a workaround:
Grant the application the channels:read and channels:join scopes
Call the conversations.list endpoint to get a list of channels in the workspace
Grab the id attribute of one of the channels that you would like to list files for
Call the conversations.join endpoint to add the application to the channel
Call the files.list endpoint to get a list of files in that channel.
Presumably, you can iterate over every public channel in the workspace, join each, list its files, and then delete the files older than a certain date.
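A sketch of that loop with curl, using the bot token in an Authorization header (the token, channel ID, file ID, and cutoff timestamp below are placeholders, and conversations.list pagination via response_metadata.next_cursor is omitted for brevity):
# list public channels in the workspace
curl -s -H "Authorization: Bearer xoxb-..." "https://slack.com/api/conversations.list?types=public_channel"
# join a channel by its id (requires channels:join)
curl -s -X POST -H "Authorization: Bearer xoxb-..." -d "channel=C0123456789" "https://slack.com/api/conversations.join"
# list that channel's files created before a given timestamp
curl -s -H "Authorization: Bearer xoxb-..." "https://slack.com/api/files.list?channel=C0123456789&ts_to=1577836800"
# delete one file by its id (requires files:write)
curl -s -X POST -H "Authorization: Bearer xoxb-..." -d "file=F0123456789" "https://slack.com/api/files.delete"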

erlang gen_tcp connecting to erlang.org claims a 404

Context: Joe Armstrong's "Programming Erlang", 2nd edition, chapter 16 on files, page 256; the example on parsing URLs from a binary.
The steps suggested (after writing code for the scavenge_urls module) are these:
B = socket_examples:nano_get_url("www.erlang.org"),
L = scavenge_urls:bin2urls(B),
scavenge_urls:urls2htmlFile(L,"gathered.html").
And that fails, subtly: the list L ends up empty. Running the first step on its own shows something strange. It does return a binary, but it's not the binary I was looking for:
9> B.
<<"HTTP/1.1 404 Not Found\r\nServer: nginx\r\nDate: Sun, 19 Nov 2017 01:57:07 GMT\r\nContent-Type: text/html; charset=UTF-8\r\n"...>>
The 404 shows that this is where the problem lies.
Yet in the browser all's good with the mothership! I was able to complete the exercise by replacing the call to socket_examples:nano_get_url/1 with curling the same URL, dumping the output into a file, and then reading it back with file:read_file/1. The next steps all ran fine.
Peeking inside the socket_examples module, I see this:
nano_get_url(Host) ->
    {ok, Socket} = gen_tcp:connect(Host, 80, [binary, {packet, 0}]), %% (1)
    ok = gen_tcp:send(Socket, "GET / HTTP/1.0\r\n\r\n"),             %% (2)
    receive_data(Socket, []).

receive_data(Socket, SoFar) ->
    receive
        {tcp, Socket, Bin} ->               %% (3)
            receive_data(Socket, [Bin | SoFar]);
        {tcp_closed, Socket} ->             %% (4)
            %% reverse/1 is lists:reverse/1, imported at the top of the module
            list_to_binary(reverse(SoFar))  %% (5)
    end.
Nothing looks suspicious. First it establishes the connection, then it fires a GET, and then it receives the response. I've never before had to explicitly connect first and fire a GET second; my HTTP client libraries hid that from me. So maybe I don't know what to look for... and I sure trust Joe's code doesn't have any glaring mistakes! =) Yet the lines with comments (3), (4) and (5) aren't something I fully understand.
So, any ideas, fellow Erlangers?
Thanks a bunch!
The problem is not Erlang. It looks like the server running erlang.org requires a Host header as well:
$ nc www.erlang.org 80
GET / HTTP/1.0

HTTP/1.1 404 Not Found
Server: nginx
Date: Sun, 19 Nov 2017 05:51:39 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 162
Connection: close
Vary: Accept-Encoding

<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
$ nc www.erlang.org 80
GET / HTTP/1.0
Host: www.erlang.org

HTTP/1.1 200 OK
Server: nginx
Date: Sun, 19 Nov 2017 05:51:50 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 12728
Connection: close
Vary: Accept-Encoding

<!DOCTYPE html>
<html>
...
Your Erlang code also works once a Host header is sent after the GET / HTTP/1.0\r\n request line:
1> Host = "www.erlang.org".
"www.erlang.org"
2> {ok, Socket} = gen_tcp:connect(Host, 80, [binary, {packet, 0}]).
{ok,#Port<0.469>}
3> ok = gen_tcp:send(Socket, "GET / HTTP/1.0\r\nHost: www.erlang.org\r\n\r\n").
ok
4> flush().
Shell got {tcp,#Port<0.469>,
<<"HTTP/1.1 200 OK\r\nServer: nginx\r\n...>>
Shell got {tcp_closed,#Port<0.469>}
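So the one-line fix to the book's nano_get_url/1 is to send the Host header too; gen_tcp:send/2 accepts iodata, so the host name can simply be spliced in:
nano_get_url(Host) ->
    {ok, Socket} = gen_tcp:connect(Host, 80, [binary, {packet, 0}]),
    ok = gen_tcp:send(Socket, ["GET / HTTP/1.0\r\nHost: ", Host, "\r\n\r\n"]),
    receive_data(Socket, []).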

Getting {error,connect_timeout} message while using Hackney

I am using Hackney's Erlang REST client. I followed the steps provided in the README.md, but I am getting the following error:
17> Method = get.
get
18> URL = <<"www.google.com">>.
<<"www.google.com">>
19> Headers = [].
[]
20> Payload = <<>>.
<<>>
21> Options = [].
[]
22> Test = hackney:request(Method, URL, Headers, Payload, Options).
{error,connect_timeout}
I fetched the same URL with curl and wget and both work. Is there an issue with Erlang's SSL, or with TLS? I have edited the question for better understanding.
EDIT 1 (using curl -vv google.com)
curl -vv google.com
* About to connect() to proxy <<ip>> port 8080 (#0)
* Trying <<ip>>... connected
* Connected to <<ip>> (<<ip>>) port 8080 (#0)
* Proxy auth using Basic with user '<<user>>'
> GET http://google.com HTTP/1.1
> Proxy-Authorization: <<proxy authorization>>
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: google.com
> Accept: */*
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 301 Moved Permanently
< Location: http://www.google.com/
< Content-Type: text/html; charset=UTF-8
< Date: Tue, 07 Jun 2016 03:49:43 GMT
< Expires: Thu, 07 Jul 2016 03:49:43 GMT
< Cache-Control: public, max-age=2592000
< Server: gws
< Content-Length: 219
< X-XSS-Protection: 1; mode=block
< X-Frame-Options: SAMEORIGIN
< Proxy-Connection: Keep-Alive
< Connection: Keep-Alive
< Age: 2223
<
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
here.
</BODY></HTML>
* Connection #0 to host <<ip>> left intact
* Closing connection #0
Hackney does not apply the system's proxy settings automatically, so you have to take care of the proxy settings yourself.
According to the documentation, you should provide the following options:
{proxy, {Host, Port}} %% if http proxy is used
{proxy_auth, {User, Password}} %% if the proxy requires authentication
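Applied to the request from the question, that would look something like this (the proxy host, port, and credentials are placeholders):
Options = [{proxy, {"proxy.example.com", 8080}},
           {proxy_auth, {"user", "password"}}],
{ok, StatusCode, RespHeaders, ClientRef} = hackney:request(get, <<"http://www.google.com">>, [], <<>>, Options).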
What do you get when you use the httpc module to make a request from the Erlang shell?
First start inets:
inets:start().
Then try:
{ok, Response} = httpc:request("https://www.google.com").
or
{ok, Response} = httpc:request("http://www.google.com").
If both of these fail to connect, odds are the issue is not Hackney-related, but rather a problem with your Erlang installation as a whole.
Your error is not a connect_timeout. You are getting a "no match of right hand side value" exception because you are missing the = pattern match on your last command.
Just change it to
{ok, StatusCode, RespHeaders, ClientRef} = hackney:request(Method, URL, Headers, Payload, Options).

enabling rails page caching causes http header charset to disappear

I need the charset to be UTF-8, which seems to be the case by default. Recently I enabled page caching for a few static pages:
caches_page :about
The caching works fine, and I see the corresponding about.html and contact.html pages generated in my /public folder, except that when the page renders, it's no longer in UTF-8.
After googling for a bit, I tried looking at the HTTP headers with wget, before and after caching:
first time:
$ wget --server-response http://localhost:3000/about
HTTP request sent, awaiting response...
  HTTP/1.1 200 OK
  X-Ua-Compatible: IE=Edge
  Etag: "f7b0b4dea015140f3b5ad90c3a392bef"
  Connection: Keep-Alive
  Content-Type: text/html; charset=utf-8
  Date: Sun, 12 Jun 2011 03:44:22 GMT
  Server: WEBrick/1.3.1 (Ruby/1.8.7/2009-06-12)
  X-Runtime: 0.235347
  Content-Length: 5520
  Cache-Control: max-age=0, private, must-revalidate
cached:
$ wget --server-response http://localhost:3000/about
Resolving localhost... 127.0.0.1
Connecting to localhost[127.0.0.1]:3000... connected.
HTTP request sent, awaiting response...
  HTTP/1.1 200 OK
  Last-Modified: Sun, 12 Jun 2011 03:34:42 GMT
  Connection: Keep-Alive
  Content-Type: text/html
  Date: Sun, 12 Jun 2011 03:39:53 GMT
  Server: WEBrick/1.3.1 (Ruby/1.8.7/2009-06-12)
  Content-Length: 5783
As a result, the page displays as ISO-8859-1 and I get a bunch of garbled text. Does anyone know how I can prevent this undesirable result? Thank you.
The solution will depend on the server used.
With page caching, the web server serves the cached file from /public directly, so the Rails stack never gets a chance to provide encoding information to the response; the server's default charset applies instead.
If you're using Apache with Passenger, add this to the configuration:
AddDefaultCharset UTF-8
If you need specific charsets, use a solution like the one in http://www.philsergi.com/2007/06/rails-page-caching-and-mime-types.html
<LocationMatch \/(rss)\/?>
ForceType text/xml;charset=utf-8
</LocationMatch>
<LocationMatch \/(ical)\/?>
ForceType text/calendar;charset=utf-8
</LocationMatch>
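If you cannot touch the server configuration, a server-agnostic fallback is to declare the charset in the markup itself, so even a cached page served without a charset header is rendered as UTF-8; for example, in your layout's <head>:
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">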

Using chunked encoding in a POST request to an asmx web service on IIS 6 generates a 404

I'm using a CXF client to communicate with a .net web service running on IIS 6.
This request (anonymised):
POST /EngineWebService_v1/EngineWebService_v1.asmx HTTP/1.1
Content-Type: text/xml; charset=UTF-8
SOAPAction: "http://.../Report"
Accept: */*
User-Agent: Apache CXF 2.2.5
Cache-Control: no-cache
Pragma: no-cache
Host: uat9.gtios.net
Connection: keep-alive
Transfer-Encoding: chunked
followed by 7 chunks of 4089 bytes and one of 369 bytes, generates the following output after the first chunk has been sent:
HTTP/1.1 404 Not Found
Content-Length: 103
Date: Wed, 10 Feb 2010 13:00:08 GMT
Connection: Keep-Alive
Content-Type: text/html
Anyone know how to get IIS to accept chunked input for a POST?
Thanks
Chunked encoding should be enabled by default. You can check your setting with:
C:\Inetpub\AdminScripts>cscript adsutil.vbs get /W3SVC/AspEnableChunkedEncoding
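If it comes back as false, you can enable it with the corresponding set command (restart IIS afterwards):
C:\Inetpub\AdminScripts>cscript adsutil.vbs set /W3SVC/AspEnableChunkedEncoding "TRUE"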
The 404 makes me wonder if it's really a problem with the chunked encoding. Did you triple-check the URL?
You may well have URLScan running on your server. By default, URLScan is configured to reject requests that have a Transfer-Encoding: header, and URLScan sends 404 errors rather than a more telling server error.
UrlScan v3.1 failures result in 404 errors and not 500 errors. Searching for 404 errors in your W3SVC log will include failures due to UrlScan blocking.
You will need to look at the URLScan configuration file, typically located at C:\Windows\System32\inetsrv\URLScan\URLScan.ini (the path may differ). Somewhere in there you will find a [DenyHeaders] section that will look a bit like this (it will probably have more headers listed):
[DenyHeaders]
transfer-encoding:
Remove transfer-encoding: from this list and it should fix your problem.
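Changes to URLScan.ini take effect only after the filter is reloaded, so restart IIS afterwards, for example with:
iisreset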
