How do I use SSH Remote Capture in Wireshark?

I am using Wireshark 2.4.6 portable (downloaded from their site) and I am trying to configure remote capture.
I am not clear on what I should use in the remote capture command line.
There is help for this, but it refers to the CLI options:
https://www.wireshark.org/docs/man-pages/sshdump.html
On the above page they say that using the sshdump CLI is the equivalent of this Unix command line:
$ ssh remoteuser@remotehost -p 22222 'tcpdump -U -i IFACE -w -' > FILE &
$ wireshark FILE

You just have to configure the SSH settings in that window to get Wireshark to log in and run tcpdump.
You can leave the capture command empty and it will capture on eth0. You'd only want to change it if you have specific requirements (like if you need to specify an interface name).
You might want to set the capture filter to not ((host x.x.x.x) and port 22) (replacing x.x.x.x with your own IP address) so the capture doesn't get flooded with its own SSH traffic.

The following works as a remote capture command:
/usr/bin/dumpcap -i eth0 -q -f 'not port 22' -w -
Replace eth0 with the interface to capture traffic on, and not port 22 with your remote capture filter, remembering not to capture your own SSH traffic.
This assumes you have configured dumpcap on the remote host to run without requiring sudo.
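As a rough sketch of that remote-host setup on Debian/Ubuntu (the exact commands are an assumption; adjust for your distribution), you can let your SSH user run dumpcap without sudo like this:
sudo dpkg-reconfigure wireshark-common   # answer "Yes" to allow non-root capture
sudo usermod -aG wireshark remoteuser    # "remoteuser" is a placeholder account name
Alternatively, grant the capabilities directly:
sudo setcap cap_net_raw,cap_net_admin+eip /usr/bin/dumpcap
Log out and back in (or open a new SSH session) for the group change to take effect.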
The other capture fields appear to be ignored when a remote capture command is specified.
Tested with Ubuntu 20.04 (on both ends) with wireshark 3.2.3-1.
The default remote capture command appears to be tcpdump.
I have not found any documentation which explains how the GUI dialog options for remote ssh capture are translated to the remote host command.

Pertaining to sshdump: if you're having trouble finding the command on the command line, note that it is not in the system path by default on all platforms.
For GNU/Linux (for example, in my case, Ubuntu 20.04, Wireshark v3.2.3) it was under /usr/lib/x86_64-linux-gnu/wireshark/extcap/.
If this is not the case on your system, it may be handy to ensure that mlocate is installed (sudo apt install mlocate) and then use locate sshdump to find its path (you may find some other interesting tools in the same location; use their man pages or --help to learn more).
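Once you have found the binary, here is a minimal sketch of invoking sshdump by hand (the path, host, user, and capture command are placeholders; see the man page linked above for the full option list):
/usr/lib/x86_64-linux-gnu/wireshark/extcap/sshdump \
    --extcap-interface=sshdump --capture --fifo=/tmp/remote.pcap \
    --remote-host remotehost --remote-port 22 --remote-username remoteuser \
    --remote-capture-command 'tcpdump -U -i eth0 -w -'
You can then open /tmp/remote.pcap in Wireshark, mirroring the ssh ... > FILE pipeline shown above.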

Related

Gstreamer Camera Usage within Docker Containers

I'm currently working on running DL algorithms inside Docker containers, and I've been successful. However, I can only get it running by passing the --net=host flag to the docker run command, which makes the container use the host computer's network interface. If I don't pass that flag, it throws the following error:
No EGL Display
nvbufsurftransform: Could not get EGL display connection
No protocol specified
nvbuf_utils: Could not get EGL display connection
When I do
echo $DISPLAY
it outputs :0 which is correct.
But I don't understand what GStreamer, X11, or EGL has to do with the host network. Is there any explanation for this, or any workaround other than the --net=host flag? Because of this I can't map different ports for different containers.
I also created a topic on this on the NVIDIA DevTalk forum, but it is still a dark spot for me. I wasn't satisfied with the answers I got.
For now, though, it is OK to use the --net=host flag to solve this problem.
Quick heads-up: GStreamer does not work natively over X11 forwarding; you are better off using a VNC solution, or having access to the physical machine.
Troubleshooting
Is GStreamer installed? apt install -y gstreamer1.0-plugins-base
What does xrandr return?
What does xauth list return?
What does gst-launch-1.0 nvarguscamerasrc ! nvoverlaysink return?
For example:
On my setup, because I do not use a Dockerfile, I copy the cookie from xauth list and then paste it inside the Docker container:
xauth add user/unix:11 MIT-MAGIC-COOKIE-1 cccccccccccccccccccccccccc
After this I can test the display with xterm &.
Besides, once this is done, xrandr produces output as well.
Getting more verbose
I also connect to the Docker container over SSH with verbose output (to host or guest, it doesn't matter): ssh -X -v user@192.168.123.123
The EGL error is then wrapped in debug details.
Stream specifics
This is related to DeepStream and GStreamer customization from NVIDIA.
Some NVIDIA threads point out that EGL needs a "sink" but no X11 display.
If there is some server running on the host at a given port, running Docker with --net=host will allow a client inside the container to connect to it.
According to the documentation, there are some servers used by the GPU.
$DISPLAY
According to NVIDIA threads, unset DISPLAY provides better results.
On my setup, without a display, the EGL error is gone, but then the stream cannot be seen.
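A minimal sketch of trying this inside the container (nvarguscamerasrc is specific to NVIDIA Jetson setups, and fakesink simply discards the frames, so this only verifies that the EGL error is gone; it will not show the stream):
unset DISPLAY
gst-launch-1.0 nvarguscamerasrc num-buffers=100 ! fakesink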

Configure Docker with proxy per host/url

I use Docker Toolbox on Windows 7 in a corporate environment. My workflow requires pulling containers from one artifactory and pushing them to a different one (eg. external and internal). Each artifactory requires a different proxy to access it. Is there a way to configure Docker daemon to select proxy based on a URL? Or, if not, what else can I do to make this work?
Since, as Pierre B. mentioned, Docker daemon does not support URL-based proxy selection, the solution is to point it to a local proxy configured to select the proper upstream proxy based on the URL.
While any HTTP[S] proxy capable of upstream selection would do (the pac4cli project being particularly interesting for its advertised capability to select the upstream based on the proxy auto-discovery protocol used by most web browsers in a corporate setting), I've chosen to use tinyproxy as a more mature and lightweight solution. Furthermore, I've decided to run my proxy inside the docker-machine VM in order to simplify its deployment and make sure the proxy is always running when the Docker daemon needs it.
Below are the steps I used to set up my system. I'm especially grateful to phoenix for providing steps to set up Docker Toolbox on Windows behind a corporate proxy, and will borrow heavily from that answer.
From this point on I will assume either Docker Quickstart Terminal or GitBash, with docker in the PATH, as your command line console and that "username" is your Windows user name.
Step 1: Build tinyproxy on your target platform
Begin by pulling a clean Linux distribution (I used CentOS) and running bash inside it:
docker run -it --name=centos centos bash
Next, install the tools we'll need:
yum install -y make gcc
After that we pull the latest release of Tinyproxy from its GitHub repository and extract it inside root's home directory (at the time of this writing the latest release was 1.10.0):
cd
curl -L https://github.com/tinyproxy/tinyproxy/releases/download/1.10.0/tinyproxy-1.10.0.tar.gz \
| tar -xz
cd tinyproxy-1.10.0
Now let's configure and build it:
./configure --enable-upstream \
            --disable-filter \
            --disable-reverse \
            --disable-transparent \
            --disable-xtinyproxy
make
While --enable-upstream is obviously required, disabling other default features is optional but a good practice. To make sure it actually works run:
./src/tinyproxy -h
You should see something like:
Usage: tinyproxy [options]
Options are:
-d Do not daemonize (run in foreground).
-c FILE Use an alternate configuration file.
-h Display this usage information.
-v Display version information.
Features compiled in:
Upstream proxy support
For support and bug reporting instructions, please visit
<https://tinyproxy.github.io/>.
We exit the container by pressing Ctrl+D and copy the executable to a special folder location accessible from the docker-machine VM:
docker cp centos://root/tinyproxy-1.10.0/src/tinyproxy \
/c/Users/username/tinyproxy
Substitute "username" with your Windows user name. Please note that double slash — // before "root" is required to disable MINGW path conversion.
Now we can delete the container:
docker rm centos
Step 2: Point docker daemon to a local proxy port
Choose a TCP port number to run the proxy on. This can be any port that is not in use on the docker-machine VM. I will use number 8618 in this example.
First, let's delete the existing default Docker VM:
WARNING: This will permanently erase all currently stored containers and images
docker-machine rm -f default
Next, we re-create the default machine setting HTTP_PROXY and HTTPS_PROXY environment variables to the local host and the port we selected, and then refresh our shell environment:
docker-machine create default \
--engine-env HTTP_PROXY=http://localhost:8618 \
--engine-env HTTPS_PROXY=http://localhost:8618
eval $(docker-machine env)
Optionally, we could also set the NO_PROXY environment variable to list hosts and/or wildcards (comma-separated) to which the daemon should connect directly, bypassing the proxy, as shown below.
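For example (the registry host below is a placeholder):
docker-machine create default \
    --engine-env HTTP_PROXY=http://localhost:8618 \
    --engine-env HTTPS_PROXY=http://localhost:8618 \
    --engine-env NO_PROXY=registry.corp.example.com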
Step 3: Set up tinyproxy inside docker-machine VM
First, we will create two files in the /c/Users/username directory (this is where our tinyproxy binary should reside after Step 1 above) and then we'll copy them to the VM.
The first file is tinyproxy.conf; the exact syntax is documented on the Tinyproxy website, but the example below should have all the settings you need:
# These settings can be customized to your liking,
# the port though must be the same we used in Step 2
listen 127.0.0.1
port 8618
user nobody
group nogroup
loglevel critical
syslog on
maxclients 50
startservers 2
minspareservers 2
maxspareservers 5
disableviaheader yes
# Here is the actual proxy selection, rules apply from top
# to bottom, and the last one is the default. More info on:
# https://tinyproxy.github.io/
upstream http proxy1.corp.example.com:80 ".foo.example.com"
upstream http proxy2.corp.example.com:80 ".bar.example.com"
upstream http proxy.corp.example.com:82
In the example above:
http://proxy1.corp.example.com:80 will be used to connect to URLs that end with "foo.example.com", such as http://www.foo.example.com
http://proxy2.corp.example.com:80 will be used to connect to URLs that end with "bar.example.com", such as http://www.bar.example.com, and
http://proxy.corp.example.com:82 will be used to connect to all other URLs
It is also possible to match exact host names, IP addresses, subnets, and hosts without domains, for example:
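For illustration, such rules might look like the following (hypothetical hosts; check the Tinyproxy documentation for the exact pattern syntax supported by your version):
upstream http proxy3.corp.example.com:80 "buildserver"
upstream http proxy4.corp.example.com:80 "10.1.2.3"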
The second file is the shell script that will launch the proxy; its name must be bootlocal.sh:
#! /bin/sh
# Terminate on error
set -e
# Switch to the script directory
cd $(dirname $0)
# Launch proxy server
./tinyproxy -c tinyproxy.conf
Now, let's connect to the docker VM, get root, and switch to boot2docker directory:
docker-machine ssh
sudo -s
cd /var/lib/boot2docker
Next, we'll copy all three files over and set their permissions:
cp /c/Users/username/{tinyproxy{,.conf},bootlocal.sh} .
chmod 755 tinyproxy bootlocal.sh
chmod 644 tinyproxy.conf
Exit the VM session by pressing Ctrl+D twice and restart it:
docker-machine restart default
That's it! Now Docker should be able to pull and push images from different registries, automatically selecting the right proxy server.
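To verify that the daemon picked up the proxy settings, you can inspect its configuration (the exact output wording varies by Docker version) and test a pull:
docker info 2>/dev/null | grep -i proxy
docker pull hello-world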

Run GUI apps via Docker without XQuartz or VNC

As an evolution of "Can you run GUI apps in a Docker container?", is it possible to run GUI applications via Docker without other tools like VNC or X11/XQuartz?
In VirtualBox, you could pass --type gui to launch a headed VM, and this doesn't require installing any additional software. Is anything like that possible via a Dockerfile or CLI arguments?
Docker doesn't provide a virtual video device and a place to render that video content in a window like a VM does.
It might be possible to run a container with --privileged and write to the Docker host's video devices. That would possibly require a second video card that's not in use. The software that Docker runs in the container would also need to support that video device and be able to write directly to it or a framebuffer. This limits what could run in the container to something like an X server or Wayland compositor that draws a display to a device.
You could try the following, which worked in my case.
Check the local machine's display and its authentication:
[root@localhost ~]# echo $DISPLAY
[root@localhost ~]# xauth list $DISPLAY
localhost:15 MIT-MAGIC-COOKIE-1 cc2764a7313f243a95c22fe21f67d7b1
Copy the above authentication, join your existing container, and add the display authentication:
[root@apollo-server ~]# docker exec -it -e DISPLAY=$DISPLAY 3a19ab367e79 bash
root@3a19ab367e79:/# xauth add 192.168.10.10:15.0 MIT-MAGIC-COOKIE-1 cc2764a7313f243a95c22fe21f67d7b1
root@3a19ab367e79:/# firefox
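If you are starting a fresh container rather than joining an existing one, a common sketch on Linux hosts is to share the host's X socket directly (this still relies on the host's X server, just without VNC or XQuartz; the image name is a placeholder):
docker run -it --rm \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    some-gui-image firefox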

Plink is not working in Jenkins

I have a batch script that uses plink to load an existing PuTTY session and run a few commands on a Unix server. The same batch script runs fine from the Windows command line, but when I run it from Jenkins, it does not work and gives the output below.
PuTTY Link: command-line connection utility
Release 0.70
Usage: plink [options] [user@]host [command] ("host" can also be a
PuTTY saved session name)
Options:
-v show verbose messages
-load sessname Load settings from saved session
-ssh -telnet -rlogin -raw force use of a particular protocol (default
SSH)
-P port connect to specified port
-l user connect with specified username
-m file read remote command(s) from file
-batch disable all interactive prompts
The following options only apply to SSH connections:
-pw passw login with specified password
-L listen-port:host:port Forward local port to remote address
-R listen-port:host:port Forward remote port to local address
-X -x enable / disable X11 forwarding
-A -a enable / disable agent forwarding
-t -T enable / disable pty allocation
-1 -2 force use of particular protocol version
-C enable compression
-i key private key file for authentication
I had the same issue. I solved it by running the commands that are stored in the session directly. For example, instead of running:
plink -load my-session
I instead used
plink -serial \\.\COM9 -sercfg 115200,8,1,N,N
Since sessions are saved in the Windows Registry per user, I don't think Jenkins has access to them.
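If your saved session is an SSH session rather than a serial one, the same idea applies: pass everything on the command line using the options from the usage text above, for example (host, user, and key path are placeholders):
plink -ssh -batch -P 22 -l builduser -i C:\keys\jenkins.ppk remotehost "uptime"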

Nagios Monitoring check_cpu script

I really need a script for Nagios to monitor the CPU usage on remote hosts. I have this command, but it does not work:
# 'check_cpu' command definition
# w = Warning level (if CPU % idle falls below this level - must be a percentage)
# c = Critical level
define command{
command_name check_cpu
command_line $USER1$/check_cpu -H $HOSTADDRESS$ -w $ARG1$ -c $ARG2$ -p $USER3$
}
vi /etc/nagios/nrpe.cfg
command[check_uptime]=/usr/local/nagios/libexec/check_uptime
I'm a bit confused here. Your two commands are unrelated.
You need to check CPU on a remote host via NRPE? First, is NRPE installed on the remote host? Is the check_nrpe plugin installed on the Nagios server?
I'm going to assume that you have NRPE loaded on the remote server (since you listed a check_uptime command in the NRPE configuration file). This means at the very least you'll need to use the check_nrpe command to grab the data you need.
On the remote host in /etc/nagios/nrpe.cfg there should be a few other commands hopefully, maybe something like:
command[check_users]=/usr/local/nagios/libexec/check_users -w 5 -c 10
command[check_load]=/usr/local/nagios/libexec/check_load -w 15,10,5 -c 30,25,20
command[check_hda1]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /dev/hda1
command[check_zombie_procs]=/usr/local/nagios/libexec/check_procs -w 5 -c 10 -s Z
command[check_total_procs]=/usr/local/nagios/libexec/check_procs -w 150 -c 200
If for some reason there isn't, you're going to need to do some reading. This is a good document to start with.
Then, on your Nagios server you can define a service using the check_nrpe like this:
define service {
use generic-service
host_name remotehost
service_description CPU Load
check_command check_nrpe!check_load
}
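Applying the same pattern to the CPU check from the question (this assumes a check_cpu plugin actually exists in libexec on the remote host; the thresholds follow the question's convention of warning/critical when CPU % idle falls below them):
On the remote host, in /etc/nagios/nrpe.cfg:
command[check_cpu]=/usr/local/nagios/libexec/check_cpu -w 20 -c 10
On the Nagios server:
define service {
use generic-service
host_name remotehost
service_description CPU Usage
check_command check_nrpe!check_cpu
}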
Now, to wrap all this up with a quick and hopefully comprehensive explanation of how NRPE works with Nagios:
You define a host as a remote host
You need your checks to run locally on that remote host, and instead of grabbing SNMP or using check_ssh you decide to use NRPE
You need a plugin that can tell the remote host what to do in a well formed and documented way
Enter check_nrpe: it communicates via NRPE with the remote host (where the NRPE agent HAS to be installed)
There are some predefined commands on the remote host side (and you can add as many as you like!), such as check_uptime and check_load
Now that you know the commands that are specified on the remote side, you use those commands as arguments to check_nrpe on the Nagios side
Hope this helped!
