How to save a tcpdump capture from a monitoring container for later analysis - docker

I am using the command below to monitor a single container. How can I extend this so that I can save the tcpdump capture for later analysis in Wireshark?
docker run -it --rm --net container:<container_name> \
nicolaka/netshoot tcpdump ...

tcpdump has an option to write the raw captured packets to stdout; redirect that to a file on the host:
docker run -it --rm --net container:<container_name> nicolaka/netshoot tcpdump -w - > packets.dump
or pipe it into Wireshark directly:
docker run -it --rm --net container:<container_name> nicolaka/netshoot tcpdump -i any -w - | wireshark -k -i -
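If you would rather not keep a pipe open, a minimal sketch (assuming a writable current directory on the host) is to bind-mount a host directory and let tcpdump write the pcap there, to be opened in Wireshark later:
# write the capture to a bind-mounted host directory
docker run -it --rm --net container:<container_name> \
-v "$(pwd)":/captures nicolaka/netshoot \
tcpdump -i any -w /captures/packets.pcap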

Related

Why would we want to use the --detach switch together with --interactive and --tty in docker?

I'm reading the Docker documentation, and I've seen this command:
$ docker run -d \
-it \
--name devtest \
--mount type=bind,source="$(pwd)"/target,target=/app,readonly \
nginx:latest
As far as I know, using the -d or --detach switch runs the container in the background and returns control of the terminal to the user, while --tty (-t) and --interactive (-i) do the complete opposite. Why would anyone want to use them together in one command?
For that specific command, it doesn't make sense, since nginx does not have an interactive component. But in general, it allows you to later attach to the container with docker attach. E.g.
$ docker run --name test-no-input -d busybox /bin/sh
92c0447e0c19de090847b7a36657d3713e3795b72e413576e25ab2ce4074d64b
$ docker attach test-no-input
You cannot attach to a stopped container, start it first
$ docker run --name test-input -dit busybox /bin/sh
57e4adcc14878261f64d10eb7839b35d5fa65c841bbcb3cd81b6bf5b8fe9d184
$ docker attach test-input
/ # echo hello from the container
hello from the container
/ # exit
The first container stopped because it was running a shell with no input on stdin (no -i), and a shell exits when it finishes reading its input (e.g. at the end of a shell script).
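As a small follow-up sketch (relying on Docker's default detach key sequence, Ctrl-P Ctrl-Q): once attached, you can leave the container running by detaching instead of exiting the shell:
docker run --name test-keep -dit busybox /bin/sh
docker attach test-keep
# interact with the shell, then press Ctrl-P Ctrl-Q to detach
docker ps --filter name=test-keep
# the container is still listed as running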

Is it possible to host multiple aler9/rtsp-simple-server instances on the same machine?

I am using this command to start a server on my Linux machine:
docker run -d --rm -it --network=host aler9/rtsp-simple-server
And this command to connect an rtsp stream:
docker run -v $(pwd):$(pwd) --network=host \
linuxserver/ffmpeg:arm64v8-latest -re -stream_loop -1 -i \
$(pwd)/sample.mp4 -c copy -f rtsp rtsp://localhost:8554/mystream
Is it possible to start a second rtsp server and connect rtsp streams to this second server?
What I am trying to do is simulate multiple cameras, with one sub-stream for each camera.
Try running multiple rtsp servers like so:
docker run --rm -it -e RTSP_PROTOCOLS=tcp -p 8554:8554 -p 1935:1935 aler9/rtsp-simple-server
docker run --rm -it -e RTSP_PROTOCOLS=tcp -p 8555:8554 -p 1936:1935 aler9/rtsp-simple-server
docker run --rm -it -e RTSP_PROTOCOLS=tcp -p 8556:8554 -p 1937:1935 aler9/rtsp-simple-server
and connect like so:
# Connecting to first server
docker run -v $(pwd):$(pwd) --network=host linuxserver/ffmpeg:arm64v8-latest -re -stream_loop -1 -i \
$(pwd)/sample.mp4 -c copy -f rtsp rtsp://localhost:8554/mystream
# Connecting to second server
docker run -v $(pwd):$(pwd) --network=host linuxserver/ffmpeg:arm64v8-latest -re -stream_loop -1 -i $(pwd)/sample.mp4 -c copy -f rtsp rtsp://localhost:8555/mystream
# Connecting to third server
docker run -v $(pwd):$(pwd) --network=host linuxserver/ffmpeg:arm64v8-latest -re -stream_loop -1 -i $(pwd)/sample.mp4 -c copy -f rtsp rtsp://localhost:8556/mystream
This solution uses Docker port mapping to map each server to different host ports so they won't collide. According to the aler9/rtsp-simple-server documentation, port mapping works for TCP but might not work for UDP.
A solution for UDP will require more investigation.
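One hedged sketch for the UDP case (assuming the image follows rtsp-simple-server's convention that every configuration parameter can be set via an RTSP_-prefixed environment variable; check the project README to confirm RTSP_RTSPADDRESS): run each server with --network=host on a different RTSP port, so no UDP port mapping is needed at all:
docker run --rm -it --network=host -e RTSP_RTSPADDRESS=:8554 aler9/rtsp-simple-server
docker run --rm -it --network=host -e RTSP_RTSPADDRESS=:8555 aler9/rtsp-simple-server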

How to have 2 containers connect to another container using TCP in a docker network

I have this right now:
docker network rm cprev || echo;
docker network create cprev || echo;
docker run --rm -d -p '3046:3046' \
--net=cprev --name 'cprev-server' cprev-server
docker run --rm -d -p '3046:3046' \
-e cprev_user_uuid=111 --net=cprev --name 'cprev-agent-1' cprev-agent
docker run --rm -d -p '3046:3046' \
-e cprev_user_uuid=222 --net=cprev --name 'cprev-agent-2' cprev-agent
Basically the 2 cprev-agents are supposed to connect to the cprev-server using TCP. The problem is that I am getting this error:
docker: Error response from daemon: driver failed programming external
connectivity on endpoint cprev-agent-1
(6e65bccf74852f1208b32f627dd0c05b3b6f9e5e7f5611adfb04504ca85a2c11):
Bind for 0.0.0.0:3046 failed: port is already allocated.
I am sure it's a simple fix, but frankly I don't know how to allow two-way traffic from the two agent containers without using the same port.
So this worked (using --network=host), but I am wondering how I can create a custom network that doesn't interfere with the host network:
docker network create cprev; # unused now
docker run --rm -d -e cprev_host='0.0.0.0' \
--network=host --name 'cprev-server' "cprev-server:$curr_uuid"
docker run --rm -d -e cprev_host='0.0.0.0' \
-e cprev_user_uuid=111 --network=host --name 'cprev-agent-1' "cprev-agent:$curr_uuid"
docker run --rm -d -e cprev_host='0.0.0.0' \
-e cprev_user_uuid=222 --network=host --name 'cprev-agent-2' "cprev-agent:$curr_uuid"
So is there any way to get this to work using my custom docker network "cprev"?
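A likely fix, sketched here as an assumption rather than an answer from the original thread: the error occurs because -p '3046:3046' publishes the same host port for all three containers, and a host port can only be bound once. On a user-defined network, containers already reach each other by container name, so the agents need no published ports at all; only the server needs -p, and only if the host itself must reach it. Reusing the thread's own cprev_host variable to point the agents at the server by name:
docker network create cprev
docker run --rm -d -p 3046:3046 --net=cprev --name cprev-server cprev-server
docker run --rm -d -e cprev_host=cprev-server -e cprev_user_uuid=111 \
--net=cprev --name cprev-agent-1 cprev-agent
docker run --rm -d -e cprev_host=cprev-server -e cprev_user_uuid=222 \
--net=cprev --name cprev-agent-2 cprev-agent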

ElasticSearch on docker - 2nd instance kills the first instance

I'm trying to run multiple versions of Elasticsearch at the same time; it should be easy. Here are my commands:
docker run -d --rm -p 9250:9200 -p 9350:9300 --name es_5_3_3_integration -e "xpack.security.enabled=false" docker.elastic.co/elasticsearch/elasticsearch:5.3.3
docker run -d --rm -p 9251:9200 -p 9351:9300 --name es_5_4_3_integration -e "xpack.security.enabled=false" docker.elastic.co/elasticsearch/elasticsearch:5.4.3
The first container starts up great. The second container starts, but at the cost of killing the first. If I run it without -d, I don't get any output explaining why the container stopped.
By default, Elasticsearch on Docker tries to allocate 2 GB of memory, so the two containers were trying to take 4 GB, which my machine didn't have.
The solution: limit the amount of memory each ES instance tries to take to 200 MB using the switch -e ES_JAVA_OPTS="-Xms200m -Xmx200m"
Full, working commands for 4 concurrent dockers:
docker run -d --rm -p 9250:9200 -p 9350:9300 --name es_5_3_3_integration -e "xpack.security.enabled=false" -e ES_JAVA_OPTS="-Xms200m -Xmx200m" docker.elastic.co/elasticsearch/elasticsearch:5.3.3
docker run -d --rm -p 9251:9200 -p 9351:9300 --name es_5_4_3_integration -e "xpack.security.enabled=false" -e ES_JAVA_OPTS="-Xms200m -Xmx200m" docker.elastic.co/elasticsearch/elasticsearch:5.4.3
docker run -d --rm -p 9252:9200 -p 9352:9300 --name es_5_5_3_integration -e "xpack.security.enabled=false" -e ES_JAVA_OPTS="-Xms200m -Xmx200m" docker.elastic.co/elasticsearch/elasticsearch:5.5.3
docker run -d --rm -p 9253:9200 -p 9353:9300 --name es_5_6_4_integration -e "xpack.security.enabled=false" -e ES_JAVA_OPTS="-Xms200m -Xmx200m" docker.elastic.co/elasticsearch/elasticsearch:5.6.4
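As a quick sanity check (assuming curl on the host and that security is disabled as above), each instance should answer on its mapped port with its own version number:
for p in 9250 9251 9252 9253; do curl -s "localhost:$p" | grep number; done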
Thank you to @Val who really answered this question in the comments.
If this looks like a lack-of-memory problem, you can check whether your container was OOM-killed.
First, check whether the container's exit code is 137 (= 128 + 9, i.e. the container received a SIGKILL).
You can test it with docker ps -a or
docker inspect --format='{{.State.ExitCode}}' $INSTANCE_ID
Then you can check the state of the container with:
docker inspect --format='{{.State.OOMKilled}}' $INSTANCE_ID
If it returns true, it was an OOM problem.
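For convenience, both checks can be combined in a single docker inspect call (the container name below is one of those started above; any container ID works):
docker inspect --format 'exit={{.State.ExitCode}} oom={{.State.OOMKilled}}' es_5_3_3_integration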
Further details at https://docs.docker.com/engine/reference/run/#user-memory-constraints.
Extract:
By default, the kernel kills processes in a container if an out-of-memory (OOM) error occurs. To change this behaviour, use the --oom-kill-disable option. Only disable the OOM killer on containers where you have also set the -m/--memory option. If the -m flag is not set, this can result in the host running out of memory and require killing the host's system processes to free memory.
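A minimal illustration of that advice (the port and container name below are placeholders, not from the original answer): pair the JVM heap cap with a hard container memory limit so the kernel OOM-kills only this container rather than host processes:
docker run -d --rm -m 512m -e ES_JAVA_OPTS="-Xms200m -Xmx200m" \
-p 9254:9200 --name es_mem_capped docker.elastic.co/elasticsearch/elasticsearch:5.6.4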

Does Docker need a program to access peripherals?

I can't open the card! I wrote a driver for a peripheral and have been able to run it on the host, but inside Docker the program cannot run. Does it need a helper on the host to achieve reuse, the way Docker's access to the network card needs the bridge?
Thanks!
You need to expose the device you want to use in your docker run command: mount sockets and files with -v, and pass device nodes with --device.
Extract from
https://blog.jessfraz.com/post/docker-containers-on-the-desktop/
For example, a Spotify Docker image needs access to the sound device:
# mounts the X11 socket, passes the display, and grants access to the sound device
$ docker run -it \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=unix$DISPLAY \
--device /dev/snd \
--name spotify \
jess/spotify
and also the Skype docker image:
# mounts the host timezone, exposes the PulseAudio port, and grants sound device access
$ docker run -d \
-v /etc/localtime:/etc/localtime \
-p 4713:4713 \
--device /dev/snd \
--name pulseaudio \
jess/pulseaudio
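Applied to the question's peripheral, the same pattern would look something like this (the device path /dev/mycard, the image name, and the program are hypothetical placeholders; substitute the device file your driver creates):
docker run -it --device /dev/mycard my-image ./my-program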
