I'm having trouble with my container receiving UDP responses. I tried to test this, and even the test fails (on macOS):
~/htdocs/evcc master 17s
❯ docker run -p 7090:7090/udp -it --rm alpine
/ # nc -u -l 0.0.0.0 7090
and from the host:
~/htdocs/evcc master
❯ nc -u 0.0.0.0 7090
ljkhgdkfjhg
Yet, no echo inside the container. The same works fine when both are run on the host.
What am I doing wrong here?
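One thing I would rule out first (an assumption on my part; the output above doesn't prove it either way): Alpine's nc is the BusyBox applet, whose listen syntax differs from macOS's BSD netcat, and the listen port is passed with -p. A minimal re-test might look like:
/ # nc -u -l -p 7090
and from the host, targeting the published port explicitly:
❯ echo test | nc -u 127.0.0.1 7090
If that still shows nothing, trying the same exchange container-to-container would tell you whether the packet is lost in Docker for Mac's UDP port forwarding or in the listener itself.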
I am trying to deploy a web app with two Podman containers. One runs Gunicorn and the other runs a web server as a reverse proxy.
However, communication between the containers only succeeds if I run them as root. Is there a way around this?
Here is an example without root (which returns an empty IP address):
$ podman run -dt --name test --rm -p 8000:8000 python:3.9 python -m http.server
$ podman container inspect test | grep IPAddress
"IPAddress": "",
With root, it is possible to do:
$ sudo podman run -dt --name test --rm -p 8000:8000 python:3.9 python -m http.server
$ sudo podman container inspect test | grep IPAddress
"IPAddress": "10.88.0.11",
"IPAddress": "10.88.0.11",
$ cat Dockerfile
FROM caddy:2
COPY Caddyfile /etc/caddy/Caddyfile
$ cat Caddyfile
:8001 {
reverse_proxy * 10.88.0.11:8000
}
$ sudo podman build . -t revproxy
$ sudo podman run -dt --rm -p 8001:8001 revproxy
and the proxy is working successfully on port 8001.
The two containers have to be in the same pod or on the same network. (Rootless Podman uses slirp4netns user-mode networking by default, which is why inspect shows an empty IPAddress without root.)
Using a pod, it is possible to do the following:
$ podman pod create -n test
$ podman run --rm --pod test python:3 python3 -m http.server
and in another shell:
$ podman run --rm --pod test python:3 curl localhost:8000
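Alternatively, the same thing works over a user-defined network, where containers reach each other by name. This is a sketch: the network name webnet and container name app are mine, and name resolution on rootless user-defined networks assumes a reasonably recent Podman (4.x with netavark/aardvark-dns):
$ podman network create webnet
$ podman run -dt --name app --rm --network webnet python:3.9 python -m http.server
$ podman run -dt --name proxy --rm --network webnet -p 8001:8001 revproxy
The Caddyfile can then point at the container name instead of a hard-coded IP:
:8001 {
reverse_proxy * app:8000
}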
I have a host running Ubuntu 20.04, and I run Firefox in a container built from the ubuntu:20.04 image.
When Firefox is already started on the host: the container stops immediately, a new Firefox window appears, and I can see all my host browsing history, sessions, and so on.
When Firefox is NOT started on the host: the container keeps running, a new "firefox [container hash]" window appears, and I can see only the container's browsing history and sessions there (as expected). BUT when I start Firefox on the host while the container is still running: a new "firefox [same container hash]" window appears, and I can see only the container's browsing history and sessions.
If I run firefox as a different user, like
sudo -H -u some-user firefox
with umask 077, I get perfect isolation and parallel running without Docker, but that's not the whole goal.
My Dockerfile:
FROM ubuntu:20.04
# avoid interactive prompts (e.g. tzdata) while installing firefox's dependencies
ENV DEBIAN_FRONTEND=noninteractive
WORKDIR /usr/src/app
RUN apt-get update && apt-get install -y firefox
CMD firefox
Terminal history:
xhost +local:docker
docker build -t firefox .
docker create -ti -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix --name ff firefox
docker start ff
I suppose this behavior of a process launched from a container is not really obvious or expected. Could you please explain what exactly is happening and why?
A Docker container is not an isolated machine. Commands that run inside a Docker container are executed on the host machine (or the Docker VM when using Docker for Mac).
This can be verified in the following way:
1. Run a command inside the container: docker exec -it <container-name> sleep 100
2. On the host machine, grep for this command: ps -ef | grep sleep. On a Mac, docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh will provide a shell into the running Docker VM.
On my machine:
# ps -ef | grep sleep
2609 root 0:00 sleep 100
2616 root 0:00 grep sleep
When you run a daemon, it creates a socket file in a temp directory.
This file is the gateway for communication with the application.
For instance, when MySQL is running on the system, it creates a socket file /var/run/mysqld/mysqld.sock, which the mysql client uses for communication.
These daemons can also bind to a port and be reached over the network that way. Such ports are simply socket connections to your application that are visible over the network.
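To make the two access paths concrete (assuming a stock MySQL install with the mysql client available; both commands below talk to the same server):
# via the Unix-domain socket: a file on the local filesystem, no network involved
mysql --socket=/var/run/mysqld/mysqld.sock -u root -p
# via a TCP socket: a port, reachable over the network
mysql --host=127.0.0.1 --port=3306 -u root -p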
Coming back to your question,
docker create -ti -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix --name ff firefox
/tmp/.X11-unix holds the X server's Unix-domain sockets. Since this directory is bind-mounted into the container, the socket space is shared between the container and the host.
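You can verify this from both sides while the container is running; the X server's socket (typically X0 for display :0) is literally the same file:
# on the host
ls -l /tmp/.X11-unix
# inside the container (same bind-mounted directory, same socket)
docker exec ff ls -l /tmp/.X11-unix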
When Firefox is already running on the host, the socket is occupied: the Firefox started in the container detects the existing instance over the shared X socket, asks it to open a new window, and exits. That is why the container stops immediately and the new window shows the host's history.
When Firefox is not running on the host and the container is started, the socket is free, so the container's instance starts and keeps running. It uses the filesystem inside the container to store history etc., which is why you do not see the host's history.
If you run Firefox on the host now, it simply connects to this Unix socket, finds the container's instance already registered on the display, and asks it to open a window, so you again see only the container's history.
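If you want to suppress this behavior (the remoting is standard Firefox behavior on a shared display, not something Docker-specific), Firefox's --no-remote flag starts a fresh instance instead of signaling the existing one. It needs its own profile, since two instances cannot share a profile directory; the path below is just an example:
# on the host: do not join the instance already registered on the display
firefox --no-remote --profile /tmp/ff-isolated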
I have a small Spring Boot API running in Docker. Shown below is the command I used to bring up the container.
docker run -d --rm --name factorialorialContainer --memory=$2 --cpus=$3 -p 8080:8080 -e JAVA_OPTIONS="$(cat /Users/sulekahelmini/Documents/fyp/fyp_work/MLscripts/flags.txt)" suleka96/factorial:latest
Then I have a dockerized JMeter, which I bring up using the command below:
export volume_path=/Users/sulekahelmini/Documents/fyp/fyp_work/MLscripts/jmeter_resource && export jmeter_path=/jmeter && docker run --rm --name jmeterContainer --memory='512m' --cpus=2 -e JAVA_OPTS="-Xms512m -Xmx512m" --volume ${volume_path}:${jmeter_path} egaillardon/jmeter --nongui -t factorial.jmx -l jmeter_results.jtl -q user.properties
but all the tests fail and requests are not getting sent to the API. This is the test config of the request:
Protocol: http
Server: localhost
Port: 8080
Method: GET
Path: /api/factorial
This is what the complete bash file looks like:
#!/bin/bash
cd /Users/sulekahelmini/Documents/fyp/fyp_work/demo/target && docker build . -t suleka96/factorial
docker run -d --rm --name factorialorialContainer --memory='512m' --cpus=2 -p 8080:8080 -e JAVA_OPTIONS="$(cat /Users/sulekahelmini/Documents/fyp/fyp_work/MLscripts/flags_base.txt)" suleka96/factorial:latest
sleep 15
#run test
export volume_path=/Users/sulekahelmini/Documents/fyp/fyp_work/MLscripts/jmeter_resource && export jmeter_path=/jmeter && docker run --rm --name jmeterContainer --memory='512m' --cpus=2 -e JAVA_OPTS="-Xms512m -Xmx512m" --volume ${volume_path}:${jmeter_path} egaillardon/jmeter --nongui -t factorial.jmx -l jmeter_results.jtl -q user.properties
sleep 15
#jtl split
java -jar /Users/sulekahelmini/Documents/fyp/fyp_work/MLscripts/jtl-splitter-0.4.6-SNAPSHOT.jar -f /Users/sulekahelmini/Documents/fyp/fyp_work/MLscripts/jmeter_resource/jmeter_results.jtl -s -t 1;
docker stop factorialorialContainer
docker stop jmeterContainer
What am I doing wrong? How can I fix this?
You're doing absolutely everything wrong.
When it comes to Spring Boot, even a "small" API is not small at all; if you want something really small, consider e.g. Jersey.
I fail to see why you need containers at all; in your situation they don't add any value and only consume resources.
You're running the application under test and the load generator on the same physical machine. Both can be very resource-intensive under high load, and you won't be able to tell for sure where the bottleneck is.
If you still want to ignore the previous two points and proceed: you're using localhost in the JMeter container, and there is nothing deployed on port 8080 in that container. You need to run the following command:
docker inspect factorialorialContainer
you will see a line which looks like:
"IPAddress": "xxx.xxx.xxx.xxx",
You will need to take this IP address from the docker inspect output and replace localhost with it in JMeter's HTTP Request sampler.
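A format string gets you the address directly, without scrolling through the full JSON (this uses docker inspect's built-in Go-template output):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' factorialorialContainer
On the default bridge network this prints something like 172.17.0.2.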
References:
Docker Network
JMeter Distributed Testing with Docker
According to the information on Docker Hub (https://hub.docker.com/r/voltdb/voltdb-community/), I was able to start the three nodes after adding the node names to my /etc/hosts file. Commands I executed:
docker pull voltdb/voltdb-community:latest
docker network create -d bridge voltLocalCluster
docker run -d -P -e HOST_COUNT=3 -e HOSTS=node1,node2,node3 --name=node1 --network=voltLocalCluster voltdb/voltdb-community:latest
docker run -d -P -e HOST_COUNT=3 -e HOSTS=node1,node2,node3 --name=node2 --network=voltLocalCluster voltdb/voltdb-community:latest
docker run -d -P -e HOST_COUNT=3 -e HOSTS=node1,node2,node3 --name=node3 --network=voltLocalCluster voltdb/voltdb-community:latest
docker exec -it node1 bash
sqlcmd
Output:
Unable to connect to VoltDB cluster
localhost:21212 - Connection refused
According to the log files, VoltDB has started and is running normally.
Does anyone have an idea why the connection is refused?
You have to follow the given example and double-check your HOSTS argument: it must list all three nodes, HOSTS=node1,node2,node3, so that each node knows about every node in the cluster.
There might be a bug in the docker-entrypoint.sh that I don't see yet, because I shouldn't need to connect into the container and run these commands manually, but doing this solved my issue:
docker exec -it node1 bash
voltdb init
voltdb start
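After the manual init and start, the check from the question should now succeed from inside the same container (assuming VoltDB's default client port 21212):
docker exec -it node1 sqlcmd
Instead of "Connection refused" on localhost:21212, this should drop you into a SQL prompt.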
How can I make HTTP calls to the internet from inside Docker on Ubuntu 16.04, running in Oracle VM VirtualBox (5.2.4), through a cntlm proxy on Windows 7?
The proxy is configured (IP 192.168.56.1, the VM's host). Internet access works fine from Ubuntu's Firefox and with wget from the command line.
Docker CE (17.12.0-ce) is also configured to use the proxy IP:
/etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.56.1:3128/"
Environment="HTTPS_PROXY=http://192.168.56.1:3128/"
I could pull all Docker images successfully.
Only wget or any install calls inside a Docker container fail.
Many help pages later, I still have no idea.
My tries:
docker run --name test --network host -e "https_proxy=https://192.168.56.101:3128" -it alpine:latest wget https://www.web.de
wget: bad address 'www.web.de'
docker run --name test --dns 8.8.8.8 -e "https_proxy=https://192.168.56.101:3128" -it alpine:latest wget https://www.web.de
wget: bad address 'www.web.de'
docker run --name test -e "https_proxy=https://192.168.56.101:3128" -it alpine:latest wget https://www.web.de
wget: bad address 'www.web.de'
docker run --name test --network host --dns 8.8.8.8 -e "https_proxy=https://192.168.56.101:3128" -it alpine:latest wget https://www.web.de
wget: bad address 'www.web.de'
All of the calls shown were also tried with "http" and without the proxy environment variables.
Any other ideas for me?
For Docker to work with cntlm, it is important to set
Gateway yes
in the cntlm config.
I run cntlm directly on the VM and set all proxies within the container to http://172.17.0.1:3128.
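A minimal sketch of the relevant cntlm.conf section (Listen, Gateway, and Allow are cntlm's own directives; the subnet below assumes Docker's default bridge 172.17.0.0/16):
# /etc/cntlm.conf (excerpt)
# accept connections from other hosts, not only localhost
Gateway yes
# listening port
Listen 3128
# let containers on the default docker bridge connect
Allow 172.17.0.0/16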
For the sake of completeness, set all proxy environment variables in docker run:
PROXY_DOCKER="http://172.17.0.1:3128/"
docker run -e HTTP_PROXY=${PROXY_DOCKER} -e http_proxy=${PROXY_DOCKER} -e HTTPS_PROXY=${PROXY_DOCKER} -e https_proxy=${PROXY_DOCKER} ...