Netcat (nc / ncat) cannot be interrupted in Docker

I have a service that does some tasks and then opens a port so that other services know it's finished. I use nc to do that: nc -l -k -p 1337. I use docker-compose to manage the services.
When shutting down the services, the one running nc always takes several seconds to stop, even though it should be instant. I suspect the process never receives the signal, so Docker ends up killing it after a timeout. If I run nc in the same service via docker-compose run, I cannot interrupt the process with Ctrl+C either.
When running nc locally, it can be terminated instantly with Ctrl+C.
How can I create a service running nc -l -k -p 1337 which can be interrupted?
Dockerfile
FROM ruby:2.6.3-alpine
RUN apk add --no-cache netcat-openbsd
COPY entrypoint.sh ./
RUN chmod +x entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
entrypoint.sh
#!/bin/sh
# ...
nc -l -k -p 1337
docker-compose.yml
services:
  nc:
    build:
      context: .
      dockerfile: Dockerfile
docker-compose up --build
OR:
entrypoint.sh
#!/bin/sh
# ...
exec "$#"
docker-compose.yml
services:
  nc:
    build:
      context: .
      dockerfile: Dockerfile
    command: nc -l -k -p 1337
docker-compose up --build
docker-compose run --rm nc nc -l -k -p 1337

Assuming nc actually responds to signals under normal circumstances, you need to:
Use exec
Use the list version of command.
So in the first case, add an exec to the shell script.
And in the second case, you probably need command: ["nc", "-l", "-k", "-p", "1337"] in the compose file.
See https://hynek.me/articles/docker-signals/ for the full checklist.
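Putting that together, a minimal sketch of both fixes (keeping the file layout from the question) might look like this:
entrypoint.sh
#!/bin/sh
# ... setup work ...
# exec replaces the shell, so nc becomes PID 1 and receives SIGTERM/SIGINT directly
exec nc -l -k -p 1337
Or, for the second variant, keep exec "$@" in the entrypoint and use the list form of command so no extra shell is wrapped around nc:
docker-compose.yml
services:
  nc:
    build:
      context: .
      dockerfile: Dockerfile
    command: ["nc", "-l", "-k", "-p", "1337"]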

Related

Dockerfile CMD on conditional

I have two containers, a Flask container and a Celery container. Both use similar configs except for the CMD. How can I make the CMD change based on an env variable?
if [ $CONTAINER = 'flask' ] ; then \
CMD ["uwsgi", "--ini", "uwsgi.ini"] ; else \
CMD ["celery", "--app=flask_project.celery_app.celery", "worker"]; \
fi
Instead of having two separate CMDs, have one CMD that can run either thing.
In shell syntax, this might look like:
if [ "$CONTAINER" = flask ]; then
exec wsgi --init uwsgi.ini "$#"
else
exec celery --app=flask_project.celery_app.celery worker "$#"
fi
...in a Dockerfile, this might look like:
CMD [ "/bin/sh", "-c", "if [ \"$CONTAINER\" = flask ]; then exec uwsgi --ini uwsgi.ini \"$#\"; else exec celery --app=flask_project.celery_app.celery worker \"$#\"; fi" ]
The exec forces the copy of /bin/sh to replace itself in place with uwsgi or celery, so it doesn't stay in the process tree beyond the short period it needs to make the decision.
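If the one-liner feels unwieldy, the same if/else can live in a small script shipped with the image; this is just a sketch of that variant, with the file name entrypoint.sh chosen for illustration:
entrypoint.sh
#!/bin/sh
# Pick the long-running process based on $CONTAINER, then replace the shell with it
if [ "$CONTAINER" = flask ]; then
    exec uwsgi --ini uwsgi.ini "$@"
else
    exec celery --app=flask_project.celery_app.celery worker "$@"
fi
and in the Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]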
Instead of having two separate CMDs, have the CMD do the most common thing, and override the CMD if you need to do the other thing.
For example, you might say that the "most common" thing your container will do is to launch the Flask server
CMD ["uwsgi", "--ini", "uwsgi.ini"]
and if you just run the container it will do that
docker run -d --name web --net my-net -p 5000:5000 my-image
but you can also provide an alternate command after the docker run image name, and this will replace the Dockerfile CMD.
docker run -d --name celery --net my-net my-image \
celery --app=flask_project.celery_app.celery worker
If you're in a Docker Compose setup, you can use the command: setting to specify this override, like
version: '3.8'
services:
  redis:
    image: redis
  web:
    build: .
    ports: ['5000:5000']
    depends_on: [redis]
  celery:
    build: .
    command: celery --app=flask_project.celery_app.celery worker
    depends_on: [redis]

docker-compose stop / start my_image

Is it normal to lose all data, installed applications, and created folders inside a container when executing docker-compose stop my_image and docker-compose start my_image?
I'm creating the containers with docker-compose up --scale my_image=4
update no. 1
my containers have an sshd server running in them. When I connect to a container and execute touch test.txt, I see that the file was created.
However, after executing docker-compose stop my_image and docker-compose start my_image, the container is empty and ls -l shows no test.txt file.
update no. 2
my Dockerfile
FROM oraclelinux:8.5
RUN (yum update -y; \
yum install -y openssh-server openssh-clients initscripts wget passwd tar crontabs unzip; \
yum clean all)
RUN (ssh-keygen -A; \
sed -i 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config; \
sed -i 's/#UsePAM no/UsePAM no/g' /etc/ssh/sshd_config; \
sed -i 's/#PermitRootLogin yes/PermitRootLogin yes/' /etc/ssh/sshd_config; \
sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/' /etc/ssh/sshd_config)
RUN (mkdir -p /root/.ssh/; \
echo "StrictHostKeyChecking=no" > /root/.ssh/config; \
echo "UserKnownHostsFile=/dev/null" >> /root/.ssh/config)
RUN echo "root:oraclelinux" | chpasswd
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
EXPOSE 22
my docker-compose
version: '3.9'
services:
  my_image:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 30000-30007:22
When I connect to a container:
Execute touch test.txt
Execute docker-compose stop my_image
Execute docker-compose start my_image
Execute ls -l
I see no file test.txt (in fact I see that the folder is empty)
update no. 3
entrypoint.sh
#!/bin/sh
# Start the ssh server
/usr/sbin/sshd -D
# Execute the CMD
exec "$#"
Other details
When the containers are all up and running, I choose a container running on a specific port, say port 30001, then connect to that specific container using putty,
execute touch test.txt
execute ls -l
I do see that the file was created
I execute docker-compose stop my_image
I execute docker-compose start my_image
I connect via putty to port 30001
I execute ls -l
I see no file (the folder is empty)
I try other containers to see if the file exists inside one of them, but I see no file present.
So, after a brutal brute-force debugging session I realized that I lose data only when I fail to disconnect from ssh before stopping/restarting the container. When I do disconnect, the data does not disappear after stopping/restarting.

How to properly expose/publish ports for a WebUI-based application in Docker?

I'm trying to port this webapp to Docker. I wrote the following Dockerfile:
FROM anapsix/alpine-java
MAINTAINER <name>
COPY aard2-web-0.7-java6.jar /home/aard2-web-0.7-java6.jar
COPY start.sh /home/start.sh
CMD ["bash", "/home/start.sh"]
EXPOSE 8013/tcp
Here are the contents of start.sh:
#!/bin/bash
java -Dslobber.browse=true -jar /home/aard2-web-0.7-java6.jar /home/dicts/*.slob
Then I built the image:
docker build -t aard2-docker .
And I used the following command to run the container:
docker run --name Aard2 -p 127.0.0.1:8013:8013 -v /home/<name>/dicts:/home/dicts aard2-docker
The app runs normally, reporting that it's listening at http://127.0.0.1:8013. However, when I open the address I can't connect to the app.
I tried using the EXPOSE command (as shown in the Dockerfile snippet above) and variants of the -p flag, such as -p 127.0.0.1:8013:8013, -p 8013:8013, -p 8013:8013/tcp, but none of them worked.
How can I expose/publish the port to 127.0.0.1 properly? Thanks!
Here's the response from the original author:
you need to tell the server to listen on all network interfaces instead of localhost - that is you are missing -Dslobber.host=0.0.0.0
this works for me:
FROM anapsix/alpine-java
COPY ./build/libs/aard2-web-0.7.jar /home/aard2-web-0.7.jar
CMD ["bash", "-c", "java -Dslobber.host=0.0.0.0 -jar /home/aard2-web-0.7.jar /dicts/*.slob"]
EXPOSE 8013/tcp
and then run like this:
docker run -v $HOME/Downloads:/dicts -p 8013:8013 --rm aard2-web
-Dslobber.browse=true opens the default browser; I don't think this has any effect in Docker, so you don't need that.
https://github.com/itkach/aard2-web/issues/12#issuecomment-895557949
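Applied to the Dockerfile and start.sh from the question, the fix might look like this sketch (adding -Dslobber.host=0.0.0.0 and dropping the browse flag, everything else unchanged):
start.sh
#!/bin/bash
# Bind to all interfaces so the published port is reachable from outside the container
java -Dslobber.host=0.0.0.0 -jar /home/aard2-web-0.7-java6.jar /home/dicts/*.slob
and run it as before:
docker run --name Aard2 -p 127.0.0.1:8013:8013 -v /home/<name>/dicts:/home/dicts aard2-docker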

docker-compose evaluate expression in command array

I have the docker-compose.yml file below. In the command section I would like the curl expression to be evaluated before the command is passed to the Docker engine, i.e. my curl should be evaluated first and then my container should run with the -ip 10.0.0.2 option.
version: '2'
services:
  registrator:
    image: gliderlabs/registrator:v7
    container_name: registrator
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    command: ['-ip', '$$(curl -X GET -s http://169.254.169.254/latest/meta-data/local-ipv4)']
This, however, is not evaluated, and the option is passed literally as -ip $(curl -X GET -s http://169.254.169.254/latest/meta-data/local-ipv4)
By contrast, the equivalent docker run command evaluates the expression correctly and my container starts with the -ip 10.0.0.2 option:
docker run -v /var/run/docker.sock:/tmp/docker.sock gliderlabs/registrator:v7 -ip $(curl -X GET -s http://169.254.169.254/latest/meta-data/local-ipv4)
The docker run command works on the command line because the $(...) substitution is performed by your shell before anything is passed to Docker, so it gets resolved.
The command in docker-compose only overrides the image's default command (CMD) (see https://docs.docker.com/compose/compose-file/#command), so it is not evaluated by a shell before the container starts; it becomes the container's primary command as-is...
you could do something like:
version: '2'
services:
  registrator:
    image: gliderlabs/registrator:v7
    container_name: registrator
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    command: ['-ip', '${IP}']
and run it with:
IP="$(curl -X GET -s http://169.254.169.254/latest/meta-data/local-ipv4)" docker-compose up
this runs the curl in your shell again and assigns the result to a variable called IP, which is then available during the docker-compose up command. You could put that command in a shell script to make it easier, as sketched below.
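For example, a tiny wrapper script (a sketch; the file name up.sh is only an illustration) could look like:
up.sh
#!/bin/sh
# Resolve the host's local IPv4 from the metadata endpoint, then start compose with it;
# extra arguments such as -d are passed straight through to docker-compose up.
IP="$(curl -X GET -s http://169.254.169.254/latest/meta-data/local-ipv4)" docker-compose up "$@"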
After searching the internet for hours and reading the answers posted here, I finally settled on the solution below. For an explanation of why this works, please refer to Ivonet's answer.
I modified the Dockerfile to run a script when the container starts.
FROM gliderlabs/registrator:v7
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh && \
apk update && \
apk add curl
ENTRYPOINT ["/entrypoint.sh"]
The entrypoint.sh script is also very simple. It first checks whether it can reach the endpoint. A successful response starts my container with the correct IP address, while an unsuccessful response (for local testing) does not set any value.
#!/bin/sh
LOCAL_IP=$(curl -s --connect-timeout 3 169.254.169.254/latest/meta-data/local-ipv4)
if [ $? -eq 0 ]; then
    /bin/registrator -ip $LOCAL_IP "$@"
else
    /bin/registrator "$@"
fi
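With that in place, the compose file only needs to build the derived image instead of pulling the upstream one; a sketch, assuming the Dockerfile above sits next to docker-compose.yml:
version: '2'
services:
  registrator:
    build: .
    container_name: registrator
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
Any remaining registrator flags can still be given via command: and will be appended through the "$@" in the entrypoint.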

docker, mariadb doesn't start at "init", based on debian:stable

I am trying to write a Dockerfile like this:
FROM debian:stable
RUN apt-get update
RUN apt-get install -y mariadb-server
EXPOSE 3306
CMD ["mysqld"]
I create the image with
docker build -t debian1 .
And I create the container with
docker run -d --name my_container_debian -i -t debian1
20 seconds later, docker ps -a shows that the container has exited. Why? I want the container to stay up with mariadb running. Thanks. Sorry for the question.
mysqld alone would exit too soon.
If you look at an official MySQL server Dockerfile, you will note its ENTRYPOINT is a script, docker-entrypoint.sh, which does its setup and then execs mysqld in the foreground; its last step boils down to:
exec "$@"
