How can I manage a daemon service inside a Docker container?

I'm running avahi-daemon inside a Docker container. Currently I start it by simply running it from the Compose file. Is there a way to start it in a "managed" fashion, so it automatically restarts if it fails? Currently, due to the lack of an init process, if it fails it becomes defunct and a replacement cannot be started.

It looks like you can just run it without the --daemonize option; it will then be a foreground process that can serve as the main container process. You can then use a Docker restart policy to restart the container if it fails.
A minimal Dockerfile could look like:
FROM ubuntu:20.04
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install --no-install-recommends --assume-yes \
      avahi-daemon
CMD ["avahi-daemon", "--no-chroot"]
And the corresponding Compose setup:
version: '3.8'
services:
  avahi-daemon:
    build:
      context: .
      dockerfile: Dockerfile.avahi-daemon
    restart: on-failure
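With restart: on-failure, Docker restarts the container automatically whenever avahi-daemon exits with a non-zero status. A quick way to verify it's working (a sketch; the service name comes from the Compose file above):
docker compose up -d
docker inspect --format '{{.RestartCount}}' $(docker compose ps -q avahi-daemon)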

Related

Containers don't start after doing docker-compose up -d

I'm having some problems using Docker.
First of all, I wrote a docker-compose.yml:
version: "3.9"
services:
web:
build: .
ports:
- 8000:80
volumes:
- $HOME/sitios:/var/www/html
db:
build: .
ports:
- 3000:3306
volumes:
- $HOME/"mariadb copia":/var/lib
As you can see, I want to set up two services with volumes: one with an HTTP server and the other with a MariaDB server.
Here is my Dockerfile:
FROM ubuntu:latest
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install nano mariadb-server apache2 -y
Then I run sudo docker-compose up -d; however, the containers don't start at all. I tried sudo docker start <name>, but it doesn't work either.
I have already googled and looked into the official Docker documentation, but I can't find anything.
Thanks for your help.
Because you're using ubuntu:latest, your image has no long-running entrypoint process.
So the container starts but immediately exits with code 0.
Another thing: do not run apt upgrade inside the Dockerfile; just pick another image instead (with a higher version, or whatever you're looking for).
One more thing: you're running two different services from the same image, which is pretty weird.
And one last thing: use the official images, so you don't have to build them yourself.
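A minimal sketch of what that could look like with the official httpd and mariadb images (ports and host paths are taken from the question; the document root, the data path, and the root password are assumptions):
version: "3.9"
services:
  web:
    image: httpd:2.4                     # official Apache image; runs in the foreground by default
    ports:
      - 8000:80
    volumes:
      - $HOME/sitios:/usr/local/apache2/htdocs
  db:
    image: mariadb:10.6                  # official MariaDB image
    environment:
      MARIADB_ROOT_PASSWORD: changeme    # required by the image; placeholder value
    ports:
      - 3000:3306
    volumes:
      - $HOME/mariadb-copia:/var/lib/mysql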

Exposing Docker Volumes to Nginx

I'm trying to connect a JSON file, which resides in a Docker volume of the following container, to my main Docker container, which is running a Django project.
Since I am using CapRover, my Docker Compose options are very limited.
So Docker Compose is not really an option. I want to instead just expose the JSON file over the web with a link.
Something like domain.com/folder/jsonfile.json
Can somebody tell me if this is possible inside this dockerfile?
The image I am using is crucial to the container, so can I just add an nginx image, or do I need any other changes to make this work?
Or is nginx not even necessary?
FROM ubuntu:devel
ENV TZ=Etc/UTC
ARG APP_HOME=/app
WORKDIR ${APP_HOME}
ENV DEBIAN_FRONTEND=noninteractive
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime
RUN echo $TZ > /etc/timezone
RUN apt-get update && apt-get upgrade -y
RUN apt-get install gnumeric -y
RUN mkdir -p /etc/importer/data
RUN mkdir /voldata
COPY config.toml /etc/importer/
COPY datasets/* /etc/importer/data/
VOLUME /voldata
COPY importer /usr/bin/
RUN chmod +x /usr/bin/importer
COPY . ${APP_HOME}
CMD sleep 999d
Using the same volume in 2 containers
docker-compose:
volumes:
  shared_vol:
services:
  service1:
    volumes:
      - 'shared_vol:/path/to/file'
  service2:
    volumes:
      - 'shared_vol:/path/to/file'
The mechanism above replaces volumes_from since Compose file format v3, but the following works for v2 as well:
volumes:
  shared_vol:
services:
  service1:
    volumes:
      - 'shared_vol:/path/to/file'
  service2:
    volumes_from:
      - service1
If you want to avoid unintentional modification, add :ro (read-only) to the target service:
service1:
  volumes:
    - 'shared_vol:/path/to/file'
service2:
  volumes:
    - 'shared_vol:/path/to/file:ro'
http-server
You can certainly provide the file via HTTP (or another protocol). There are two options:
Include an HTTP service in your container (quite easy, depending on what is already available in the container). For example, with Node.js you can use https://www.npmjs.com/package/http-server very easily. Size doesn't matter? Then just install:
RUN apt-get install -y nodejs npm
RUN npm install -g http-server
EXPOSE 8080
CMD ["http-server", "--cors", "-p8080", "/path/to/your/json"]
docker-compose (it runs on port 8080 by default, so open that):
existing_service:
  ports:
    - '8080:8080'
Run a standalone HTTP server (nginx, Apache httpd, ...) in another container. But then you again depend on sharing the same volume between two services, so for local solutions this is quite overkill.
Base image
If you don't have good reasons, I would never use something like :devel, :rolling, or :latest as a base image. Stick to an LTS version instead, like ubuntu:22.04.
Testing for http-server
Dockerfile
FROM ubuntu:20.04
ENV TZ=Etc/UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update
RUN apt-get install -y nodejs npm
RUN npm install -g http-server@13.1.0 # issue with JSON files in v14: https://github.com/http-party/http-server/issues/634
COPY ./test.json /usr/wwwhttp/test.json
EXPOSE 8080
CMD ["http-server", "--cors", "-p8080", "/usr/wwwhttp/"]
# docker build -t test/httpserver:latest .
# docker run -p 8080:8080 test/httpserver:latest
Disclaimer:
I am not that familiar with Node Docker images; this is just to give a quick working solution to go on from. I'm not using Node.js in production, but I'm sure it can be optimized from being fat to... well... being rather fat. For quick prototyping, though, size doesn't matter.
If you just want two containers to access the same file, use a volume with --mount.
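A minimal sketch of that approach with the plain Docker CLI (the volume name, container names, and images are made up for illustration):
docker volume create shared_data
docker run -d --name producer --mount source=shared_data,target=/voldata some-image
docker run -d --name consumer --mount source=shared_data,target=/voldata,readonly nginx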

Is it possible to attach to a docker container that is actively running flask?

The question is self-explanatory, but I just wanted to add some details. I am running an Ubuntu container containing some Python Flask code:
FROM ubuntu:latest
ADD app/ /app
WORKDIR /app
RUN apt-get update -y && \
    apt-get install -y python3-pip python-dev build-essential
RUN pip3 install -r requirements.txt
RUN pip3 install flask
EXPOSE 50000
ENTRYPOINT ["python3"]
CMD ["app.py"]
The docker compose file looks something like this:
version: "2"
services:
app:
container_name: flask-app
restart: always
build:
context: ./
dockerfile: app/Dockerfile
volumes:
- "./app:/app"
ports:
- "5000:5000"
stdin_open: true
tty: true
How do I attach to the container and run an interactive bash shell? Currently the attach command just hangs without returning.
Docker attach:
Attach local standard input, output, and error streams to a running container
...
Note: The attach command will display the output of the ENTRYPOINT/CMD process. This can appear as if the attach command is hung when in fact the process may simply not be interacting with the terminal at that time.
Docker exec:
The docker exec command runs a new command in a running container.
TL;DR: You want docker exec -it [docker-instance-id] /bin/sh to get a terminal. docker attach will just show you stdout from your Flask app from that point on (which might be nothing, which is why it appears to hang).
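With the Compose file above, that could look like this (the container name comes from the question's container_name setting, and /bin/bash is available since the image is Ubuntu-based):
docker exec -it flask-app /bin/bash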

docker-compose producing "No Such File or Directory" when files exist in container

I have a simple Dockerfile
FROM python:3.8-slim-buster
RUN apt-get update && apt-get install
RUN apt-get install -y \
    curl \
    gcc \
    make \
    python3-psycopg2 \
    postgresql-client \
    libpq-dev
RUN mkdir -p /var/www/myapp
WORKDIR /var/www/myapp
COPY . /var/www/myapp
RUN chmod 700 ./scripts/*.sh
And an associated docker-compose file
version: "3"
volumes:
postgresdata:
services:
myapp:
image: ralston3/myapp_api:prod-latest
tty: true
command: /bin/bash -c "/var/www/myapp/scripts/myscript.sh && echo 'hello world'"
ports:
- 8000:8000
volumes:
- .:/var/www/myapp
environment:
SOME_ENV_VARS=SOME_VARIABLE
# ... more here
depends_on:
- redis
- postgresql
# ... other docker services defined below
When I run docker-compose up via:
docker-compose -f /path/to/docker-compose.yml up
My myapp container/service fails with myapp_myapp_1 exited with code 127, along with another error mentioning myapp_1 | /bin/sh: 1: /var/www/myapp/scripts/myscript.sh: not found
Further, if I exec into the myapp container via docker exec -it {CONTAINER_ID} /bin/bash, I can clearly see that all of my files are there. I can literally run /var/www/myapp/scripts/myscript.sh and it works fine.
However, there seems to be some issue with docker-compose (which could totally be my mistake). I'm just confused as to how I can exec into the container and clearly see the files there, yet docker-compose exits with 127 saying "No such file or directory".
You are bind mounting the current directory into /var/www/myapp, so it may be that your local directory is "hiding/overwriting" the container directory. Try removing the volumes declaration for your myapp service; if that works, then you know the bind mount is causing the issue.
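For a quick test, that could look like this (same Compose service as above, with the bind mount commented out):
services:
  myapp:
    image: ralston3/myapp_api:prod-latest
    tty: true
    command: /bin/bash -c "/var/www/myapp/scripts/myscript.sh && echo 'hello world'"
    # volumes:
    #   - .:/var/www/myapp   # disabled to check whether the bind mount hides the image's files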
Unrelated to your question, but a problem you will also encounter: you're installing Python a second time, above and beyond the version pre-installed in the python Docker image.
Either switch to debian:buster as the base image, or don't bother installing anything with apt-get and instead just pip install your dependencies, like psycopg2.
See https://pythonspeed.com/articles/official-python-docker-image/ for an explanation of why you don't need to do this.
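A sketch of the slimmed-down Dockerfile that advice suggests (psycopg2-binary stands in for the apt packages; treat the exact dependency list as an assumption):
FROM python:3.8-slim-buster
WORKDIR /var/www/myapp
# psycopg2-binary ships precompiled wheels, so gcc/libpq-dev are not needed at build time
RUN pip install psycopg2-binary
COPY . /var/www/myapp
RUN chmod 700 ./scripts/*.sh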
In my case there were two stages: builder and runner.
I was building an executable in the builder stage and running that executable on the alpine image in the runner stage.
My mistake was that I didn't use the Alpine variant for the builder. For example, I used golang:1.20, but when I used golang:1.20-alpine the problem went away.
Make sure you use the correct version and tag!
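A sketch of the multi-stage layout being described (the golang tags come from the answer; the working directory, output name, and alpine tag are assumptions):
# builder: use the Alpine variant so the binary is linked against musl,
# matching the Alpine runtime image below
FROM golang:1.20-alpine AS builder
WORKDIR /src
COPY . .
RUN go build -o /app .

# runner
FROM alpine:3.18
COPY --from=builder /app /app
CMD ["/app"]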

Cannot run start script on Docker and Rancher

I am trying to get WSO2 IS running in Docker on Rancher. I have created the following Dockerfile:
FROM wso2/wso2base:latest
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install default-jdk -y && \
    apt-get clean
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64/jre/
ENV PATH ${JAVA_HOME}/bin:${PATH}
ENV CARBON_HOME /opt/wso2is
It is uploaded to GitHub. I have a docker-compose.yml file with the following content:
version: '2'
services:
  wso2is:
    build: <github-url>/wsois
    stdin_open: true
    tty: true
    ports:
      - 9443:9443/tcp
      - 9763:9763/tcp
    labels:
      io.rancher.container.pull_image: always
    volumes:
      - /home/dockserver/stacks/inclouding/volume/wso2is:/opt/wso2is
The only remaining step to get the server working is to run the start script. If I run it from a bash shell inside the container, it starts perfectly:
docker exec -it "676d5bc5cf18" bash
/opt/wso2is/bin/wso2server.sh start
I have tried to launch it in the Dockerfile with CMD:
CMD /opt/wso2is/bin/wso2server.sh start
or in the docker-compose:
command:
  - /opt/wso2is/bin/wso2server.sh
  - start
In both situations the container stops and shows errors stating:
Need to restart service reconcile
Expected state running but got stopped
How can I get it running? What am I doing wrong?
When you pass the start argument (./wso2server.sh start), wso2server.sh launches the server in the background and then exits, which ends the container's main process. You can do either of the following to overcome the issue.
Do not use the start argument; just execute wso2server.sh so the server runs in the foreground.
Use the start argument with wso2server.sh and then tail wso2carbon.log to keep the main process alive, as follows (path relative to the question's CARBON_HOME, /opt/wso2is):
tail -f /opt/wso2is/repository/logs/wso2carbon.log
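A sketch of how the second option could look as the container command (the CARBON_HOME path comes from the question's Dockerfile and volume mount):
CMD /opt/wso2is/bin/wso2server.sh start && \
    tail -f /opt/wso2is/repository/logs/wso2carbon.log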
