docker compose: run commands on the containers without a Dockerfile

I am wondering if it's possible to install dependencies in the containers without creating additional Dockerfiles.
A basic example is PHP Apache + mysqli.
For mysqli I need to run:
docker-php-ext-install mysqli && apachectl restart
But I don't want to create a separate file just to run this command.
I have tried adding it as a command to the docker-compose.yml:
services:
  apache-php:
    image: php:8.1-apache
    volumes:
      - ./:/var/www/html
    command: docker-php-ext-install mysqli && apachectl restart
    ports:
      - "5520:80"
The commands successfully execute, but then the container shuts down.
As soon as I add any command, even if it's just echo "hello", the container shuts down.
I've read so much of the documentation and I can't see a clear reason for this, or hunt down any way to make the container stay open.
I've read a lot of Stack Overflow answers on using sleep, tty, interactive, and other tricks to keep the container from shutting down, but none of them work.
I can achieve a single-file setup with a simple bash script using the docker CLI:
#!/usr/bin/env bash
# Use the project folder as the prefix for our containers and networks.
PROJECT=$(basename "$(pwd)")

# HTTP: APACHE + PHP + MYSQLI
NAME="${PROJECT}_http"
PORT="5520"
docker run -dit --name "$NAME" -p "$PORT":80 \
  -v "$(pwd)":/var/www/html \
  php:8.1-apache
docker exec "$NAME" docker-php-ext-install mysqli
docker exec "$NAME" apachectl restart
This script works fine, and I can also do anything I want (create networks, aliases, etc.) with the docker CLI, but docker-compose would be nicer.
I just don't want to create a bunch of extra files for some simple commands to run. Surely there is a way?
If it's not possible, I guess I'll just keep expanding the bash script and build my network that way.
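For reference, one common pattern (a sketch, not tested against this exact project) is to override command with a shell that installs the extension and then execs the image's normal foreground command, apache2-foreground, so the container keeps a long-running main process:

services:
  apache-php:
    image: php:8.1-apache
    volumes:
      - ./:/var/www/html
    # install the extension, then hand off to the image's default
    # foreground command so PID 1 keeps running
    command: bash -c "docker-php-ext-install mysqli && exec apache2-foreground"
    ports:
      - "5520:80"

This also explains the shutdowns above: command replaces the image's default apache2-foreground, so once the chained commands finish there is no foreground process left and Docker stops the container. The trade-off is that the extension is recompiled on every container start, which a one-line Dockerfile would avoid.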

Related

Simple docker-compose.yml to set up an Ubuntu container with ssh access and persistent files

My goal is to set up a very minimalistic Ubuntu container for hosting a server within it. It should be accessible via ssh and have a persistent configuration.
I currently have this server running within a virtual machine. It should move to Docker now, since I would like to reduce complexity and increase maintainability (I have a couple of other containers running perfectly).
What I've tried so far:
Created a docker-compose.yml & Dockerfile and connected a local folder
Got access to the container with "docker container exec -ti bash"
Installed things within this machine, configured ssh
My issues are:
The build seems to work, even with configuring ssh
SSH via remote does not work - logging in via "docker container exec ... bash" works fine
Configuring things within the machine, setting up ssh etc. works (same commands as in the Dockerfile) - then SSH via remote works
Shutting down (docker-compose down) and starting up (docker-compose up -d) resets everything I did within the machine
Do you have any ideas, or even a working docker-compose file?
Searching for this yields thousands of articles on how to install Docker on Ubuntu...
THANK YOU so much!
My Dockerfile looks like this
FROM ubuntu:latest
RUN apt-get update
RUN apt-get install openssh-server -y
RUN mkdir /var/run/sshd
RUN echo 'root:pwd' | chpasswd
# SSH allow root login via remote
RUN sed -i 's/#PermitRootLogin/PermitRootLogin/g' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
# RESTART SSH
RUN /etc/init.d/ssh restart
My docker-compose.yml looks like this
version: "3"
services:
# ubuntu server
ubuntu_test:
build: .
container_name: AAubuntuServerDockerfile
restart: always
command: ["sleep","infinity"]
volumes:
- './Docker/Data/Ubuntu_Test:/exchange:rw'
ports:
- "5555:22"
network_mode: bridge
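Two details in this setup would explain the symptoms (offered as a sketch, not a verified answer). First, command: ["sleep", "infinity"] replaces whatever the image would normally run, so sshd is never started when the container boots. Second, changes made interactively inside a running container live only in that container's writable layer and are discarded when docker-compose down removes it; only image layers and mounted volumes persist. Running sshd in the foreground as the image's main process addresses the first point:

# at the end of the Dockerfile, instead of `RUN /etc/init.d/ssh restart`
# (a RUN line only runs at build time, not when the container starts):
CMD ["/usr/sbin/sshd", "-D"]

With that in place, drop the command: line from the docker-compose.yml so the image's CMD runs, and keep anything that must survive a down/up cycle either in the Dockerfile or on a mounted volume such as /exchange.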

docker run --env, --net and --volume options in docker-compose for displaying image

I'm trying to replicate a docker run command with options within a docker-compose file:
My Dockerfile is:
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update -y
RUN apt-get install -y python3-pip python3-dev python3-opencv
RUN apt-get install -y libcanberra-gtk-module libcanberra-gtk0 libcanberra-gtk3-module
WORKDIR /
RUN mkdir /imgs
COPY app.py ./
CMD ["/bin/bash"]
And I use the following commands to build and run the container so that it can display images from the shared volume properly:
docker build -t docker_test:v1 .
docker run -it --net=host --env=DISPLAY --volume=$HOME/.Xauthority:/root/.Xauthority docker_test:v1
In order to replicate the previous command, I tried the docker-compose file below:
version: "3.7"
services: docker_test:
container_name: docker_test
build: .
environment:
- DISPLAY=:1
volumes:
- $HOME/.Xauthority:/root/.Xauthority
- $HOME/docker_test/imgs:/imgs
network_mode: "host"
However, after building the image and running the app script from inside the container (the copy baked into the image, not the one from the shared volume):
docker-compose up
docker run -ti docker_test_docker_test
python3 app.py
The following error arises:
Unable to init server: Could not connect: Connection refused
(OpenCV Image Reading:9): Gtk-WARNING **: 09:15:24.889: cannot open display:
In addition, the volumes do not seem to be shared.
docker run never looks at a docker-compose.yml file; every option you need to run the container needs to be specified directly in the docker run command. Conversely, Compose is much better at running long-running processes than at running interactive shells (and you want the container to run the program directly, in much the same way you don't typically start a Python REPL and invoke main() from there).
With your setup, first you're launching a container via Compose. This will promptly exit (because the main container command is an interactive bash shell and it has no stdin). Then, you're launching a second container with default options and manually running your script there. Since there's no docker run -e DISPLAY option, it doesn't see that environment variable.
The first thing to change here, then, is to make the image's CMD start the application:
...
COPY app.py .
CMD ["python3", "app.py"]
Then running docker-compose up (or docker run your-image) will start the application without further intervention from you. You probably need a couple of other settings to successfully connect to the host display (propagating $DISPLAY unmodified, mounting the host's X socket into the container); see Can you run GUI applications in a Linux Docker container?.
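As a sketch of those settings (assuming an X11 host; paths can vary by distribution), the Compose service might look like:

services:
  docker_test:
    build: .
    network_mode: "host"
    environment:
      # pass the host's DISPLAY through unmodified instead of hard-coding :1
      - DISPLAY
    volumes:
      - $HOME/.Xauthority:/root/.Xauthority
      - $HOME/docker_test/imgs:/imgs
      # mount the host's X socket so the container can reach the display
      - /tmp/.X11-unix:/tmp/.X11-unix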
(If you're trying to access the host display and use the host network, consider whether an isolation system like Docker is actually the right tool; it would be much simpler to directly run ./app.py in a Python virtual environment.)

Bitbucket pipelines/Docker : Connection refused

I am trying to configure a Bitbucket CI pipeline to run tests. Stripping out the details, I have a Makefile which looks as follows to run some form of integration tests.
test-e2e:
	docker-compose -f ${DOCKER_COMPOSE_FILE} up -d ${APP_NAME}
	godog
	docker-compose -f ${DOCKER_COMPOSE_FILE} down
The Docker Compose file defines a single web server with its ports exposed.
Pipeline looks as follows:
- step: &integration-testing
name: Run integration tests script: # do this to make go module work with private repo
- apk add libc-dev py-pip python-dev libffi-dev openssl-dev gcc libc-dev make bash
- pip install docker-compose - git config --global url."git#bitbucket.org:".insteadOf "https://bitbucket.org/"
- go get github.com/onsi/ginkgo/ginkgo
- go get github.com/onsi/gomega/...
- go get github.com/DATA-DOG/godog/cmd/godog
- make build-only && make test-e2e
I am facing two separate issues, and for both I have not been able to find a solution.
Keep getting connection refused when the tests are run.
To elaborate on the above: docker-compose brings up a server with a proper host:port mapping ("127.0.0.1:10077:10077"). The godog command is intended to run the tests by querying the server, but this always ends in connection refused. This link has a possible solution, so I am exploring that.
The pipeline almost always runs commands before the container is up. I've tried fixing this by changing the invocation to:
test-e2e:
	docker-compose -f ${DOCKER_COMPOSE_FILE} up -d ${APP_NAME} && sleep 10 && docker exec -i oracle-go godog && docker-compose -f ${DOCKER_COMPOSE_FILE} down
However, the container is always brought up after the sleep has finished (almost instantaneously).
Example:
Creating oracle-go ...
Sleep 10
docker ps
CONTAINER ID        IMAGE        COMMAND        CREATED        STATUS        PORTS        NAMES
docker exec -i oracle-go godog
Creating oracle-go ... done
Error response from daemon: Container 7bab5322203756b972e7f0a3c6e5827413279914a68c705221b8af7daadc1149 is not running
Please let me know if there is a way around it.
If I understood your question correctly, you want to wait for the server to start before running tests.
Instead of manually sleeping, you should use wait-for-it.sh (or an alternative). See the relevant Docker docs for more information.
For example:
test-e2e:
	docker-compose -f ${DOCKER_COMPOSE_FILE} up -d ${APP_NAME} && bash wait-for-it.sh <HOST>:<PORT> -- docker exec -i oracle-go godog && docker-compose -f ${DOCKER_COMPOSE_FILE} down
Change <HOST> and <PORT> to your service's host name and port respectively. Alternatively, you could use wait-for-it.sh in your Docker Compose command or the like.
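An alternative that avoids the helper script (a sketch; the wget-based health test is an assumption and requires a suitable HTTP endpoint plus wget inside the image) is to declare a Compose healthcheck and poll it before running godog:

services:
  oracle-go:
    # ... existing image/port configuration ...
    healthcheck:
      test: ["CMD", "wget", "-q", "-O", "/dev/null", "http://localhost:10077/"]
      interval: 2s
      retries: 30

# in the Makefile recipe (note the doubled $$ so make passes them to the shell):
	until [ "$$(docker inspect -f '{{.State.Health.Status}}' oracle-go)" = "healthy" ]; do sleep 1; done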

Run a command after the docker container has started running

Trying to run sendmailconfig after my PHP-FPM (7.1-fpm) container has started, but I'm having a hard time doing so without getting in the way of the FPM part of the container.
FROM php:7.1-fpm
RUN apt-get update && apt-get install
CMD "/usr/local/bin/config.sh" && /bin/bash
I've tried making a script that purely executes yes | sendmailconfig, but it seems to stop the image's default script from running, which causes PHP-FPM to never actually run.
The reason I want this done in the image is that I have to run the sendmailconfig command every time I restart the container, which is impractical when managing multiple Docker stacks.
Set your entrypoint to run a file you've copied in; that file should have something like the following in it:
/usr/local/bin/config.sh
# If this isn't the correct command for you to start php-fpm, look up the correct one for your image
sudo service php7.1-fpm start
# Execute the CMD passed in from the Dockerfile
sudo -H bash -c "$@;"
# You'll probably be ok with just `bash -c "$@;"` if you don't have sudo installed
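A fuller variant of that idea for the official php:7.1-fpm image (a sketch; it assumes the image's stock docker-php-entrypoint and php-fpm command, and that sendmail is already installed):

entrypoint.sh:
#!/bin/sh
set -e
# answer sendmailconfig's prompts non-interactively on every start
yes | sendmailconfig
# hand off to the image's original entrypoint so php-fpm still runs
# as the container's foreground main process
exec docker-php-entrypoint "$@"

Dockerfile:
FROM php:7.1-fpm
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["php-fpm"]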

Docker startup: multiple services not working

Dockerfile
FROM drupal
RUN apt-get update
RUN apt-get install openssh-server -y
RUN apt-get install -y supervisor
# SSH Related Fix: https://github.com/Microsoft/WSL/issues/3621
RUN mkdir -p /run/sshd
# SSH Access Configuration
RUN echo "root:Docker!" | chpasswd
# Project Upload
RUN rm -rf /var/www/html/*
COPY ./html/ /var/www/html/
# Startup Configuration
COPY servername.conf /etc/apache2/conf-enabled/servername.conf
ADD supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord"]
Start command: docker -D run -p 80:80 -p 2222:22 -it /bin/bash
supervisord.conf:
[supervisord]
nodaemon=true
[program:SSH]
command=/usr/sbin/sshd start
[program:Apache]
command=/etc/init.d/apache2 start
When I jump into a shell and run that command it works, but when I start the container it does not start up the web server.
As stated in the documentation:
To start supervisord, run $BINDIR/supervisord. The resulting process
will daemonize itself and detach from the terminal. It keeps an
operations log at $CWD/supervisor.log by default.
You may start the supervisord executable in the foreground by passing
the -n flag on its command line. This is useful to debug startup
problems.
So supervisord detaches from the main process, which for Docker means the main process has ended, so it exits the container. To solve your problem you need to change the CMD section to
CMD ["/usr/bin/supervisord", "-n"]
When you run
docker -D run -p 80:80 -p 2222:22 -it /bin/bash
The last part of the command, /bin/bash, replaces the CMD in the Dockerfile, so you only get the GNU bash shell. You should remove that part of the line, and the image's standard command will run.
You might consider how much you actually need an interactive shell in your Docker environment. Most application images are set up to run totally on their own without manual setup steps; compare the stock mysql or nginx images, for instance, which don't include any kind of remote login system. Also consider that anyone who can run docker history can now trivially find out your root password, and you have no way to manage the sshd host keys. I'd suggest removing this entire supervisord/sshd system and just packaging your application.