Can `docker-compose` commands be executed from within a Docker container? - docker

Is it possible to run docker-compose commands from within a Docker container? As an example, I am trying to install https://datahubproject.io/docs/quickstart/ FROM within a Docker container that is built using the Dockerfile shown below. The Dockerfile creates a Linux container with the prerequisites the datahubproject.io project needs (Python) and clones the repository code into the container. I then want to be able to execute the Docker Compose scripts from the repository code (cloned into the newly built container) to create the Docker containers needed to run the datahubproject.io project. This is not a docker commit question.
To try this, I have the following docker-compose.yml script:
version: '3.9'
# This is the docker configuration script
services:
  datahub:
    # run the commands in the Dockerfile (found in this directory)
    build: .
    # we need tty set to true to keep the container running after the build
    tty: true
...and a Dockerfile (to set up a Linux environment with the requirements needed for the datahubproject.io quickstart):
FROM debian:bullseye
ENV DEBIAN_FRONTEND noninteractive
# install some of the basics our environment will need
RUN apt-get update && apt-get install -y \
    git \
    docker \
    pip \
    python3-venv
# clone the GitHub code
RUN git clone https://github.com/kuhlaid/datahub.git --branch master --single-branch
RUN python3 -m venv venv
# the `source` command needs the bash shell
SHELL ["/bin/bash", "-c"]
RUN source venv/bin/activate
RUN python3 -m pip install --upgrade pip wheel setuptools
RUN python3 -m pip install --upgrade acryl-datahub
CMD ["datahub version"]
CMD ["./datahub/docker/quickstart.sh"]
I run docker compose up from a command line in the directory where these two files are located, to build the image and start the container that will be used to install the datahubproject.io project.
I receive this error:
datahub-datahub-1 | Quickstarting DataHub: version head
datahub-datahub-1 | Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
datahub-datahub-1 | No Datahub Neo4j volume found, starting with elasticsearch as graph service
datahub-datahub-1 | ERROR: Couldn't connect to Docker daemon at http+docker://localhost - is it running?
I do not know if what I am trying to do is even possible with Docker. Any suggestions to make this work? - thank you

Can docker-compose commands be executed from within a Docker container?
Yes. A command like any other.
Is it possible to run docker-compose commands from within a Docker container?
Yes.
Any suggestions to make this work?
The same as with docker on the host: either run a Docker daemon inside the container, or point the client at an existing daemon with the DOCKER_HOST environment variable. The Docker-in-Docker (DinD) image is relevant here: https://hub.docker.com/_/docker
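For example, a docker-compose.yml along these lines would run a separate DinD daemon and point the datahub container at it. This is a minimal sketch based on the documented behaviour of the official docker:dind image; the dind service name is illustrative:
version: '3.9'
services:
  dind:
    image: docker:dind
    # DinD needs extended privileges to run a daemon inside a container
    privileged: true
    environment:
      # disable TLS so the daemon listens on plain tcp/2375 (local testing only)
      DOCKER_TLS_CERTDIR: ""
  datahub:
    build: .
    tty: true
    depends_on:
      - dind
    environment:
      # point any docker/docker-compose client in this container at the DinD daemon
      DOCKER_HOST: tcp://dind:2375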

The answer seems to be to modify the docker-compose.yml script to contain two additional settings:
version: '3.9'
# This is the docker configuration script
services:
  datahub:
    # run the commands in the Dockerfile (found in this directory)
    build: .
    # we need tty set to true to keep the container running after the build
    tty: true
    # ---------- adding the following two settings seems to fix the issue of
    # the `CMD ["./datahub/docker/quickstart.sh"]` failing in the Dockerfile
    stdin_open: true
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
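With the socket mounted, the container talks to the host's Docker daemon, so any containers it starts are siblings of it rather than children. A quick way to check the connection from the running container (this assumes the Docker CLI is actually present in the image; note that on Debian the CLI is packaged as docker.io, while the docker package installed in the Dockerfile above is an unrelated system-tray utility):
docker compose up -d
# run `docker info` inside the datahub service container; it should
# print details of the *host* daemon if the socket mount works
docker compose exec datahub docker info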

Related

Gitlab CI npm cannot resolve module

Totally new to Gitlab and CI in general, so apologies for the lack of understanding. I have a repo, which is NuxtJS based, with a Dockerfile. The end goal of the pipeline is to build and push this repo to my Docker account. The Dockerfile is relatively straightforward, containing an npm install and npm run build. I'm using a custom Docker image as my runner, based on docker:20.10.17-dind-alpine3.16 with ansible, terraform and kubectl installed.
When building the project's docker image on my local machine, I receive no issues, however in gitlab, when running the npm run build command, I get the following error:
Module not found: Error: Can't resolve '../node_modules/vue-confirm-dialog' in '/usr/src/nuxt-app/plugins'
Here is my yml file:
stages:
  - docker

docker:
  stage: docker
  image: <my-runner-image>
  services:
    - "docker:dind"
  before_script:
    - docker login -u $DOCKER_REGISTRY_USER -p $DOCKER_REGISTRY_PASSWORD
  script:
    - docker build -t <my-repo> .
    - docker push <my-repo>
Any suggestions are greatly appreciated
--EDIT--
As requested, here is the project's Dockerfile:
FROM node:lts-alpine3.15
# create destination directory
RUN mkdir -p /usr/src/nuxt-app
WORKDIR /usr/src/nuxt-app
# update and install dependency
RUN apk update && apk upgrade
RUN apk add git
# copy the app, note .dockerignore
COPY . /usr/src/nuxt-app/
RUN npm install
RUN npm run build
EXPOSE 3000
ENV NUXT_HOST=0.0.0.0
ENV NUXT_PORT=3000
CMD [ "npm", "start" ]

docker-compose debugging service show `pwd` and `ls -l` at run?

I have a docker-compose file with a service called 'app'. When I try to run my docker file I don't see the service with docker ps but I do with docker ps -a.
I looked at the logs:
docker logs my_app_1
python: can't open file '//apps/index.py': [Errno 2] No such file or directory
In order to debug I wanted to be able to see the home directory and the files and dirs contained there when the app attempts to run.
Is there a command I can add to docker-compose that would show me the pwd and ls -l of the container when it attempts to run index.py?
My Dockerfile:
FROM python:3
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "apps/index.py"]
My docker-compose.yaml:
version: '3.1'
services:
  app:
    build:
      context: ./app
      dockerfile: ./Dockerfile
    depends_on:
      - db
    ports:
      - 8050:8050
My directory structure:
my_app:
* docker-compose.yaml
* app
  * Dockerfile
  * apps
    * index.py
You can add a RUN statement in the application Dockerfile to run these commands.
Example:
FROM python:3
COPY . .
RUN pip install -r requirements.txt
# Run your commands
RUN pwd && ls -l
CMD ["python", "apps/index.py"]
Then you can check the logs of the build process and view the results.
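One caveat: with BuildKit (the default builder in recent Docker releases) the output of RUN steps is collapsed by default, so you may need flags like the following to see it (Compose v2 syntax; --no-cache forces the RUN to execute again rather than replaying a cached layer):
docker compose build --no-cache --progress=plain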
I hope this answer helps you.
If you're just trying to debug an image you've already built, you can docker-compose run an alternate command (note that the service name from the docker-compose.yaml is app):
docker-compose run app \
  ls -l ./apps
You don't need to modify anything in your Dockerfile to be able to do this (assuming it uses CMD correctly; see below).
If you need to do more intensive debugging, you can docker-compose run app sh (or, if your image has it, bash) to get an interactive shell. The container will include any mounted volumes and be on the same Docker network as the normally started container, but won't have published ports.
Note that the command here replaces the CMD in the Dockerfile. If your image uses ENTRYPOINT for its main command, or if it has a complete command split between ENTRYPOINT and CMD (especially, if you have ENTRYPOINT ["python"]), these need to be combined into a single CMD for this to work. If your ENTRYPOINT is a wrapper script that does some first-time setup and then runs the CMD, this approach will work fine; the debugging ls or sh will run after the first-time setup happens.
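To illustrate that last point with a hypothetical pair of Dockerfile endings for this image:
# Problematic for `docker-compose run`: the override (`ls -l ./apps`) is
# appended as arguments to the ENTRYPOINT, so python tries to run `ls`
# as a script.
ENTRYPOINT ["python"]
CMD ["apps/index.py"]

# Works: the whole command lives in CMD, so
# `docker-compose run app ls -l ./apps` replaces it cleanly.
CMD ["python", "apps/index.py"]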

docker-compose execute command

I'm trying to put some commands in my docker-compose file to be run in my container, and they don't work.
I map a volume from host to container where I have a root certificate; all I want to do is run the command update-ca-certificates, so that it updates the directory /etc/ssl/certs with my cert in the container. However, this is not happening.
I tried to solve this in a Dockerfile, and I can see that the command runs, but it seems that the cert is not present there and only appears after I log in to the container.
What I end up doing is getting into the container and then running the needed commands manually.
This is the piece of my docker-compose file that I have been trying to use:
build:
  context: .
  dockerfile: Dockerfile
command: >
  sh -c "ls -la /usr/local/share/ca-certificates &&
  update-ca-certificates"
security_opt:
  - seccomp:unconfined
volumes:
  - "c:/certs_for_docker:/usr/local/share/ca-certificates"
Likewise, I cannot run apt update or anything like this, but after connecting to the container with docker exec -it test_alerting_comp /bin/bash I can pull anything from any repo.
My goal is to execute any needed commands at build time, so that when I log in to the container the packages I will use are already installed and the root cert is updated, thanks.
Why don't you do the package update/install and copy the certificates in the Dockerfile?
Dockerfile
...
RUN apt-get update && apt-get -y install whatever
COPY ./local/certificates /usr/local/share/my-certificates
RUN your-command-for-certificates
docker-compose.yml
version: "3.7"
services:
your-service:
build: ./dir
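Applied to the certificate case from the question, the build-time steps might look like this. It's a sketch that assumes a Debian-based image, and assumes the certificates have first been copied from c:/certs_for_docker into a certs_for_docker folder inside the build context, since COPY cannot reach outside it:
FROM debian:bullseye
# update-ca-certificates comes from the ca-certificates package
RUN apt-get update && apt-get -y install ca-certificates
# bake the root certificate into the image at build time
COPY ./certs_for_docker /usr/local/share/ca-certificates/
RUN update-ca-certificates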

How to run any commands in docker volumes?

After a couple of days of testing and working with Docker (in general, I am trying to migrate from Vagrant to Docker), I encountered a huge problem which I am not sure how or where to fix.
docker-compose.yml
version: "3"
services:
server:
build: .
volumes:
- ./:/var/www/dev
links:
- database_dev
- database_testing
- database_dev_2
- mail
- redis
ports:
- "80:8080"
tty: true
#the rest are only images of database redis and mailhog with ports
Dockerfile
example_1
FROM ubuntu:latest
LABEL Yamen Nassif
SHELL ["/bin/bash", "-c"]
RUN apt-get install vim mc net-tools iputils-ping zip curl git -y
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN cd /var/www/dev
RUN composer install
Dockerfile
example_2
....
RUN apt-get install apache2 openssl php7.2 php7.2-common libapache2-mod-php7.2 php7.2-fpm php7.2-mysql php7.2-curl php7.2-dom php7.2-zip php7.2-gd php7.2-json php7.2-opcache php7.2-xml php7.2-cli php7.2-intl php7.2-mbstring php7.2-redis -y
# basically 2 files with just rooting to /var/www/dev
COPY docker/config/vhosts /etc/apache2/sites-available/
RUN service apache2 restart
....
Now, in example_1 the composer.json file/directory is not found, and in example_2 Apache says the root dir is not found (the file/directory in question is /var/www/dev).
I guess it's because it's a volume, and it won't be mounted until the container is fully up: if I launch the container without the previous commands (which would otherwise lead to an error), I can then log in to the container and execute the commands from the command line without any error.
How can I fix this?
In your first Dockerfile, use the COPY directive to copy your application into the image before you do things like RUN composer install. It'd look something like
FROM php:7.0-cli
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN composer install
(cribbed from the php image documentation; that image may not have composer preinstalled).
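If the base image doesn't have composer preinstalled, one common pattern is to copy the binary in from the official composer image on Docker Hub (a sketch; the tag is chosen because the 2.2 LTS line still supports older PHP versions such as 7.0):
FROM php:7.0-cli
# copy the composer binary from the official composer image
COPY --from=composer:2.2 /usr/bin/composer /usr/local/bin/composer
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN composer install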
In both Dockerfiles, remember that each RUN command creates a new empty container, runs its command, and cleans up after itself. That means commands like RUN cd ... have no effect, and you can't start a service in the background in one RUN command and have it available later; it will get stopped before the Dockerfile moves on to the next line.
In the second Dockerfile, commands like service or systemctl or initctl just don't work in Docker and you shouldn't try to use them. Standard practice is to start the server process as a foreground process when the container launches via a default CMD directive. The flip side of this is that, since the server won't start until docker run time, your volume will be available at that point. I might RUN mkdir in the Dockerfile just to be sure it exists.
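Sketched against the question's two Dockerfiles, that advice translates to roughly the following (illustrative fragments, not complete builds):
# RUN cd has no effect on later instructions; use WORKDIR instead
WORKDIR /var/www/dev
RUN composer install

# instead of `RUN service apache2 restart`, start the server in the
# foreground as the container's main process
CMD ["apachectl", "-D", "FOREGROUND"]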
The problem seems to be the execution order. At image build time /var/www/dev is available. When you start a container from that image, the container's /var/www/dev is overwritten with your local mount.
If you need no access from your host, then you can simply skip the extra volume.
In case you want to use it in other containers too, then you should work with symlinks.

How can I write Dockerfile for Yesod? "RUN yesod init -n myApp -d postgresql" didn't work as expected

I tried to make a simple application with Yesod and PostgreSQL using Docker Compose but RUN yesod init -n myApp -d postgresql didn't seem to work as expected.
I defined Dockerfile and docker-compose.yml as below:
Dockerfile:
FROM shuny/ghc-7.8.4:latest
MAINTAINER shuny
# Create default config
RUN cabal update
# Add stackage remote repo
RUN sed -i 's/^remote-repo: [a-zA-Z0-9_\/:.]*$/remote-repo: stackage:http:\/\/www.stackage.org\/lts/g' /root/.cabal/config
# Update packages
RUN cabal update
# Generate locale otherwise happy (because of tf-random) will fail
RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
RUN echo $LANG
# Install build tools for yesod
RUN cabal install alex happy yesod-bin
# Install library for yesod-postgres
RUN apt-get update && apt-get install -y libpq-dev
RUN mkdir /code
WORKDIR /code
RUN yesod init -n myApp -d postgresql
WORKDIR /code/myApp
RUN cabal sandbox init && cabal install --only-dependencies --max-backjumps=-1 --reorder-goals
RUN cabal configure && cabal build && cabal install
ADD . /code
WORKDIR /code
# ADD settings.yml /code/myApp/config/
docker-compose.yml:
database:
  image: postgres
  ports:
    - "5432"
web:
  build: .
  tty: true
  command: yesod devel
  volumes:
    - .:/code/
  ports:
    - "3000:3000"
  links:
    - database
and docker-compose build returned as below:
Step 0 : FROM shuny/ghc-7.8.4:latest
...
Step 17 : WORKDIR /code
---> Running in bf99d0aca48c
---> 37c3c94338d7
Removing intermediate container bf99d0aca48c
Successfully built 37c3c94338d7
but when I check like this:
$ docker-compose run web /bin/bash
root@0fe5fb1a3b20:/code# ls
root@0fe5fb1a3b20:/code#
it showed nothing, while these commands seem to work as expected:
docker run -ti 37c3c94338d7
root@31e94428de37:/code# ls
docker-compose.yml  Dockerfile  myApp  settings.yml
root@31e94428de37:/code# ls myApp/
app  config  Handler  Model.hs  Settings.hs  test
Application.hs  dist  Import  myApp.cabal  static
cabal.sandbox.config  Foundation.hs  Import.hs  Settings  templates
How can I fix it?
I really appreciate any feedback, thank you.
You are doing strange things with volumes and the ADD instruction.
First you build your application inside the image:
RUN yesod init -n myApp -d postgresql
WORKDIR /code/myApp
RUN cabal sandbox init && cabal install --only-dependencies --max-backjumps=-1 --reorder-goals
RUN cabal configure && cabal build && cabal install
Then you add the content of the folder that contains the Dockerfile in the /code folder of the image. I guess this step is useless.
ADD . /code
Then, if you run a container without a -v/--volume option, everything works fine:
docker run -ti 37c3c94338d7
But in your docker-compose.yml file, you specified a volume option that overrides the /code folder in the container with the folder that contains the docker-compose.yml file on the host machine. Therefore, you no longer have the content generated during the build of your image.
There are two possibilities:
* Don't use the volume instruction in the docker-compose.yml file
* Put the content of the /code/myApp/ folder of the image inside the ./myApp folder of the host
It depends on why you want to use the volume option in docker-compose.yml.
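For the second possibility, the mount in docker-compose.yml would be narrowed to just the application folder, something like:
volumes:
  - ./myApp:/code/myApp
Keep in mind that a bind mount like this shows the host's ./myApp inside the container rather than copying the image's files out to the host, so the build products still have to be produced on the host side first, as described next.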
I don't really know what your goal is, but if what you are trying to do is to access the files built inside the container from the host machine, maybe this will do what you are looking for:
* Remove the build steps from your Dockerfile
* Run a shell inside a "web" container: docker-compose run web bash
* Launch the build commands; this way you will have built your application while the volume was mounted, and will see the files on the host machine
* Exit the shell
* Launch Docker Compose normally
If you just want to be able to backup the content of the /code/myApp/ folder, maybe you should omit the path on the host machine from the volume section of docker-compose.yml.
volumes:
  - /code/
And follow this section of the documentation
I hope it helps
