I am trying to run a script inside a Docker container in CI.
I tried using docker cp, but it does not work because there are no running containers: docker container ls is empty.
Locally I am able to run docker cp my-custom-image:my_script.py my_script.py, so the problem is not with my Docker image.
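For reference, docker cp addresses a container (running or stopped) rather than an image, so locally a throwaway container can be created first. A sketch, where the container name tmp-copy and the in-container path are assumptions:

docker create --name tmp-copy my-custom-image:0.1.0
docker cp tmp-copy:/my_script.py my_script.py
docker rm tmp-copy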
stage-name:
  image: "my-custom-image:0.1.0"
  stage: my-stage
  script:
    - python3 my_script.py
I finally found the solution.
In the Dockerfile of my-custom-image, I added these lines:
COPY ./my_script.py /usr/local/bin/my_script.py
RUN chmod +x /usr/local/bin/my_script.py
I added a python3 shebang to my_script.py:
#!/usr/local/bin/python3
Then I am able to execute the script in CI:
stage-name:
  image: "my-custom-image:0.1.0"
  stage: my-stage
  script:
    - my_script.py
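A quick local sanity check of the fixed image (a sketch; it assumes /usr/local/bin is on the image's PATH, as it is in most Linux base images):

docker run --rm my-custom-image:0.1.0 my_script.py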
Is it possible to run docker-compose commands from within a Docker container? As an example, I am trying to install https://datahubproject.io/docs/quickstart/ from within a Docker container that is built using the Dockerfile shown below. The Dockerfile creates a Linux container with the prerequisites the datahubproject.io project needs (Python) and clones the repository code into the container. I then want to be able to execute the Docker Compose scripts from the cloned repository code to create the Docker containers needed to run the datahubproject.io project. This is not a docker commit question.
To try this, I have the following docker-compose.yml script:
version: '3.9'
# This is the docker configuration script
services:
  datahub:
    # run the commands in the Dockerfile (found in this directory)
    build: .
    # we need tty set to true to keep the container running after the build
    tty: true
...and a Dockerfile (to set up a Linux environment with the requirements needed for the datahubproject.io quickstart):
FROM debian:bullseye
ENV DEBIAN_FRONTEND noninteractive
# install some of the basics our environment will need
RUN apt-get update && apt-get install -y \
    git \
    docker \
    pip \
    python3-venv
# clone the GitHub code
RUN git clone https://github.com/kuhlaid/datahub.git --branch master --single-branch
RUN python3 -m venv venv
# the `source` command needs the bash shell
SHELL ["/bin/bash", "-c"]
RUN source venv/bin/activate
RUN python3 -m pip install --upgrade pip wheel setuptools
RUN python3 -m pip install --upgrade acryl-datahub
CMD ["datahub version"]
CMD ["./datahub/docker/quickstart.sh"]
I run docker compose up from a command line in the directory where these two files are located, to build the image and start the container that will be used to install the datahubproject.io project.
I receive this error:
datahub-datahub-1 | Quickstarting DataHub: version head
datahub-datahub-1 | Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
datahub-datahub-1 | No Datahub Neo4j volume found, starting with elasticsearch as graph service
datahub-datahub-1 | ERROR: Couldn't connect to Docker daemon at http+docker://localhost - is it running?
I do not know if what I am trying to do is even possible with Docker. Any suggestions to make this work? - thank you
Can docker-compose commands be executed from within a Docker container?
Yes. A command like any other.
Is it possible to run docker-compose commands from with a Docker container?
Yes.
Any suggestions to make this work?
As with docker on the host: either run a Docker daemon inside the container or connect to one with DOCKER_HOST. The docker-in-docker (dind) image is relevant: https://hub.docker.com/_/docker
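A minimal sketch of both options (it assumes the host daemon listens on the default unix socket, and, for the second variant, a dind daemon reachable at the hostname docker):

# share the host's daemon by mounting its socket into the container
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:latest docker ps
# or point the client at a separate daemon via DOCKER_HOST
docker run --rm -e DOCKER_HOST=tcp://docker:2375 docker:latest docker ps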
The answer seems to be to modify the docker-compose.yml script to contain two additional settings:
version: '3.9'
# This is the docker configuration script
services:
  datahub:
    # run the commands in the Dockerfile (found in this directory)
    build: .
    # we need tty set to true to keep the container running after the build
    tty: true
    # ---------- adding the following two settings seems to fix the issue of
    # the `CMD ["./datahub/docker/quickstart.sh"]` failing in the Dockerfile
    stdin_open: true
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
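With the socket mounted, something like the following should confirm the container can reach the host daemon (a sketch; it assumes a Docker CLI is available inside the image):

docker compose up --build -d
docker compose exec datahub docker info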
I have a docker-compose file with a service called 'app'. When I try to run it, I don't see the service with docker ps, but I do with docker ps -a.
I looked at the logs:
docker logs my_app_1
python: can't open file '//apps/index.py': [Errno 2] No such file or directory
In order to debug I wanted to be able to see the home directory and the files and dirs contained there when the app attempts to run.
Is there a command I can add to docker-compose that would show me the pwd and ls -l of the container when it attempts to run index.py?
My Dockerfile:
FROM python:3
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "apps/index.py"]
My docker-compose.yaml:
version: '3.1'
services:
  app:
    build:
      context: ./app
      dockerfile: ./Dockerfile
    depends_on:
      - db
    ports:
      - 8050:8050
My directory structure:
my_app:
* docker-compose.yaml
* app
    * Dockerfile
    * apps
        * index.py
You can add a RUN statement in the application Dockerfile to run these commands.
Example:
FROM python:3
COPY . .
RUN pip install -r requirements.txt
# Run your commands
RUN pwd && ls -l
CMD ["python", "apps/index.py"]
Then you can check the logs of the build process and view the results.
I hope this answer helps you.
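For example (a sketch; with BuildKit enabled, plain progress output is needed to see the RUN output in the build logs):

docker compose build --progress plain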
If you're just trying to debug an image you've already built, you can docker-compose run an alternate command:
docker-compose run app \
  ls -l ./apps
You don't need to modify anything in your Dockerfile to be able to do this (assuming it uses CMD correctly; see below).
If you need to do more intensive debugging, you can docker-compose run app sh (or, if your image has it, bash) to get an interactive shell. The container will include any mounted volumes and be on the same Docker network as the named service's container, but won't have published ports.
Note that the command here replaces the CMD in the Dockerfile. If your image uses ENTRYPOINT for its main command, or if it has a complete command split between ENTRYPOINT and CMD (especially if you have ENTRYPOINT ["python"]), these need to be combined into a single CMD for this to work. If your ENTRYPOINT is a wrapper script that does some first-time setup and then runs the CMD, this approach will work fine; the debugging ls or sh will run after the first-time setup happens.
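For example, a quick interactive pass might look like this (a sketch; bash is available in the python:3 image):

docker-compose run app bash
# then, inside the container:
pwd
ls -l
ls -l apps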
I have this gitlab-ci file:
services:
  - docker:18.09.7-dind
variables:
  SONAR_TOKEN: "$PROJECT_SONAR_TOKEN"
  GIT_DEPTH: 0
  MAVEN_CLI_OPTS: "-s .m2/settings.xml --batch-mode"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"
  DOCKER_HOST: "tcp://docker:2375"
  DOCKER_DRIVER: overlay2
sonarqube-check:
  image: maven:latest
  stage: test
  before_script:
    - "docker version"
    - "mkdir $PWD/.m2"
    - "cp -f /cache/settings.xml $PWD/.m2/settings.xml"
  script:
    - mvn $MAVEN_CLI_OPTS clean verify sonar:sonar -Dsonar.qualitygate.wait=true -Dsonar.login=$SONAR_TOKEN -Dsonar.projectKey="project-key"
  after_script:
    - "rm -rf $PWD/.m2"
  allow_failure: false
  only:
    - merge_requests
For some reason the docker binaries are not found when using the docker-in-docker service (the docker version command in before_script fails):
/bin/bash: line 111: docker: command not found
I'm wondering if there is a way to do this inside the gitlab-ci file, because I need to run Docker for the tests. Is there an image that contains both the Maven and Docker binaries, or will I have to create my own Docker image?
It has to be all in one stage. I cannot divide it into two stages (or at least I don't know how to compile with Maven in one stage and run the tests with a Docker image in another).
Thank you!
As you correctly pointed out, you need the mvn and docker binaries in the image you are using for that GitLab CI job.
The quickest win is probably to install docker in your maven:latest build image at run time, in the before_script section:
before_script:
  - apt-get update && apt-get install -y docker.io
  - docker version
If that's slowing down your job too much, you might want to build your own custom Docker image that contains both Maven and Docker.
Also have a look at the article about dind on GitLab if you end up moving to Docker 19.03+.
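A minimal sketch of such a custom image (an assumption: maven:latest is Debian-based with apt available):

FROM maven:latest
# add the Docker CLI next to Maven so CI jobs can talk to the dind service
RUN apt-get update \
 && apt-get install -y --no-install-recommends docker.io \
 && rm -rf /var/lib/apt/lists/*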
I'm trying to run e2e tests against a Docker image, which is based on the official nginx image and built in a previous step.
My idea was to make it available via a service, in this way:
e2e:
  stage: e2e
  image: weboaks/node-karma-protractor-chrome:alpine
  services:
    - name: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
      alias: app
  before_script:
    - yarn
    - yarn run webdriver:update --standalone
  script:
    - yarn run e2e:ci
The Dockerfile of the image linked as a service looks like this:
FROM nginx:1.15-alpine
RUN rm -rf /usr/share/nginx/html/* && apk add --no-cache -vvv bash
ADD deploy/nginx/conf.d /etc/nginx/conf.d
ADD dist /usr/share/nginx/html
But it seems that app isn't available under http://app.
Am I missing something, or is there another approach to test against an already built image?
When I run the image with docker run -p 80:80 local-test locally or deploy it to a server everything works as expected.
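For reference, the working local check described above expands to something like this (a sketch; it assumes the image builds from the Dockerfile shown and is tagged local-test):

docker build -t local-test .
docker run --rm -p 80:80 local-test
# in another terminal:
curl -v http://localhost/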
I am building the SciGraph database on my local machine and trying to move the entire folder into Docker and run it there. The shell script runs without error on my local machine, but when I add the same folder inside Docker and try to run it, it fails.
Am I doing this the right way? Here's my Dockerfile:
FROM goyalzz/ubuntu-java-8-maven-docker-image
ADD ./SciGraph /usr/share/SciGraph
WORKDIR /usr/share/SciGraph/SciGraph-services
RUN pwd
EXPOSE 9000
CMD ['./run.sh']
When I try to run it, I get this error:
docker run -p9005:9000 test
/bin/sh: 1: [./run.sh]: not found
If I run it using the command below, it works:
docker run -p9005:9000 test -c "cd /usr/share/SciGraph/SciGraph-services && sh run.sh"
Even though I already marked the directory as WORKDIR and run the script inside Docker using CMD, it throws the error.
For SciGraph, as described in its README, you need to run mvn install before you run the services. You can set your shell to bash and use Docker Compose to run the image, as shown below.
Dockerfile
FROM goyalzz/ubuntu-java-8-maven-docker-image
ADD ./SciGraph /usr/share/SciGraph
SHELL ["/bin/bash", "-c"]
WORKDIR /usr/share/SciGraph
RUN mvn -DskipTests -DskipITs -Dlicense.skip=true install
RUN cd /usr/share/SciGraph/SciGraph-services && chmod a+x run.sh
EXPOSE 9000
Build the SciGraph Docker image by running:
docker build . -t scigraph_test
docker-compose.yml
version: '2'
services:
  scigraph-server:
    image: scigraph_test
    working_dir: /usr/share/SciGraph/SciGraph-services
    command: bash run.sh
    ports:
      - 9000:9000
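Then bring the service up from the directory containing docker-compose.yml:

docker-compose up -d
docker-compose logs -f scigraph-server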
Add a / after SciGraph-services, change the command to "sh run.sh", and also look into the file permissions on run.sh.
It is likely that your run.sh doesn't have the #!/bin/bash header, so it cannot be executed simply by running ./run.sh. Nevertheless, always prefer to run scripts as /bin/bash foo.sh or /bin/sh foo.sh when in Docker, especially because you don't know what changes have been made to files in images downloaded from public repositories.
So, your CMD statement would be:
CMD /bin/bash -c "/bin/bash run.sh"
You have to use double quotes (single quotes are not valid JSON, so Docker treats CMD ['./run.sh'] as a plain shell string, which produces the [./run.sh]: not found error) and add the shell and the executable to the CMD array:
CMD ["/bin/sh", "./run.sh"]