I use GitLab CI to build and deploy my Docker Compose file to the Docker host.
I'm seeing a weird issue with the COPY command: I never see the new changes made to the copied files inside the Docker container.
# Copy code files to the container's code directory
COPY ./src/pyvol ./src/pyvol  # directory with utility libraries
COPY ./src/setup_* ./src/     # entrypoint scripts
The command works fine when the containers are deleted and recreated, but I don't want to do that, as it's not a good way to do continuous integration.
The following is configured in .gitlab-ci.yml:
.build_image_config:
  script:
    - docker-compose -f docker-compose.yml build --no-cache --pull
    - docker-compose -f docker-compose.yml push

.deploy_config:
  script:
    - docker -H $DOCKER_HOST --tlsverify stack deploy --with-registry-auth -c docker-compose.yml $STACK_NAME --resolve-image always
Would I need to explicitly pass any other parameters during build or deploy to make the CI pick up the recently changed files?
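One workaround I've seen suggested (a sketch only; the ${IMAGE_TAG} wiring is an assumption, not my current setup) is to tag each build with the commit SHA so that stack deploy resolves a genuinely new image instead of a reused :latest tag:

# hypothetical .gitlab-ci.yml fragment; assumes docker-compose.yml
# references the tag as image: ...:${IMAGE_TAG}
build_image:
  extends: .build_image_config
  variables:
    IMAGE_TAG: $CI_COMMIT_SHORT_SHA  # predefined GitLab CI variable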
Related
I build an image with a Dockerfile in Jenkins.
In the Dockerfile, ./gradlew build is run, which generates .xml files with the JUnit test results.
I want to copy these files out to the Jenkins instance that ran docker build so that the Jenkins UI can display the results.
How would I do that?
There are no containers yet, so docker cp or volumes are not options, as far as I know.
You can create a container without starting it, then you can copy from it:
docker create --name tmp_container ci_image
docker cp tmp_container:/source ./destination
docker rm tmp_container
Documentation: docker create.
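If it helps, those three commands can be wrapped in a Jenkinsfile stage so the JUnit plugin publishes the copied reports (a sketch only; the image name ci_image and the report path are assumptions):

stage('Extract test results') {
    steps {
        // create a stopped container, copy the reports out, clean up
        sh '''
            docker create --name tmp_container ci_image
            docker cp tmp_container:/app/build/test-results ./test-results
            docker rm tmp_container
        '''
        // publish the .xml results to the Jenkins UI
        junit 'test-results/**/*.xml'
    }
}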
I am using the standard docker command:
docker build -t XXX -f Dockerfile .
to build my container. I would like to know if there is a way to get some more detailed logs about which files are being copied into the image.
Alternatively, is there a way from the Dockerfile to list the files in a directory to stdout (on the command line from where the docker command was run, or logged to a file)?
Unfortunately my Dockerfile is failing at a later step, so I cannot examine the image, as it never gets created.
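One quick way to see what got copied is a throwaway RUN step that lists the directory during the build (a debugging sketch; /code is a placeholder path):

# temporary debugging step -- remove once the build is fixed
RUN ls -laR /code

With the classic builder the listing appears directly in the build output; with BuildKit you may need to pass --progress=plain to docker build to see it.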
What would cause a Docker image to not run the command specified in its docker-compose.yaml file?
I have a Dockerfile like:
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir -p /code
WORKDIR /code
COPY ./pip-requirements.txt pip-requirements.txt
COPY ./code /code/
RUN pip install --trusted-host pypi.python.org -r pip-requirements.txt
And a docker-compose.yaml file like:
version: '3'
services:
  worker:
    container_name: myworker
    image: registry.gitlab.com/mygitlabuser/mygitlabproject:latest
    network_mode: host
    build:
      context: .
      dockerfile: Dockerfile
    command: ./myscript.py --no-wait --traceback
If I build and run this locally with:
docker-compose -f docker-compose.yaml up
The script runs for a few minutes and I get the expected output. Running docker ps -a shows a container called "myworker" was created, as expected.
I now want to upload this image to a repo and deploy it to a production environment by downloading and running it on a remote server.
I re-build the image with:
docker-compose -f docker-compose.yaml build
and then upload it with:
docker login registry.gitlab.com
docker push registry.gitlab.com/myuser/myproject:latest
This succeeds and I confirm the new image exists in my gitlab image repository.
I then login to the production server and download the image with:
docker login registry.gitlab.com
docker pull registry.gitlab.com/myuser/myproject:latest
Again, this succeeds with docker reporting:
Status: Downloaded newer image for registry.gitlab.com/myuser/myproject:latest
Running docker images and docker ps -a shows no existing images or containers.
However, this is where it gets weird. If I then try to run this image with:
docker run registry.gitlab.com/myuser/myproject:latest
nothing seems to happen. Running docker ps -a shows a single container with the command "python2" and the name "gracious_snyder" was created, which doesn't match my image. It also says the container exited immediately after launch. Running docker logs gracious_snyder shows nothing.
What's going on here? Why isn't my image running the correct command? It's almost as if it's ignoring all the parameters in my docker-compose.yaml file and reverting to the defaults of the base python:2.7 image, but I don't know why that would be, because I built the image using docker-compose and it ran fine locally.
I'm running Docker version 18.09.6, build 481bc77 on both local and remote hosts and docker-compose version 1.11.1, build 7c5d5e4 on my localhost.
Without a command (CMD) defined in your Dockerfile, you get the upstream value from the FROM image. The compose file has some settings to build the image, but most of the values are defining how to run the image. When you run the image directly, without the compose file (docker vs docker-compose), you do not get the runtime settings defined in the compose file, only the Dockerfile settings baked into the image.
The fix is to either use your compose file, or define the CMD inside the Dockerfile like:
CMD ./myscript.py --no-wait --traceback
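For reference, the exec form avoids the extra /bin/sh -c wrapper (either form assumes myscript.py is executable and has a Python shebang):

CMD ["./myscript.py", "--no-wait", "--traceback"]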
I have certain basic Docker commands which I run in my terminal. What I want is to put all of these basic Docker commands into one Dockerfile and then build that Dockerfile.
For example, consider two Dockerfiles: Docker1 and Docker2.
Docker1 contains the list of commands to run, and inside Docker2 I want to build Docker1 and run it as well.
Docker2 (consider the scenario with demo code):
FROM ubuntu:16.04
MAINTAINER abc@gmail.com
WORKDIR /home/docker_test/
RUN docker build -t Docker1 .
RUN docker run -it Docker1
I want to do something like this, but it throws: docker: Error response from daemon: OCI runtime create failed: container_linux.go ...
How can I do this? Where am I going wrong?
P.S. I'm new to Docker.
Your example mixes two steps, image creation and running an image, which can't be combined that way in a Dockerfile.
Image creation
A Dockerfile is used to create an image. Let's take this Alpine 3.8 Dockerfile as a minimal example:
FROM scratch
ADD rootfs.tar.xz /
CMD ["/bin/sh"]
It's a base image; it's not based on another image, it starts FROM scratch.
Then a tar file is copied and unpacked (see ADD), and the shell is set as the starting command (see CMD). You can build this with:
docker build -t test_image .
Run this from the same folder where the Dockerfile is. You will also need rootfs.tar.xz in that folder; copy it from the alpine link above.
Running a container
From that test_image you can now spawn a container with
docker run -it test_image
It will start up and give you the shell inside the container.
Docker Compose
Usually there is no need to build your images over and over again before spawning a new container. But if you really need to, you can do it with docker-compose. Docker Compose is intended to define and run a service stack consisting of several containers. The stack is defined in a docker-compose.yml file.
version: '3'
services:
  alpine_test:
    build: .
build: . lets Compose build the image before starting up, but usually it is sufficient to have just image: <image_name> and use an already existing image.
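Note that docker-compose only builds the image automatically if it does not exist yet; to force a rebuild on every start, pass the flag explicitly:

docker-compose up --build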
I am new to Docker; I installed it as per the instructions provided on the official site.
# build docker images
docker build -t iky_backend:2.0.0 .
docker build -t iky_gateway:2.0.0 frontend/.
Now, when I run these commands in the terminal after installing Docker, I get the error below. I tried adding sudo as well, but no use.
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /home/esh/Dockerfile: no such file or directory
Your docker build commands should execute just fine (they may require sudo if you are unable to connect to the Docker daemon).
docker build requires a Dockerfile to be present in the directory you pass as the build context (you are executing from your home folder; don't do that), or you need to use -f to specify the Dockerfile path instead.
Try this:
mkdir build
cd build
# create your Dockerfile here
docker build -t iky_backend:2.0.0 .
docker images
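For the second image, the equivalent -f form would be (assuming the Dockerfile lives in frontend/):

docker build -t iky_gateway:2.0.0 -f frontend/Dockerfile frontend/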