GitLab runner not executing jobs in docker image - docker

I created a minimal GitLab CI script to verify this error:
docker_execution_test:
  image: debian:9
  script:
    - pwd
    - ls
The output I would expect is this:
db@theia:~/git/docker_test (master*)$ docker run -it --rm debian:9 pwd
/
db@theia:~/git/docker_test (master*)$ docker run -it --rm debian:9 ls
bin dev home lib64 mnt proc run srv tmp var
boot etc lib media opt root sbin sys usr
However, the output when executed through gitlab-runner is this:
db@theia:~/git/docker_test (master)$ gitlab-runner exec docker docker_execution_test
Runtime platform arch=amd64 os=darwin pid=49585 revision=3afdaba6 version=11.5.0
WARNING: You most probably have uncommitted changes.
WARNING: These changes will not be tested.
Running with gitlab-runner 11.5.0 (3afdaba6)
Using Docker executor with image debian:9 ...
Pulling docker image debian:9 ...
Using docker image sha256:4879790bd60d439cfe39c063660eef7af525d5f6f1cbb701a14c7cfc11cbfcf7 for debian:9 ...
Running on runner--project-0-concurrent-0 via theia.local...
Cloning repository...
Cloning into '/builds/project-0'...
done.
Checking out bb973ec4 as master...
Skipping Git submodules setup
$ pwd
/builds/project-0
$ ls
README.md
Job succeeded
What the job is listing is the content of the special gitlab container that's used throughout the build. Why is the container not created? What am I missing here?

As it turns out, gitlab-runner was working as expected. What is quite confusing, though, is that it manipulates the image it boots up: the entrypoint is overridden, and the directory the repository is checked out into is mounted into the container, with the WORKDIR pointing to it.
So while it is possible to run your own images as containers, keep in mind that you may need to change directories before running any commands, as in the sketch below.
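For illustration, a minimal sketch of a job that inspects the image's own filesystem; the cd / line is the only addition to the script from the question (all script lines run in the same shell, so the cd persists):
docker_execution_test:
  image: debian:9
  script:
    - pwd   # prints /builds/project-0, the mounted checkout, not the image root
    - cd /  # change directories before inspecting the image itself
    - ls    # now lists the image's root filesystem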

Related

Docker bind mount directory in /tmp not working

I'm trying to mount a directory in /tmp to a directory in a container, namely /test. To do this, I've run:
docker run --rm -it -v /tmp/tmpl42ydir5/:/test alpine:latest ls /test
I expect to see a few files when I do this, but instead I see nothing at all.
I tried moving the folder into my home directory and running again:
docker run --rm -it -v /home/theolodus/tmpl42ydir5/:/test alpine:latest ls /test
at which point I see the expected output. This makes me think I have misconfigured something and/or the permissions have bitten me. Have I missed a step in installing docker? I did it via sudo snap install docker, and then configured docker to let me run as non-root by adding myself to the docker group. Running as root doesn't help...
Host machine is Ubuntu 20.04, docker version is 19.03.11
When running docker as a snap, the snap confinement requires all files that Docker uses, such as Dockerfiles, to be in $HOME.
Ref: https://snapcraft.io/docker
The /tmp filesystem simply isn't accessible to the docker engine when it runs within the snap's isolation. You can install docker on Ubuntu directly from the upstream Docker repositories for more traditional behavior.
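A minimal workaround sketch if you keep the snap-installed engine: move the bind-mount source under $HOME instead of /tmp (tmpl42ydir5 is the directory from the question):
mkdir -p "$HOME/tmpl42ydir5"
cp -r /tmp/tmpl42ydir5/. "$HOME/tmpl42ydir5/"    # relocate the files under $HOME
docker run --rm -it -v "$HOME/tmpl42ydir5:/test" alpine:latest ls /test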

How to docker cp correctly?

I am trying to copy a file from my host to my container. I have already checked out many threads, but none of them worked for me.
File name, I'm trying to copy: ex.txt
Container folder where it needs to be: my_folder
user:~$ docker exec -it my_container bash
a5b13d9a55fd:~$ ls
my_folder
What I have tried so far:
user:~$ docker cp ex.txt my_container:/my_folder/
no such directory
user:~$ docker cp ex.txt my_container:/my_folder/ex.txt
Error response from daemon: lstat /var/lib/docker/aufs/mnt/f7796d886aa3673be37b1d346190b7d6ba0ed64edf83bf62bff325f87eaec5eb/my_folder: no such file or directory
Can you suggest where I am going wrong?
EDIT: since the image seems to use a non-root user,
you may try this (note that $HOME is expanded by your host shell, so make sure it matches the container user's home directory):
docker cp ex.txt my_container:$HOME/my_folder/ex.txt
You should make sure that my_folder already exists in the container; to be sure, run this command first:
docker exec my_container_name mkdir -p $HOME/my_folder
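To check what the container user's home directory actually is, something like this works (the single quotes keep the host shell from expanding $HOME, so the shell inside the container expands it instead):
docker exec my_container sh -c 'echo $HOME'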
Please go through the official documentation properly.
Also check this out.
In your case this should work.
docker cp ex.txt my_container:/my_folder/
Update-1:
In your case, I suspect /my_folder is not present inside the container; that is what the error says.
Also, quoting the line mentioned in the official documentation:
docker cp does not create parent directories for DEST_PATH if they do
not exist.
So the /my_folder directory will not get created automatically.
Do this:
docker exec -it my_container mkdir /my_folder
and then run the docker cp command.
Update-2:
If nothing is working, then please try this; it worked for me.
$ cat /root/ex.txt
abc
$ docker run -itd alpine sh
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
921b31ab772b: Pull complete
Digest: sha256:ca1c944a4f8486a153024d9965aafbe24f5723c1d5c02f4964c045a16d19dc54
Status: Downloaded newer image for alpine:latest
35ad53b81c30f675b28a53e6a266f039cf49e90705d41e499deb4f17ab900255
$
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
35ad53b81c30 alpine "sh" 3 seconds ago Up 2 seconds mystifying_babbage
$
$ docker exec -it 35ad53b81c30 sh
/ # ls
bin dev etc home lib media mnt opt proc root run sbin srv sys tmp usr var
/ # mkdir /my_folder
$
$ docker cp /root/ex.txt mystifying_babbage:/my_folder/
$
$ docker exec -it 35ad53b81c30 sh
/ # ls /my_folder/
ex.txt
/ # cat /my_folder/ex.txt
abc
/ #
Hope this helps.
Also ensure there are no typos in the container name; otherwise the same error will show up.

How to copy SSH from JENKINS host into a DOCKER container?

I can't copy the file from the host into the container using the Dockerfile, because I'm simply not allowed to, as mentioned in the Docker documentation:
The path must be inside the context of the build; you cannot
COPY ../something /something, because the first step of a docker build
is to send the context directory (and subdirectories) to the docker
daemon.
I'm also unable to do so from inside the Jenkins job, because the job commands run inside the shell of the docker container; there is no way to talk to the parent (which is the Jenkins host).
This Jenkins plugin could have been a lifesaver, but as mentioned in its first section, distribution of this plugin has been suspended due to unresolved security vulnerabilities.
This is how I copy files from the host into a docker image using a Dockerfile.
I have a folder called tomcat
Inside that, I have a tar file and Dockerfile
Commands for the whole process, just for understanding:
$ pwd
/home/user/Documents/dockerfiles/tomcat/
$ ls
apache-tomcat-7.0.84.tar.gz Dockerfile
Sample Docker file:
FROM ubuntu_docker
COPY apache-tomcat-7.0.84.tar.gz /home/test/
...
Docker commands:
$ docker build -t testserver .
$ docker run -itd --name test1 testserver
$ docker exec -it test1 bash
Now you are inside the docker container:
# ls
apache-tomcat-7.0.84.tar.gz
As you can see I am able to copy apache-tomcat-7.0.84.tar.gz from host to Docker container.
Notice the first line of the Docker documentation you shared:
The path must be inside the context of the build;
So as long as the path is inside the build context, you can copy it.
Another way of doing this would be using a volume:
docker run -itd -v $(pwd)/somefolder:/home/test --name test1 testserver
Notice the -v parameter: you are telling Docker to bind-mount Current_Directory/somefolder to /home/test inside the container.
Once the container is up and running, you can simply copy any file into $(pwd)/somefolder on the host and it will appear inside the container at /home/test, as the sketch below shows.
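A quick sketch of that round trip, assuming the container was started with the -v flag shown above (id_rsa.pub is just an illustrative file name):
cp ~/.ssh/id_rsa.pub $(pwd)/somefolder/    # copy on the host side
docker exec test1 ls /home/test            # id_rsa.pub is now visible inside the container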

Unable to share/mount Volume with Docker Toolbox on Windows 10

I am trying to set up my project with docker. I am using Docker Toolbox on Windows 10 Home. I am very new to docker. To my understanding, I have to copy my files to a new container and add a volume so that I can persist changes made by gulp.
Here is my folder structure
-- src
|- dist
|- node-modules
|- gulpfile.js
|- package.json
|- Dockerfile
The Dockerfile code
FROM node:8.9.4-alpine
RUN npm install -g gulp
CMD [ "ls", 'source' ]
I tried many variations of docker run -v, e.g.:
docker run -v /$(pwd):/source <container image>
docker run -v //c/Users/PcUser/Desktop/proj:/source <container image>
docker run -v //d/proj:/source <container image>
docker run -v /d/proj:/source <container image>
But no luck.
Can anyone describe how you would set this up for yourself with the same structure? And why am I not able to mount my host folder?
P.S.: If I use two containers, one for compiling my code with gulp and one with nginx to serve the content of the dist folder, how will I do that?
@sxm1972 Thank you for your effort and help.
You are probably using Windows Pro or a server edition. I am using Windows 10 Home edition.
Here is how I solved it, so other people using same setup can solve their issue.
There may be a better way to solve this, please comment if there is an efficient way.
So...
First, the question: why don't I see my shared volume from the PC in my container?
Ans: If we use Docker's Boot2Docker with VirtualBox (which I am), then whenever a volume is mounted it refers to a folder inside the Boot2Docker VM, not on the Windows host.
Image: Result of -v with docker in VirtualBox Boot2Docker
So if we try $ ls on the mounted path, it will show an empty folder, which in my case it did.
So we have to actually mount the folder into the Boot2Docker VM if we want to share our files from the Windows environment with the container.
Image: Resulting Mounts Window <-> Boot2Docker <-> Container
To achieve this, we have to manually share the folder with the VM using the following command:
vboxmanage sharedfolder add default --name "<folder_name_on_vm>" --hostpath "<path_to_folder_on_windows>" --automount
Note: if you get an error running the command saying vboxmanage was not found, add the VirtualBox folder path to your system PATH. For me it was C:\Program Files\Oracle\VirtualBox.
After running the command, you'll see <folder_name_on_vm> in the VM's root. You can check it with docker-machine ssh default and then ls /. After confirming that the folder <folder_name_on_vm> exists, you can use it as a volume for your container:
docker run -it -v /<folder_name_on_vm>:/source <container> sh
Hope this helps...!
P.S.: If you are feeling lazy and don't want to mount a folder, you can place your project inside your C:/Users folder, as it is mounted by default on the VM as shown in the image.
The problem is that the base image you use runs the node REPL as its default command. If you run the image as docker run -it node:8.9.4-alpine you will see a node prompt, and it will not run the npm command like you want.
The way I worked around the problem is to create your own base image using the following Dockerfile:
FROM node:8.9.4-alpine
CMD ["sh"]
Build it as follows:
docker build -t mynodealpine .
Then build your image using this modified Dockerfile:
FROM mynodealpine
RUN npm install -g gulp
CMD [ "/bin/sh", "-c", "ls source" ]
For the problem regarding mounting of volumes, since you are using Docker for Windows, you need to go into Settings (click on the icon in the system tray) and then go and enable Shared Drives.
Here is the output I was able to get:
PS C:\users\smallya\testnode> dir
Directory: C:\users\smallya\testnode
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 2/18/2018 11:11 AM dist
d----- 2/18/2018 11:11 AM node_modules
-a---- 2/18/2018 11:13 AM 77 Dockerfile
-a---- 2/18/2018 11:12 AM 26 gulpfile.js
-a---- 2/18/2018 11:12 AM 50 package.json
PS C:\users\smallya\testnode> docker run -it -v c:/users/smallya/testnode:/source mynodealpinenew
Dockerfile dist gulpfile.js node_modules package.json
PS C:\users\smallya\testnode>
Thanks for the question. A possibly simpler configuration via the VirtualBox graphical dialogue worked for me, without using the command line, albeit maybe not necessarily more versatile:
configure the shared folder inside the VirtualBox shared folders configuration dialogue,
and then call the mount like this:
docker run --volume //d/docker/nginx:/etc/nginx
I will be binding the /etc/nginx directory in my container to
D:\Program Files\Docker Toolbox\nginx
source:
https://medium.com/@Charles_Stover/fixing-volumes-in-docker-toolbox-4ad5ace0e572

Apply changes to docker container after 'exec' into it

I have successfully shelled into a RUNNING docker container using
docker exec -i -t 7be21f1544a5 bash
I have made some changes to some json files and want to apply these changes so they are reflected online.
I am a beginner and have tried restarting and mounting, in vain. What strings do I have to replace when I mount using docker run?
Is there any online sample?
CONTAINER ID: 7be21f1544a5
IMAGE: gater/web
COMMAND: "/bin/sh -c 'nginx'"
CREATED: 4 weeks ago
STATUS: Up 44 minutes
PORTS: 443/tcp, 172.16.0.1:10010->80/tcp
NAMES: web
You can either create a Dockerfile and run:
docker build .
from the same directory where your Dockerfile is located.
or you can run:
docker run -i -t <docker-image> bash
or (if your container is already running)
docker exec -i -t <container-id> bash
Once you are in the shell, make all the changes you please. Then run:
docker commit <container-id> myimage:0.1
You will have a new docker image locally, myimage:0.1. If you want to push it to a docker repository (Docker Hub or your private docker repo), you can run:
docker push myimage:0.1
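Note that pushing to Docker Hub requires a repository-qualified name; a quick sketch, assuming a hypothetical Docker Hub account myuser:
docker tag myimage:0.1 myuser/myimage:0.1    # qualify the image with your account name
docker push myuser/myimage:0.1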
There are two ways to do it:
Dockerfile approach
You need to know what changes you have made to the container after exec'ing into it, and you also need the Dockerfile of the image.
Let's say you installed an additional rpm using yum after entering the container (yum install perl-HTML-Format) and updated some file, say /opt/test.json, inside the container (take a backup of this file on the Docker host, either in some directory or in the directory where the Dockerfile exists).
You can place the above commands/steps in the Dockerfile as:
RUN yum install -y perl-HTML-Format
COPY /docker-host-dir/updated-test.json /opt/test.json
Once you have updated the Dockerfile, create the new image and push it to the Docker repository:
docker build -t test_image .
docker push test_image:latest
You can save the updated Dockerfile for future use.
Docker commit command approach
After you have made the changes to the container, use the commands below to create a new image from the container's changes and push it online:
docker commit container-id test_image
docker push test_image
docker commit --help
Usage: docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
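For example, a sketch using the container ID and image name from the question (the -m flag records a commit message):
docker commit -m "updated json files" 7be21f1544a5 gater/web:0.2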
You don't want to do that. After you have figured out what you need, throw away the running container (docker rm -f 7be21f1544a5), repeat the changes in the Dockerfile, and build a new image to run.
