Will GitLab docker executor open login shells? - docker

I've installed GitLab's omnibus installer as a docker container, as well as the GitLab runner in another container. I registered the runner to use the docker executor with a default image that I've built on top of Ubuntu 18.04.
The image that I've built has rvm and nvm for installing different versions of Ruby and Node. However, when GitLab tries to run my CI pipeline on that image, it can't launch the rvm or nvm commands.
$ nvm install $NODE_VERSION
bash: line 115: nvm: command not found
I think the default installations of both rvm and nvm require you to launch a login shell in order to make the rvm and nvm commands available.
After a bunch of messing with .bashrc, I got the rvm command to work, but is there a way to just make the CI pipelines launch in login shells?
Is there a way to dump the exact command that the GitLab runner is using to launch the CI pipeline? I think I found the source code, but it would be nice to just dump the docker command arguments. I tried setting the log_level to debug, but I didn't notice any difference.
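For reference, the missing-command symptom can be reproduced locally: rvm and nvm hook themselves into ~/.bash_profile (or /etc/profile.d), which only login shells read, while the docker executor starts a plain non-login bash. A minimal check:

```shell
# bash sources /etc/profile and ~/.bash_profile only when invoked as a
# *login* shell; that is where rvm/nvm normally define their commands.
bash -c 'shopt -q login_shell && echo login || echo non-login'
# prints "non-login" (what CI jobs get)
bash --login -c 'shopt -q login_shell && echo login || echo non-login'
# prints "login"
```

A common workaround (my suggestion, not something GitLab documents for this exact setup) is to source the tool's init script explicitly in before_script, e.g. `. "$NVM_DIR/nvm.sh"`, so the shell functions exist without needing a login shell.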

Related

How to run docker build command in Jenkins shell prompt

I want to run the docker build command in the Jenkins shell prompt.
I have already installed Docker and added the jenkins user to the docker group.
But when I run the docker build command it fails with a permission-denied error, and when I use the sudo prefix it asks for a password (with the -S argument).
I am running all commands on the Jenkins master; earlier I ran them on another node server, not the master.
What is the best way to resolve this?
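When the permission-denied error appears even though the user was added to the docker group, the usual reason is that the Jenkins process started before the group change and has not picked it up. A sketch of the common fix, assuming a systemd-based master and a service user named jenkins (host-setup commands, not meant to run inside a job):

```shell
sudo usermod -aG docker jenkins   # add the jenkins user to the docker group
sudo systemctl restart jenkins    # restart so the new membership takes effect
id -nG jenkins                    # verify: the output should include "docker"
```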

Install package in running docker container

I've been using a docker container to build the Chromium browser (building for Android on Debian 10). I've already created a Dockerfile that contains most of the packages I need.
Now, after building and running the container, I followed the instructions, which asked me to execute an install script (./build/install-build-deps-android.sh). In this script multiple apt install commands are executed.
My question now is, is there a way to install these packages without rebuilding the container? Downloading and building it took rather long, plus rebuilding a container each time a new package is required seems kind of suboptimal. The error I get when executing the install script is:
./build/install-build-deps-android.sh: line 21: lsb_release: command not found
(I guess there will be multiple missing packages). And using apt will give:
root@677e294147dd:/android-build/chromium/src# apt install nginx
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package nginx
(nginx is just an example install).
I'm thankful for any hints, as I could only find guides that use the Dockerfile to install packages.
You can use docker commit:
Start your container sudo docker run IMAGE_NAME
Access your container using bash: sudo docker exec -it CONTAINER_ID bash
Install whatever you need inside the container
Exit container's bash
Commit your changes: sudo docker commit CONTAINER_ID NEW_IMAGE_NAME
If you now run docker images, you will see NEW_IMAGE_NAME listed among your local images.
Next time, when starting the docker container, use the new docker image you just created:
sudo docker run NEW_IMAGE_NAME (this one will include your additional installations).
Answer based on the following tutorial: How to commit changes to docker image
Thanks to @adnanmuttaleb and @David Maze (unfortunately, they only replied, so I cannot accept their answers).
What I did was to edit the Dockerfile for any later updates (which already happened), and use the exec command to install the needed dependencies from outside the container. Also remember to
apt update
otherwise you cannot find anything...
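Installing from outside the container with exec, as described above, can look like this (the container name and package are placeholders):

```shell
# Refresh the package lists first, then install into the running container.
docker exec chromium-build apt-get update
docker exec chromium-build apt-get install -y lsb-release
```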
A slight variation of the steps suggested by Arye that worked better for me:
Create a container from the image and access it in interactive mode: docker run -it IMAGE_NAME /bin/bash
Modify container as desired
Leave container: exit
List launched containers: docker ps -a and copy the ID of the container just modified
Save to a new image: docker commit CONTAINER_ID NEW_IMAGE_NAME
If you haven't followed the Post-installation steps for Linux, you might have to prefix Docker commands with sudo.
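The whole modify-and-commit workflow from the steps above, as one example session (image name, package, and tag are placeholders):

```shell
docker run -it ubuntu:18.04 /bin/bash     # 1. create a container and enter it
# ... inside: apt-get update && apt-get install -y lsb-release; then exit
docker ps -a                              # 2. copy the ID of the container you just left
docker commit CONTAINER_ID my-image:deps  # 3. save it as a new image
docker run -it my-image:deps /bin/bash    # 4. new containers now include the package
```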

How does the Jenkins CloudBees Docker Build Plugin set its Shell Path

I'm working with a Jenkins install I've inherited. This install has the CloudBees Docker Custom Build Environment Plugin installed. We think this plugin gives us a nifty Build inside a Docker container checkbox in our build configuration. When we configure jobs with this option, it looks like (based on Jenkins console output) Jenkins runs the build with the following command
Docker container 548ab230991a452af639deed5eccb304026105804b38194d98250583bd7bb83q started to host the build
[WIP-Tests] $ docker exec -t 548ab230991a452af639deed5eccb304026105804b38194d98250583bd7bb83q /bin/sh -xe /tmp/hudson7939624860798651072.sh
However -- we've found that this runs /bin/sh with a very limited set of environmental variables -- including a $PATH that doesn't include /bin! So
How does the CloudBees Docker Custom Build Environment Plugin set up its /bin/sh environment? Is this user-configurable via the UI (if so, where)?
It looks like Jenkins is using docker exec, which I think means that it must have (invisibly) set up a container with a long-running process using docker run. Does anyone know how the CloudBees Docker Custom Build Environment Plugin invokes docker run, and whether this is user-manageable?
Considering this plugin is "up for adoption", I would recommend the official JENKINS/Docker Pipeline Plugin.
Its source code shows very few recent commits.
But don't forget any container has a default entrypoint set to /bin/sh
ENTRYPOINT ["/bin/sh", "-c"]
Then:
The docker container is run after SCM has been checked out into a slave workspace; then all later build commands are executed within the container, thanks to docker exec, introduced in Docker 1.3.
That means the image you will be pulling or building/running must include a shell, to allow docker exec to execute a command in it.
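For what it's worth, the /bin/sh -xe flags in the console output can be demonstrated locally (outside Docker); this is a sketch of the invocation the plugin hands to docker exec:

```shell
# -x traces each command before running it; -e aborts on the first failure.
printf 'echo hello\nfalse\necho never-reached\n' > /tmp/demo.sh
/bin/sh -xe /tmp/demo.sh
# traces "+ echo hello", prints "hello", then stops at "false";
# "never-reached" is not printed and the exit status is non-zero.
```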

docker supposedly installed but not runnable

If I do a docker command like
docker -version
I get the error that docker is not installed and that I can do sudo apt-get install docker to install it. If I do this, it says that docker is the latest version. Do I need to set some kind of path to the binary to get it to run?
If I do which docker, there is no answer.
I have found the answer to the question.
Apparently there is a package called "docker" which has nothing to do with Docker the container software, which is actually docker-ce. The application I had installed was the wrong docker, not the container software.
To install docker-ce there is a process given on Digital Ocean which can be used.
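The docker-ce procedure referred to there boils down to adding Docker's own apt repository; roughly (a sketch of the documented Ubuntu steps from that era; the URLs and key handling may have changed since, so check the current official instructions):

```shell
sudo apt-get remove docker                 # remove the unrelated "docker" package
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce
docker --version                           # should now report the real Docker engine
```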

Installing Docker Cloud CLI on Windows

I am new to Docker and am setting up the environment on my Windows 7 laptop to begin learning. I installed Docker through Docker Toolbox. To install the Docker Cloud CLI, I followed the official documentation:
https://docs.docker.com/docker-cloud/installing-cli/#install
I opened the quick start terminal and executed :
docker run dockercloud/cli -h
but when verifying the cloud version I get the error 'bash: docker-cloud: command not found'.
Then I tried installing it with the pip command, but that didn't work either.
I have below tools installed :
Python 2.7.13
Docker version 17.05.0-ce, build 89658be
docker-machine version 0.11.0, build 5b27455
and I have also verified that the docker engine is in running status.
Any help is appreciated.
The docker-cloud command is not installed as part of the Docker Toolbox installation.
The first command that you ran starts a docker container that does run the Docker Cloud CLI. Running this container is also not a way to install the docker-cloud command directly on your Windows host, but you can always invoke the same docker run dockercloud/cli command to run the CLI containerized.
Similarly, the pip command is not installed as part of the Docker Toolbox installation. pip is something I would expect to be installed if you install Python on your Windows system.
If you take a look at: https://docs.docker.com/docker-cloud/installing-cli/#install, the section for installing docker-cloud on windows does include this piece of advice:
If you do not have Python or pip installed, you can either install Python or use this standalone pip installer. You do not need Python for our purposes, just pip.
You did mention that you have python installed, but you are still getting "command not found" when you try to run pip. That could simply be a problem with the $PATH in the quickstart terminal. I would recommend trying the pip command from a powershell window rather than the quickstart terminal. If you do have the pip command somewhere on your system, make sure that the location it is installed to does appear in your $PATH in the bash inside the quickstart terminal.
Once you have pip installed, and in your $PATH, you should be able to run the pip install docker-cloud command.
It would also be a good idea to make sure that the directory holding the installed docker-cloud binary will also appear in your $PATH inside the quickstart terminal.
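Putting the answer together, the flow is (a sketch; the docker-cloud package was tied to the now-retired Docker Cloud service, so it may no longer install cleanly):

```shell
pip --version              # confirm pip is installed and on $PATH
pip install docker-cloud   # install the CLI from PyPI
docker-cloud --version     # confirm the installed binary ended up on $PATH
```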
