TeamCity build won't run until the build agent is configured with Docker? - docker

I created a new build for my TeamCity pipeline. For the first time I am using the Docker build step. After I set everything up I realized the build agent does not seem to be ready for it.
I understand that my agent does not seem to be ready for building with Docker, but nothing actually tells me how to get there. I read the official guides, but there is no word on how to actually install Docker into my agent (if that's the way to solve the problem).
Can someone tell me what I have to do to get it to work?
EDIT
Senior Pomidor helped me get one step closer. I added his first example to the docker run command:
docker run -it -e SERVER_URL="<url to TeamCity server>" \
--privileged -e DOCKER_IN_DOCKER=start \
jetbrains/teamcity-agent
After doing so I got rid of the messages mentioned in the screenshot. My agent's configuration now has the following:
docker.server.osType linux
docker.server.version 18.06.1
docker.version 18.06.1
But TeamCity is still complaining with this message:
Which kind of leaves me clueless again.
Final Solution:
The issue described in EDIT2 below could be resolved by simply restarting the TeamCity server instance. The agent was actually able to run the build, but TeamCity was not able to realize that without a restart.
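For reference, restarting the server container started with the command shown in EDIT2 below is a single command:
docker restart teamcity-server-instance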
EDIT2
Request Information:
My CI Server OS:
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
Running Container:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0f8e0b04d6a6 jetbrains/teamcity-agent "/run-services.sh" 19 hours ago Up 19 hours 9090/tcp teamcity-agent
20964c22b2d9 jetbrains/teamcity-server "/run-services.sh" 37 hours ago Up 37 hours 0.0.0.0:80->8111/tcp teamcity-server-instance
Container run by:
## Server
docker run -dit --name teamcity-server-instance -v /data/teamcity:/data/teamcity_server/datadir -v /var/log/teamcity:/opt/teamcity/logs -p 80:8111 jetbrains/teamcity-server
## Agent
docker run -itd --name teamcity-agent -e SERVER_URL="XXX.XXX.XXX.XXX:80" --privileged -e DOCKER_IN_DOCKER=start -v /etc/teamcity/agent/conf:/data/teamcity_agent/conf jetbrains/teamcity-agent
Build Step Information:

TeamCity restricted the configuration because the agent doesn't start a Docker daemon.
You should pass -e DOCKER_IN_DOCKER=start to automatically start the Docker daemon in the container. The Docker client also needs the Docker socket. In a Linux container, if you need a Docker daemon available inside your builds, you have two options:
--privileged flag: a new Docker daemon running within your container
-v /var/run/docker.sock:/var/run/docker.sock: Docker from the host (in this case you will benefit from the caches shared between the host and all your containers, but there is a security concern: your build may actually harm your host Docker, so use it at your own risk)
Examples:
docker run -it -e SERVER_URL="<url to TeamCity server>" \
--privileged -e DOCKER_IN_DOCKER=start \
jetbrains/teamcity-agent
docker run -it -e SERVER_URL="<url to TeamCity server>" \
-v /var/run/docker.sock:/var/run/docker.sock \
jetbrains/teamcity-agent
UPD
docker.server.osType is required because linux was set in the build step

What worked for me was changing permissions on the agent container for /var/run/docker.sock
Run a shell inside the container:
docker exec -u 0 -it <CONTAINER_ID> bash
Change permissions of the docker socket:
chmod 666 /var/run/docker.sock
Verify that the Docker client in the container can use the socket:
docker version
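The same fix can also be applied without an interactive shell, one command per step (a sketch, using the same <CONTAINER_ID> placeholder as above):
docker exec -u 0 <CONTAINER_ID> chmod 666 /var/run/docker.sock
docker exec <CONTAINER_ID> docker version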

Related

Raspberry PI cannot run gitlab image on docker

I am trying to run a GitLab Docker image on my Raspberry Pi.
Versions:
Raspbian 10 (buster)
Docker 20.10.8, API 1.41
GitLab CE 13.10.0-ce.0 from this image: ulm0/gitlab 12.7.2
I am simply using a docker command to run GitLab:
sudo docker run --name gitlab \
-p 10080:80 -p 10022:22 -p 10443:443 \
-v /srv/gitlab/config:/etc/gitlab \
-v /srv/gitlab/logs:/var/log/gitlab \
-v /srv/gitlab/data:/var/opt/gitlab \
-v /srv/gitlab/logs/reconfigure:/var/log/gitlab/reconfigure \
ulm0/gitlab
After running the command, sudo docker logs gitlab gave me something like this:
Configure GitLab for your system by editing /etc/gitlab/gitlab.rb file
And restart this container to reload settings.
To do it use docker exec:
docker exec -it gitlab vim /etc/gitlab/gitlab.rb
docker restart gitlab
For a comprehensive list of configuration options please see the Omnibus GitLab readme
https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/README.md
If this container fails to start due to permission problems try to fix it by executing:
docker exec -it gitlab update-permissions
docker restart gitlab
But after running docker exec -it gitlab update-permissions I got this:
Error response from daemon: Container 110f1def3f669d8d180bf552aa63e50c0e4c857f8bd1ab2745a677454fef04b0
is restarting, wait until the container is running
When I ran the permissions command right after the container started, I got unable to upgrade to tcp, received 409, and now I'm stuck because I cannot even get into the container; it's restarting all the time. I tried to change the port to a more custom one, but that's also dead.
I was using the latest version, but I changed it to ulm0/gitlab:12.10.0 and it works. Sounds like a bug in the new version.
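For completeness, a sketch of the original run command with the pinned tag (same ports and volumes as in the question):
sudo docker run --name gitlab \
-p 10080:80 -p 10022:22 -p 10443:443 \
-v /srv/gitlab/config:/etc/gitlab \
-v /srv/gitlab/logs:/var/log/gitlab \
-v /srv/gitlab/data:/var/opt/gitlab \
-v /srv/gitlab/logs/reconfigure:/var/log/gitlab/reconfigure \
ulm0/gitlab:12.10.0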

Nexus container exit(1) when run

I am trying to run Nexus on an EC2 Ubuntu machine.
docker pull sonatype/nexus3
docker run -d -p 8081:8081 --name nexus sonatype/nexus3
running containers
# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a0562da202f7 sonatype/nexus3 "sh -c ${SONATYPE_DI…" 7 seconds ago Exited (1) 5 seconds ago nexus
#
Please do let me know what is going wrong here.
I tried to reproduce the problem and faced the same issue; I resolved it by setting these variables:
docker run -it --rm -p 8081:8081 --name nexus -e INSTALL4J_ADD_VM_PARAMS="-Xms2g -Xmx2g -XX:MaxDirectMemorySize=3g -Djava.util.prefs.userRoot=/some-other-dir" sonatype/nexus3
Also, you can read the system requirements.
Notes
Our system requirements should be taken into account when provisioning the Docker container.
There is an environment variable, INSTALL4J_ADD_VM_PARAMS, that is used to pass JVM arguments to the Install4J startup script. It defaults to -Xms2703m -Xmx2703m -XX:MaxDirectMemorySize=2703m -Djava.util.prefs.userRoot=${NEXUS_DATA}/javaprefs.
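A minimal sketch that also persists the repository data, using the /nexus-data path documented for the sonatype/nexus3 image (the volume name nexus-data is just an example):
docker volume create nexus-data
docker run -d -p 8081:8081 --name nexus -v nexus-data:/nexus-data sonatype/nexus3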

Install Docker in Alpine Docker

I have a Dockerfile with a classic Ubuntu base image and I'm trying to reduce the size.
That's why I'm using Alpine base.
In my Dockerfile, I have to install Docker, so Docker in Docker.
FROM alpine:3.9
RUN apk add --update --no-cache docker
This works well; I can run docker version inside my container, at least for the client. For the server I get the classic Docker error saying:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I know in Ubuntu after installing Docker I have to run
usermod -a -G docker $USER
But what about in Alpine? How can I avoid this error?
PS:
My first idea was to re-use the Docker socket by bind-mounting /var/run/docker.sock:/var/run/docker.sock for example and thus reduce the size of my image even more, since I don't have to reinstall Docker.
But as bind mounts are not allowed in a Dockerfile, do you know if my idea is possible and how to do it? I know it's possible with docker-compose, but I have to use a Dockerfile only.
Thanks
I managed to do that the easy way
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker --privileged docker:dind sh
I am using this command on my test env!
You can do that, and your first idea was correct: you just need to expose the Docker socket (/var/run/docker.sock) to the "controlling" container. Do that like this:
host:~$ docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
<my_image>
host:~$ docker exec -u root -it <container id> /bin/sh
Now the container should have access to the socket (I am assuming here that you have already installed the necessary docker packages inside the container):
root@guest:/# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED ...
69340bc13bb2 my_image "/sbin/tini -- /usr/…" 8 minutes ago ...
Whether this is a good idea or not is debatable. I would suggest not doing this if there is any way to avoid it. It's a security hole that essentially throws out the window some of the main benefits of using containers: isolation and control over privilege escalation.
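Coming back to the image-size question: if you only need the client inside the image and plan to mount the host socket at run time, a minimal sketch could look like this (it assumes a newer Alpine release where the CLI is packaged separately as docker-cli):
FROM alpine:3.12
# Only the Docker CLI; the daemon is provided by the host through the mounted socket
RUN apk add --update --no-cache docker-cli
and then run it with the socket mounted:
docker run -it -v /var/run/docker.sock:/var/run/docker.sock <my_image> sh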

Why can I not run an X11 application?

So, as the title states, I'm a docker newbie.
I downloaded and installed the archlinux/base container, which seems to work great so far. I've set up a few things and installed some packages (including xeyes), and I now would like to launch xeyes. For that I found out the CONTAINER ID by running docker ps and then used that ID in my exec command, which now looks like:
$ docker exec -it -e DISPLAY=$DISPLAY 4cae1ff56eb1 xeyes
Error: Can't open display: :0
Why does it still not work though? Also, how can I stop my running instance without losing its configured state? Previously I have exited the container and all my configuration and software installations were gone when I restarted it. That was not desired. How do I handle this correctly?
Concerning the X Display you need to share the xserver socket (note: docker can't bind mount a volume during an exec) and set the $DISPLAY (example Dockerfile):
FROM archlinux/base
RUN pacman -Syyu --noconfirm xorg-xeyes
ENTRYPOINT ["xeyes"]
Build the docker image: docker build --rm --network host -t so:57733715 .
Run the docker container: docker run --rm -it -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY so:57733715
Note: in case of No protocol specified errors you could disable host access control with xhost +, but there is a warning attached to that (see man xhost for additional information).
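A slightly narrower alternative to a blanket xhost + is to allow only local clients and revoke the permission afterwards (a sketch; adjust to your own access-control needs):
xhost +local:
docker run --rm -it -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY so:57733715
xhost -local: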

jenkins in docker - Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

I'm running https://hub.docker.com/r/jenkinsci/blueocean/ in Docker and trying to build a Docker image in Jenkins.
But I get the following error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Clearly the Jenkins running in Docker does not have access to the Docker binary.
I confirmed this by:
docker exec -it db4292380977 bash
docker images
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
"db4292380977" is the running container. It shows the same error.
Question:
how do I allow access to docker in the jenkins container?
The Docker client is installed in the jenkinsci/blueocean image, but not the daemon. The Docker client talks to the daemon (by default via the socket unix:///var/run/docker.sock) and needs a running daemon in order to work; you can read Docker Architecture for more info.
What you can do:
Use docker-in-docker (DinD) image
The library docker image provides a way to run a Docker daemon in Docker; you can then use it from another container. For example, using the plain docker CLI:
docker run --name docker-dind --privileged -d docker:stable-dind
docker run --name jenkins --link=docker-dind -d jenkinsci/blueocean
docker exec jenkins docker -H docker-dind images
REPOSITORY TAG IMAGE ID CREATED SIZE
The Docker daemon runs in the docker-dind container and can be reached using that hostname. You just need to provide the Docker client with the daemon host (-H docker-dind in the example; you can also use the DOCKER_HOST env variable as described in the docs).
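For example, the same wiring with the DOCKER_HOST variable instead of -H could look like this (a sketch; it assumes the dind daemon listens on the default unencrypted port 2375, which newer docker:dind images only do when TLS is explicitly disabled):
docker run --name jenkins --link=docker-dind \
  -e DOCKER_HOST=tcp://docker-dind:2375 -d jenkinsci/blueocean
docker exec jenkins docker images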
Mount host machine /var/run/docker.sock in your container
As described in Herman Garcia's answer below:
docker run -p 8080:8080 --user root \
-v /var/run/docker.sock:/var/run/docker.sock jenkinsci/blueocean
You need to mount your local /var/run/docker.sock and run the container as the root user.
NOTE: this might be a security flaw, so be careful who has access to the Jenkins container.
docker run -p 8080:8080 --user root \
-v /var/run/docker.sock:/var/run/docker.sock jenkinsci/blueocean
You will be able to execute docker inside the container:
➜ ~ docker exec -it gracious_agnesi bash
bash-4.4# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c4dc85b0d88c jenkinsci/blueocean "/sbin/tini -- /usr/…" 18 seconds ago Up 16 seconds 0.0.0.0:8080->8080/tcp, 50000/tcp gracious_agnesi
Just try the same commands but with sudo at the beginning.
For example
sudo docker images
sudo docker exec -it db4292380977 bash
To avoid using sudo in the future, you should run this command on your Unix OS:
sudo usermod -aG docker <your-user>
Replace <your-user> with the user you are using at the moment. Remember to log out and back in for this to take effect! More information about the Docker installation is available in the Docker docs.
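After logging back in, you can verify that the group change took effect and that plain docker works without sudo (hello-world is just a throwaway test image):
groups
docker run --rm hello-world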
