Why doesn't TeamCity recognize docker server properties with the jetbrains/agent image?

I have an ECS Fargate service running the jetbrains/teamcity-agent image. It is connected to my TeamCity host, which runs on a Windows EC2 instance.
When I check whether the agent is capable of running Docker commands, it shows the following errors:
Unmet requirements:
docker.server.osType contains linux
docker.server.version exists
Under Agent Parameters -> Configuration Parameters, I can see the docker version and dockerCompose.version parameters set properly. Is there a setting that I am missing?

If you are trying to access a Docker socket in Fargate: Fargate does not support running Docker commands, and there is a proposed ticket for this feature.
The issue with "docker.server.osType" not showing up usually means that the docker command run from the agent cannot connect to the running Docker daemon. This is usually due to a lack of permissions, as Docker by default only allows connections from root and from users in the docker group.
See: TeamCity - Unmet requirements: docker.server.osType contains linux
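To confirm it is a connectivity or permissions problem, a quick diagnostic (a sketch; run it inside the agent container, e.g. via ECS Exec) is:

ls -l /var/run/docker.sock   # does the socket exist, and which group owns it?
id                           # is the current user root, or in the docker group?
docker info                  # an error here means the daemon is unreachable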

I was facing similar issues and got them fixed by adding the build agent user to the docker group and rebooting the server.
Here the build agent user means the user that your TeamCity services run as.
Command to add the user to the group:
# usermod -aG docker <your-build-agent-user>
Command to reboot the server:
#init 6
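After the reboot, you can verify the fix as the build agent user (a quick check, nothing more):

id -nG        # the output should now include docker
docker info   # should print server details instead of a permission error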

Related

Docker socket is not found while using Intellij IDEA and Docker desktop on MacOS

I downloaded Docker using Docker Desktop for Apple M1 chips. I can run containers, and the integration with VS Code works fine, but I can't integrate it with IntelliJ IDEA Ultimate; it keeps giving the error below.
I can run my containers and create images from the terminal, and I can also see the containers and images in Docker Desktop. What could be the reason behind this? I also checked whether /var/run/docker.sock exists, and it really doesn't; there is no such file.
I tried the same steps on my second computer and the exact same thing happened. Steps to reproduce: 1- Download IntelliJ IDEA Ultimate and open a repo that uses Docker 2- Download Docker Desktop for Mac M1 3- Try to add the Docker service to IntelliJ
I didn't do anything else, because I think Docker Desktop is enough to configure everything on Mac. I am trying to run an FT in IntelliJ and I get the error:
[main] ERROR o.t.d.DockerClientProviderStrategy - Could not find a valid Docker environment. Please check configuration. Attempted configurations were:
[main] ERROR o.t.d.DockerClientProviderStrategy - UnixSocketClientProviderStrategy: failed with exception InvalidConfigurationException (Could not find unix domain socket). Root cause NoSuchFileException (/var/run/docker.sock)
[main] ERROR o.t.d.DockerClientProviderStrategy - DockerMachineClientProviderStrategy: failed with exception ShellCommandException (Exception when executing docker-machine status ). Root cause InvalidExitValueException (Unexpected exit value: 1, allowed exit values: [0], executed command [docker-machine, status, ], output was 122 bytes:
Docker machine "" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.)
[main] ERROR o.t.d.DockerClientProviderStrategy - As no valid configuration was found, execution cannot continue
I've been trying everything for the last 2 days but I can't seem to find a solution.
EDITED 2022-10-31
As per the release notes for Docker Desktop 4.13.1, there is no need to create the symlink anymore; citing the notes:
Added back the /var/run/docker.sock symlink on Mac by default, to increase compatibility with tooling like tilt and docker-py. Fixes docker/for-mac#6529.
The official fix now is to UPGRADE your Docker Desktop installation.
For the Docker Desktop (4.13.0) version:
By default Docker will not create the /var/run/docker.sock symlink on the host and will use the docker-desktop CLI context instead. (See: https://docs.docker.com/desktop/release-notes/)
That will prevent IntelliJ from finding Docker using the default context.
You can see the current contexts on your machine by running docker context ls, which should produce output like:

NAME              TYPE   DESCRIPTION                               DOCKER ENDPOINT                                KUBERNETES ENDPOINT                                  ORCHESTRATOR
default           moby   Current DOCKER_HOST based configuration   unix:///var/run/docker.sock                    https://kubernetes.docker.internal:6443 (default)    swarm
desktop-linux *   moby                                             unix:///Users/<USER>/.docker/run/docker.sock
As a workaround that allows IntelliJ to connect to Docker, you can tick the TCP Socket checkbox and put into the Engine API URL field the value that appears under DOCKER ENDPOINT in the active context.
For this example that is: unix:///Users/<USER>/.docker/run/docker.sock
IntelliJ will then be able to connect to Docker Desktop.
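If you'd rather not read the endpoint out of the table by eye, the active context can be queried directly (a small sketch using the standard docker context subcommand):

# Prints the Docker endpoint of the currently active context,
# e.g. unix:///Users/<USER>/.docker/run/docker.sock
docker context inspect --format '{{.Endpoints.docker.Host}}'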
**Hacky option**
Another way to make IntelliJ (and other components that rely on the default configuration) find Docker is to manually create a symlink to the new DOCKER ENDPOINT by running:
sudo ln -svf /Users/<USER>/.docker/run/docker.sock /var/run/docker.sock
In that way all the components looking for Docker under /var/run/docker.sock will find it.
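You can confirm the link is in place with a quick check:

ls -l /var/run/docker.sock
# lrwxr-xr-x ... /var/run/docker.sock -> /Users/<USER>/.docker/run/docker.sock

Keep in mind that macOS may clean out /var/run on reboot, in which case the symlink has to be recreated.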

Bitbucket pipelines: problem cloning source repository on self-hosted runner with dind

I work on a project where we have specific computers (with configured specs) which pair with our products (hardware). My aim is to configure a self-hosted runner so that we can run our test suite on a real product, with the same computer and environment that our customers have.
I've been able to set up a self-hosted runner with this configuration:
runs-on:
  - "self.hosted"
  - "ubuntu18.04"
But this gives me a Docker container on my self-hosted machine.
Q1) Is there any way to use bitbucket-pipelines to run the CI on the self-hosted machine itself, rather than in a Docker container inside the self-hosted machine?
My research suggested the answer to Q1 is no, so I continued developing with the Docker container provided by Bitbucket for self-hosted runners.
The problem then was that I did not have the environment I needed in the Docker container provided by Bitbucket. Configuring the environment with a bash script is doable, but not an attractive option because it increases build time.
I then found the Docker-in-Docker (dind) option, so that I can run my normal build environment inside the self-hosted runner environment. This essentially involved adding a CLONE_IMAGE argument to the docker command provided by Bitbucket, roughly as sketched below.
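For context, the runner launch then looks roughly like this (a reconstruction from the description above; the base flags follow Bitbucket's runner setup command, the {...} placeholders are your own runner's values, and CLONE_IMAGE is the asker's addition, not an officially documented option):

docker container run -it \
  -v /tmp:/tmp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e ACCOUNT_UUID={...} \
  -e RUNNER_UUID={...} \
  -e OAUTH_CLIENT_ID={...} \
  -e OAUTH_CLIENT_SECRET={...} \
  -e WORKING_DIRECTORY=/tmp \
  -e CLONE_IMAGE=my-build-env:latest \
  docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner:1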
We now reach my current problem: I get the following error in bitbucket-pipelines:
# under build tab
GIT_LFS_SKIP_SMUDGE=1 GIT_SSL_NO_VERIFY=true retry 6 git clone --branch="feature/testing-infrastructure" https://x-token-auth:$REPOSITORY_OAUTH_ACCESS_TOKEN@bitbucket.org/$BITBUCKET_REPO_FULL_NAME.git $BUILD_DIR
/tmp/7b85a726-7c76-55d5-b4f4-d7fe74abc8e0/tmp/cloneScript5351248756573077914.sh: line 13: retry: command not found
# under the docker tab
time="2022-07-13T13:38:41.939639712Z" level=info msg="Starting up"
time="2022-07-13T13:38:41.940255296Z" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
time="2022-07-13T13:38:41.940338812Z" level=warning msg="Binding to IP address without --tlsverify is insecure and gives root access on this machine to everyone who has access to your network." host="tcp://0.0.0.0:2375"
time="2022-07-13T13:38:41.940347755Z" level=warning msg="Binding to an IP address, even on localhost, can also give access to scripts run in a browser. Be safe out there!" host="tcp://0.0.0.0:2375"
The logging output from the docker container on the self-hosted runner is:
Completing step with result Result{status=ERROR, error=Some(Error{key='runner.bitbucket-pipelines.clone-container-failure', message='We couldn't clone the repository. Try rerunning the pipeline.', arguments={}})}.
Q2) What is the problem? What is this retry command that can't be found? I tried sudo apt install retry, which installed something, but it is not the program that is needed here.
(Apologies for the formatting; it seems bitbucket-questions does not want to let me have any more code blocks.)

pending jenkins doesn't have label docker-slave

I am trying to configure a Jenkins slave as a Docker container. I have enabled the Docker API and the connection to the API works fine.
I have added the configuration for the Docker template and the Docker cloud, but it seems that my job does not start.
I can see the container getting created on my Docker node, but the job does not start.
[Docker cloud configuration screenshot]
[Docker template screenshot]
One thing to note: when I run the container manually on the Docker node and then try to SSH in using the same credentials that I am using in Jenkins, I can SSH into the container.
This message of "Jenkins doesn't have label XXXX" is rather misleading and unhelpful.
You think the problem is something you did wrong in your configuration, and when you find out what happened, it has nothing to do with Jenkins or how you set up the Docker plugin.
I ran into the same problem as you, and the cause was the Docker installation I was using.
The steps I followed to fix it were:
(I was using CentOS 7, Jenkins 2.1.38, Docker version 1.13.1)
1) Go to the Jenkins logs (on CentOS the logs are at /var/log/jenkins.log)
2) Looking into the logs, you are going to find the problem. For instance, for me it was this:
com.github.dockerjava.api.exception.NotFoundException: {"message":"driver failed programming external connectivity on endpoint happy_heyrovsky (cbfa0d43f8c89d2531323249468503be11e9dd603597a870530d28540c662695): exec: \"docker-proxy\": executable file not found in $PATH"}
As you see, the problem is that Docker is not able to find docker-proxy. How do you fix this?
Go to /usr/libexec/docker and you will see docker-proxy-current, so what you have to do is create a link:
sudo ln -s docker-proxy-current docker-proxy
That's all. After making this change I executed my build on Jenkins and it worked.
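As a quick sanity check (a sketch; any image that publishes a port will do, since docker-proxy is what wires up published ports):

ls -l /usr/libexec/docker/docker-proxy    # should point to docker-proxy-current
sudo docker run --rm -d -p 8080:80 nginx  # reproduces the $PATH error above if the link is missing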

Jenkins pipeline: docker.withServer(...) does not execute docker commands on remote server

I'm using Docker Pipeline Plugin version 1.10.
I have my Jenkins installed in a container. I have a remote server that runs a Docker daemon. The daemon is reachable from the Jenkins machine via TCP (tested). I disabled TLS security on the Docker daemon.
I'm not able to make the docker.withServer(...) step work.
As a basic test I simply put the following content in a Jenkinsfile (if I'm correct, this is valid pipeline content):
docker.withServer('tcp://my.docker.host:2345') {
    def myImage = docker.build('myImage')
}
When the pipeline executes I get this error: script.sh: line 2: docker: command not found, as if the docker command was still being executed locally (there is no docker command installed locally) rather than against my remote Docker daemon.
Am I missing anything? Is it required to have the docker command installed locally when executing Docker commands against a remote server?
Have you tried

withDockerServer([uri: 'tcp://my.docker.host:2345']) {
    // docker steps in here run against the remote daemon
}

Documentation here.
docker needs to be installed on the Jenkins master in order for Jenkins to be able to launch the container on my.docker.host:
the first docker command runs on the Jenkins master, but with a parameter that passes the command on to my.docker.host
the container itself will then run on my.docker.host
Note that you only need to install the docker client on the Jenkins master; the daemon does not need to be running there.
Check whether you have set the port correctly. The default port for the daemon is 2375. It has to match on both the Docker daemon (option -H 0.0.0.0:2375) and the Jenkins client; a quick way to test it is sketched below.
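To test connectivity outside Jenkins (a sketch; substitute your own host and port), the docker client's -H flag targets a remote daemon directly:

# Prints both Client and Server sections if the remote daemon is reachable;
# a connection error here means the port or daemon configuration is wrong.
docker -H tcp://my.docker.host:2375 version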

Jenkins mesosphere/jenkins-dind:0.3.1 and proxy

All,
I am using DCOS and the associated Jenkins.
My company has a proxy for all external traffic.
Jenkins is running properly and can access the internal network as well as any external network.
I can get jobs to curl a URL on the internet if I set the HTTP proxy. I can pass this proxy to the mesosphere/jenkins-dind:0.3.1 container as an environment variable; however, I can't run any docker pull or docker run while in Docker-in-Docker mode.
I managed to reproduce the issue on one of the agent boxes.
sudo docker run hello-world
Hello from Docker!
This works!!
However, sudo docker run --privileged mesosphere/jenkins-dind:0.3.1 wrapper.sh "docker run hello-world" will fail with
docker: Error while pulling image: Get https://index.docker.io/v1/repositories/library/hello-world/images: x509: certificate is valid for FG3K6C3A13800607, not index.docker.io.
This typically shows that the Docker daemon does not have access to the proxy.
Do you know how to ensure that the dind daemon gets access to the proxy settings?
Antoine
This error can also manifest itself if the Docker daemon is unauthenticated against your registry, but it looks like you're running against the public image, so that's not likely to be the problem.
You could try creating a new parameter on the Jenkins node (see the instructions here for an example of how to set an environment variable called DOCKER_EXTRA_OPTS: https://docs.mesosphere.com/1.8/usage/service-guides/jenkins/advanced-configuration/).
In this case, we want to do the same (with Name set to env), but with the contents of Value set to something like HTTP_PROXY=http://proxy.example.com:80/.
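Concretely, that boils down to making the proxy variables visible to the daemon inside the dind container. A sketch, reusing the command from the question (the proxy URL is a placeholder; both upper- and lower-case variants are passed because tools disagree on which they read):

sudo docker run --privileged \
  -e HTTP_PROXY=http://proxy.example.com:80/ \
  -e HTTPS_PROXY=http://proxy.example.com:80/ \
  -e http_proxy=http://proxy.example.com:80/ \
  -e https_proxy=http://proxy.example.com:80/ \
  mesosphere/jenkins-dind:0.3.1 wrapper.sh "docker run hello-world"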
