I'm trying to run a TFS agent as a service in a Windows Server Docker container. I am able to get the agent running if I use run.cmd, but when attempting to configure the agent to run as a service I get the error below.
I have ensured the account is a local administrator and have also tried the Local System account; I get the same error either way. Thanks.
Exit code -1073741502 returned from process: file name 'C:\TFSAgent\bin\AgentService.exe', arguments 'init'.
Command I'm using:
.\config.cmd --unattended --url https://tfsurl --auth Negotiate --username username --password password --pool Sandbox --agent dockeragent --runasservice --windowslogonaccount username --windowslogonpassword password --replace
According to the document Define container jobs, you need to make sure:
The agent must have permission to access the Docker daemon
To run a self-hosted agent in Docker, you can refer to the following documents:
Run a self-hosted agent in Docker
Running Azure DevOps private agents as docker containers
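For reference, the documented container approach does not use --runasservice at all: inside the container the agent is configured unattended and then kept in the foreground with run.cmd (which you already have working). A minimal sketch of such an entrypoint, reusing the placeholders from the question:
REM Sketch only: configure the agent unattended without --runasservice,
REM then keep it running in the foreground instead of as a Windows service.
cd C:\TFSAgent
.\config.cmd --unattended --url https://tfsurl --auth Negotiate ^
    --username username --password password ^
    --pool Sandbox --agent dockeragent --replace
.\run.cmd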
I am trying to trigger a Google Cloud CLI command in a Jenkins pipeline:
gcloud auth activate-service-account --key-file=user.json
I am currently using the Google Cloud SDK Docker image.
I have the private key stored as a credential on the Jenkins server. When running the command directly from the agent I can authenticate to the account; now I want to run the command inside a Docker container.
How can I access the private key stored in Jenkins from the Docker container?
I tried to access it directly and got the following error message:
ERROR: gcloud crashed (ValueError): No key could be detected.
Any assistance would be helpful.
I am using a scripted pipeline.
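For context, this is roughly the shape of what I am trying to get working; the credential ID gcp-sa-key and the image name are placeholders for my actual setup, assuming the key is stored as a Secret file credential and that the Docker Pipeline plugin mounts the workspace (and its temp directory) into the container:
node {
    // Bind the Secret file credential; GC_KEY then holds the path to a temp copy of the key.
    withCredentials([file(credentialsId: 'gcp-sa-key', variable: 'GC_KEY')]) {
        docker.image('google/cloud-sdk:alpine').inside {
            // The bound file path resolves inside the container because the
            // workspace and its @tmp directory are mounted in.
            sh 'gcloud auth activate-service-account --key-file="$GC_KEY"'
        }
    }
}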
I have an ECS Fargate service running the jetbrains/teamcity-agent image. This is connected to my TeamCity host, which is running on an EC2 instance (Windows).
When I check whether the agent is capable of running docker commands, it shows the following errors:
Unmet requirements:
docker.server.osType contains linux
docker.server.version exists
Under Agent Parameters -> Configuration Parameters, I can see the docker version and the dockerCompose.version properly. Is there a setting that I am missing?
If you are trying to access a Docker socket in Fargate: Fargate does not support running Docker commands; there is a proposed ticket for this feature.
the issue with "docker.server.osType" not showing up usually means
that the docker command run from the agent cannot connect with the
docker daemon running. This is usually due to a lack of permissions,
as docker by default only allows connections from root and users of
the group docker
Teamcity-Unmet-requirements-docker-server-osType-contains-linux
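On a host where you do control the daemon (unlike Fargate), a quick way to confirm this is to run a Docker command as the agent's user; the user name below is a placeholder:
# A permissions problem shows up as "permission denied while trying to
# connect to the Docker daemon socket".
sudo -u buildagent docker info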
I was facing similar issues and got them fixed by adding the "build agent" user to the "docker" group and rebooting the server.
Here the "build agent" user means the user under which your TeamCity agent services are running.
Command to add a user to the group:
#usermod -aG docker <userasperyourrequirement>
Command to reboot the server:
#init 6
I'm using Docker Pipeline Plugin version 1.10.
I have my Jenkins installed in a container. I have a remote server that runs a Docker daemon. The daemon is reachable from the Jenkins machine via TCP (tested). I disabled TLS security on the Docker daemon.
I'm not able to make the docker.withServer(...) step work.
As a basic test I simply put the following content in a Jenkinsfile (if I'm correct, this is valid pipeline content):
docker.withServer('tcp://my.docker.host:2345') {
    def myImage = docker.build('myImage')
}
When the pipeline executes I get this error: script.sh: line 2: docker: command not found, as if the docker command were still trying to execute locally (there is no docker command installed locally) rather than on my remote Docker daemon.
Am I missing anything? Is it required to have the docker command installed locally when trying to execute Docker commands on a remote server?
Have you tried
withDockerServer('tcp://my.docker.host:2345') {
    .....
}
Documentation here
Docker needs to be installed on the Jenkins master in order for Jenkins to be able to launch the container on my.docker.host.
The docker command itself runs on the Jenkins master, but with a parameter that passes the command on to my.docker.host.
The container itself will then run on my.docker.host.
Note that you only need to install Docker on the Jenkins master; the daemon does not need to be running there.
Check whether you have set up the port correctly. The default port for the daemon is 2375. It has to match on both the Docker daemon (option -H 0.0.0.0:2375) and on the Jenkins client.
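To make the division of labor concrete: the docker CLI installed on the Jenkins master simply targets the remote daemon over TCP, which is roughly what docker.withServer(...) arranges for the steps inside its block. A sketch using the host and port from the examples above:
# On my.docker.host: have the daemon listen on TCP in addition to the local socket.
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375

# On the Jenkins master: the locally installed CLI talks to that remote daemon.
docker -H tcp://my.docker.host:2375 version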
I have a GitLab project gitlab.com/my-group/my-project which has a CI pipeline that builds an image and pushes it to the project's GitLab registry registry.gitlab.com/my-group/my-project:tag. I want to deploy this image to Google Compute Engine, where I have a VM running docker.
Easy enough to do it manually by ssh'ing into the VM, then docker login registry.gitlab.com and docker run ... registry.gitlab.com/my-group/my-project:tag. Except the docker login command is interactive, which is a no-go for CI. It can accept a username and password on the command line, but that hardly feels like the right thing to do, even if my login info is in a secret variable (storing my GitLab login credentials in a GitLab secret variable?...)
This is the intended workflow on the Deploy stage of the pipeline:
Either install the gcloud tool or use an image with it preinstalled
gcloud compute ssh my-gce-vm-name --quiet --command \
"docker login registry.gitlab.com && docker run registry.gitlab.com/my-group/my-project:tag"
Since the gcloud command would be running within the GitLab CI Runner, it could have access to secret variables, but is that really the best way to log in to the GitLab Registry over ssh from GitLab?
I'll answer my own question in case anyone else stumbles upon it. GitLab creates ephemeral access tokens for each build of the pipeline that give the user gitlab-ci-token access to the GitLab Registry. The solution was to log in as the gitlab-ci-token user in the build.
.gitlab-ci.yml (excerpt):
deploy:
  stage: deploy
  before_script:
    - gcloud compute ssh my-instance-name --command "docker login registry.gitlab.com/my-group/my-project -u gitlab-ci-token -p $CI_BUILD_TOKEN"
The docker login command creates a local configuration file at $HOME/.docker/config.json in which your credentials are stored; it looks like this (also see the documentation on this):
{
  "auths": {
    "<registry-url>": {
      "auth": "<credentials>"
    }
  }
}
As long as the config.json file is present on your host and your credentials (in this case simply being stored as base64("<username>:<password>")) do not change, there is no need to run docker login on every build or to store your credentials as variables for your CI job.
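For illustration, the auth value can be generated by hand like this (username and password are placeholders):
# The "auth" field is simply base64 of "<username>:<password>".
echo -n '<username>:<password>' | base64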
My suggestion would be to simply ensure that the config.json file is present on your target machine (either by running docker login once manually or by deploying the file using whatever configuration management tool you like). This saves you from handling the login and managing credentials within your build pipeline.
Regarding the SSH login per se: this should work just fine. If you really want to eliminate the SSH login, you could set up the Docker engine on your target machine to listen on an external socket, configure authentication and encryption using TLS client certificates as described in the official documentation, and talk directly to the remote server's Docker API from within the build job:
variables:
  DOCKER_HOST: "tcp://<target-server>:2376"
  DOCKER_TLS_VERIFY: "1"
script:
  - docker run registry.gitlab.com/my-group/my-project:tag
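If you go this route, the TLS client certificates also have to be available to the job. A sketch of providing them from CI variables before the docker call; the variable names are assumptions, not GitLab built-ins:
before_script:
  # The docker CLI looks for ca.pem, cert.pem and key.pem in ~/.docker by
  # default when DOCKER_TLS_VERIFY is set.
  - mkdir -p ~/.docker
  - echo "$DOCKER_TLS_CA" > ~/.docker/ca.pem
  - echo "$DOCKER_TLS_CERT" > ~/.docker/cert.pem
  - echo "$DOCKER_TLS_KEY" > ~/.docker/key.pem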
We had the same "problem" with other hosting providers. Our solution is to use a custom script which runs on the target machine and can be called via a REST API endpoint (secured by Basic Auth or whatever).
That way you can trigger the remote host to do the docker login and upgrade your service without granting SSH access via gitlab-ci.
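A sketch of what that could look like from the pipeline side; the endpoint URL and the credential variables are placeholders for whatever your hook uses:
deploy:
  stage: deploy
  script:
    # Trigger a deploy hook on the target host instead of opening SSH access.
    - curl -fsS -u "$DEPLOY_HOOK_USER:$DEPLOY_HOOK_PASS" -X POST "https://my-host.example.com/hooks/deploy"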
All,
I am using DCOS and the associated Jenkins.
My company is having a proxy for any external traffic.
Jenkins is running properly and can access the internal network as well as any external network.
I can get jobs to curl a URL on the internet if I set the HTTP proxy. I can pass this proxy to the mesosphere/jenkins-dind:0.3.1 container as an environment variable; however, I can't run any docker pull or docker run while in Docker-in-Docker mode.
I managed to reproduce the issue on one of the agent boxes.
sudo docker run hello-world
Hello from Docker!
This works!!
However, sudo docker run --privileged mesosphere/jenkins-dind:0.3.1 wrapper.sh "docker run hello-world" will fail with
docker: Error while pulling image: Get https://index.docker.io/v1/repositories/library/hello-world/images: x509: certificate is valid for FG3K6C3A13800607, not index.docker.io.
This typically shows that the Docker daemon does not have access to the proxy.
Do you know how to ensure that the dind is getting access to the proxy settings?
Antoine
This error can also manifest itself if the Docker daemon is unauthenticated against your registry, but it looks like you're running against the public image, so that's not likely to be the problem.
You could try creating a new Parameter to the Jenkins node (see the instructions here for an example for how to set an environment variable called DOCKER_EXTRA_OPTS: https://docs.mesosphere.com/1.8/usage/service-guides/jenkins/advanced-configuration/).
In this case, we want to do the same (with Name env) but with the contents of Value set to something like HTTP_PROXY=http://proxy.example.com:80/.
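To reproduce that idea directly on the agent box, the proxy settings can be passed into the DinD container's environment so the inner daemon can inherit them; the proxy URL is a placeholder, and whether wrapper.sh propagates the variables to the inner daemon depends on the image:
sudo docker run --privileged \
  -e HTTP_PROXY=http://proxy.example.com:80/ \
  -e HTTPS_PROXY=http://proxy.example.com:80/ \
  -e NO_PROXY=localhost,127.0.0.1 \
  mesosphere/jenkins-dind:0.3.1 wrapper.sh "docker run hello-world"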