I'm trying to run a Docker build task to create a Docker image. I set up a Docker host, I'm using the default Docker Hub as the registry, and my whole environment is on Azure.
When I queue a build, it fails at the Docker task.
Log output:
check path : null
task result: Failed
Not found docker: null
Finishing task: Docker
[error]Task Docker failed. This caused the job to fail. Look at the logs for the task for more details.
Does anyone have any thoughts on what may be happening?
After looking into this, it seems this happens when Docker is not properly installed on the build agent for the principal the agent is running under.
Keep in mind that:
The build must run on a private agent, as the hosted ones do not yet have Docker installed, per a very small footnote at the bottom of the documentation.
The VSTS agent must run under a principal that has the environment variables set for Docker to run; the default is the LocalService account, which won't have them. This turns out to be a problem with other things as well, and I've found it best to run the agent under a dedicated user principal that can also log into the system.
Fixing these two issues made it work for me.
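As a minimal sketch of the second fix, you can (re)configure the agent to run as a service under a dedicated account from the agent's installation folder; the flags are the agent's documented config options, but the account name and password placeholder here are assumptions:

# Run from PowerShell in the agent folder; MYDOMAIN\buildagent is a hypothetical account
.\config.cmd --runAsService --windowsLogonAccount "MYDOMAIN\buildagent" --windowsLogonPassword "<password>"
# --runAsService installs the agent as a Windows service
# --windowsLogonAccount / --windowsLogonPassword set the principal the service runs under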
I was able to switch the agent to Hosted VS2017, which has Docker support.
If Linux is an option, try the Hosted Linux Preview.
Related
We're using the DockerInstaller task in our Azure DevOps pipelines. What happens if a different version of Docker is already installed on the agent machine? Does it upgrade, downgrade, fail, or something else? Is it possible for multiple Docker versions to exist side by side on the same agent machine?
Updated by OP:
Confirmed that the task only affects the current pipeline run and not the entire machine.
It seems you are talking about this DockerInstaller task, which is used to install a specific version of the Docker CLI on the agent machine.
After some tests, it should work: the version the task installs is used by the pipeline run and does not touch the machine-wide installation, which matches what you confirmed in your update.
But this task supports only a limited set of versions.
You could also use scripts to handle the process; take a look at this answer: multiple docker clients on the same machine.
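If you need a CLI version the task doesn't offer, a script along these lines can fetch a static build side by side on a Linux agent (a sketch only; the version number and install directory are assumptions, so check the available static builds first):

# Download a specific Docker CLI into its own directory and put it first on PATH
VER=24.0.7
mkdir -p "$HOME/docker-$VER"
curl -fsSL "https://download.docker.com/linux/static/stable/x86_64/docker-$VER.tgz" -o "/tmp/docker-$VER.tgz"
tar -xzf "/tmp/docker-$VER.tgz" --strip-components=1 -C "$HOME/docker-$VER" docker/docker
export PATH="$HOME/docker-$VER:$PATH"
docker --version   # should now report the chosen version for this shell session only

Because only PATH changes, other pipelines and agents on the machine keep using their own Docker version.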
I am trying to run a Jenkins job inside a Windows Docker container. I have successfully created an image from the Windows Server Core Docker image that has MSBuildEngine 4.7.
The problem I am facing is I am not able to run a Jenkins job inside that container.
I am able to do it easily with linux environment.
The actual problem is that Jenkins first writes out a shell script containing the commands to run the container and inspect it.
How do I tell Jenkins that my environment is not Linux but Windows?
Note: searching on Google does not help these days, so I reached out here directly.
I am working on this issue as well. I am finding that the underlying issue (or maybe just one of them) is how Jenkins tells Docker to mount a volume to the container. I have yet to get around it.
edit:
There's a PR addressing this issue, and I tested the fork with both Linux and Windows slaves; it works as we intend.
Download Rbutcher's fork of the plugin:
git clone https://github.com/rbutcher/docker-workflow-plugin.git
Change to the working branch:
git checkout feat/windows_slaves
Build the plugin:
mvn -DskipTests clean install
Manually import into Jenkins:
Manage Jenkins > Manage Plugins > Advanced > Upload Plugin and select ./target/docker-workflow.hpi.
I would like to use Docker on a self-hosted Windows 10 agent. To do so I installed Docker for Windows and was able to use it on the agent. But when I wanted to use it in a Docker task in VSTS, I got the error:
##[error]C:\Program Files\Docker\Docker\Resources\bin\docker.exe failed with return code: 1
What is the problem?
The agent service (VSTS Agent (agentName)) was running as Network Service, which is not sufficient to use Docker. It is necessary to run the service in another context. Therefore:
Go to services
Search for the VSTS agent service
Right click on the service
Select properties
Go to the Log On tab
And select Local System account
Then restart the service
Now it is possible to use Docker. See also Docker agent does not run under System Account
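Equivalently, you can switch the service account from an elevated command prompt with sc.exe and restart the service; the service name below is an assumption, so check yours first with sc.exe query:

# Point the agent service at the LocalSystem account, then restart it
sc.exe config "vstsagent.myAccount.myAgent" obj= LocalSystem
net stop "vstsagent.myAccount.myAgent"
net start "vstsagent.myAccount.myAgent"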
EDIT:
I also encountered the problem when the Docker service was running as Local System. In that case it was necessary to run the VSTS agent service as Local System too.
I am developing a small website (Ruby/Sinatra) to be used internally where I work. (Simply, it crunches some source data and generates reports.)
I want to deploy it using Docker and have a setup that works in my dev environment, but I'm trying to understand the workflow for "production" deployment (we're using Jenkins).
I've read lots of articles about deployment workflows using Docker, but they all seem to stop at "and then push your image to the Docker registry". What seems to be missing is how to then take that image and actually update the application.
I appreciate that every application is likely to be different, but what is the next step? I'm aware of lots of different frameworks like Chef, Puppet, Ansible that could be used, but my question really is - how do I integrate that into my CI/CD pipeline? E.g. does a job "push" the changes to the production server, or should a Jenkins slave be running on the production server to execute a job directly on the server?
There are several orchestration tools, such as Docker Swarm, Kubernetes, and Rancher. In Docker Swarm, for example, you create services and can update their image versions in a blue-green deployment manner, even for just one instance (though then there is no real blue-green :) ). If you just use docker run, you have to check your running container, stop and remove it if it is running, and start a new container with the newer image version.
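A minimal sketch of the Swarm variant, assuming hypothetical service and image names and that the host is already a swarm manager (e.g. via docker swarm init):

# Create the service once, then roll it to a new image version later
docker service create --name reports --replicas 2 myregistry.example.com/reports:1.0
docker service update --image myregistry.example.com/reports:1.1 reports
# docker service update replaces tasks one by one, giving a rolling update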
It depends on how your application is configured to run. In my case, I have a call to "docker run" in a systemd script. It's configured to just restart if it ever stops.
So, in my Jenkinsfile, after I push the image to the registry, I do a "docker pull" (my Jenkins agent is running on the same box that the application is running on), and then a "docker stop". That causes the application to exit and restart, picking up the new version that was just pulled, so now it's running the new version.
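In shell terms, those Jenkinsfile steps boil down to something like this sketch (the registry, image, and container names are assumptions):

# Push the freshly built image, pull it on the host, then bounce the container
docker push registry.example.com/myapp:latest
docker pull registry.example.com/myapp:latest
docker stop myapp
# systemd notices the exit and re-runs its "docker run" line, starting the new version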
I'm currently working with a stack that allows me to automate my integration/deployment system.
Currently my workflow is as follows:
I push my code to a GitHub repository
Jenkins watches the repo, builds the software, and launches the unit tests
If the unit tests (or any other kind of tests) pass, it notifies Rundeck to deploy to my servers (3 in my case) by connecting over SSH and saying: "hey, you have to pull from GitHub, a new software version is available"; it then restarts the concerned service and my software is up to date
Okay, tell me if I'm wrong, but that seems like a good solution, right?
Then I wanted to containerize my applications, and now I have some headaches.
First solution
In fact, I was considering something like:
Push to GitHub
Jenkins tests and builds the Docker image
Rundeck pushes to Docker Hub and tells the 3 servers over SSH to pull the new image from the hub and run it
Problem: each deployment runs in yet another container (multiple docker runs of the same image, but at different versions :( )
Second solution
The second solution was to:
Push to GitHub
Jenkins tests and tells Rundeck that the tests succeeded, without creating a "real build" (only one image, used for testing)
Rundeck connects to the running container over SSH and asks it to pull the modifications, then restarts the Docker container
Problem: I am forced to run SSH inside all my containers
I don't know how to get around these problems, or what the best solution is...
Thanks for your help
I don't see any problem with solution 1.
1. Build the production version with Jenkins
2. Push it (via Jenkins) to your private Docker registry
3. Tell Rundeck/Ansible/Chef/Puppet to ask the 3 servers to pull the latest image and restart the container, as sketched below
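Step 3 can be as simple as a script the orchestration tool runs on each server over SSH (a sketch only; the registry, image, and container names are assumptions):

# Pull the latest image, then replace the running container with a fresh one
docker pull registry.example.com/myapp:latest
docker stop myapp || true
docker rm myapp || true
docker run -d --name myapp --restart unless-stopped registry.example.com/myapp:latest

Because the old container is removed before the new one starts, this also avoids the "multiple containers at different versions" problem from the question.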
However, it's highly recommended to have a strategy that follows the blue-green principle and supports rollbacks if something crashes.