Does the DockerInstaller task replace an existing installed Docker version?

We're using the DockerInstaller task in our Azure DevOps pipelines. What happens if a different version of Docker is already installed on the agent machine? Does it upgrade, downgrade, fail, or something else? Is it possible for multiple Docker versions to exist side by side on the same agent machine?

Update from the OP:
Confirmed that the task only affects the current pipeline run, not the entire machine.

It seems you are talking about this DockerInstaller task, which is used to install a specific version of the Docker CLI on the agent machine.
After some tests, it appears to work as described.
Note, however, that the task only supports a limited set of versions.
You could also use scripts to handle the process yourself; take a look at this answer: multiple docker clients on the same machine
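
For reference, a minimal sketch of how the task is typically used in a YAML pipeline (the version value below is illustrative; check the task documentation for the supported list):

    # Installs a specific Docker CLI version for this pipeline run only;
    # the version shown is illustrative and must be one the task supports.
    steps:
    - task: DockerInstaller@0
      displayName: Install Docker CLI
      inputs:
        dockerVersion: '17.09.0-ce'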

Related

CI/CD with Docker - what is the final deployment step?

I am developing a small website (Ruby/Sinatra) to be used internally where I work. (Simply, it crunches some source data and generates reports.)
I want to deploy it using Docker and have a setup that works in my dev environment, but I'm trying to understand the workflow for "production" deployment (we're using Jenkins).
I've read lots of articles about deployment workflows using Docker, but they all seem to stop at "and then push your image to the Docker registry". What seems to be missing is how to then take that image and actually update the application.
I appreciate that every application is likely to be different, but what is the next step? I'm aware of lots of different frameworks like Chef, Puppet, Ansible that could be used, but my question really is - how do I integrate that into my CI/CD pipeline? E.g. does a job "push" the changes to the production server, or should a Jenkins slave be running on the production server to execute a job directly on the server?
There are several orchestration tools, such as Docker Swarm, Kubernetes, and Rancher. In Docker Swarm, for example, you create services and can update their versions in a blue-green deployment manner, even for just one instance (though then there is no real blue-green :) ). If you just use docker run, you have to check your running container, stop and remove it if it's running, and start a new container with the newer image version.
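As a rough sketch of both approaches (service, image, and registry names are illustrative):

    # Docker Swarm: create the service once, then roll out new versions;
    # Swarm replaces the running tasks with the new image.
    docker service create --name myapp --replicas 2 myregistry/myapp:1.0
    docker service update --image myregistry/myapp:1.1 myapp

    # Plain docker run: stop and remove the old container, start the new image.
    docker pull myregistry/myapp:1.1
    docker stop myapp && docker rm myapp
    docker run -d --name myapp myregistry/myapp:1.1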
It depends on how your application is configured to run. In my case, I have a call to "docker run" in a systemd script. It's configured to just restart if it ever stops.
So, in my Jenkinsfile, after I push the image to the registry, I do a "docker pull" (my Jenkins agent runs on the same box as the application), and then a "docker stop". That causes the application to exit; systemd then restarts it, and it comes back up with the new version that was just pulled.
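A minimal sketch of that arrangement, assuming an illustrative unit and image name (the Jenkins agent runs on the application host):

    # The systemd unit (e.g. /etc/systemd/system/myapp.service) contains roughly:
    #   ExecStart=/usr/bin/docker run --rm --name myapp myregistry/myapp:latest
    #   Restart=always
    #
    # The Jenkinsfile then runs, after pushing the image:
    docker pull myregistry/myapp:latest   # fetch the new version
    docker stop myapp                     # systemd restarts the unit with the new image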

Is it possible to integrate SonarQube, Jenkins and GitLab (all in dockers)?

Currently, I am working on a quality process to ensure that the code is acceptable. For that, I'm integrating Jenkins, SonarQube, and GitLab, which are running on different servers (actually, in different Docker containers).
The idea is to run a SonarQube check every time code is pushed to GitLab, and to block commits, merges, and so on if the SonarQube analysis has not passed.
I have already integrated Jenkins with SonarQube, but Jenkins checks the code inside its own workspace, so imagine a situation where a developer needs to push his changes from his laptop.
My conceptual question is simple: Is it possible to integrate these technologies in order to do this? And, if the question is yes, which steps are necessary?
PS: I don't need to see code, configuration files, and so on. I just need something like:
Configure SonarQube to work with Jenkins
Write a script to copy that file into that folder,
...
First, "in dockers" means each tool runs in its own container.
They only need to see each other through the network, which is where a Docker Engine in Swarm mode comes in.
Second "configure Jenkins to work with SonarQube"... that is what I have done in my shop, and there isn't much to it.
Once the Jenkins SonarQube plugin is installed, and the address for the SonarQube server entered, you can configure your job and call sonar (for instance with maven: $SONAR_MAVEN_GOAL -Dsonar.host.url=$SONAR_HOST_URL)
The analysis done in the Jenkins workspace will then be published in the SonarQube server.
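As a hedged illustration, that build step boils down to a Maven call like the following ($SONAR_MAVEN_GOAL and $SONAR_HOST_URL are the variables injected by the plugin, as mentioned above):

    # Run inside the Jenkins workspace after the SonarQube plugin has
    # injected its environment variables.
    mvn $SONAR_MAVEN_GOAL -Dsonar.host.url=$SONAR_HOST_URL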
A Docker Engine in Swarm mode is the more modern version of this 2015 docker-compose.yml file from the marcelbirkner/docker-ci-tool-stack project.
The idea remains the same though: each element is isolated in its own container.
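For illustration, a stripped-down compose file in that spirit (images, tags, and ports are assumptions, not the project's exact file):

    # Each tool isolated in its own container, reachable over one network.
    version: '2'
    services:
      jenkins:
        image: jenkins/jenkins:lts
        ports:
          - "8080:8080"
      sonarqube:
        image: sonarqube:lts
        ports:
          - "9000:9000"
      gitlab:
        image: gitlab/gitlab-ce:latest
        ports:
          - "80:80"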
I haven't tried it myself, but https://gitlab.talanlabs.com/gabriel-allaigre/sonar-gitlab-plugin could be interesting in your setup.

Visual Studio Team Services build fails at the Docker task

I'm trying to run the Docker Build task to create a Docker image. I set up a Docker host, I'm using the default Docker Hub as the registry, and my whole environment is on Azure.
When I queue a build, it fails at the Docker task.
Log output:

    check path : null
    task result: Failed
    Not found docker: null
    Finishing task: Docker
    [error]Task Docker failed. This caused the job to fail. Look at the logs for the task for more details.
Does someone have any thought on what may be happening?
After looking into this, it would seem this happens if Docker is not properly installed on the build agent for the service principal the agent is running under.
Keep in mind that:
The build must run on a private agent, as the hosted ones do not yet have Docker installed, per a very small footnote at the bottom of the documentation.
The VSTS agent must run under a principal that has the environment variables set for Docker to run; the default is the LocalService account, which won't have them. This turns out to be a problem with other things as well, and I've found it best to run the agent under a dedicated user principal that can also log into the system.
Fixing these two issues made it work for me.
I was able to switch the agent to Hosted VS2017, which has Docker support.
If Linux is an option, try the Hosted Linux Preview.
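
In YAML pipeline terms, that amounts to picking a hosted pool that ships with Docker (shown here with the later pool syntax; the pool names are the era-specific ones mentioned above and may differ today):

    # Hosted pool with Docker support
    pool:
      name: Hosted VS2017
    # or, if Linux is an option:
    # pool:
    #   name: Hosted Linux Preview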

Is there a stable plugin for Jenkins for running builds on VMs?

Travis CI has a really nice feature: builds run within VirtualBox VMs. Each time a build starts, the box is refreshed from a snapshot and the code is copied onto it. Any problems with the build cannot affect the host, and you can use any OS for your builds.
This would be really good, for example, for compiling and testing code on a guest OS that matches your production environment. Also, you can keep your host free of any installation dependencies you might need (e.g. a database server) and run integration tests without worrying about things like port conflicts.
Does such a thing exist for Jenkins?
Check out the Vagrant plugin: https://wiki.jenkins-ci.org/display/JENKINS/Vagrant-plugin
It allows booting Vagrant virtual machines, provisioning them, and executing scripts inside them.
You can run Jenkins in a master/slave setup. Your master instance manages all the jobs but lets the slaves do the actual work. These slaves can be VMs or physical machines. Go to Manage Jenkins -> Manage Nodes -> New Node to add nodes to your Jenkins setup.
There is the vSphere Cloud Plugin and the Scripted Cloud Plugin that can be used for this purpose.

Jenkins CI: should I have a server for Jenkins and a dedicated slave for building?

I am using Jenkins for CI,
I've heard that I should have a dedicated server and slave for running Jenkins and building tasks, respectively -
is this true?
I can understand this, as the server may not be powerful enough to handle both Jenkins itself and the build tasks,
but is there any defined technical reason for this?
Best practice is to have a separate machine for Jenkins-Server,
and not to use it for builds at all.
This has nothing to do with CPU-power or memory-resources -
A build-machine should have a predefined configuration,
and Jenkins should not be part of it.
(Jenkins requirements may even conflict with those of the build-machine)
You should be able to boot / clone / upgrade / restore / trash the build-machine
without any impact on Jenkins.
Of course you can settle for a single machine, if your resources are limited,
but if you are serious about build-automation - Jenkins should have its own server.
You probably don't need dedicated hardware/VM to run a Jenkins server because the actual Jenkins process (no builds running) uses very little resources. But it all depends on what you want to accomplish with your Jenkins setup.
Do you want to run continuous builds across multiple platforms for multiple projects? Then using a master with slaves is the only way to go. If, on the other hand, you're running fairly simple builds for just a few projects, then you only need one machine to run the builds and the Jenkins process.
You can configure Jenkins to have multiple builds running concurrently so if you have a quad-core machine, you can safely run 2 builds and possibly a third once you analyze resource usage.
At my last gig, I used a quad-core machine with 8GB RAM to run:
Jenkins running Selenium builds
VirtualBox VM with Windows XP
Two instances of Tomcat, each with two applications deployed.
And the machine still had more to spare.
