I have an Ansible playbook which I use to provision a server. The provisioning itself works fine; the server is up and running.
Now I want to test this playbook regularly and in an automated way. My repo is hosted on GitHub, so I want to use GitHub Actions for this CI build.
In my mind, I start a Docker container, run my playbook against this container and shut the container down again. But I don't really know where to start. There must be a better way than a "shell script" inside the pipeline; I don't want to run all the commands myself. But even if I have to, I can't get a grip on how to do it.
So basically my questions are:
How can I test Ansible playbooks by actually running them in a disposable environment? That way I could test the actual installations.
How can I implement this in a GitHub Actions pipeline?
Maybe Docker is not the best idea to begin with, because the Ubuntu image is stripped down (compared to an actual installation on a physical machine). Maybe Vagrant is the better option? But I have even less of an idea how to tackle Vagrant.
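Roughly, what I picture is something like this workflow sketch (completely untested; the playbook name, image and container name are placeholders, and a stripped-down image will probably still need Python installed for most Ansible modules):

    # .github/workflows/test-playbook.yml (sketch)
    name: test-playbook
    on: [push]
    jobs:
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - name: Start a disposable target container
            run: docker run -d --name target ubuntu:20.04 sleep infinity
          - name: Run the playbook against the container
            run: |
              pip3 install ansible
              ansible-playbook -i 'target,' -c docker playbook.yml
          - name: Tear the container down
            if: always()
            run: docker rm -f target

But I don't know whether this is a sensible way to do it.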
Related
I'm new to Jenkins/Docker. So far I've found a lot of official Jenkins documentation recommending that it be used with Docker, but the necessity and advantages of running Jenkins as a Docker container remain vague to me. In my case, it's a Node/React app and the required environment is not complicated.
Disadvantages I've found running Jenkins as a Docker container:
High hard-drive usage
Directory paths in the Docker container are more complicated to deal with, especially when working with SSH in pipeline scripts
Without Docker, I can easily achieve the same thing, and the Blue Ocean plugin is also available.
So, what are the main benefits of Docker with Jenkins/Jenkins Pipeline? Are there pitfalls for my Node application using Jenkins without Docker? Articles to help me dive in are also appreciated.
Jenkins as Code
The main advantage of Jenkins in Docker is that it helps you get: Jenkins as Code
Advantages of Jenkins as code are:
SCM: the code can be put under version control
History is transparent; backup and roll-back become easy.
The code is the documentation of your Jenkins setup.
Jenkins becomes portable, so you can run Jenkins locally to try new plugins etc.
Jenkins pipelines work really well with Docker. As @Ivthillo mentioned: there is no need to install additional tools, you just use images of those tools. Jenkins will download them from the internet (Docker Hub) for you.
For each stage in the pipeline you can use a different image (i.e. tool). Essentially you get "micro Jenkins agents" which only exist temporarily. This makes your Jenkins setup much cleaner.
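As a rough illustration, a declarative pipeline sketch (the stage names, images and shell commands are only examples):

    pipeline {
        agent none
        stages {
            stage('Build') {
                // this stage runs inside a throw-away Maven container
                agent { docker { image 'maven:3-jdk-11' } }
                steps { sh 'mvn -B package' }
            }
            stage('Frontend tests') {
                // a different "micro agent" image for a different tool
                agent { docker { image 'node:14' } }
                steps { sh 'npm test' }
            }
        }
    }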
The disadvantage is:
Jenkins' initial (Groovy) configuration is poorly documented on the web.
Simple Node setup
Most of these arguments also hold for a simple Node setup.
Changing the Node version, or running multiple jobs each with a different Node version, becomes easy.
Add your Jenkinsfile inside the Node repo, so everyone with a Jenkins+Docker setup can run your CI/CD.
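For example, a minimal Jenkinsfile for such a repo could look roughly like this (just a sketch; swap the image tag to change the Node version):

    pipeline {
        // the whole pipeline runs in a Node container; change 'node:14' to test another version
        agent { docker { image 'node:14' } }
        stages {
            stage('Install') { steps { sh 'npm ci' } }
            stage('Test')    { steps { sh 'npm test' } }
        }
    }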
And finally: gathering knowledge on running your app inside a container will enable you to run your production app in Docker in the future.
Getting started
A while ago I wrote a small blog post on how to get started with Jenkins and Docker, i.e. how to create a Jenkins image for development which you can launch and destroy in seconds.
I am developing a small website (Ruby/Sinatra) to be used internally where I work. (Simply put, it crunches some source data and generates reports.)
I want to deploy it using Docker and have a setup that works in my dev environment, but I'm trying to understand the workflow for "production" deployment (we're using Jenkins).
I've read lots of articles about deployment workflows using Docker, but they all seem to stop at "and then push your image to the Docker registry". What seems to be missing is how to then take that image and actually update the application.
I appreciate that every application is likely to be different, but what is the next step? I'm aware of lots of different frameworks like Chef, Puppet, Ansible that could be used, but my question really is - how do I integrate that into my CI/CD pipeline? E.g. does a job "push" the changes to the production server, or should a Jenkins slave be running on the production server to execute a job directly on the server?
There are several orchestration tools such as Docker Swarm, Kubernetes and Rancher. In Docker Swarm, for example, you create services and can update their versions in a blue-green deployment manner, even for just one instance (although then there is no real blue-green :) ). If you just use docker run, you have to check for a running container yourself, stop and remove it if it is running, and start a new container with the newer image version.
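For the plain docker run case, the update boils down to something like this (sketch only; the container, registry and service names are placeholders):

    # stop and remove the old container if it exists, then start the new image
    docker pull registry.example.com/myapp:latest
    docker stop myapp || true
    docker rm myapp || true
    docker run -d --name myapp -p 8080:8080 registry.example.com/myapp:latest

    # with Docker Swarm the same update is a single command against the service
    docker service update --image registry.example.com/myapp:latest myapp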
It depends on how your application is configured to run. In my case, I have a call to "docker run" in a systemd script. It's configured to just restart if it ever stops.
So, in my Jenkinsfile, after I push the image to the registry, I do a "docker pull" (my Jenkins agent runs on the same box the application runs on) and then a "docker stop". That causes the application to exit and then restart, which makes it pick up the version that was just pulled, so now it's running the new version.
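Roughly, the two pieces look like this (simplified sketch; unit, container and image names are placeholders):

    # /etc/systemd/system/myapp.service
    [Service]
    Restart=always
    ExecStartPre=-/usr/bin/docker rm -f myapp
    ExecStart=/usr/bin/docker run --name myapp registry.example.com/myapp:latest
    ExecStop=/usr/bin/docker stop myapp

    [Install]
    WantedBy=multi-user.target

and, at the end of the Jenkinsfile, something like:

    // deploy steps on the agent that runs on the application host
    sh 'docker pull registry.example.com/myapp:latest'
    sh 'docker stop myapp'   // systemd restarts the service, which now uses the freshly pulled image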
Currently, I am working on a quality process to ensure that the code is acceptable. For that, I'm integrating Jenkins, SonarQube and GitLab, which are running on different servers (actually in different Docker containers).
The idea is to check the code with SonarQube every time it is pushed to GitLab, and to block commits, merges, and so on if the SonarQube check has not passed.
I have already integrated Jenkins with SonarQube, but Jenkins checks the code inside its own workspace, so imagine a situation where a developer needs to push changes from his laptop.
My conceptual question is simple: Is it possible to integrate these technologies in order to do this? And, if the answer is yes, which steps are necessary?
PS: I don't need to see code, configuration files, and so on. I just need something like:
Configure SonarQube to work with Jenkins
Write a script to copy that file into that folder,
...
First, "in Docker" means each tool runs in its own container.
They only need to see each other through the network, which is where a Docker Engine in Swarm mode comes in.
Second "configure Jenkins to work with SonarQube"... that is what I have done in my shop, and there isn't much to it.
Once the Jenkins SonarQube plugin is installed and the address of the SonarQube server is entered, you can configure your job and call sonar (for instance with Maven: $SONAR_MAVEN_GOAL -Dsonar.host.url=$SONAR_HOST_URL).
The analysis done in the Jenkins workspace will then be published in the SonarQube server.
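In a pipeline job, that step looks roughly like this (sketch; 'MySonarQube' is just the name of the server configured in the plugin's global settings, and the Maven call is only one example):

    stage('SonarQube analysis') {
        steps {
            // injects SONAR_HOST_URL, SONAR_MAVEN_GOAL, credentials, etc. from the global config
            withSonarQubeEnv('MySonarQube') {
                sh 'mvn $SONAR_MAVEN_GOAL -Dsonar.host.url=$SONAR_HOST_URL'
            }
        }
    }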
A swarm server is the more modern version of this 2015 docker-compose.yml file from the marcelbirkner/docker-ci-tool-stack project.
The idea remains the same though: each element is isolated in its own container.
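To illustrate the idea (this is not the actual file from that project, just a minimal sketch of the same pattern):

    # docker-compose.yml - each tool isolated in its own container
    jenkins:
      image: jenkins
      ports:
        - "8080:8080"
      links:
        - sonar
    sonar:
      image: sonarqube
      ports:
        - "9000:9000"
    gitlab:
      image: gitlab/gitlab-ce
      ports:
        - "80:80"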
I haven't tried it myself, but https://gitlab.talanlabs.com/gabriel-allaigre/sonar-gitlab-plugin could be interesting for your setup.
I wanted to build a Jenkins server which would run tests of my Puppet code on Vagrant. The issue I found is that we already run our servers as VMs, either in VMware or AWS, and Vagrant will not work as another layer of virtualisation.
Does anyone have an idea how I can create a test platform for my Puppet code? What I want to test is the deployment of manifests on the nodes themselves, i.e. if I deploy a web server class or make changes to it, I would like to check whether it affects/breaks the deployment of other classes.
The idea would be to iterate over all the classes/roles and see if the deployments pass. I would like to make it automatic and independent of our engineers. At the moment we are running manual tests with vagrant up; however, there are too many roles to do that by hand.
Any ideas how I can tackle this?
You can use either the Docker or the AWS provider for Vagrant.
In the case of the AWS provider you need to set up rsync to get your environment onto the newly launched instance.
If your Vagrant scripts are robust, you can use the same script both for local deployment on your workstation and for AWS/Docker deployment on the CI server.
There are drawbacks to these techniques: in the case of Docker you are limited to the same kernel that the Jenkins server is running, and in the case of AWS you will incur additional costs. However, with AWS you don't need to allocate as many resources to your Jenkins server, so you might even save money this way, because you will be paying for extra VMs only while you are running your tests. Just make sure you shut them down when you are done.
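A Vagrantfile can declare both providers, roughly like this (sketch only; the image, AMI and SSH settings are placeholders, and the vagrant-aws plugin must be installed for the AWS part):

    Vagrant.configure("2") do |config|
      # CI/local run against a container (shares the Jenkins host kernel)
      config.vm.provider "docker" do |d|
        d.image   = "ubuntu:20.04"   # placeholder image; needs sshd if you provision over SSH
        d.has_ssh = true
      end

      # CI run against a short-lived EC2 instance (extra cost, but no load on Jenkins)
      config.vm.provider "aws" do |aws, override|
        aws.ami               = "ami-xxxxxxxx"   # placeholder
        aws.instance_type     = "t2.micro"
        override.ssh.username = "ubuntu"
      end
    end

Then vagrant up --provider=docker or --provider=aws picks the backend, while the provisioning script stays the same.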
Is there any special reason why you want to use Vagrant? I'm not sure whether you are setting up your production environment with Vagrant or not.
In case you are not bound to Vagrant, I would recommend thinking about using a Docker image to prepare a lightweight environment to run your setups and verifications in.
When doing your tests, spin up a container from that image, which contains your Puppet distribution, and run your setups/tests inside it. If you have special kernel requirements, use a separate Jenkins slave/agent machine rather than executing jobs on the Jenkins master.
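For example, a job step could be as simple as something like this (sketch; the image name and paths are placeholders):

    # dry-run one role inside a throw-away container
    # (assumes the image's entrypoint is the puppet binary; example/puppet-agent is a placeholder)
    docker run --rm -v "$PWD":/code example/puppet-agent \
        apply --noop --modulepath=/code/modules /code/manifests/site.pp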
If you are not sure how to get started using Jenkins with Docker, have a look at the examples section of the Jenkins documentation. The provided examples show the declarative pipeline syntax, which is still fairly new. Also consider the collapsed "Toggle Scripted Pipeline" sections, which show the Groovy pipeline scripts that are a lot more forgiving for Jenkins pipeline beginners.
Those should be quite good pointers to get started with running and testing your Puppet scripts inside Docker. For building and using a Docker image there should be more than enough tutorials out there.
Let me know if this was a hint in the right direction or if I misinterpreted your question.
Suppose I want to move my current acceptance-test CI environment to Docker, so I can benefit from the performance improvements and also quickly set up multiple clones for slow acceptance tests.
I would have a lot of services.
The easy ones would be Postgres, MongoDB, Redis and such, which are updated rarely.
However, how would I go about it if my own product has lots of services as well - over 10-20 services that all need to work together for tests? Is it even feasible to handle this with Docker, i.e. how can CI efficiently control so many containers automatically AND make clones of them to run acceptance tests in parallel?
Also, how would I automatically and easily update the containers for the CI? Would the CI simply need to rebuild every container at the start of every run with the HEAD of every service's branch? Or would the CI run git pull and some update/migrate command on every service?
In VMs it's easy to control these services, but I would like to be convinced that Docker is just as good or better for this.
I'm in the same position as you and have recently gotten this all working to my liking.
First of all, while Docker is generally intended to run a single process, for testing I've found it works better for the Docker container to run all the services needed. There is some duplication in going this route, but you don't have to worry about shared services like Mongo or PostgreSQL. This can be accomplished by using something like Supervisor: http://docs.docker.com/articles/using_supervisord/
The idea is to configure supervisor to start all the necessary services inside the container, so they are completely isolated from other containers. In my environment, I have mongo, xvfb, chrome and firefox all running in a single container. So really, you are still running a single process (supervisor), but it starts many others.
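A supervisord config for such a container looks roughly like this (trimmed sketch; the chrome and firefox entries follow the same pattern, and the commands are just examples):

    [supervisord]
    ; keep supervisor in the foreground as the container's main process
    nodaemon=true

    [program:mongod]
    command=/usr/bin/mongod

    [program:xvfb]
    command=/usr/bin/Xvfb :99 -screen 0 1280x1024x24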
As for adding repositories to your container, I just have the host machine check out the code, and when I run docker I use the -v flag to mount the repo into the container. This way you don't need to rebuild the container each time. I build containers nightly with the latest code so that all the necessary gems are already present, for a faster 'gem install' at testing time.
Lastly, I have a script as the entrypoint of the container that allows me to pass in which test I want to run.
Jenkins then just runs the docker commands and passes in the tests to run. These can be run in parallel, sequentially, or any other way you like. I'm currently looking into having these tests run on slave Jenkins instances in an auto-scaling group in AWS.
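So a Jenkins build step ends up being little more than something like (sketch; the image name, mount path and spec file are examples):

    # mount the already-checked-out repo and tell the entrypoint script which spec to run
    docker run --rm -v "$WORKSPACE":/app my-test-image spec/features/login_spec.rb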
Hope that helps.
Drone is a Docker-based open-source CI tool plus an online service: https://drone.io
Generally it runs builds and tests in Docker containers and removes all containers after the build. You just need to provide a file named .drone.yml, with a configuration similar to a .travis.yml, to configure your build.
It will manage your services, like databases and caches, as linked containers.
For your build environment, you can use existing Docker images as templates for your dependencies.
So far it supports github.com and GitLab. For your own CI system, you can use the Drone CLI only or its web interface.
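A .drone.yml is similar in spirit to a .travis.yml, roughly like this (a sketch in the old 0.x style; adjust it to the Drone version you actually run, and the image and commands are only examples):

    image: ruby:2.2
    script:
      - bundle install
      - bundle exec rake test
    services:
      - postgres
      - redis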
I recommend using the Jenkins Docker plugin. Though it is new, it is starting to expose the power of Docker used inside Jenkins, and the configuration is well documented there. (Let me know if you have problems.)
The strategy I plan to use:
Create different app images to serve the different services like Postgres, MongoDB, Redis and such. Since they are rarely updated, they will be configured globally as "cloud" templates in advance, and each one will have a label to indicate its service.
In each Jenkins job, an image is selected as the slave node (using that label as the name).
When the job is triggered, it will automatically start the Docker container as a slave within seconds.
It should work for you.
BTW: at the time I answered (May 2014), the plugin was not yet mature, but it is the right direction.