I've been trying out GitLab and its CI workflow recently, but I got confused when I saw these messages during a build:
    gitlab-ci-multi-runner 0.6.2 (3227f0a)
    Using Docker executor with image mydocker:latest ...
    Running on runner-5498280b-project-20053-concurrent-0 via jls-MacBook-Pro...
I registered a project-specific runner instead of using the shared ones.
Is GitLab actually running the whole CI build process on my own machine? What happens if co-workers push to this project while my computer is off? I thought GitLab would provide every project with a cloud CI server, and I don't want to turn my own computer into such a server. Am I missing something in the docs?
See the GitLab docs.
Is GitLab actually running the whole CI build process on my own machine?
--Yes, because you registered that runner yourself. If you don't want builds running on your machine, register the runner's Docker setup on another server instead.
What happens if co-workers push to this project while my computer is off?
--Then no build will happen; the jobs will stay pending until a runner is available.
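If it helps, here is a minimal sketch of registering a runner on a dedicated server instead of your laptop. The URL, token, and description are placeholders from your project's CI/CD settings; newer gitlab-runner releases accept these non-interactive flags, while the older gitlab-ci-multi-runner binary prompts for the same values interactively:

    # Run this on the dedicated build server, not on your laptop.
    # URL and token are placeholders from the project's CI/CD settings.
    gitlab-runner register \
      --non-interactive \
      --url https://gitlab.example.com/ \
      --registration-token PROJECT_REGISTRATION_TOKEN \
      --executor docker \
      --docker-image mydocker:latest \
      --description "dedicated-build-server"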
We are trying to set up a development CI/CD pipeline with Jenkins that builds Docker images and deploys them directly to an AWS EKS cluster. Is this even possible?
Our existing system:
Jenkins as CI picks up the code from GitLab and builds a Docker image
After the build, Jenkins pushes the image to JFrog Artifactory (Professional)
We use Harness for CD, which picks up the image from Artifactory and deploys it to AWS
Artifactory and Harness incur costs for us, and we don't want that for development builds, so we have set up a Docker registry with Sonatype Nexus 3 OSS (the open-source version).
I would like to know about two options here:
1. Can I use Jenkins to build the Docker image, push it to the Nexus Docker registry, and then use Jenkins itself for CD to deploy it to AWS EKS?
2. Can I build Docker images with Jenkins and deploy them directly to AWS EKS without even storing them in a Docker registry?
Any suggestions and help are highly appreciated!
The first option is much better, because one day you may need to roll back a Docker image on Kubernetes (even in a development environment).
Alternatively, you can use AWS ECR; it's easier to use with EKS, and I think ECR is cheaper than the cost of operating Nexus yourself.
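To make the first option concrete, here is a hedged sketch of the shell steps a Jenkins stage might run. The registry host (nexus.example.com:8082), AWS region, cluster name, and deployment name are all placeholders, not values from your setup:

    # Build the image and push it to the Nexus Docker registry.
    docker build -t nexus.example.com:8082/myapp:${BUILD_NUMBER} .
    docker push nexus.example.com:8082/myapp:${BUILD_NUMBER}

    # Point kubectl at the EKS cluster, then roll the deployment
    # over to the freshly pushed tag.
    aws eks update-kubeconfig --region us-east-1 --name dev-cluster
    kubectl set image deployment/myapp \
      myapp=nexus.example.com:8082/myapp:${BUILD_NUMBER}

This also explains why option 2 isn't really viable: the EKS nodes pull images themselves, so the image has to live in some registry the cluster can reach.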
You may be happy to know that Harness has created a free software version of its CD service, called Harness Continuous Delivery Community Edition, which should work nicely for your development builds.
I am very new to DevOps and could really use some help understanding the concept.
I am trying to build a continuous integration environment using VirtualBox and Vagrant. I've read some examples of how to build such an environment to pull a Maven project from GitHub, build it, and deploy it to a Nexus artifact repository.
I have managed to configure an Ubuntu VM and install Tomcat on it.
What I don't understand is where I should configure the Jenkins jobs that build the project, deploy it to Nexus, and make it run on the Tomcat server: on my local machine or in the virtual machine?
Thanks.
If you are using bridged or host-only networking for the Ubuntu VM, then you can run Jenkins on your host machine. If it is NAT/private networking, run the Jenkins jobs on the guest machine.
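As a rough illustration, assuming a host-only network where the VM is reachable at 192.168.56.10 and Tomcat's manager is enabled for a user with the manager-script role (the IP, credentials, and paths are all placeholders), the Jenkins job's shell steps might look like this:

    # Build and publish the artifact to Nexus (distributionManagement
    # must be configured in the project's pom.xml).
    mvn clean deploy

    # Deploy the WAR to Tomcat on the VM via the manager text API.
    curl -u admin:PASSWORD -T target/myapp.war \
      "http://192.168.56.10:8080/manager/text/deploy?path=/myapp&update=true"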
I'm new to the CI/CD process.
We currently deploy a Spring Boot application through Jenkins into Docker on the same machine.
We have been searching the internet for how to deploy an application to another server; the only lead we have found is the SSH Agent plugin, and as far as I understand, SSH is only for communicating with the other server.
Could we have a complete example of how to deploy to another server, and what other preventive measures should be taken into account?
Kindly guide us.
In your Jenkins pipeline you need to define a stage that publishes the Docker image, and in your infrastructure you need a repository that stores your artifacts and Docker images.
Repositories I know of are Nexus and JFrog Artifactory.
So server1, at the end of the pipeline, will upload the stable Docker image to Nexus.
To run the Docker images on another server (without using an orchestrator) you may use Ansible, or plain SSH, as sketched below.
On the net you can find a lot of resources, for example: https://www.codementor.io/mamytianarakotomalala/how-to-deploy-docker-container-with-ansible-on-debian-8-mavm48kw0
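If you'd rather start with plain SSH before bringing in Ansible, the deploy stage could boil down to something like this sketch (the host, image name, and port are placeholders):

    # Run from server1 (the Jenkins box) with SSH keys already set up.
    ssh deploy@server2 '
      docker pull nexus.example.com:8082/myapp:latest
      docker stop myapp 2>/dev/null || true
      docker rm myapp 2>/dev/null || true
      docker run -d --name myapp -p 8080:8080 \
        nexus.example.com:8082/myapp:latest
    '

As a preventive measure, use key-based authentication for a dedicated deploy user and keep the key in the Jenkins credentials store (consumed via the SSH Agent plugin) rather than hard-coding it in the job script.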
Having built, run, and executed tests against a Docker image on a CI build server (TeamCity 2017), how should we deploy it to further machines?
For example, if we push it to a Docker registry, how would our CI server instruct the target machine to pull and run the image? Were it an application, we would use Octopus for this deployment step, but our Octopus server doesn't support Docker deployments as yet.
Any guidance appreciated.
Michael McD.
I would use Octo to deploy your images onto the target machines. You'd need to use PowerShell scripts to have your machines run the images. Or you can use something like Rancher, which is a Docker Swarm manager. There is no feasible way to have TeamCity deploy your images; the software simply isn't built to do deployments.
The Rancher solution would not be automated, at least not to my knowledge. You would have to trigger upgrades when a new image is pushed to the Docker registry.
I am developing a small website (Ruby/Sinatra) to be used internally where I work. (Simply, it crunches some source data and generates reports.)
I want to deploy it using Docker and have a setup that works in my dev environment, but I'm trying to understand the workflow for "production" deployment (we're using Jenkins).
I've read lots of articles about deployment workflows using Docker, but they all seem to stop at "and then push your image to the Docker registry". What seems to be missing is how to then take that image and actually update the application.
I appreciate that every application is likely to be different, but what is the next step? I'm aware of lots of different frameworks like Chef, Puppet, and Ansible that could be used, but my question really is: how do I integrate them into my CI/CD pipeline? E.g., does a job "push" the changes to the production server, or should a Jenkins slave be running on the production server to execute a job directly on it?
There are several orchestration tools, like Docker Swarm, Kubernetes, and Rancher. In Docker Swarm, for example, you create services and can update their versions in a blue-green deployment manner, even for just one instance (though then it isn't really blue-green :) ). If you just use docker run, you should check your running container, stop and remove it if it's running, and start a new container with the newer image version.
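For the Swarm route, a minimal sketch (the service name, image tags, and port are placeholders, and the manager node needs "docker swarm init" run once beforehand):

    # Create the service once...
    docker service create --name myapp --replicas 2 -p 8080:8080 myapp:1.0

    # ...then each release just rolls the service to the new image;
    # Swarm replaces the containers one at a time.
    docker service update --image myapp:1.1 myapp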
It depends on how your application is configured to run. In my case, I have a call to "docker run" in a systemd script. It's configured to just restart if it ever stops.
So, in my Jenkinsfile, after I push the image to the registry, I do a "docker pull" (my Jenkins agent runs on the same box as the application) and then a "docker stop". That causes the application to exit and then restart, picking up the new version that was just pulled, and now it's running the new version.
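In other words, under that setup the deploy "step" reduces to two commands on the box, and systemd (configured to restart the service whenever it stops) does the rest; the image name here is a placeholder:

    # Fetch the new version, then stop the running container; systemd
    # restarts the service, which now runs the freshly pulled image.
    docker pull registry.example.com/myapp:latest
    docker stop myapp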