As mentioned in the title, I'm thinking about a dockerized Jenkins. I have a running container that runs all tests, but now I want to run some deployment jobs.
The files (.py, .conf, .sh) will be copied into folders which are mounted by another container (the app container). As I've seen, some recommend not using Docker for this.
Now I'm wondering if I should continue to use Jenkins in a container (so I must find a way to run the deployment script from it) or prefer to install it directly on the server?
If you are running dockerized Jenkins for production, it is good practice to have its volume mounted on the Docker host.
I personally do not prefer dockerized Jenkins for production, due to the non-static IP of the Jenkins container and reliability issues with Docker networking. For non-production use, I do dockerize Jenkins.
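For example, persisting Jenkins' home directory on the Docker host looks roughly like this (the host path and image tag are illustrative, not prescriptive):

docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v /srv/jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts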
We're experimenting with containerizing Jenkins in production - the flexibility of being able to easily set up or move instances offsets the learning pain, and that pain is:
1 - Some build jobs are themselves containerized, requiring that you run Docker-in-Docker. This is possible by passing the host's docker.sock into the Jenkins container (more reading: https://getintodevops.com/blog/the-simple-way-to-run-docker-in-docker-for-ci); a sketch follows after this list. It requires that the host and the Jenkins container run identical versions of Docker, but I can live with that.
2 - SSH keys are a bigger issue. SSH agent forwarding in Docker is notorious for its unreliability, and we've always copied keys into containers (ignoring security questions for the context of this question). In an on-the-host Jenkins instance we put our SSH keys in Jenkins' home folder and everything works seamlessly. But dockerized Jenkins has its home folder inside a Docker volume, which is owned by the host system, so the keys end up with the wrong ownership and overly open permissions. We got around this by copying the keys to a folder outside Jenkins' home, chown/chmod'ing those keys to the Jenkins container user, then adding the key path to the container's /etc/ssh/ssh_config.
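For reference, a minimal sketch of both workarounds (the image name, key paths, and jenkins user/group are assumptions, not taken from our actual setup):

# 1 - mount the host's Docker socket into the Jenkins container
#     (the image also needs a docker CLI, and its version should match the host's)
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts

# 2 - keep SSH keys outside JENKINS_HOME, fix ownership/permissions for the
#     container user, and point the container-wide ssh config at them
docker exec -u root jenkins mkdir -p /opt/ssh-keys
docker cp ./deploy_key jenkins:/opt/ssh-keys/deploy_key
docker exec -u root jenkins bash -c '
  chown jenkins:jenkins /opt/ssh-keys/deploy_key &&
  chmod 600 /opt/ssh-keys/deploy_key &&
  echo "IdentityFile /opt/ssh-keys/deploy_key" >> /etc/ssh/ssh_config'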
I have a GitLab runner that runs all kinds of jobs using Docker executors (the host is Ubuntu 20, the guests are various Linux images). The runner runs containers unprivileged.
I am stumped on an apparently simple requirement - I need to deploy some artifacts on a Windows machine that exposes the target path as an authenticated share (\\myserver\myapp). Nothing more than replacing files on the target with the ones on the source - a simple rsync would be fine.
Gitlab Runner does not allow specifying mounts in the CI config (see https://gitlab.com/gitlab-org/gitlab-runner/-/issues/28121), so I tried using mount.cifs, but I discovered that by default Docker does not allow mounting anything inside the container unless running privileged, which I would like to avoid.
I also tried the suggestion to use --cap-add as described in "Mount SMB/CIFS share within a Docker container", but those capabilities do not seem to be enough on my host; there are probably other required capabilities and I have no idea how to identify them. Also, this looks only slightly less ugly than running privileged.
Now, I do not strictly need to mount the remote folder - if there were an SMB-aware rsync command for example I would be more than happy to use that. Unfortunately I cannot install anything on the Windows machine (no SSH, no SCP, no FTP).
Do you have any idea how to achieve this?
Unfortunately I cannot install anything on the Windows machine (no SSH, no SCP, no FTP).
You could simply copy over a small executable (which you can build anywhere else) written in Go that listens on a port, ready to receive a file.
See this implementation for instance: file-receive.go. It listens on port 8080 (which can be changed) and copies the received file content to a local folder.
No installation or setup required: just copy the exe to the target machine and run it.
From your GitLab runner, you can then use curl to send a file to the remote Windows machine on port 8080.
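For example, the sending side could be as simple as the following (the host name and target path are placeholders; the exact HTTP method and URL depend on how the receiver is written):

curl --fail --upload-file ./artifacts/myapp.zip http://windows-host:8080/myapp.zip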
My application is built using 3 Docker services:
backend (Node.js)
frontend (React)
nginx (routing traffic)
Up until now I was manually logging into my own DigitalOcean server, cloning the repository, and launching the services with docker-compose build && docker-compose up.
I want to automate the process from now on.
Given GitLab CI/CD pipelines and the runners, what would be the best approach to automatically deploy the code to the DigitalOcean server?
[WHAT I WAS THINKING OF, might seem very "beginner"]
Idea 1: Once a commit is pushed to master, the GitLab runner builds the services and then copies them over to the DO server via scp. Problem: how do you launch the services? Do you connect to the DO server via ssh from the runner and then run the start script there?
Idea 2: Register a runner on the DO server itself, so that when it pulls the code from GitLab, the code is already on the DO server; it then just has to build and run the services. But this approach is not scalable and seems hacky.
I am looking for some thinking guidelines or a step-by-step approach.
One of the benefits of using Docker in a production-deployment scenario is that you don't separately scp your application code; everything you need is built into the image.
If you're using an automation system like Ansible that can directly run containers on remote hosts, then this is straightforward. Your CI system builds Docker images, tags them with some unique version stamp, and pushes them to a registry (Docker Hub, something provided by your cloud provider, or one you run yourself). It then triggers the automation system to tell it to start containers from the image you built. (In the case of Ansible, it runs over ssh, so this is more or less equivalent to the other ssh-based options; tools like Chef or SaltStack require a dedicated agent on the target system.)
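For instance, the build-and-push half could look roughly like this (the image name reuses the placeholder myname/myimage from the commands below, and the commit SHA serves as the unique version stamp):

TAG=$(git rev-parse --short HEAD)   # unique version stamp for this build
docker build -t myname/myimage:$TAG .
docker push myname/myimage:$TAG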
If you don't have an automation system like that, but you do have ssh and Docker Compose installed on the target system, then you can copy just the docker-compose.yml file to the target host and launch it there:
TAG=...                                             # the unique version stamp from the build step
docker push myname/myimage:$TAG                     # publish the image to your registry
scp docker-compose.yml root@remote:                 # copy only the compose file to the host
ssh root@remote env TAG=$TAG docker-compose up -d   # start or refresh the services remotely
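For reference, a hedged sketch of the docker-compose.yml those commands assume, written here as a shell heredoc (the service name, port mapping, and restart policy are illustrative):

cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  app:
    # ${TAG} is substituted by docker-compose from the environment set via `env TAG=$TAG`
    image: myname/myimage:${TAG}
    ports:
      - "80:8000"
    restart: unless-stopped
EOF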
A further option is to use a dedicated cluster manager like Kubernetes and talk to its API; then the cluster will pull the updated containers itself, and you don't have to ssh anything. At the scale you're discussing this is probably much more heavyweight than you need.
I have a Python Flask application that I want to deploy to my VPS using GitLab CI and Docker.
On my server I want to have a production version and a staging version of my application. Both of them require a MongoDB connection.
My plan is to automatically build the application on GitLab and push it to GitLab's Docker Registry. If I want to deploy the application to staging or production I do a docker pull, docker rm and docker run.
The plan is to store the config (e.g. secret_key) in .production.env (and .staging.env) and pass it to the application using docker run --env-file ./env.list.
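Concretely, such a redeploy step could look roughly like this (the image path, container name, and port mapping are illustrative):

docker pull registry.gitlab.com/mygroup/myapp:latest
docker stop myapp-staging 2>/dev/null || true   # ignore errors if nothing is running yet
docker rm myapp-staging 2>/dev/null || true
docker run -d --name myapp-staging \
  --env-file ./.staging.env \
  -p 8001:5000 \
  registry.gitlab.com/mygroup/myapp:latest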
I already have MongoDB installed on my server, and both environments of the application shall use the same MongoDB instance but different database names (configured in the .env files).
Is that the best practice for deploying my application? Do you have any recommendations? Thanks!
Here's my configuration that's worked reasonably well in different organizations and project sizes:
To build:
The applications are located in a git repository (GitLab in your case). Each application brings its own Dockerfile.
I use Jenkins for building; you can, of course, use any other CI/CD tooling. Jenkins pulls the application's repository, builds the Docker image and publishes it into a private Docker registry (Nexus, in my case).
To deploy:
1. I have one central, application-independent repository that has a docker-compose file (or possibly multiple files that extend one central file for different environments). This file contains all service definitions and references the Docker images in my Nexus registry.
2. If I am using secrets, I store them in a HashiCorp Vault instance. Jenkins pulls them and writes them into an .env file. The docker-compose file can reference the individual environment variables.
3. Jenkins pulls the docker-compose repo and, in my case via scp, uploads the docker-compose file(s) and the .env file to my server(s).
4. It then triggers a docker-compose up (for smaller applications) or re-deploys a Docker stack into a swarm (for larger applications); a condensed sketch of these steps follows below.
5. Jenkins removes everything from the target server(s).
If you like it, you can do step 3 via Docker Machine. I feel, however, that its benefits don't warrant its use in my cases.
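A condensed, hypothetical sketch of deploy steps 2-5 above (the Vault path, host, and directory names are assumptions, and the Vault CLI must be available on the Jenkins agent):

printf 'SECRET_KEY=%s\n' "$(vault kv get -field=secret_key secret/myapp/production)" > .env
scp docker-compose.yml .env deploy@myserver:/opt/myapp/
ssh deploy@myserver 'cd /opt/myapp && docker-compose pull && docker-compose up -d'
ssh deploy@myserver 'rm -f /opt/myapp/.env'   # step 5: clean up the secrets afterwards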
One thing I can recommend, as I've done it in production several times, is to deploy Docker Swarm with TLS-encrypted endpoints. This link talks about how to secure the swarm via certificates. It's a bit of work, but what it allows you to do is define services for your applications.
The services, once online, can have multiple replicas, and whenever you update a service (i.e. deploy a new image) the swarm takes care of making sure one replica is online at all times:
docker service update <service name> --image <new image name>
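For context, creating such a replicated service in the first place might look like this (the service name, port mapping, and image are illustrative):

# start-first brings a new replica up before the old one is stopped during updates
docker service create \
  --name myapp \
  --replicas 2 \
  --publish 80:5000 \
  --update-order start-first \
  registry.example.com/myapp:1.0.0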
Some VPS providers actually offer Kubernetes as a service (like DigitalOcean). If yours does, that is preferable. GitLab actually has an Auto DevOps feature and can remotely manage your Kubernetes cluster, but you could also deploy manually with kubectl.
I run RStudio Server mostly from an EC2 instance. However, I'd also like to run it from a cluster at work. They tell me that I can set up Docker with RStudio and make it run. Now, I'd also like the RStudio instances on both EC2 and the work cluster to have the same packages and the same versions available. Any idea how I can do this? Can I have both versions point to a Dropbox folder? In that case, how can I mount a Dropbox folder?
You should set up a Docker repository on Docker Hub or on AWS EC2 Container Service (ECS). ECS is a managed service that allows you to easily deploy Docker containers onto a cluster of one or more EC2 instances that are running the ECS agent (an AWS program that lets that cluster work with ECS). The Dockerfile should install all packages that you need at build time of the image. I suggest referencing the AWS ECS documentation, which includes a walkthrough to get you going very quickly (assuming you have an idea of how Docker works): https://aws.amazon.com/documentation/ecs/
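A hedged sketch of such an image, with the R packages baked in at build time (the base image tag, package list, and image name are illustrative):

cat > Dockerfile <<'EOF'
FROM rocker/rstudio:4.3.1
# install every package the analysts need at image build time,
# so EC2 and the work cluster run identical library versions
RUN R -e "install.packages(c('dplyr', 'ggplot2'), repos = 'https://cloud.r-project.org')"
EOF
docker build -t myorg/rstudio-analytics:1.0 .
docker push myorg/rstudio-analytics:1.0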
You should then always run from that Docker image, whether you are running on a local or remote machine. One key advantage of Docker is that it keeps your application's environment the same (assuming you use the same build of the image) regardless of the host environment.
I am not sure why you would not always run on ECS (we have multiple analysts using RStudio, and ECS lets us provision CPU/memory resources for each one, as well as autoscale as needed). You could install Docker on EC2 and manage it that way, but it is probably easier to just install the ECS agent (or use the ECS-optimized EC2 AMI, which has it preinstalled; the docs above walk through configuring it) and use ECS to launch RStudio services.
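If you go the ECS route, rolling an existing service onto a freshly pushed image can be as simple as the following (the cluster and service names are assumptions, and the task definition is assumed to reference a mutable tag such as latest):

aws ecs update-service \
  --cluster analytics-cluster \
  --service rstudio \
  --force-new-deployment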
I saw a couple of tutorials on continuous deployment (on docker.com, on codecentric.de, and on devopscube.com).
Overall I saw two approaches:
1. Set up two types of Jenkins servers (master and slave): the master runs in a Docker container and the slave on the host machine.
2. Jenkins server in a Docker container. The tutorials set up a link to the host, and using that link Jenkins can create or recreate Docker images.
In the first approach, I do not understand why they set up an additional Jenkins server residing inside a Docker container. Isn't it enough just to have the Jenkins server on the host machine alongside the Docker containers?
The second approach seems a bit insecure to me because a process from the container is accessing the host OS. Does it have any benefits?
Thanks for any useful info.