GitLab CI/CD: deploy Docker to AWS EC2

We are developing a Spring Boot application which is currently deployed to AWS manually. First we build a Docker image from a Dockerfile, then connect to the AWS EC2 instance from a laptop, pull the image, and start it with docker run. We want to automate this process with GitLab CI/CD.
We created a .gitlab-ci.yml: the build stage builds the Spring Boot application and generates the jar file, and the package stage then builds a Docker image using the Dockerfile from the source code and pushes it to the registry.
Now I don't know how to finish the deploy stage. Most tutorials only cover deploying to Google Cloud. I currently use the steps below to deploy the Docker image:
ssh -i "spring-boot.pem" ubuntu@ec2-IP_address.compute-2.amazonaws.com
sudo docker pull username/spring-boot:v1
sudo docker run -d -p 80:8080 username/spring-boot:v1
Can anybody help me add the above steps to the deploy stage? Do I need to add the .pem file to the source to connect to the EC2 instance?
Or is there an easier way to deploy Docker to EC2 from GitLab CI/CD?

First thing: if there is ssh, it means you must provide a key or password, unless you allow access to everyone.
Do I need to add pem file into source to connect to ec2 instance?
Yes, you should provide the key for ssh.
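As a sketch (not tested against your setup), you can store the .pem contents in a GitLab CI/CD variable instead of committing the file to the repository, and let the deploy stage ssh in and run the same commands you run by hand. The variable name SSH_PRIVATE_KEY, the container name, and the host below are assumptions:

```yaml
deploy:
  stage: deploy
  image: alpine:latest
  before_script:
    - apk add --no-cache openssh-client
    - mkdir -p ~/.ssh && chmod 700 ~/.ssh
    # SSH_PRIVATE_KEY is a "File"-type CI/CD variable holding the .pem contents
    - cp "$SSH_PRIVATE_KEY" ~/.ssh/id_rsa && chmod 600 ~/.ssh/id_rsa
    - ssh-keyscan ec2-IP_address.compute-2.amazonaws.com >> ~/.ssh/known_hosts
  script:
    - ssh -i ~/.ssh/id_rsa ubuntu@ec2-IP_address.compute-2.amazonaws.com
        "sudo docker pull username/spring-boot:v1
         && (sudo docker rm -f spring-boot || true)
         && sudo docker run -d --name spring-boot -p 80:8080 username/spring-boot:v1"
  only:
    - master
```

Mark the variable as protected/masked in the project's CI/CD settings so the key never appears in job logs or in the repository.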
Or is there any easy way to deploy Docker in ec2 from gitlab ci/cd?
Yes, there is an easier way to do that, but it requires ECS, which is designed specifically for Docker containers; you can manage your deployment through the API instead of sshing into the EC2 server.
ECS is designed for running Docker containers. Some big advantages of ECS over plain EC2: you do not need to worry about container management, scalability, or availability, because ECS takes care of those. AWS also provides ECR, which is like a Docker registry, but private and in-network.
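As an illustration of deploying through the API instead of ssh (the cluster and service names below are made up, and AWS credentials are assumed to be set as CI/CD variables), the deploy stage can simply ask ECS to roll the service onto the newly pushed image:

```yaml
deploy:
  stage: deploy
  image:
    name: amazon/aws-cli
    entrypoint: [""]   # override the image's aws entrypoint so scripts run
  script:
    # Assumes AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION
    # are provided as CI/CD variables.
    - aws ecs update-service
        --cluster spring-boot-cluster
        --service spring-boot-service
        --force-new-deployment
```

ECS then pulls the image and replaces the running tasks itself; no key distribution or ssh access is needed from CI.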

Related

Proper way to deploy docker services via Gitlab CI/CD to an own server

My application is built from 3 Docker services:
frontend (React)
backend (Node.js)
nginx (routing traffic)
Up until now I was manually logging into our own DigitalOcean server, cloning the repository, and launching the services with docker-compose build && docker-compose up.
I want to automate the process from now on.
Given Gitlab CI/CD Pipelines and the runners, what would be the best approach to automatically deploy the code to Digital Ocean server?
[WHAT I WAS THINKING OF, might seem very "beginner"]
Idea 1: once a commit is pushed to master, the GitLab runner builds the services and copies them over to the DO server via scp. Problem: how do you launch the services? Do you connect to the DO server via ssh from the runner and then run the start script there?
Idea 2: register a runner on the DO server itself, so that when it pulls the data from GitLab the code is already on the DO server. It then just has to build and run the services. But this approach is not scalable and seems hacky.
I am looking for some thinking guidelines or a step-by-step approach.
One of the benefits of using Docker in a production-deployment scenario is that you don't separately scp your application code; everything you need is built into the image.
If you're using an automation system like Ansible that can directly run containers on remote hosts then this is straightforward. Your CI system builds Docker images, tags them with some unique version stamp, and pushes them to a repository (Docker Hub, something provided by your cloud provider, one you run yourself). It then triggers the automation system to tell it to start containers with the image you built. (In the case of Ansible, it runs over ssh, so this is more or less equivalent to the other ssh-based options; tools like Chef or Salt Stack require a dedicated agent on the target system.)
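For illustration, with Ansible the "start containers with the image you built" step might look something like this (the module is from the community.docker collection; the host group and names are made up):

```yaml
# deploy.yml - run with: ansible-playbook deploy.yml -e tag=<version-stamp>
- hosts: app_servers
  become: true
  tasks:
    - name: Run the application container at the requested tag
      community.docker.docker_container:
        name: myapp
        image: "myname/myimage:{{ tag }}"
        state: started
        recreate: true          # replace the running container if the image changed
        published_ports:
          - "80:8080"
```

The CI job builds and pushes the image, then invokes this playbook with the new tag.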
If you don't have an automation system like that but you do have ssh and Docker Compose installed on the target system, then you can copy only the docker-compose.yml file to the target host, and then launch it.
TAG=...
docker push myname/myimage:$TAG
scp docker-compose.yml root@remote:
ssh root@remote env TAG=$TAG docker-compose up -d
A further option is to use a dedicated cluster manager like Kubernetes, and talk to its API; then the cluster will pull the updated containers itself, and you don't have to ssh anything. At the scale you're discussing this is probably much heavier weight than you need.
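For reference, the cluster-manager route usually boils down to bumping the image tag in a Deployment manifest; a minimal, hypothetical example:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myname/myimage:v1   # CI replaces this tag on each release
          ports:
            - containerPort: 8080
```

A command like kubectl set image deployment/myapp myapp=myname/myimage:$TAG then performs a rolling update without any ssh.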

What is the best practice for deploying my application on my VPS using Docker?

I do have a (Python Flask) application that I want to deploy using GitLab CI and Docker to my VPS.
On my server I want to have a production version and a staging version of my application. Both of them require a MongoDB connection.
My plan is to automatically build the application on GitLab and push it to GitLab's Docker Registry. If I want to deploy the application to staging or production I do a docker pull, docker rm and docker run.
The plan is to store the config (e.g. secret_key) in .production.env (and .staging.env) and pass it to the application using docker run --env-file ./env.list.
I already have MongoDB installed on my server and both environments of the applications shall use the same MongoDB instance, but a different database name (configured in .env).
Is that the best practice for deploying my application? Do you have any recommendations? Thanks!
Here's my configuration that's worked reasonably well in different organizations and project sizes:
To build:
The applications are located in a git repository (GitLab in your case). Each application brings its own Dockerfile.
I use Jenkins for building, you can, of course, use any other CD tooling. Jenkins pulls the application's repository, builds the docker image and publishes it into a private Docker repository (Nexus, in my case).
To deploy:
1. I have one central, application-independent repository that has a docker-compose file (or possibly multiple files that extend one central file for different environments). This file contains all service definitions and references the docker images in my Nexus repo.
2. If I am using secrets, I store them in a HashiCorp Vault instance. Jenkins pulls them and writes them into an .env file. The docker-compose file can reference the individual environment variables.
3. Jenkins pulls the docker-compose repo and, in my case via scp, uploads the docker-compose file(s) and the .env file to my server(s).
4. It then triggers a docker-compose up (for smaller applications) or re-deploys a docker stack into a swarm (for larger applications).
5. Jenkins removes everything from the target server(s).
If you like, you can do step 3 via Docker Machine. I feel, however, that its benefits don't warrant its use in my cases.
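As a small illustration, the docker-compose file can reference the values written into the .env file like this (service and variable names are made up):

```yaml
# docker-compose.yml fragment; TAG and SECRET_KEY come from the .env file
# that sits next to it on the target server.
services:
  web:
    image: nexus.example.com/myapp:${TAG}
    environment:
      SECRET_KEY: ${SECRET_KEY}
    ports:
      - "80:8080"
```

docker-compose reads the .env file from its working directory automatically, so nothing sensitive has to be baked into the compose file itself.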
One thing I can recommend, as I've done it in production several times, is to deploy Docker Swarm with TLS-encrypted endpoints. There is documentation on securing the swarm via certificates. It's a bit of work, but it will allow you to define services for your applications.
The services, once online can have multiple replicas and whenever you update a service (IE deploy a new image) the swarm will take care of making sure one is online at all times.
docker service update <service name> --image <new image name>
Some VPS providers actually offer Kubernetes as a service (like DigitalOcean). If yours does, that's preferable. GitLab actually has an Auto DevOps feature and can remotely manage your Kubernetes cluster, but you could also deploy manually with kubectl.

Openshift privileged container access services from openshift namespace

I'm trying to run my custom Jenkins on OpenShift, with dockerized pipelines using privileged containers and an SCC so that Jenkins can run docker. So far, I have managed to run the job and it creates a new Docker container successfully. But since the new container is created by Jenkins, it doesn't have access to the Nexus service in my project. How can I fix this? I was thinking the solution would be for Jenkins to run docker in the same namespace as Jenkins itself.
I'm assuming that you want to run your container in Kubernetes.
On your Deployment I would advise using either a ConfigMap or if you want to keep in encrypted in the cluster you can use a Secret to store your Nexus credentials.
Then you can mount your ConfigMap or Secret under ~/.ivy2/.credentials for example.
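A hypothetical sketch of such a Secret (the name, host, and credential values are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: nexus-credentials
stringData:
  # Key becomes the file name when mounted into the pod
  .credentials: |
    realm=Sonatype Nexus Repository Manager
    host=nexus.example.com
    user=ci-user
    password=changeme
```

In the agent's pod spec, reference it with a secret volume and a volumeMount whose mountPath points at the .ivy2/.credentials path in the build user's home directory (using subPath: .credentials so only that file is mounted, not the whole directory).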

Deploying multiple docker containers to AWS ECS

I have created Docker containers using docker-compose. In my local environment, I am able to bring up my application without any issues.
Now I want to deploy all my Docker containers to AWS EC2 (ECS). After going over the ECS documentation, I found out that we can use the same docker-compose to deploy to ECS using the ECS CLI. But the ECS CLI is not available for Windows instances as of now. So I am not sure how to use my docker-compose to build all my images with a single command and deploy them to ECS from a Windows instance.
It seems like I have to deploy my Docker containers to ECS one by one, with steps like these:
From the ECS Control Panel, create a Docker Image Repository.
Connect your local Docker client with your Docker credentials in ECS: copy and paste the Docker login command from the previous step. This will log you in for 24 hours.
Tag your image locally, ready to push to your ECS repository (use the repo URI from the first step).
Push the image to your ECS repository.
Create tasks with the web UI, or manually as a JSON file.
Create a cluster, using the web UI.
Run your task, specifying the EC2 cluster to run on.
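The login/tag/push portion of those steps, scripted (the region, account ID, and repository name are placeholders; this assumes a configured AWS CLI):

```shell
# Authenticate the local Docker client against ECR
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Tag the locally built image with the repository URI, then push it
docker tag myapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
```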
Is there any other way of running the docker containers in ECS ?
Docker-Compose is the wrong tool here if you're using ECS.
You can configure multiple containers within a task definition, as seen here in the CloudFormation docs:
ContainerDefinitions is a property of the AWS::ECS::TaskDefinition resource that describes the configuration of an Amazon EC2 Container Service (Amazon ECS) container
Type: "AWS::ECS::TaskDefinition"
Properties:
  Volumes:
    - Volume Definition
  Family: String
  NetworkMode: String
  PlacementConstraints:
    - TaskDefinitionPlacementConstraint
  TaskRoleArn: String
  ContainerDefinitions:
    - Container Definition
Just list multiple containers there and all will be launched together on the same machine.
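For instance, a ContainerDefinitions fragment with two containers might look like this (the names, images, and memory sizes are illustrative):

```yaml
ContainerDefinitions:
  - Name: app
    Image: username/spring-boot:v1
    Memory: 512
    Essential: true
    PortMappings:
      - ContainerPort: 8080
  - Name: nginx
    Image: nginx:latest
    Memory: 256
    Essential: true
    PortMappings:
      - ContainerPort: 80
    Links:
      - app
```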
I ran into the same situation as you. One way to resolve it was to use Terraform to deploy our containers as task definitions on AWS ECS.
So we use the docker-compose.yml to run locally, and the Terraform configuration is a kind of mirror of our docker-compose on AWS.
Another option is Kubernetes, which lets you translate from docker-compose to Kubernetes resources.

Docker, EC2 and Rstudio

I run RStudio Server mostly from an EC2 instance. However, I'd also like to run it from a cluster at work. They tell me that I can set up Docker with RStudio and make it run. Now, I'd also like the RStudio instances on both EC2 and the work cluster to have the same packages and the same versions available. Any idea how I can do this? Can I have both versions point to a Dropbox folder? In that case, how can I mount a Dropbox folder?
You should set up a Docker repository on Docker Hub or the AWS EC2 Container Service (ECS). ECS is a managed service that allows you to easily deploy Docker containers onto a cluster of one or more EC2 instances running the ECS agent (an AWS program that helps the cluster work with ECS). The Dockerfile should install all the packages that you need at build time of the image. I suggest referencing the AWS ECS documentation, which includes a walkthrough to get you going very quickly (assuming you have an idea of how Docker works): https://aws.amazon.com/documentation/ecs/
You should then always run from that docker image, whether you are running on a local or remote machine. One key advantage of docker is that it keeps your application's environments the same (assuming you use the same build of the image) regardless of the host environment.
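For example, a Dockerfile along these lines (the base image tag and package list are just illustrations) bakes the packages into the image so both hosts see identical versions:

```dockerfile
# Same RStudio environment on EC2 and on the work cluster.
FROM rocker/rstudio:4.3.1

# Install the required R packages at image build time, not at runtime,
# so every host running this image gets the same package versions.
RUN R -e "install.packages(c('dplyr', 'ggplot2'), repos = 'https://cloud.r-project.org')"
```

Rebuild and push the image whenever the package set changes, then pull the same tag on both hosts.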
I am not sure why you would not always run on ECS (we have multiple analysts using RStudio, and ECS lets us provision CPU/memory resources to each one, as well as autoscale as needed). You could install Docker on EC2 and manage it that way, but it's probably easier to just install the ECS agent (or use the ECS-optimized EC2 AMI, which has it preinstalled; the docs above walk through configuring it) and use ECS to launch RStudio services.