I have deployed a pool of self-hosted GitHub runners as pods in my Kubernetes cluster. Some of our pipelines contain jobs which run container actions. Is it possible to run those jobs in this type of runner?
Docker-in-Docker is configured in the deployment, and I can build Docker images and push them to the container registry.
I note that the GitHub docs state:
If you want to run workflows that use Docker container actions or service containers, you must use a Linux machine and Docker must be installed.
I've struggled to find any definitive answers to this online.
I have an Azure DevOps pipeline building a Dockerfile on AKS. As AKS is deprecating Docker with the latest release, kindly suggest a best practice for building a Dockerfile without Docker on the AKS cluster.
I am exploring Kaniko and Buildah to build without Docker.
Nothing has changed. You can still use docker build and docker push on your developer or CI system to build and push the Docker image to a repository. The only difference is that using Docker proper as the container backend within your Kubernetes cluster isn't a supported option any more, but this is a low-level administrator-level decision that your application doesn't know or care about.
Unless you were somehow building using the host docker socket within your Kubernetes cluster, this change will not affect you. And if you were mounting the docker socket from the host in a kubernetes cluster, I'd consider that a security concern that you want to fix.
Docker Desktop runs a docker engine as a container on top of containerd, allowing developers to build and run containers in that environment. The same can be done with DinD build patterns that run the docker engine inside a container; the difference is that the underlying container management tooling is containerd instead of a full docker engine, but the containerized docker engine is indifferent to that.
As an alternative to building within the full docker engine, I'd recommend looking at BuildKit, which is the default build tool in Docker as of 20.10. It uses containerd, and the project ships a selection of manifests to run builds directly in Kubernetes as a standalone builder.
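If you want to experiment with that, a minimal sketch using buildx's kubernetes driver could look like the following (the builder name and the registry/image names are placeholders, and the job running this needs kubeconfig access to the cluster):
# create a BuildKit builder that runs as pods inside the cluster (builder name is a placeholder)
docker buildx create --name k8s-builder --driver kubernetes --use
# build from the current directory and push straight to the registry; no docker daemon on the nodes is involved
docker buildx build -t registry.example.com/myapp:latest --push .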
My application is built using 3 Docker services:
backend (Node.js)
frontend (React)
nginx (routing traffic)
Up until now I was manually logging into my own Digital Ocean server, cloning the repository and launching the services with docker-compose build && docker-compose up.
I want to automate the process from now on.
Given Gitlab CI/CD Pipelines and the runners, what would be the best approach to automatically deploy the code to Digital Ocean server?
[WHAT I WAS THINKING OF, might seem very "beginner"]
Idea 1: Once a commit is pushed to master -> the GitLab runner builds the services and then copies them over to the DO server via scp. Problem: how do you launch the services? Do you connect to the DO server via ssh from the runner and then run the start script there?
Idea 2: Register a runner on the DO server itself, so that when it pulls the code from GitLab the code is already on the DO server. It then just has to build and run the services. But this approach is not scalable and seems hacky.
I am looking for some thinking guidelines or a step-by-step approach.
One of the benefits of using Docker in a production-deployment scenario is that you don't separately scp your application code; everything you need is built into the image.
If you're using an automation system like Ansible that can directly run containers on remote hosts then this is straightforward. Your CI system builds Docker images, tags them with some unique version stamp, and pushes them to a repository (Docker Hub, something provided by your cloud provider, one you run yourself). It then triggers the automation system to tell it to start containers with the image you built. (In the case of Ansible, it runs over ssh, so this is more or less equivalent to the other ssh-based options; tools like Chef or Salt Stack require a dedicated agent on the target system.)
If you don't have an automation system like that but you do have ssh and Docker Compose installed on the target system, then you can copy only the docker-compose.yml file to the target host, and then launch it.
# push the image built by CI with a unique tag
TAG=...
docker push myname/myimage:$TAG
# copy only the Compose file to the host, then start the stack there with the new tag
scp docker-compose.yml root@remote:
ssh root@remote env TAG=$TAG docker-compose up -d
A further option is to use a dedicated cluster manager like Kubernetes, and talk to its API; then the cluster will pull the updated containers itself, and you don't have to ssh anything. At the scale you're discussing this is probably much heavier weight than you need.
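For reference, a hedged sketch of that Kubernetes path, assuming a Deployment and container both named myapp already exist in the cluster (both names are assumptions):
# point the existing Deployment at the freshly pushed tag; Kubernetes pulls it and rolls the pods
kubectl set image deployment/myapp myapp=myname/myimage:$TAG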
We are developing a Spring Boot application which is currently deployed to AWS manually. For that, we first build a Docker image through the Dockerfile, then connect to the AWS EC2 instance from a laptop, pull the image, and use docker run to start it. But we want to automate the process using GitLab CI/CD.
We created .gitlab-ci.yml; the build stage builds the Spring Boot application and generates a jar file. The package stage then builds the Docker image using the Dockerfile from the source code and pushes the image to the registry.
Now I don't know how to finish the deploy stage. Most of the tutorials only explain deploying to Google Cloud. I use the below steps to deploy the docker image...
ssh -i "spring-boot.pem" ubuntu@ec2-IP_address.compute-2.amazonaws.com
sudo docker pull username/spring-boot:v1
sudo docker run -d -p 80:8080 username/spring-boot:v1
Can anybody help me add the above steps to the deploy stage? Do I need to add the pem file to the source repository to connect to the EC2 instance?
Or is there any easy way to deploy Docker to EC2 from GitLab CI/CD?
First thing: if there is ssh, it means you must provide the key or password, unless you allow access to everyone.
Do I need to add pem file into source to connect to ec2 instance?
Yes, you should provide the key for ssh.
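One common pattern (a sketch, not a definitive setup): store the private key in a masked GitLab CI/CD variable, for example SSH_PRIVATE_KEY (the variable name is an assumption), and have the deploy job's script run something like:
# load the key from a masked CI/CD variable instead of committing the .pem file (SSH_PRIVATE_KEY is an assumed variable name)
eval "$(ssh-agent -s)"
echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
# run the same pull/run steps from the question on the EC2 host
ssh -o StrictHostKeyChecking=no ubuntu@ec2-IP_address.compute-2.amazonaws.com \
  "sudo docker pull username/spring-boot:v1 && sudo docker run -d -p 80:8080 username/spring-boot:v1"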
Or is there any easy way to deploy Docker in ec2 from gitlab ci/cd?
Yes, there is an easier way to do that, but for it you need to use ECS, which is specially designed for Docker containers, and you can manage your deployment through the API instead of doing ssh to the EC2 server.
ECS is designed for running Docker containers. Some of the big advantages of ECS over plain EC2 are that you do not need to worry about container management, scalability, and availability; ECS takes care of it. It also provides ECR, which is like the Docker registry but private and in-network.
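A hedged sketch of what that deploy step can look like with the AWS CLI, assuming an ECS cluster and service already exist whose task definition references the tag you push (the cluster and service names below are placeholders):
# trigger a rolling redeploy that re-pulls the image referenced by the service's task definition
aws ecs update-service --cluster spring-boot-cluster --service spring-boot-service --force-new-deployment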
With the Kubernetes orchestrator now available in the stable version of Docker Desktop for Win/Mac, I've been playing around with running an existing compose stack on Kubernetes locally.
This works fine, e.g., docker stack deploy -c .\docker-compose.yml myapp.
Now I want to go to the next step of running this same application in a production environment using the likes of Amazon EKS or Azure AKS. These services expect proper Kubernetes YAML files.
My question(s) is what's the best way to get these files, or more specifically:
Presumably, docker stack is performing some conversion from Compose YAML to Kubernetes YAML 'under the hood'. Is there documentation/source code links as to what is going on here and can that converted YAML be exported?
Or should I just be using Kompose?
It seems that running the above docker stack deploy command against a remote context (e.g., AKS/EKS) is not possible and that one must do a kubectl deploy. Can anyone confirm?
docker stack deploy with a Compose file to Kube only works on Docker's Kubernetes distributions - Docker Desktop and Docker Enterprise.
With the recent federation announcement you'll be able to manage AKS and EKS with Docker Enterprise, but using them directly means you'll have to use Kubernetes manifest files and kubectl.
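If you do want to bootstrap those manifest files from your Compose file, one option (a sketch; the generated output is approximate and usually needs hand-editing) is Kompose, which the question already mentions:
# convert the Compose file into Kubernetes manifests in a separate directory, then apply them
kompose convert -f docker-compose.yml -o k8s/
kubectl apply -f k8s/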