Kubernetes on Docker for Windows -> AKS/EKS

With the Kubernetes orchestrator now available in the stable version of Docker Desktop for Win/Mac, I've been playing around with running an existing compose stack on Kubernetes locally.
This works fine, e.g., docker stack deploy -c .\docker-compose.yml myapp.
Now I want to go to the next step of running this same application in a production environment using the likes of Amazon EKS or Azure AKS. These services expect proper Kubernetes YAML files.
My question(s) is what's the best way to get these files, or more specifically:
Presumably, docker stack is performing some conversion from Compose YAML to Kubernetes YAML 'under the hood'. Are there documentation or source-code links explaining what is going on here, and can that converted YAML be exported?
Or should I just be using Kompose?
It seems that running the above docker stack deploy command against a remote context (e.g., AKS/EKS) is not possible, and that one must deploy with kubectl instead. Can anyone confirm?

docker stack deploy with a Compose file to Kube only works on Docker's Kubernetes distributions - Docker Desktop and Docker Enterprise.
With the recent federation announcement you'll be able to manage AKS and EKS with Docker Enterprise, but using them directly means you'll have to use Kubernetes manifest files and kubectl.
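If you do go the Kompose route the question mentions, a minimal sketch, assuming kompose is installed and your kubeconfig already contains the AKS/EKS context (the context name below is a placeholder):

mkdir -p k8s
# convert the Compose file into Kubernetes manifests, one YAML file per resource
kompose convert -f docker-compose.yml -o k8s/
# review the generated Deployments/Services, then apply them against the remote cluster
kubectl --context my-aks-cluster apply -f k8s/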

Related

Build Dockerfile without docker on Kubernetes (AKS 1.19.0) running with containerd

I have an Azure DevOps pipeline that builds a Dockerfile on AKS. As AKS is deprecating Docker as the container runtime with the latest release, kindly suggest a best practice for building a Dockerfile without Docker on the AKS cluster.
I am exploring Kaniko and Buildah to build without Docker.
Nothing has changed. You can still use docker build and docker push on your developer or CI system to build and push the Docker image to a repository. The only difference is that using Docker proper as the container backend within your Kubernetes cluster isn't a supported option any more, but this is a low-level administrator-level decision that your application doesn't know or care about.
Unless you were somehow building using the host docker socket within your Kubernetes cluster, this change will not affect you. And if you were mounting the docker socket from the host in a kubernetes cluster, I'd consider that a security concern that you want to fix.
Docker Desktop runs a docker engine as a container on top of containerd, allowing developers to build and run containers in that environment. Something similar can be done with DinD build patterns that run the docker engine inside a container: the underlying container management tooling is containerd instead of a full docker engine, but the containerized docker engine is indifferent to that.
As an alternative to building within the full docker engine, I'd recommend looking at BuildKit, which is the default build tool in docker as of 20.10. It uses containerd, and the project ships a selection of manifests to run builds directly in Kubernetes as a standalone builder.
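As a rough sketch of that route (assuming a recent Docker CLI with buildx available in the pipeline, a kubeconfig pointing at the AKS cluster, and a registry you can push to; the registry and image names below are placeholders):

# create a buildx builder whose BuildKit pods run inside the Kubernetes cluster
docker buildx create --name k8s-builder --driver kubernetes --driver-opt replicas=1 --use
# build from the pipeline workspace and push straight to the registry,
# with no Docker daemon needed on the AKS nodes
docker buildx build -t myregistry.azurecr.io/myapp:latest --push .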

How to handle "docker-in-docker" problem when using Jenkins inside K8S

I'm new to Kubernetes, and this is a somewhat complex question, so I need some help.
Background
Using Jenkins in GKE (Google Kubernetes Engine)
Want to use the jenkins-docker plugin to provide a specific test environment for each type of test
Don't want to bundle the docker binary in the Jenkins image (because it is large)
Don't want docker-in-docker
More specifically, I don't want the Jenkins Pod to be a new Docker server
What I want
Each test environment can create a new pod in GKE Cluster, rather than creating containers inside the Jenkins Pod
P.S.
I have just read some articles, but half of them explain how to use K8s to scale Jenkins out (using jenkins-slave + the jenkins-kubernetes plugin), and the other half explain how to use the docker plugin in a dockerized Jenkins container on a bare-metal machine (where you can use /var/run/docker.sock to communicate between the host and the docker container). I cannot find how to use the docker plugin (to provide a specific environment) in a dockerized Jenkins container inside K8s.

Proper way to deploy docker services via Gitlab CI/CD to an own server

My application is built using 3 Docker services:
backend (Node.js)
frontend (React)
nginx (routing traffic)
Up until now I was manually logging into my own DigitalOcean server, cloning the repository, and launching the services with docker-compose build && docker-compose up.
I want to automate the process from now on.
Given GitLab CI/CD pipelines and the runners, what would be the best approach to automatically deploy the code to the DigitalOcean server?
[WHAT I WAS THINKING OF, might seem very "beginner"]
Idea 1: Once a commit is pushed to master, the GitLab runner builds the services and then copies them over to the DO server via scp. Problem: how do you launch the services? Do you connect to the DO server via ssh from the runner and then run the start script there?
Idea 2: Register a runner on the DO server, so that when it pulls the code from GitLab, the code is already on the DO server itself; it then only has to build and run the services. But this approach is not scalable and seems hacky.
I am looking for some thinking guidelines or a step-by-step approach.
One of the benefits of using Docker in a production-deployment scenario is that you don't separately scp your application code; everything you need is built into the image.
If you're using an automation system like Ansible that can directly run containers on remote hosts then this is straightforward. Your CI system builds Docker images, tags them with some unique version stamp, and pushes them to a repository (Docker Hub, something provided by your cloud provider, one you run yourself). It then triggers the automation system to tell it to start containers with the image you built. (In the case of Ansible, it runs over ssh, so this is more or less equivalent to the other ssh-based options; tools like Chef or Salt Stack require a dedicated agent on the target system.)
If you don't have an automation system like that but you do have ssh and Docker Compose installed on the target system, then you can copy only the docker-compose.yml file to the target host, and then launch it.
# assumes docker-compose.yml references the image as myname/myimage:${TAG}
TAG=...
docker build -t myname/myimage:$TAG .
docker push myname/myimage:$TAG
scp docker-compose.yml root@remote:
ssh root@remote env TAG=$TAG docker-compose up -d
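On the GitLab side, a minimal .gitlab-ci.yml sketch of this build-push-deploy flow might look like the following. It is only an illustration under some assumptions: SSH_PRIVATE_KEY and DO_HOST are CI/CD variables you would define yourself, the registry variables are GitLab's built-ins, the single docker build stands in for building each of your service images, and the DO host is already logged in to the registry so it can pull them.

stages: [build, deploy]

build:
  stage: build
  image: docker:24
  services: [docker:24-dind]
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""   # simplification for this sketch; see GitLab's docker-in-docker docs
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

deploy:
  stage: deploy
  image: alpine:3.19
  only: [master]
  script:
    - apk add --no-cache openssh-client
    - echo "$SSH_PRIVATE_KEY" > key && chmod 600 key
    - scp -i key -o StrictHostKeyChecking=no docker-compose.yml root@"$DO_HOST":
    - ssh -i key -o StrictHostKeyChecking=no root@"$DO_HOST" "env TAG=$CI_COMMIT_SHORT_SHA docker-compose up -d"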
A further option is to use a dedicated cluster manager like Kubernetes, and talk to its API; then the cluster will pull the updated containers itself, and you don't have to ssh anything. At the scale you're discussing this is probably much heavier weight than you need.

Deploying multiple docker containers to AWS ECS

I have created Docker containers using docker-compose. In my local environment, I am able to bring up my application without any issues.
Now I want to deploy all my Docker containers to AWS ECS (on EC2). After going over the ECS documentation, I found out that we can use the same docker-compose to deploy to ECS using the ECS CLI. But the ECS CLI is not available for Windows instances as of now. So now I am not sure how to use my docker-compose to build all my images with a single command and deploy them to ECS from a Windows instance.
It seems like I have to deploy my Docker containers to ECS one by one, as in the steps below (the login, tag, and push commands are sketched after the list):
From the ECS control panel, create a Docker image repository (ECR).
Connect your local Docker client with your Docker credentials in ECS:
Copy and paste the Docker login command from the previous step. This will log you in for 12 hours.
Tag your image locally, ready to push to your ECS repository - use the repo URI from the first step.
Push the image to your ECS repository.
Create tasks with the web UI, or manually as a JSON file.
Create a cluster, using the web UI.
Run your task, specifying the EC2 cluster to run on.
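For reference, a rough command-line sketch of the login, tag, and push steps with the current AWS CLI v2 (the account ID, region, and repository name below are placeholders):

aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag myapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest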
Is there any other way of running the Docker containers in ECS?
docker-compose is the wrong tool here when you're using ECS.
You can configure multiple containers within a task definition, as seen here in the CloudFormation docs:
ContainerDefinitions is a property of the AWS::ECS::TaskDefinition resource that describes the configuration of an Amazon EC2 Container Service (Amazon ECS) container
Type: "AWS::ECS::TaskDefinition"
Properties:
Volumes:
- Volume Definition
Family: String
NetworkMode: String
PlacementConstraints:
- TaskDefinitionPlacementConstraint
TaskRoleArn: String
ContainerDefinitions:
- Container Definition
Just list multiple containers there and all will be launched together on the same machine.
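For example, here is a hedged CloudFormation sketch with two containers in one task definition (the names, images, and memory sizes are placeholders):

Type: "AWS::ECS::TaskDefinition"
Properties:
  Family: myapp
  ContainerDefinitions:
    # first container in the task
    - Name: web
      Image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest
      Memory: 256
      Essential: true
      PortMappings:
        - ContainerPort: 80
    # second container, scheduled onto the same instance as part of the same task
    - Name: api
      Image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/api:latest
      Memory: 256
      Essential: true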
I was in the same situation as you. One way to resolve this was to use Terraform to deploy our containers as task definitions on AWS ECS.
So we use the docker-compose.yml to run locally, and the Terraform configuration is a kind of mirror of our docker-compose setup on AWS.
Another option is Kubernetes: you can translate your docker-compose file into Kubernetes resources (for example with Kompose) and run them there.

Docker, EC2 and Rstudio

I run RStudio Server mostly from an EC2 instance. However, I'd also like to run it from a cluster at work. They tell me that I can set up Docker with RStudio and make it run. Now, I'd also like the RStudio instances on both EC2 and the work cluster to have the same packages and the same versions available. Any idea how I can do this? Can I have both versions point to a Dropbox folder? In that case, how can I mount a Dropbox folder?
You should set up a Docker repository on Docker Hub or use AWS EC2 Container Service (ECS). ECS is a managed service that allows you to easily deploy Docker containers onto a cluster of one or more EC2 instances that are running the ECS agent (an AWS program that lets the cluster's instances work with ECS). The Dockerfile should install all packages that you need at build time of the image. I suggest referencing the AWS ECS documentation, which includes a walkthrough to get you going very quickly (assuming you have an idea of how Docker works): https://aws.amazon.com/documentation/ecs/
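For instance, a minimal Dockerfile sketch along those lines (the rocker/rstudio base image tag and the package list are only examples):

FROM rocker/rstudio:4.3.2
# install the packages (and versions) you need at image build time,
# so every host that runs this image gets an identical library
RUN R -e "install.packages(c('dplyr', 'ggplot2'), repos = 'https://cloud.r-project.org')"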
You should then always run from that docker image, whether you are running on a local or remote machine. One key advantage of docker is that it keeps your application's environments the same (assuming you use the same build of the image) regardless of the host environment.
I am not sure why you would not always run on ECS (we have multiple analysts using RStudio, and ECS lets us provision CPU/memory resources for each one, as well as autoscale as needed). You could install Docker on EC2 and manage it that way, but it is probably easier to just install the ECS agent (or use the ECS-optimized EC2 AMI, which has it preinstalled - the docs above walk through configuring it) and use ECS to launch RStudio services.
