Google Cloud Container: Create a docker container from a Dockerfile

Is it possible to create a Docker container in a Google Container Engine cluster using a Dockerfile, so that the image is built on the fly and deployed into the cluster,
rather than building an image first, uploading it to Google Container Registry, and then using it from there?
That workflow feels cumbersome to me; there should be a way to create containers in the cluster directly from a Dockerfile.

It is not possible to do this in Google Container Engine. Google Container Engine is designed to help orchestrate container deployment and does not itself provide a source -> deployment workflow.
You may want to look at Google App Engine or OpenShift 3 (which is built on Kubernetes) as more fully featured platform-as-a-service offerings.
You can also build this type of tooling on top of a Google Container Engine cluster yourself as all of the building blocks are available.

One service to take a look at when constructing a workflow is Google Container Builder, which can simplify the process of building a container from source and pushing it to Google Container Registry.
It is currently a fairly low-level service, but it offers advantages for environments where running docker build locally is impractical.
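As a minimal sketch of such a workflow (the project ID my-project, the image name my-app, and the deployment/container names are all hypothetical), Container Builder can build from source and push to the registry, and kubectl can then roll the result out to the cluster:
# Build the image from the current directory (containing the Dockerfile)
# and push it to Google Container Registry; recent gcloud releases use
# "gcloud builds submit" (older ones used "gcloud container builds submit")
gcloud builds submit --tag gcr.io/my-project/my-app:v1 .

# Update an existing Kubernetes deployment to the freshly built image
kubectl set image deployment/my-app my-app=gcr.io/my-project/my-app:v1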

Related

Deploying Container on GCP

I am trying to deploy this app: https://github.com/taigaio/taiga-docker . This application is a collection of various images, and it uses docker-compose to create the containers. It is my understanding that this cannot be run as a single image from a GCP Artifact Registry repository. Does this need a VM, perhaps?
My question is whether there is a way to deploy these images in a serverless fashion on GCP or any other cloud platform. Any pointers/help is much appreciated.
You can either use Cloud Run (the most serverless way) or run it on a VM.
On Cloud Run you can deploy a single image as a Service (in Cloud Run terminology); if you have more than one image, you can deploy multiple Services and make them talk to each other.
On a VM, it would be much like deploying on your personal laptop.
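As a hedged sketch of the multi-Service approach (the service names taiga-back and taiga-front, the project my-project, and the region are hypothetical; the real compose file defines more services), each image becomes its own Cloud Run Service, and one Service's URL is passed to the others:
# Deploy each image from the compose file as a separate Cloud Run service
gcloud run deploy taiga-back --image gcr.io/my-project/taiga-back --region us-central1
gcloud run deploy taiga-front --image gcr.io/my-project/taiga-front --region us-central1

# Each service gets its own HTTPS URL; look one up to wire the services together
gcloud run services describe taiga-back --region us-central1 --format 'value(status.url)'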

GitHub Actions Runner Container on Google Cloud Run that can build docker image

I have built a docker image that, when run, registers itself as a GitHub Runner. This runner will, amongst other things, be used to build and push images to GitHub Container Registry. I don't want to deploy the containers to GKE or Compute, as I don't want the overhead of managing those resources. I would prefer to deploy the containers to Google Cloud Run. I've scoured the docs for help but I can't seem to find the answers to the following questions:
Can I run 'docker in docker' when the container is deployed to GCP Cloud Run?
How do I specify the volume mount required when deploying the container to Google Cloud Run, i.e. the usual mapping with docker run would be:
-v /var/run/docker.sock:/var/run/docker.sock
I never tested it, but it's possible that the current Cloud Run sandbox prevents this kind of use. And I don't really know the use case for this!
You can't mount a volume in Cloud Run; it's stateless. You only have an in-memory file system in the /tmp directory (and because it's in memory, size your Cloud Run instance's memory to take this into account). You can connect your instance to third-party products, such as Google Cloud Storage or databases, but no volume can be mounted on Cloud Run (for now).
If you have these requirements, you can have a look at GKE Autopilot and deploy your container directly on fully managed Kubernetes.
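A minimal sketch of that route, assuming the gcloud CLI and hypothetical names (runner-cluster, runner-deployment.yaml); on Kubernetes the runner is a regular workload whose pod spec can declare volumes:
# Create an Autopilot cluster; the nodes are fully managed by Google
gcloud container clusters create-auto runner-cluster --region us-central1

# Fetch credentials and deploy the runner as an ordinary Kubernetes workload
gcloud container clusters get-credentials runner-cluster --region us-central1
kubectl apply -f runner-deployment.yaml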

Is it possible to deploy two different docker images within the same Cloud Run service

I've built an app that uses two homemade microservices, each microservice having its own Dockerfile.
When I build it locally I use docker-compose for practical reasons.
Currently, when I deploy to Cloud Run I use commands like
docker tag xxx
docker push xxx
Then I select the image I want to deploy on Cloud Run.
As I understand it, docker-compose build just builds two images (one for each Dockerfile) and then places them on the same network, which allows some practical connections between these two APIs.
Is it possible to do something similar on Cloud Run without having to deploy each image as a different service?
PS: For business reasons I can't host my code directly on Cloud Source Repositories, it has to be on Azure
It is not possible to deploy 2 different Docker images within the same Cloud Run service.
Cloud Run works in the following way:
You build a container image and upload it to Google Container Registry
You deploy to Cloud Run with that container image.
Your service is automatically scaled up and down to a specific number of container instances depending on your incoming requests. Each container will run the container image.
In summary: Cloud Run takes a user's container and executes it on Google infrastructure, handling the instantiation of instances (scaling) of that container.
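As a concrete sketch of that build-push-deploy cycle (the image path gcr.io/my-project/my-service and the region are hypothetical):
# 1. Build the image and upload it to the registry
docker build -t gcr.io/my-project/my-service .
docker push gcr.io/my-project/my-service

# 2. Deploy the image as a Cloud Run service
gcloud run deploy my-service --image gcr.io/my-project/my-service --region us-central1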
Please note, Cloud Run is designed to run websites, REST API backends, back-office administration, etc., and it does not support a microservices architecture (different servers running in different containers) within a single service.
For your scenario, you can deploy multiple services on Cloud Run or use other Google products such as Cloud SQL, Datastore, Spanner or Bigtable.
Note: you can't deploy 2 containers in the same service; however, you can deploy a container that runs multiple processes, as explained in this article written by a Googler
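The multi-process approach boils down to an entrypoint script that starts every process and exits if any of them dies. A hypothetical minimal sketch (the binaries service-a and service-b and their ports are made up; Cloud Run only sends traffic to the port in $PORT, so one process, e.g. a reverse proxy, must listen there):
#!/bin/bash
# start.sh - hypothetical entrypoint running two processes in one container
./service-a --port 8081 &
./service-b --port 8082 &
# A reverse proxy listening on $PORT would dispatch requests to both.
# Exit as soon as either background process exits, so Cloud Run restarts us.
wait -n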

docker-swarm vs.docker-compose on single host in production

Is there a reason to use docker-swarm instead of docker-compose for deploying a single host in production?
I'm currently rewriting an existing application. My predecessors set up the application using docker-swarm. But I do not understand why: the application will only consist of a single host running a couple of services. These services will only supply some local information from the customer network via a REST API to a Kubernetes cluster (so no real load or reason to add additional hosts).
I looked through the Docker website and could not find a reason to use docker-swarm to deploy a single host, apart from testing a deployment on a single host dev environment.
Are there benefits of using docker-swarm compared to docker-compose regarding deployment, networking, etc...?
Docker Swarm and Docker Compose are fundamentally different animals. Compose is a build tool that lets you define and configure a group of related containers, whereas Swarm is an orchestration tool that manages multiple Docker engines in a way that lets you treat them (somewhat) as a single unit. Swarm exposes an API that is mostly compatible with the Docker Remote API, which allows existing applications to use Swarm to scale horizontally without having to completely overhaul the existing interface to the container engine.
That said, much of the functionality in Docker Compose that overlaps with Docker Swarm has been added incrementally. Compose has grown over time, and the distinction between the two has narrowed a bit. Swarm was eventually integrated into the Docker engine, and Docker Stack was introduced, allowing compose.yml files to be read directly by Docker, without using Compose.
So the real question might be: what is the difference between docker compose and docker stack? Not a whole lot. Compose is actually a separate project, written in Python, that uses the Docker API under the hood. Stack does much the same thing as Compose, but it is integrated into Docker. Stack also expects pre-built images, while Compose will handle those image builds for you, which makes Compose very handy for development.
What you are dealing with might be a product of a time when these two tools were a lot more distinct. Docker Swarm is part of Docker, and it allows for easy scaling if needed (even if you don't need it now, it might be good down the road). On the other hand, Compose (in my opinion, anyway) is much more useful for development situations where you are making frequent tweaks to your images and rebuilding.
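The two workflows side by side, as a sketch (assuming a docker-compose.yml in the current directory and a made-up stack name myapp):
# Compose: build images from the Dockerfiles and run them on the local engine
docker-compose up -d --build

# Swarm/Stack: initialize a (single-node) swarm once, then deploy the same
# YAML; stack expects the images to be already built or pushed to a registry
docker swarm init
docker stack deploy -c docker-compose.yml myapp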

Kubernetes vs. Docker: What Does It Really Mean?

I know that Docker and Kubernetes aren't direct competitors. Docker is the container platform, and Kubernetes is a tool that coordinates and schedules those containers.
What does it really mean, and how can I deploy my app on Docker for Azure?
Short answer:
Docker (and containers in general) solve the problem of packaging an application and its dependencies. This makes it easy to ship and run everywhere.
Kubernetes is one layer of abstraction above containers. It is a distributed system that controls/manages containers.
My advice: because the landscape is huge, start learning and putting the pieces of the puzzle together by following a course. Below I have added some information from
Introduction to Kubernetes, a free online course from The Linux Foundation.
Why do we need Kubernetes (and other orchestrators) above containers?
In quality assurance (QA) environments, we can get away with running containers on a single host to develop and test applications. However, when we go to production, we do not have the same liberty, as we need to ensure that our applications:
Are fault-tolerant
Can scale, and do this on-demand
Use resources optimally
Can discover other applications automatically, and communicate with each other
Are accessible from the external world
Can be updated/rolled back without any downtime.
Container orchestrators are the tools which group hosts together to form a cluster, and help us fulfill the requirements mentioned above.
Nowadays, there are many container orchestrators available, such as:
Docker Swarm: Docker Swarm is a container orchestrator provided by Docker, Inc. It is part of Docker Engine.
Kubernetes: Kubernetes was started by Google, but now, it is a part of the Cloud Native Computing Foundation project.
Mesos Marathon: Marathon is one of the frameworks to run containers at scale on Apache Mesos.
Amazon ECS: Amazon EC2 Container Service (ECS) is a hosted service provided by AWS to run Docker containers at scale on its infrastructure.
Hashicorp Nomad: Nomad is the container orchestrator provided by HashiCorp.
Kubernetes is built on Docker technology. It is an orchestration tool for Docker containers, whereas Docker is a technology to create and deploy containers.
Docker started at a platform-as-a-service (PaaS) provider named dotCloud.
All in all, Kubernetes relates to the Docker container by letting you implement application portability and extensibility in container orchestration.
Docker (Swarm)
Easy and fast to install and configure
Functionality is provided by, and limited to, the Docker API
Quick container deployment and scaling, even in very large clusters
Automated internal load balancing through any node in the cluster
Simple shared local volumes
Kubernetes
Requires some work to get up and running
Client, API and YAML definitions are unique to Kubernetes
Provides strong guarantees about cluster state, at the expense of speed
Enabling load balancing requires manual service configuration
Volumes are shared within pods
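To make the difference tangible, here is the same task in both worlds as a sketch (the image nginx and the name web are arbitrary): with Docker you run a container imperatively on one engine, while with Kubernetes you declare a desired state and the cluster converges to it:
# Docker: run and expose one container on a single engine
docker run -d -p 8080:80 nginx

# Kubernetes: declare a deployment, expose it, and scale it; the control
# plane keeps the declared number of replicas running
kubectl create deployment web --image=nginx
kubectl expose deployment web --port=80 --type=LoadBalancer
kubectl scale deployment web --replicas=3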
This is just a basic idea, which at least explains the difference. If you want to go more in depth, see my posts:
http://www.thecreativedev.com/an-introduction-to-kubernetes/
http://www.thecreativedev.com/learn-docker-works/
Docker and Kubernetes are complementary. Docker provides an open standard for packaging and distributing containerized applications, while Kubernetes provides for the orchestration and management of distributed, containerized applications created with Docker. In other words, Kubernetes provides the infrastructure needed to deploy and run applications built with Docker.
