I would like to know the difference between the Docker API, the Docker Remote API, the Client API, and the Compose API, and when to use each. TIA.
There is really only one API here, the Docker Engine API, which lets you manage Docker by calling it.
Docker API = Docker Engine API
Docker Remote API = the old name for the Engine API; in practice it usually means configuring the Docker CLI to connect to a remote daemon so you can manage containers on other hosts.
Client API = the Docker CLI, a command-line client that uses the Docker Engine API.
Compose API = this doesn't exist as a separate API; Compose is just a tool that uses the Docker Engine API.
For further information, check Docker Engine API docs: https://docs.docker.com/engine/api/
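If you want to see what "using the Engine API" looks like in practice, here is a minimal sketch with the official Python SDK (docker-py, installed with pip install docker), which is just a thin wrapper around the Engine API; it assumes a local daemon is running:

    import docker

    # Connect to the local daemon via the default socket; this speaks
    # the same Engine API that the docker CLI uses.
    client = docker.from_env()

    # Equivalent to `docker run alpine echo hello` on the CLI.
    output = client.containers.run("alpine", "echo hello")
    print(output)  # b'hello\n'

    # Equivalent to `docker ps`.
    for container in client.containers.list():
        print(container.name, container.status)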
Basically, all the categories you are referring to boil down to the Docker Engine API.
As per the Docker Docs:
The Engine API is the API served by Docker Engine. It allows you to
control every aspect of Docker from within your own applications,
build tools to manage and monitor applications running on Docker, and
even use it to build apps on Docker itself.
It is the API the Docker client uses to communicate with the Engine,
so everything the Docker client can do can be done with the API. For
example:
- Running and managing containers
- Managing Swarm nodes and services
- Reading logs and metrics
- Creating and managing Swarms
- Pulling and managing images
- Managing networks and volumes
These APIs can also be used to control Docker on remote servers.
Docker Compose is a tool for defining and running multi-container Docker applications.
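Since it is all one HTTP API underneath, you can also call it at a lower level. A sketch using the Python SDK's low-level APIClient, whose methods map one-to-one onto Engine API endpoints (assumes the default Unix socket):

    import docker

    # Low-level client that maps directly onto Engine API endpoints.
    api = docker.APIClient(base_url="unix://var/run/docker.sock")

    print(api.version()["ApiVersion"])  # GET /version
    print(api.containers(all=True))     # GET /containers/json?all=1
    print(api.images())                 # GET /images/json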
Thanks, I was trying to understand the difference between the Docker APIs while working on this Scalable Docker Deployment on the Bluemix platform.
I have a project to containerize several applications (GitLab, Jenkins, WordPress, a Python Flask app...). Currently each application runs on its own Compute Engine VM at GCP. My goal is to move everything to a cluster (Swarm or Kubernetes).
However, I have a few questions about Docker Swarm on Google Cloud Platform:
How can I expose my Python application externally (behind an HTTP load balancer) while keeping the other applications available only inside my private VPC?
From what I've seen on the internet, I have the impression that Docker Swarm is very little used. Should I go for a Kubernetes cluster instead? (I have good knowledge of Docker/Kubernetes.)
It is difficult to find information about Docker Swarm on cloud providers. What would an architecture with Docker Swarm on GCP look like?
Thanks for your help.
I'd create an instance template and, from that, an instance group for all the VMs that will host the Docker Swarm, plus a separate instance or instance group for the internal applications. That gives you a strict separation, which can then be used to route internal and external traffic accordingly (this applies in either case). Google Kubernetes Engine is roughly such an instance group, but on Google-managed infrastructure; see the tutorial, there's not much difference, except that it integrates better with gcloud & kubectl. If you don't want or need to maintain the underlying infrastructure yourself, GKE is probably less effort.
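If you go the GKE route, the traffic split you're asking about maps onto Kubernetes Service types: expose the Flask app with a Service of type LoadBalancer and keep the internal apps on the default ClusterIP type. A minimal sketch with the official Python Kubernetes client (the name flask-app and the label selector are placeholders):

    from kubernetes import client, config

    # Assumes kubectl is already configured against the GKE cluster.
    config.load_kube_config()
    v1 = client.CoreV1Api()

    # type=LoadBalancer provisions an external GCP load balancer;
    # internal-only apps would use the default type, ClusterIP.
    service = client.V1Service(
        metadata=client.V1ObjectMeta(name="flask-app"),
        spec=client.V1ServiceSpec(
            type="LoadBalancer",
            selector={"app": "flask-app"},  # matches the pods' labels
            ports=[client.V1ServicePort(port=80, target_port=5000)],
        ),
    )
    v1.create_namespaced_service(namespace="default", body=service)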
What you are basically asking is:
Kubernetes vs. Docker Swarm: What’s the Difference?
Docker Swarm vs Kubernetes: A Helpful Guide for Picking One
Kubernetes vs. Docker: What Does it Really Mean?
Docker Swarm vs. Kubernetes: A Comparison
Kubernetes vs Docker Swarm
I have an application which consists of two Docker containers. Both are small and need to interact with each other quite often through a REST API.
How can I deploy both of them to a single Virtual Machine in Google Cloud?
Usually, when creating a virtual machine, I get to choose a container image to deploy: "Deploy a container image to this VM instance."
I can specify one of my images and get it running in the VM. Can I specify multiple images?
You cannot deploy multiple containers per VM with that feature.
Please consider these limitations when deploying containers on VMs:
1. You can only deploy one container for each VM instance. Consider Google Kubernetes Engine if you need to deploy multiple containers per VM instance.
2. You can only deploy containers from a public repository or from a private repository at Container Registry. Other private repositories are currently not supported.
3. You can't map a VM instance's ports to the container's ports (Docker's -p option).
4. You can only use Container-Optimized OS images with this deployment method. You can only use this feature through the Google Cloud Platform Console or the gcloud command-line tool, not the API.
You can use docker-compose to deploy multi-container applications.
To achieve this on Google Cloud, you'll need:
SSH access to the VM
Docker and docker-compose installed on the VM
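If you would rather script it than write a compose file, the same result for two containers talking over REST can be achieved with the Python Docker SDK; a sketch with hypothetical image names service-a and service-b:

    import docker

    client = docker.from_env()

    # A user-defined bridge network gives the containers DNS:
    # each one can reach the other by container name.
    client.networks.create("app-net", driver="bridge")

    # Hypothetical images; only service-a is published to the host.
    client.containers.run("service-a:latest", name="service-a",
                          network="app-net",
                          ports={"5000/tcp": 5000}, detach=True)
    client.containers.run("service-b:latest", name="service-b",
                          network="app-net", detach=True)
    # service-a can now call http://service-b:5000/... internally.

A compose file would express the same thing declaratively.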
My application is composed of two separate Docker containers, one a Grails-based web application and the other a RESTful Python Flask application. Both Docker images sit on my local computer. They are not hosted on Docker Hub; they are proprietary and I don't want to host them publicly.
I would like to try Cloud Foundry to deploy these Docker containers and see how it works. However, from the documentation I get the sense that Cloud Foundry doesn't support deploying Docker images that sit on a local machine.
Question
Is there a way to deploy Docker containers sitting on a local computer to Cloud Foundry? If not, what is a way to securely host the containers somewhere from which CF can fetch them?
Is Cloud Foundry capable of running a Docker container that is a Python Flask application?
One option you have is to not use Docker images at all and just push your code directly, which is one of the nice features of CF. PCF comes with a Python buildpack which should automatically detect your Flask app.
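For that first option, nothing Docker-specific is needed. As a hedged sketch, a minimal app.py like the following (plus a requirements.txt listing flask) is all cf push needs, since Cloud Foundry tells the app which port to bind through the PORT environment variable:

    # app.py
    import os
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        return "Hello from Cloud Foundry"

    if __name__ == "__main__":
        # Cloud Foundry injects the port to bind via $PORT.
        app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))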
Another option would be to run your own trusted Docker registry, push your images there, and then when you push your app, tell it to grab the images from your registry. If you google "cloud foundry docker registry" you get the following useful results you should check out:
https://github.com/cloudfoundry-community/docker-registry-boshrelease
http://docs.pivotal.io/pivotalcf/1-8/adminguide/docker.html#caveats
https://docs.pivotal.io/pivotalcf/1-7/opsguide/docker-registry.html
When people talk about the 'Docker Engine' do they mean both the Client and the Daemon? Or is it something else entirely?
As I see it there are a Docker Client and a Docker Daemon. The Client runs locally and connects to the Daemon, which does the actual running of the containers. The Client can also connect to a remote Daemon. Are these two together the Engine? Thanks.
The Docker Engine is the Docker Daemon running on a single host, installed with the Docker Client CLI. Here are the docs that answer this specific question.
On top of that, you can have a Swarm running that joins multiple hosts to horizontally scale and provide fault tolerance. And there are numerous other projects from Docker, like their Registry, Docker Cloud, and Universal Control Plane, that are each separate from the engine.
Docker Engine is a client-server application which consists of three components:
1. Client: the Docker CLI, the command-line interface we use to interact with Docker.
2. REST API: the client communicates with the server over a REST API; the commands issued by the client are sent to the server as REST calls, which is why the server can be on either the local or a remote machine.
3. Server: the host machine, local or remote, running a daemon process that receives the commands and creates, manages, and destroys Docker objects like images, containers, volumes, etc.
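Point 2 is also why the same client code can drive a local or a remote daemon; only the connection URL changes. A small sketch with the Python SDK (the TCP endpoint is hypothetical and assumes the remote daemon has been configured to listen on it, ideally with TLS):

    import docker

    # Local daemon over the default Unix socket.
    local = docker.DockerClient(base_url="unix://var/run/docker.sock")

    # Remote daemon over TCP (hypothetical host; 2376 is the usual TLS port).
    remote = docker.DockerClient(base_url="tcp://remote-host:2376", tls=True)

    # The exact same REST-backed calls work against either daemon.
    print(local.info()["Name"])
    print(remote.info()["Name"])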
We have an existing java application which exposes a REST API. When it receives a http request, it starts another java process using the Runtime.getRuntime().exec.
We are in the process of migrating this application to Docker, and we would like to separate these services: the REST application in one container and the other component in another container.
Is there any way, that the REST application can start the other application in another docker container?
Yes, you can programmatically spawn a Docker container.
The Docker Remote API (now called the Engine API) allows you to do that. You can either use an HTTP client library to invoke the remote API, or use one of the Java Docker client libraries to do the same.
Here is the relevant docker documentation:
Remote API:
https://docs.docker.com/engine/reference/api/docker_remote_api/
Libraries: https://docs.docker.com/engine/reference/api/remote_api_client_libraries/
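As a concrete sketch of the idea (shown here with the Python Docker SDK for brevity; the Java client libraries linked above expose the same Engine API calls), the REST application could start the second component like this, assuming the daemon's socket is reachable from inside the first container and my-worker is a placeholder for your second image:

    import docker

    # Reachable if the host's /var/run/docker.sock is mounted into the
    # container, or if the daemon listens on a TCP endpoint.
    client = docker.from_env()

    # POST /containers/create + /containers/{id}/start under the hood.
    worker = client.containers.run("my-worker:latest", detach=True)
    print("started container", worker.short_id)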