Does Kubernetes come with Docker by default?

According to this link, Docker supports Kubernetes on Linux, Mac, and Windows. I can't tell whether Kubernetes is installed as part of Docker the way Swarm is. So is it true that a normal install of Docker also installs Kubernetes?

I would not say it is installed by default, but you get the option to enable it if you want to.
Here's the about page:
Version 18.05.0-ce-win67 (18263)
Channel: edge
0c7f1aa
If you navigate to the Settings page (Docker CE for windows in my case), and select to enable Kubernetes, this is what you get:
If you choose to install it...
Wait a little...
After it completes you can use Kubernetes, through the regular docker stack * commands, to run a single-node cluster locally while developing. Which is really nice!
You can also use the kubectl CLI directly. It should already be in your PATH; it is located at C:\Program Files\Docker\Docker\resources\bin.
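As a quick sanity check that the single-node cluster is up (a sketch; the kubectl context name varies by Docker version, typically docker-for-desktop or docker-desktop), something like this should work:
kubectl config use-context docker-for-desktop
kubectl get nodes
kubectl cluster-info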
Hope this helps!

Related

Kubernetes Docker - Do I need to run docker compose when I update a secret?

Quick question: do I need to run docker compose up on Airflow when I amend a secret with kubectl?
I've changed a password using the command line and kubectl in VS Code, and I just want to know whether it is necessary to run docker compose up now that it has been changed.
If you've installed your Airflow system using Helm charts directly on k8s, then you don't have to do anything. Secrets are automatically refreshed inside pods by the kubelet. And you don't have to manipulate Docker directly when you already have k8s installed and are interacting with it using kubectl. That's the whole point of having k8s.
If you're using both, you shouldn't, really. Just interact with k8s and forget about Docker. You will almost never have to think about Docker unless you are debugging some serious problem with the k8s system itself.
Nah. Docker compose has nothing to do with it. You probably just need to restart your Pods somehow. I always just do a "Redeploy" through our Rancher interface. I'm sure there is a way to do that with kubectl as well. You just need to get the secret into the Pods, the image itself is unchanged.
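For reference, if your kubectl is 1.15 or newer, a rolling restart from the command line is one way to do that (the Deployment name below is just a placeholder for your Airflow workload):
kubectl rollout restart deployment <your-airflow-deployment>
kubectl rollout status deployment <your-airflow-deployment>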

Enable gpu support by default on docker containers

I'm using a platform (Cytomine) on Ubuntu 18.04 to run some deep learning containerized applications (the platform handles the Docker images and containers automatically, so I only need to create the image and provide its download URL to the platform). So far it's working well, but now I need to enable GPU support to run the model efficiently. So I did some local tests with nvidia-docker to manually run the model container with GPU support; it was really easy to get it working because I just had to add one option to the run command:
docker run --gpus all
However, because I cannot add this option to the code on the Cytomine platform I need to find a way of adding/enabling that option by default to all the containers run by docker.
I tried adding this option to the files /etc/docker/daemon.json and /etc/docker/key.json and then restarted Docker with sudo systemctl restart docker. However, it didn't work.
Also, I found how to create docker config files (docker config); however, this seems to work only with Docker Swarm and I'm not going to use a Swarm for this project.
Thus, I'm looking for a straightforward solution that can be deployed properly. Is there any way to enable this option (--gpus all) by default when running any Docker container? (like somehow including it on the Dockerfile?)
Thanks!
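One approach that is often suggested for this, although not covered in the question above, is to make the NVIDIA runtime the default in /etc/docker/daemon.json and restart the daemon. This assumes nvidia-container-runtime (installed alongside nvidia-docker) is present, and is only a sketch to verify on your setup:
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
With that in place, a plain docker run should use the GPU runtime without the --gpus flag.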

how to configure docker containers proxy?

How do I configure a proxy for Docker containers?
First of all,
I tried the approach of setting '/etc/systemd/system/docker.service.d/http-proxy.conf' (https://docs.docker.com/config/daemon/systemd/#httphttps-proxy). It really works for the Docker daemon, but it doesn't work for Docker containers; it seems to take effect only for commands like 'docker pull'.
Second,
I have a lot of Docker containers, and I don't want to pass 'docker run -e http_proxy=xxx...' every time I start a container.
So I guessed there might be a way to automatically load a global configuration file when a container starts. I googled it and found the suggestion to set the file '~/.docker/config.json' (How to configure docker container proxy?), but this way still does not work for me.
(My host machine runs CentOS 7; here is my docker -v:
Docker version 1.13.1, build 6e3bb8e/1.13.1)
I suspect it may be related to my Docker version, or to Docker being started by the systemd service, which is why ~/.docker/config.json does not take effect.
Finally,
I just hope that modifying a configuration file will make all my containers set the environment variables automatically when they start (that is, automatically set 'http_proxy=http://HostIP:8118 https_proxy=http://HostIP:8118' when a container starts, like the Dockerfile ENV instruction). I want to know if there is such a way. If it can be done, I can make the containers use the host's proxy; after all, the proxy on my host works properly.
But I was wrong: I ran a container, set http_proxy=http://HostIP:8118 and https_proxy=http://HostIP:8118 inside it, and then ran 'wget facebook.com', which failed with 'Connecting to HostIP:8118... failed: No route to host.'. Yet the host machine (CentOS 7) can execute the wget successfully, and I can ping the host from inside the container. I don't know why; it might be related to the firewall and port 8118.
That's it,
I have run out of ideas. Can anyone help me?
==============================
PS:
You can see from the screenshots below that what I actually want is to install goa and goagen, but I get an error, probably for network reasons. I wanted to enable the proxy and try again, which is how I ran into the problem above.
[Screenshot 1: wget failing inside my go Docker container]
[Screenshot 2: the same wget succeeding on my host]
You need version 17.07 or more recent to automatically pass the proxy to containers you start using the config.json file. The 1.13 releases are long out of support.
This is well documented from docker:
https://docs.docker.com/network/proxy/
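As a sketch of what that page describes (using the HostIP:8118 proxy from the question), ~/.docker/config.json on the machine running the Docker client would look roughly like this, and new containers then get HTTP_PROXY/HTTPS_PROXY/NO_PROXY set automatically:
{
  "proxies": {
    "default": {
      "httpProxy": "http://HostIP:8118",
      "httpsProxy": "http://HostIP:8118",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}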

Can I save my running containers when upgrading from `docker` to `docker-ce`?

I've a CentOS 7 box with the default Docker 1.12 package (docker) running about a dozen containers. I want to upgrade to the latest Docker package (docker-ce) provided in their repository, according to these instructions.
However, I have to uninstall the docker package first. Will this trash my running containers? If so, is there any way to avoid that?
There is actually a feature, available starting with Docker 1.12 (exactly your case), called live restore.
https://docs.docker.com/engine/admin/live-restore/#enable-the-live-restore-option
Starting with Docker Engine 1.12, you can configure the daemon so that containers remain running if the daemon becomes unavailable. The live restore option helps reduce container downtime due to daemon crashes, planned outages, or upgrades.
To enable it, update daemon.json with this:
{
"live-restore": true
}
Then you just send a SIGHUP to the docker engine.
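For example (a sketch; the exact process/service name depends on how Docker was installed), either of these should deliver the signal so the daemon reloads its configuration:
sudo systemctl kill -s HUP docker
sudo kill -SIGHUP $(pidof dockerd)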
Make sure to do all necessary backups, just in case.

How do I run Docker on Google Compute Engine?

What's the procedure for installing and running Docker on Google Compute Engine?
Until the recent GA release of Compute Engine, running Docker was not supported on GCE (due to kernel restrictions), but with the newly announced ability to deploy and use custom kernels, that restriction no longer applies and Docker now works great on GCE.
Thanks to proppy, the instructions for running Docker on Google Compute Engine are now documented for you here: http://docs.docker.io/en/master/installation/google/. Enjoy!
They now have a VM image which has Docker pre-installed.
$ gcloud compute instances create instance-name \
--image projects/google-containers/global/images/container-vm-v20140522 \
--zone us-central1-a \
--machine-type f1-micro
https://developers.google.com/compute/docs/containers/container_vms
A little late, but I wanted to add an answer with a more detailed workflow and links, since answers are still rather scattered:
Create a Docker image
a. Locally
b. Using Google Container Builder
Push local Docker image to Google Container Repository
docker tag <current name>:<current tag> gcr.io/<project name>/<new name>
gcloud docker -- push gcr.io/<project name>/<new name>
UPDATE
If you have upgraded to Docker client versions above 18.03, gcloud docker commands are no longer supported. Instead of the above push, use:
docker push gcr.io/<project name>/<new name>
If you have issues after upgrading, see more here.
Create a compute instance.
This process actually obfuscates a number of steps. It creates a virtual machine (VM) instance using Google Compute Engine, which uses a Google-provided, container-optimized OS image. The image includes Docker and additional software responsible for starting our docker container. Our container image is then pulled from the Container Repository, and run using docker run when the VM starts. Note: you still need to use docker attach even though the container is running. It's worth pointing out only one container can be run per VM instance. Use Kubernetes to deploy multiple containers per VM (the steps are similar). Find more details on all the options in the links at the bottom of this post.
gcloud beta compute instances create-with-container <desired instance name> \
--zone <google zone> \
--container-stdin \
--container-tty \
--container-image <google repository path>:<tag> \
--container-command <command (in quotes)> \
--service-account <e-mail>
Tip: You can view available gcloud projects with gcloud projects list.
SSH into the compute instance.
gcloud beta compute ssh <instance name> \
--zone <zone>
Stop or Delete the instance. If an instance is stopped, you will still be billed for resources such as static IPs and persistent disks. To avoid being billed at all, delete the instance.
a. Stop
gcloud compute instances stop <instance name>
b. Delete
gcloud compute instances delete <instance name>
Related Links:
More on deploying containers on VMs
More on zones
More create-with-container options
As of now, for just Docker, the Container-optimized OS is certainly the way to go:
gcloud compute images list --project=cos-cloud --no-standard-images
It comes with Docker and Kubernetes preinstalled. The only thing it lacks is the Cloud SDK command-line tools. (It also lacks python3, despite Google's announcement of the Python 2 sunset on 2020-01-01. Well, it's still 27 days to go...)
As an additional piece of information I wanted to share, I was searching for a standard image that would offer both docker and gcloud/gsutil preinstalled (and found none, oops). I do not think I'm alone in this boat, as gcloud is something you can hardly do without on GCE¹.
My best find so far was the Ubuntu 18.04 image, which comes with its own (non-Debian) package manager, snap. The image has the Cloud SDK preinstalled, and Docker installs literally in a snap: 11 seconds in an initial test on an F1 instance, about 6 s on an n1-standard-1. The only snag I hit was an error message saying the Docker authorization helper was not available; an attempt to add it with gcloud components install failed because the SDK was installed as a snap, too. However, the helper is actually there, just not in the PATH. The following got me both tools available in a single transient builder VM with the least amount of setup-script runtime, starting from the supported Ubuntu 18.04 LTS image²:
snap install docker
ln -s /snap/google-cloud-sdk/current/bin/docker-credential-gcloud /usr/bin
gcloud -q auth configure-docker
¹ I needed both for a Daisy workflow imaging a disk with both artifacts from GS buckets and a couple of huge, 2 GB+ library images from the local gcr.io registry that were shared between the build (as cloud builder layers) and the runtime (where I had to create and extract containers on the newly built image). But that's beside the point; one may need both tools for a multitude of possible reasons.
² Use gcloud compute images list --uri | grep ubuntu-1804 to get the most current one.
Google's GitHub site now offers a GCE image that includes Docker: https://github.com/GoogleCloudPlatform/cloud-sdk-docker-image
It's as easy as:
creating a Compute Engine instance
curl https://get.docker.io | bash
Using docker-machine is another way to host your google compute instance with docker.
docker-machine create \
--driver google \
--google-project $PROJECT \
--google-zone asia-east1-c \
--google-machine-type f1-micro $YOUR_INSTANCE
If you want to log in to this machine on Google Compute Engine, just use docker-machine ssh $YOUR_INSTANCE.
Refer to the docker-machine driver for GCE.
There is now improved support for containers on GCE:
Google Compute Engine is extending its support for Docker containers. This release is an Open Preview of a container-optimized OS image that includes Docker and an open source agent to manage containers. Below, you'll find links to interact with the community interested in Docker on Google, open source repositories, and examples to get started. We look forward to hearing your feedback and seeing what you build.
Note that this is currently (as of 27 May 2014) in Open Preview:
This is an Open Preview release of containers on Virtual Machines. As a result, we may make backward-incompatible changes and it is not covered by any SLA or deprecation policy. Customers should take this into account when using this Open Preview release.
Running Docker on a GCE instance is not supported; the instance goes down and you are not able to log in again.
We can use the Docker image provided by GCE to create an instance instead.
If your Google Cloud virtual machine is based on Ubuntu, use the following command to install Docker:
sudo apt install docker.io
You may use this link: https://cloud.google.com/cloud-build/docs/quickstart-docker#top_of_page.
That link explains how to use Cloud Build to build a Docker image and push the image to Container Registry. You first build the image using a Dockerfile, and then build the same image using Cloud Build's build configuration file.
It's better to set this up while creating the compute instance:
Go to the VM instances page.
Click the Create instance button to create a new instance.
Under the Container section, check Deploy container image.
Specify a container image name under Container image and configure options to run the container if desired. For example, you can specify gcr.io/cloud-marketplace/google/nginx1:1.12 for the container image.
Click Create.
Installing Docker on GCP Compute Engine VMs:
This is the link to GCP documentation on the topic:
https://cloud.google.com/compute/docs/containers#installing
It links to the Docker install guide; follow the instructions for the type of Linux running in the VM.

Resources