Minikube and Docker - conflict? [closed] - docker

I've been using Docker for quite a while for some development. Now, I am trying to learn some more advanced stuff using Kubernetes.
In a course I'm following, I found that I should run
eval $(minikube docker-env)
That would register a few environment variables: DOCKER_TLS_VERIFY, DOCKER_HOST, DOCKER_CERT_PATH and DOCKER_API_VERSION. What would that do? Wouldn't it break my day-to-day work with the default values on my host?
Also, is it possible to switch context/config for my local Docker somehow similar to kubectl config use-context?

That command points Docker's environment variables at a Docker daemon hosted inside the Minikube VM, instead of the one running locally on your host or in a Docker Desktop-managed VM. This means you won't be able to see or run any images or Docker-local volumes you had before you switched (it's a separate VM). In the same way that you can eval $(minikube docker-env) to "switch to" the Minikube VM's Docker, you can eval $(minikube docker-env -u) to switch back.
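In practice the switch looks something like this (a minimal sketch; the exact values of the variables depend on your Minikube setup):
$ eval $(minikube docker-env)      # docker CLI now talks to the daemon inside the Minikube VM
$ docker images                    # lists only the images that exist in the Minikube VM
$ eval $(minikube docker-env -u)   # unset the variables and go back to your host daemon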
Using this really only makes sense if you're on a non-Linux host and get Docker via a VM anyway; it lets you share the single Minikube VM instead of launching two separate VMs, one for Docker and one for Minikube.
If you're going to use Minikube, you should use it the same way you'd use a real, remote Kubernetes cluster: set up a Docker registry, docker build && docker push your images there, and reference them in your Deployment specs. The convolutions needed to get things like live code reloading working in Kubernetes are tricky, don't carry over to any other Kubernetes setup, and aren't what you'd run in production.
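A rough sketch of that workflow (registry.example.com and the image name are placeholders, not anything Minikube sets up for you):
$ docker build -t registry.example.com/myapp:1.0 .
$ docker push registry.example.com/myapp:1.0
$ kubectl apply -f deployment.yaml   # the Deployment spec references image: registry.example.com/myapp:1.0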

The command in question only affects the current shell. If you open a new one, you can continue working with your normal workflow, as the docker CLI will by default connect to the daemon socket at /var/run/docker.sock.
I don't know of a tool that lets you switch those settings with a single command based on a context name, the way kubectl does. You could, however, write an alias. For bash you could for example just execute:
$ echo 'alias docker-context-a="eval \$(minikube docker-env)"' >> ~/.bashrc
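If you also want a quick way back, a matching alias could look like this (the alias name is arbitrary):
$ echo 'alias docker-context-default="eval \$(minikube docker-env -u)"' >> ~/.bashrc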

Related

How to restart a docker stack container by cron? [duplicate]

Does anyone know if there is a way to have docker swarm restart one service that is part of a stack without restarting the whole stack?
Doing docker stack deploy again is, for me, the way to go to update services. As in Francois' answer, and in my own experience, doing so updates only the services that need to be updated.
But sometimes it seems easier, when testing stuff, to restart only a single service. In my case, I had to clear the volume and update the service to start it fresh. I'm not sure if there is a downside to the method I'll describe, but I tested it on my development stack and it worked great for me.
Get the ID of the service you want to tear down, then use docker service update --force <id> to force an update of the service, which effectively re-deploys it:
$ docker stack services <stack_name>
ID NAME ...
3xrdy2c7pfm3 stack-name_api ...
$ docker service update --force 3xrdy2c7pfm3
The --force flag will force the service to update causing it to restart.
Scale to 0 and back up:
docker service scale myservice=0
docker service scale myservice=10
Looking at the docker stack documentation:
Extended description
Create and update a stack from a compose or a dab file on the swarm
From this blog article: docker stack works in a similar way to docker compose. It's idempotent. If the stack is already deployed, docker stack deploy will restart only those services whose digest or tag has been updated:
From my experience, when I deploy the same stack again with one service changed, only the updated service is restarted.
BUT... there seem to be some limitations to which changes are taken into account (some report bugs with image tags), so give it a try and see if it works as expected.
You can also use service update if you want to be sure that only the targeted service is updated with your changes.
You can also refer to this similar SO Q&A.
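As a rough illustration (the stack, service and image names here are made up), bumping a single image tag and redeploying should only touch that one service:
$ docker stack deploy -c docker-stack.yml mystack   # initial deploy of, say, api and db
# edit docker-stack.yml so the api service uses myapp/api:1.1 instead of :1.0
$ docker stack deploy -c docker-stack.yml mystack   # only the api service gets updated/restarted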
As per the example in the documentation for rolling updates:
$ docker service update --image redis:3.0.7 redis
However, that only works if your image is already on the local machines. If not then you need to use --with-registry-auth to send registry authentication details to the swarm agents. See details in the docker service update documentation.
$ docker service update --with-registry-auth --image redis:3.0.7 redis
To restart a single service (with a rolling restart to avoid downtime, in case the service has multiple replicas) in an already configured, existing stack, you can do:
docker service update --force stack_service_name
I don't recommend running the same command again (in another shell) until this one completes (because otherwise the rolling restart isn't guaranteed; it might restart all replicas of that service).
The docker service update command also checks for a newer version of the image:tag you are trying to use.
If your registry requires auth, also pass --with-registry-auth argument, like so:
docker service update --force --with-registry-auth stack_service_name
If you don't pass this argument, the service will still be restarted, but the check won't be made and the service will keep using the old container image without pulling the new one first. Which might be what you want.
In case you also want to switch to a different image tag (or a completely different image), you can do that from here too, but remember to also change the tag in your docker-stack.yml, or your next docker stack deploy will revert it back to the version defined there:
docker service update --with-registry-auth --force --image nginx:edge stack_service_name
Remove it:
docker stack rm stack_name
Redeploy it:
docker stack deploy -c docker-compose.yml stack_name

Any reasons to not use Docker Swarm (instead of Docker-Compose) on a single node?

There's Docker Swarm (now built into Docker) and Docker-Compose. People seem to use Docker-Compose when running containers on a single node only. However, Docker-Compose doesn't support any of the deploy config values (see https://docs.docker.com/compose/compose-file/#deploy), which include mem_limit and cpus, and which seem nice/important to be able to set.
So maybe I should use Docker Swarm, even though I'm deploying on a single node only? Also, the installation instructions would then be simpler for other people to follow (they won't need to install Docker-Compose).
But maybe there are reasons why I should not use Swarm on a single node?
I'm posting an answer below, but I'm not sure if it's correct.
Edit: Please note that this is not an opinion based question. If you have a look at the answer below, you'll see that there are "have-to" and "cannot-do" facts about this.
For development, use Docker-Compose, because only Docker-Compose is able to read your Dockerfiles and build images for you; Docker Stack needs pre-built images instead. Also, with Docker-Compose you can easily stop and start single containers, with docker-compose kill ... and ... start .... This is useful during development (in my experience), for example to see how the app server reacts if you kill the database, when you don't want Swarm to auto-restart the database right away.
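For example (the service names come from a hypothetical docker-compose.yml):
$ docker-compose kill db     # simulate the database going down
$ docker-compose start db    # bring it back once you've seen how the app reacts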
In production, use Docker Swarm (unless: see below), so you can configure memory limits. Docker-Compose has less functionality than Docker Swarm (no memory or CPU limits, for example) and doesn't have anything that Swarm does not have (right?). So there is no reason to use Compose in production. (Except maybe if you already know how Compose works and don't want to spend time reading about the new Swarm commands.)
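For reference, those limits live under the deploy key of the compose file, which docker stack deploy uses but plain docker-compose up ignores (the service name and values here are just an example):
services:
  web:
    image: nginx:alpine
    deploy:
      resources:
        limits:
          cpus: "0.50"
          memory: 256M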
Docker Swarm doesn't, however, support .env files like Docker-Compose does. So you cannot have e.g. IMAGE_VERSION=1.2.3 in an .env file and then in the docker-compose.yml file have: image: name:${IMAGE_VERSION}. See https://github.com/moby/moby/issues/29133; instead you'll need to set the env vars "manually": IMAGE_VERSION=SOMETHING docker stack deploy ... (This actually made me stick with Docker-Compose, plus the fact that I couldn't quickly figure out how to view a container's logs via Swarm; Swarm seemed more complicated.)
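If you do want to keep the values in an .env file, one workaround (assuming a flat KEY=value file with no spaces around the =) is to export its contents into the shell before deploying, which amounts to the same thing as the IMAGE_VERSION=... approach above:
$ set -a; . ./.env; set +a                           # export every variable from .env
$ docker stack deploy -c docker-compose.yml mystack  # the shell environment is now available for substitution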
In addition to @KajMagnus's answer, I should note that Docker Swarm still doesn't support Linux capabilities the way Docker [Compose] does. You can learn about this issue and dive into the Docker community discussions here.

Docker - running commands from all containers

I'm using docker compose to create a basic environment for my websites (at the moment only locally, so I don't care about security issues). At the moment I'm using 3 different containers:
for nginx
for php
for mysql
I can obviously log in to any container to run commands. For example, I can ssh into the php container to verify the PHP version or run a PHP script. But the question is: is it possible to have a configuration in which I could run commands for all running containers from, for example, one SSH container?
For example I would like to run commands like this:
php -v
nginx restart
mysql
after logging to one common SSH for all services.
Is it possible at all? I know there is the exec command, so I could prefix each command with the container name, but that isn't flexible enough to use, and with more containers it would get more and more difficult.
So the question is - is it possible at all and if yes, how could it be achieved?
Your question was:
Is it possible at all?
and the answer is:
No
This is due to the combination of the two restrictions you are giving. Your first restriction is:
Use SSH, not Exec
It is definitely possible to have an SSH daemon running in each container and set up the security so that you can run ssh commands in, e.g., passwordless mode;
see e.g. Passwordless SSH login
Your second restriction is:
one common SSH for all services
and this would now be the tricky part. You'd have to:
create one common SSH server, e.g. in one special container for this purpose, or using one of the existing containers
create communication to or between containers
make sure that the ssh server knows which command is for which container
All in all, this would be so much more complicated than a simple bash or python script doing the same thing with exec commands that "no" is IMHO a better answer than trying to solve the academic problem of whether there might be some tricky/fancy way of doing this.
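For completeness, a minimal sketch of such a script (the container names nginx, php and mysql are assumed from the question; substitute your compose project's real container names):
#!/bin/bash
# usage: ./run-in.sh <container> <command> [args...]
# e.g.:  ./run-in.sh php php -v
container="$1"; shift
case "$container" in
  nginx|php|mysql) docker exec -it "$container" "$@" ;;
  *) echo "unknown container: $container" >&2; exit 1 ;;
esac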

Is "docker start" completely resume all running service started by "docker run" [closed]

The command docker run is used to create a container from an image and start it running. When calling docker run, I can pass a CMD to tell Docker to run some service at startup.
But when I call docker stop to stop the running container and then call docker start, does it behave the same as the docker run above? For example, does it start all the services the same way docker run did?
The docker client is a convenience wrapper for many calls to the Docker API.
docker run will:
Attempt to create the container
If the Docker image isn't found locally, it will attempt to pull it
If the image is pulled successfully, it will then create the container
Once the container is created, it will then call docker start on the new container
The short answer to your question is: docker stop is the opposite of the docker start command. docker run calls docker start at the end, but it also does a bunch of other things.
docker run will always try to create a new container, and will throw an error if the container name already exists. docker start can be used to manually start an existing container. (You could also look into the docker restart command, which I believe calls docker stop and then docker start.)
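A quick way to see this for yourself (the image and container name are just an example):
$ docker run -d --name web nginx   # creates the container and starts its CMD
$ docker stop web                  # stops it, but the container (and its CMD) is kept
$ docker start web                 # starts the very same container again, running the same CMD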
Hope this helps!

Is there a "multi-user" Docker mode, e.g. for scientific clusters?

I want to use Docker for isolating scientific applications for the use in a HPC Unix cluster. Scientific software often has exotic dependencies so isolating them with Docker appears to be a good idea. The programs are to be run as jobs and not as services.
I want to have multiple users use Docker and the users should be isolated from each other. Is this possible?
I performed a local Docker installation and had two users in the docker group. The call to docker images showed the same results for both users.
Further, the jobs should be run under the calling user's UID and not as root.
Is such a setup feasible? Has it been done before? Is this documented anywhere?
Yes there is! It's called Singularity, and it was designed with scientific applications and multi-user HPC clusters in mind. More at http://singularity.lbl.gov/
OK, I think there will be more and more solutions popping up for this. I'll try to update the following list in the future:
udocker for executing Docker containers as users
Singularity (Kudos to Filo) is another Linux container based solution
Don't forget about DinD (Docker in Docker): jpetazzo/dind
You could dedicate one Docker (in Docker) container per user, and within that container the user could launch their jobs in further Docker containers.
I'm also interested in this possibility with Docker, for similar reasons.
There are a few problems I can think of:
1. The Docker daemon runs as root, giving anyone in the docker group effective host root permissions (e.g. they can leak permissions by mounting the host / dir as root).
2. Multi-user isolation, as mentioned.
3. Not sure how well this will play with any existing load balancers?
I came across Shifter, which may be worth a look and partly solves #1:
http://www.nersc.gov/research-and-development/user-defined-images/
Also, I know there is discussion about using kernel user namespaces to provide a mapping from container root to a non-privileged host user, but I'm not sure whether this is happening or not.
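For reference, that mapping is available today as an opt-in daemon option (user namespace remapping). A sketch of enabling it; note that it remaps container root to an unprivileged host user but does not by itself give each user a separate image store:
$ sudo dockerd --userns-remap=default   # or set "userns-remap": "default" in /etc/docker/daemon.json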
There is an officially supported Docker image that allows one to run Docker in Docker (dind), available here: https://hub.docker.com/_/docker/. This way, each user can have their own Docker daemon. First, start the daemon instance:
docker run --privileged --name some-docker -d docker:stable-dind
Note that the --privileged flag is required. Next, connect to that instance from a second container:
docker run --rm --link some-docker:docker docker:edge version
