I want to deploy an etcd cluster with 3 nodes using Docker.
So I use a discovery URL for that purpose.
The problem I'm having is that when I delete an etcd container and start a new one, it cannot rejoin the cluster.
Docker log says:
member "XXX" has previously registered with discovery service token (https://discovery.etcd.io/yyyy)
But etcd could not find valid cluster configuration in the given data dir (/data).
I am using volumes for the folders /data and /waldir.
I am also using --net=host, so the container always uses the same host IP.
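For reference, the containers are started roughly like this (a minimal sketch; the image tag, host paths, ports and the discovery token are placeholders/assumptions based on the description above):
docker run -d --net=host \
  -v /var/etcd/data:/data \
  -v /var/etcd/wal:/waldir \
  quay.io/coreos/etcd:v3.3.12 \
  etcd --name node1 \
    --data-dir /data --wal-dir /waldir \
    --discovery https://discovery.etcd.io/yyyy \
    --listen-client-urls http://0.0.0.0:2379 \
    --advertise-client-urls http://<host-ip>:2379 \
    --listen-peer-urls http://0.0.0.0:2380 \
    --initial-advertise-peer-urls http://<host-ip>:2380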
But why can't the new container rejoin the cluster?
Where is the cluster information saved inside the container?
Help will be appreciated.
Thanks.
Is there any way to do this:
run one service (container) with the main application, a server (a Flask application);
the server allows running other services, which are also Flask applications;
but I want to run each new service in a separate container?
For example, I have an endpoint /services/{id}/run on the server, where each id is some service id. The Docker image is the same for all services, and each service runs on a separate port.
I would like something like this:
a request to the server - <host>/services/<id>/run -> the application on the server runs some magic command/sends a message somewhere -> the service with that id starts in a new container.
I know that, at least locally, I can use Docker-in-Docker or simply mount the Docker socket in a container and work with Docker inside that container. But I would like to find a way to work across multiple machines (each service may run on a different machine).
For Kubernetes: I know how to create and run pods and deployments, but I can't find how to start a new container on command from another container. Can I somehow communicate with k8s from a container to start a new container?
Generally:
can I run a new container from another container without Docker-in-Docker or mounting the Docker socket;
can I do it with or without Kubernetes?
Thanks in advance.
I've compiled all of the links that were in the comments under the question. I would advise taking a look at them:
Docker:
StackOverflow: control Docker from another container.
The link explaining the security considerations is not working, but I managed to retrieve it via the Web Archive: Don't expose the Docker socket (not even to a container)
Exposing dockerd API
Docker Engine Security
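For the Docker route, here is a rough sketch of what the "Exposing dockerd API" approach looks like (only a sketch; read the security links above first; the host, port, cert paths and image name are placeholders):
# On the Docker host: expose the API over TCP, protected by TLS.
dockerd --tlsverify \
  --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem \
  -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock
# From the "server" container (or any other machine): start a new container remotely.
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
  -H tcp://<docker-host>:2376 run -d -p 5001:5000 myorg/flask-service:latest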
Kubernetes:
Access Clusters Using the Kubernetes API
Kubeflow, in the context of machine learning deployments
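For the Kubernetes route, a rough sketch of what "Access Clusters Using the Kubernetes API" enables from inside a pod (all names are placeholders; the pod's service account must have RBAC permission to create pods):
# Run inside the "server" pod: create a new pod via the API server.
APISERVER=https://kubernetes.default.svc
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
curl --cacert "$CACERT" -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -X POST "$APISERVER/api/v1/namespaces/default/pods" \
  -d '{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "service-42"},
    "spec": {
      "containers": [
        {"name": "service", "image": "myorg/flask-service:latest", "ports": [{"containerPort": 5000}]}
      ]
    }
  }'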
Is it possible to link a Docker container with a service running in minikube? I have a MySQL container which I want to access using a PMA (phpMyAdmin) pod in minikube. I tried adding PMA_HOST in the YAML file while creating the pod, but I am getting an error on the PMA GUI page:
mysqli_real_connect(): (HY000/2002): Connection refused
If I understand you correctly, you want to access a service (MySQL) running outside the kube cluster (minikube) from inside that cluster.
You have two ways to achieve this:
make sure your networking is configured in a way that allows traffic to pass both ways correctly. Then you should be able to access that MySQL service directly by its address, or by creating an external service inside the kube cluster (create a Service with no selector and manually configure external Endpoints; see the manifest sketch after this list);
use something like telepresence.io to expose a locally developed service inside a remote Kubernetes cluster.
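Here is a minimal sketch of the "Service with no selector + manual Endpoints" option from the first point (the name, IP and port are placeholders for your external MySQL container; PMA_HOST would then point at the Service name):
apiVersion: v1
kind: Service
metadata:
  name: external-mysql
spec:
  ports:
    - port: 3306
      targetPort: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-mysql   # must match the Service name
subsets:
  - addresses:
      - ip: 192.168.99.1 # host IP where the mysql container is reachable (placeholder)
    ports:
      - port: 3306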
I have one serious doubt about Docker Swarm. I have created these docker-machine VMs:
manager1
worker1
worker2
I joined all the workers to the manager and created the service like this:
docker service create --replicas 3 -p 80:80 --name web nginx
I changed the index.html in the service container on manager1.
When I open a URL like http://192.168.99.100, it shows the index.html file that I changed, but the remaining 2 nodes show the default nginx page.
What is the concept of Swarm? Is it used only for handling service failures?
How do I set up centralized data storage in Docker Swarm?
There are a few approaches to ensuring the same app and data are available on all nodes.
Your code (like nginx with your web app) should be built into an image, sent to a registry, and pulled to a Swarm service. That way the same app/code/site is in all the containers of that service.
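For example, a minimal sketch of that flow (the registry and image names are placeholders):
# Dockerfile: bake the changed index.html into the image.
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/index.html
# Build, push, and roll the existing service over to the new image:
docker build -t myregistry.example.com/web:1.0 .
docker push myregistry.example.com/web:1.0
docker service update --image myregistry.example.com/web:1.0 web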
If you need persistent data, like a database, then you should use a plugin volume driver that lets you store the unique data on shared storage.
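A sketch of what that looks like with docker service create ("some-volume-driver" is a placeholder for whichever volume plugin backs your shared storage, e.g. an NFS- or cloud-backed driver):
docker service create --replicas 3 -p 80:80 --name web \
  --mount type=volume,source=webdata,target=/usr/share/nginx/html,volume-driver=some-volume-driver \
  nginx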
According to the official documentation on Install and Create a Docker Swarm, the first step is to create a VM named local, which is needed to obtain the token with swarm create.
Once the manager and all nodes have been created and added to the swarm cluster, do I need to keep the local VM running?
Note: this tutorial is for the first version of Swarm (called Swarm legacy). There is a new version called Swarm mode, available since Docker 1.12. I'm pointing this out because there seems to be a lot of confusion between the two.
No, you don't have to keep the local VM; it is only used to get a unique cluster token from the Docker Hub discovery service.
Now this is a bit overkill just to generate a token. You can bypass this step by:
Running the swarm container directly if you have Docker for Mac or, more generally, a local instance of Docker running:
docker run --rm swarm create
Directly querying the service discovery URL to generate a token:
curl -X POST "https://discovery.hub.docker.com/v1/clusters"
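Either way, the token is then only needed when starting the Swarm (legacy) manager and joining the nodes, roughly like this (a sketch based on the legacy tutorial; <token>, addresses and ports are placeholders):
docker run -d -p 4000:2375 swarm manage token://<token>
docker run -d swarm join --addr=<node-ip>:2375 token://<token>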
Okay, here is my situation:
I created a Docker Swarm cluster using docker-machine. I can deploy any container, etc., so basically everything is working fine. My question now is how to give someone else access to the cluster. I want other people to deploy containers on that cluster using docker-compose.
Docker Machine configures the Docker Engine on each node to be secured using TLS:
https://docs.docker.com/engine/security/https/
The client configuration can be seen by running the "docker-machine config" command; for example, the following settings are used to access the remote Docker host:
--tlsverify
--tlscacert="~/.docker/machine/certs/ca.pem"
--tlscert="~/.docker/machine/certs/cert.pem"
--tlskey="~/.docker/machine/certs/key.pem"
-H=tcp://....
It's the files under ~/.docker/machine/certs that are needed by other users who want to connect to your swarm.
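As a sketch, another user would copy ca.pem, cert.pem and key.pem and then point their Docker client (and docker-compose) at the Swarm manager; the address, port and certificate path below are placeholders:
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/swarm-certs
export DOCKER_HOST=tcp://<manager-ip>:3376   # 3376 for a docker-machine Swarm master, 2376 for a single engine
docker info          # should now show the Swarm cluster
docker-compose up -d # deploys to the Swarm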
I expect that Docker will eventually add some form of user authentication and authorization.