Can a single docker host be managed by multiple docker-machine instances? - docker

Using docker-machine on my system (S1), I created a Docker host on AWS using the amazon-ec2 driver. I have another system (S2) on which I also installed docker-machine, used the generic driver, and pointed it at the same Docker host on AWS. From that point onwards, I have been unable to access the Docker host from S1. Any suggestions on how to get this working?

Not natively, although it may do in the future.
There is a third-party tool called machine-share that helps you export and import docker-machine configs from one host to another without having to edit the JSON configuration by hand.
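A rough sketch of that workflow, assuming the machine-export/machine-import commands and zip naming described in the machine-share README (verify against the project before relying on it):
npm install -g machine-share
machine-export <machine-name>        # on S1: bundles the machine's certs and config into <machine-name>.zip
machine-import <machine-name>.zip    # on S2: registers the machine with the local docker-machine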

Related

Docker desktop networking Windows and Linux nodes

I have a Windows service in a Docker container that needs to access a MySQL database in a Linux container on the same machine (currently a dev machine).
I thought of creating an overlay network across the two "nodes" on the same machine, but this isn't possible, as creating the swarm worker fails on Windows after creating the swarm master on Linux.
Is this possible, and if not, what is the easiest way of doing this? The purpose of the Windows container is simply to deploy to a test environment to gather data. Do I need to deploy the Linux container to the cloud or to another machine so the Windows container can communicate with it?
You can simply use docker-compose; it will create the network automatically. Replace the MySQL host in your data source settings with the MySQL service name you defined in the compose YAML file, as in the sketch below. For detailed information, refer to the docker-compose documentation.
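A minimal sketch of that setup; the service names (app, db), image names, and password are placeholders, not taken from the question:
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  app:
    image: your-app-image     # the application container
    depends_on:
      - db
  db:                         # the app connects to host "db", not an IP
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
EOF
docker-compose up -d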

Is it possible to manage macOS Docker Desktop with Docker Machine?

I have Docker Desktop installed on my Mac (not Docker Toolbox), and I installed docker-machine according to the official documentation.
I'm trying to add my localhost Docker engine as a node under docker-machine, with no success.
The steps that I made were:
Enable sshd in localhost (ssh localhost works)
Add localhost Docker to Docker Machine:
docker-machine create --driver generic --generic-ip-address 127.0.0.1 --generic-ssh-user <ssh_username> <node_name>
Running pre-create checks...
Creating machine...
(localhost) No SSH key specified. Assuming an existing key at the default location.
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Password:
Detecting the provisioner...
Password:
Error creating machine: Error detecting OS: Error getting SSH command: ssh command error:
command : cat /etc/os-release
err : exit status 1
output : cat: /etc/os-release: No such file or directory
Output of docker-machine ls
docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
localhost - generic Running tcp://127.0.0.1:2376 Unknown Unable to query docker version: Cannot connect to the docker engine endpoint
Sorry for my English, I'm not a native speaker.
docker-machine is dangerous. I wouldn't recommend it for managing production servers, as it requires passwordless sudo and makes it very easy to damage your Docker installation. I once managed to completely remove all containers and images from a server, not realizing that the command I ran was not merely connecting to the server but initializing it from scratch.
If you want to control multiple Docker daemons from a single CLI, try Docker Contexts.
Edit:
docker-machine's purpose is provisioning and managing machines with a Docker daemon.
It can be used both with local VMs and with various cloud providers. With a single command it can create and start a VM, then install and configure Docker on that new VM (including generating TLS certificates).
It can create an entire Docker Swarm cluster.
It can also install Docker on a physical machine, given SSH access with passwordless sudo (that is what the generic driver you tried to use is for).
Once a machine is fully provisioned with Docker, it can also set environment variables that configure the Docker CLI to send commands to the remote Docker daemon installed on that machine - see here for details.
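For example (the machine name is a placeholder):
docker-machine env <machine-name>          # prints DOCKER_HOST, DOCKER_CERT_PATH, etc.
eval $(docker-machine env <machine-name>)  # point the local Docker CLI at that machine's daemon
docker ps                                  # now lists containers on the remote machine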
Finally, one can also add machines with Docker manually configured by not using any driver - as described here. The only purpose of that is to allow for a unified workflow when switching between various remote machines.
However, as I stated before, docker-machine is dangerous - it can also remove existing VMs and, in the case of physical machines, reprovision them, thereby removing all existing images, containers, etc. A simple mistake can wipe a server clean. Not to mention it requires both key-based SSH and passwordless sudo, so if an unauthorized person gets their hands on an SSH key for a production server, that's it - they have full root access to everything.
It is possible to use docker-machine with preexisting Docker installations safely - you need to add them without using any driver, as described here. In this scenario, however, most docker-machine commands won't work, so the only benefit is easy generation of those environment variables for the Docker CLI that I mentioned before.
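A sketch of registering an existing daemon that way; the exact flags differ between docker-machine versions, and the IP and machine name here are placeholders:
docker-machine create --driver none --url tcp://<host-ip>:2376 existing-box
docker-machine ls    # existing-box is listed, but provisioning commands won't work on it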
Docker Contexts are a new way of telling Docker CLI which Docker daemon it's supposed to communicate with. They essentially are meant to replace all those environment variables docker-machine generates.
Since Docker CLI only communicates with Docker daemon, there is no risk of accidentally deleting a VM or reprovisioning already configured physical machine. And since they are a part of Docker CLI, there is no need to install additional software.
On the other hand, Docker contexts cannot be used to create or provision new machines - one needs to either do that manually or use some other mechanism or tool (like Vagrant or some kind of template provided by the cloud provider).
So if you really need a tool that'll let you easily create, provision, and remove Docker-enabled machines, then use docker-machine. If, however, all you want is a list of all your Docker-enabled machines in one place and a way to easily choose which one your local Docker CLI talks to, Docker Contexts are a much safer alternative.
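For instance, a context that reaches a remote daemon over SSH (the user and host are placeholders):
docker context create my-remote --docker "host=ssh://user@remote-host"
docker context use my-remote   # subsequent docker commands target the remote daemon
docker ps
docker context use default     # switch back to the local daemon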

Unable to connect to an outside database from a Docker container app

We have two machines: one is a Windows machine and the other a Linux machine. My application runs in a Docker container on the Linux machine. Our database runs on the Windows machine, and our application needs to get data from that DB.
We have provided the proper data source details (IP, username, password) in our application. It works when we do not use a Docker container, but when we run inside a Docker container it does not work.
Can anyone help us figure out how to connect to an outside DB from a Docker-enabled application? We are totally new to Docker.
Any help would be much appreciated.
A container's default network is "bridge"; you should choose a macvlan or host network instead.
Method 1
docker run -d --net host image
The container will share your host's IP address and will be able to access your database.
Method 2
Use the docker network create command to create a macvlan network (reference here),
then create your container with
docker run -d --net YOURNETWORK image
The container will get an IP address with the same gateway as its host; see the sketch below.
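A sketch of the macvlan setup; the subnet, gateway, and parent interface are assumptions you must adapt to your own network:
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  mymacvlan
docker run -d --net mymacvlan image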
There are a lot of issues that could affect your container's ability to communicate with your database. In the future you should compose your question with as much detail as possible. To answer this correctly we will, at a minimum, need the following details:
Linux distribution name & version
Docker version
Output of docker inspect from the container
Linux firewall configuration
Network configuration
Is your Windows machine running on the same local network/subnet as your Linux machine? If so, please provide information about the subnet, as the default bridge network set up by Docker may restrict access to local resources, whereas resources reached over a wide area network would still be accessible.
You can try passing the --network=host option to your docker run command like so: docker run --network=host <image name>. Doing so eliminates the need to specify port mappings in your run command, as they are ignored when using the host's network.
Please edit your question and include the above requested details to get a complete answer.

Create Docker-Machine without re-provisioning Docker

I have an existing AWS EC2 instance with Docker already provisioned on it. I would like to import this existing host so that Docker Machine can manage it locally.
To do this, so far I have been using the generic driver. But as you can see in the documentation, it re-provisions docker every time, thereby bringing down my running containers. The AWS driver does not seem to have an option to do this either.
So how can I add an existing host locally without re-provisioning docker or bringing down my containers?

Use real server instead of docker-machine for OSX

I have a Linux server in the cloud with the Docker service installed on it. How can I use my virtual server (VS) in the cloud instead of docker-machine on my OSX machine? That is, instead of installing VirtualBox and creating a VM in it with docker-machine, I want to use my cloud server as the Docker server.
To access a remote Docker daemon simply pass the -H flag to your docker commands:
docker -H=tcp://192.168.0.100:2375 images
You need to ensure that the remote Docker daemon is listening on the appropriate network interface. Be aware, though, that doing this on an externally reachable server is highly insecure: anyone who can reach the port effectively has root access on the server. At the very least, read this article on securing the Docker daemon.
Personally, I would only recommend reaching a remote Docker daemon through a port binding over an SSH tunnel.
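A sketch of such a tunnel, forwarding a local TCP port to the remote Docker socket; the user and host are placeholders, and it assumes OpenSSH 6.7+ (for Unix-socket forwarding) and a remote user permitted to read the socket:
ssh -nNT -L 2375:/var/run/docker.sock user@remote-host &
docker -H tcp://127.0.0.1:2375 images   # talks to the remote daemon through the tunnel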
You might get a solution from docker-machine's generic driver. Just start the virtual server in the cloud, set up proper SSH keys, and get started :) It should work just the same as with a VM inside VirtualBox.
I'm not sure how to get the VS auto-started if it is shut down, though. Via a cloud-vendor-specific command line program?
Edit: I should have read the docs better - the first cloud example actually shows the usage of the DigitalOcean driver. If the server is already running, then just use the generic driver.
