How do I run docker-compose up on a Docker swarm?

I'm new to Docker and trying to get started by deploying locally a hello-world Flask app on Docker-Swarm.
So far I have my Flask app, a Dockerfile, and a docker-compose.yml file.
version: "3"
services:
webapp:
build: .
ports:
- "5000:5000"
docker-compose up works fine and deploys my Flask app.
I have started a Docker Swarm with docker swarm init, which I understand created a swarm with a single node:
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
efcs0tef4eny6472eiffiugqp *   moby       Ready    Active         Leader
Now, I don't want workers or anything else, just a single node (the manager node created by default), and deploy my image there.
Looking at these instructions https://docs.docker.com/get-started/part4/#create-a-cluster it seems like I have to create VMs with a docker-machine driver, then scp my files there, and ssh in to run docker-compose up. Is that the normal way of working? Why do I need a VM? Can't I just run docker-compose up on the swarm manager? I didn't find a way to do so, so I'm guessing I'm missing something.

Running docker-compose up will create individual containers directly on the host.
With swarm mode, all the commands to manage containers have shifted to docker stack and docker service, which manage containers across multiple hosts. The docker stack deploy command accepts a compose file with the -c arg, so you would run the following on a manager node:
docker stack deploy -c docker-compose.yml stack_name
to create a stack named "stack_name" based on the version 3 yml file. This command works the same regardless of whether you have one node or a large cluster managed by your swarm.
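To verify the deployment, the stack's services and their tasks can be listed from the same manager node; a quick sketch using the stack name from above:
docker stack services stack_name   # one line per service, with replica counts
docker stack ps stack_name         # the individual tasks behind each service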

Related

Link docker containers in the Dockerfile

I have Jaeger running in a Docker container on my local machine.
I've created a sample app which sends trace data to Jaeger. When running from the IDE, the data is sent perfectly.
I've containerized my app, and now I'm deploying it as a container, but the communication only works when I use --link jaeger to link both containers (expected).
My question is:
Is there a way of adding the --link parameter within my Dockerfile, so then I don't need to specify it when running the docker run command?
There is no way to do this in the Dockerfile if you want to keep two separate images. How would you know in advance the name/ID of the container you're going to link?
Below are two solutions:
Use Docker Compose. This way, Docker will automatically link all the containers together.
Create a bridge network and add all the containers you want to link to it. This way, you'll have name resolution and will be able to contact each container by its name.
I recommend using networking. Create a network:
docker network create [OPTIONS] NETWORK
and then run each container with --network="network".
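For example, a minimal sketch of that approach (the image and container/network names here are purely illustrative):
docker network create jaeger-net
docker run -d --name jaeger --network jaeger-net jaegertracing/all-in-one
docker run -d --name myapp --network jaeger-net myapp:latest
# myapp can now reach the tracer at the hostname "jaeger"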
Alternatively, use docker-compose with a shared network so the containers can reach each other. Example:
version: '3'
services:
  jaeger:
    networks:
      - network1
  other_container:
    networks:
      - network1
networks:
  network1:
    external: true
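Note that because network1 is declared with external: true, compose will not create it; it must exist before the stack is brought up:
docker network create network1
docker-compose up -d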

How do I access Mopidy running in Docker container from another container

To start, I am more familiar with running Docker through Portainer than I am with doing it through the console.
What I'm Doing:
Currently, I'm running Mopidy through a container, which is being accessed by other machines through the default Mopidy port. In another container, I am running a Slack bot using the Limbo repo as a base. Both of them are running on Alpine Linux.
What I Need:
What I want to do is for my Slack bot to be able to call MPC commands, such as muting the volume, etc. This is where I am stuck. What is the best way to make this work?
What I've tried:
I could ssh into the other container to send a command, but it doesn't make sense to do this since they're both running on the same server machine.
The best way to connect a bunch of containers is to define a service stack using a docker-compose.yml file and launch all of them using docker-compose up. This way all the containers will be connected via a single user-defined bridge network, which makes all their ports accessible to each other without you explicitly publishing them. It also allows the containers to discover each other by service name via DNS resolution.
Example of docker-compose.yml:
version: "3"
services:
service1:
image: image1
ports:
# the following only necessary to access port from host machine
- "host_port:container_port"
service2:
image: image2
In the above example, any application in the service2 container can reach some port on service1 just by using the service1:port address.
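Applied to the Mopidy setup, and assuming the compose service is named mopidy and exposes the MPD protocol on its default port 6600 (both are assumptions), the Slack bot container could drive it with mpc by hostname:
mpc --host mopidy --port 6600 volume 0   # set volume to 0 (mute)
mpc --host mopidy --port 6600 toggle     # play/pause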

docker-machine+digital ocean: How to mount volumes?

I'm using volumes to persist data between container restarts. I've tested this on my dev machine and it works fine. I successfully deployed my application onto Digital Ocean using docker-machine but the data doesn't persist after container restarts (e.g docker-compose down followed by docker-compose up).
Using volumes with docker-machine should work, right? How can I check whether the volumes have been mounted on my remote machine (e.g. DO)? docker volume ls shows volumes mounted on my local machine, but I have no idea how to check the volumes mounted on the remote machine.
The operating systems for my development and remote machines are both Ubuntu 16.04.
The relevant db/volume bits from my docker-compose file (I, of course, have other services but I omitted them for brevity):
version: '3'
services:
  db:
    image: postgres:9.6.5
    volumes:
      - db:/var/lib/postgresql/data
volumes:
  db:
Initially I misunderstood your question. Now to answer: the problem you are facing is caused by docker-compose down; instead of that, you can try docker-compose stop.
For more information, here are the relevant docker-compose commands:
build    Build or rebuild services
create   Create services
down     Stop and remove containers, networks, images, and volumes
start    Start services
stop     Stop services
up       Create and start containers
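To actually inspect the volumes on the remote machine, you can point your local Docker client at the Digital Ocean host with docker-machine env and then run the usual commands remotely (the machine name below is hypothetical):
eval $(docker-machine env my-droplet)
docker volume ls                  # now lists volumes on the DO host
docker volume inspect myapp_db    # compose prefixes the volume name with the project name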

Docker-Compose with Docker 1.12 "Swarm Mode"

Does anyone know how (if possible) to run docker-compose commands against a swarm using the new docker 1.12 'swarm mode' swarm?
I know with the previous 'Docker Swarm' you could run docker-compose commands directly against the swarm by updating the DOCKER_HOST to point to the swarm master:
export DOCKER_HOST="tcp://123.123.123.123:3375"
and then simply execute commands as if you were running them against a single instance of Docker engine.
OR is this functionality something that docker-compose bundle is replacing?
I realized my question was vaguely worded and actually has two parts to it. Eventually however, I was able to figure out solutions to both issues.
1) Can you run commands directly 'against' a swarm / swarm-mode in Docker 1.12 running on a remote machine?
While you can't really run commands 'against' a swarm, you CAN run docker service commands on the master node of a swarm in order to run services on that swarm.
You can also configure the Docker daemon on the master node to listen on a TCP port in order to externally expose the Docker API.
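For example, a minimal sketch of starting a service from the manager node (the image name is hypothetical):
docker service create --name web --publish 5000:5000 myapp:latest
docker service ls   # confirm the service and its replicas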
2) Can you still use docker-compose files to start services in Docker 1.12 swarm-mode?
Yes, although these features are currently part of Docker's "experimental" features. This means you must download/install the version that includes them (check the GitHub repo).
You essentially follow these instructions https://github.com/docker/docker/blob/master/experimental/docker-stacks-and-bundles.md
to go from the docker-compose.yml file to a distributed application bundle and then to an application stack (this is when your services are actually run).
$ docker-compose bundle
$ docker deploy [OPTIONS] STACK
Here's what I did:
On my remote swarm manager node I started docker with the following options:
docker daemon -D -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375 &
This configures the Docker daemon to listen on the standard Docker socket unix:///var/run/docker.sock AND on TCP port 2375 on all interfaces (0.0.0.0).
WARNING: I'm not enabling TLS here, just for simplicity.
On my local machine I update the docker host environment variable to point at my swarm master node.
$ export DOCKER_HOST="tcp://XX.XX.XX.XX:2375" (populate with your IP)
Navigate to the directory of my docker-compose.yml file
Create a bundle file from my docker-compose.yml file. Make sure to include the .dab extension.
docker-compose bundle --fetch-digests -o myNewBundleFile.dab
Create an application stack from the bundle file. Do not specify the .dab extension here.
$ docker deploy myNewBundleFile
Now I'm still experiencing some networking related issues, but I have successfully gotten my service up and running from my unmodified docker-compose.yml files. The network issues I'm experiencing are documented here: https://github.com/docker/docker/issues/23901
While the official support for Swarm mode in Docker Compose is still in progress, I've created a simple script that takes docker-compose.yml file and runs docker service commands for you. See https://github.com/ddrozdov/docker-compose-swarm-mode for details.
Running docker-compose directly against a swarm is not possible. Compose uses containers to create a client-side concept of a service; Docker 1.12 swarm mode introduces a new server-side concept of a service.
You are correct that docker-compose bundle; docker stack deploy is the way to get a Compose file running in Swarm Mode.
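On Docker 1.13 and later the bundle step is no longer needed: a version 3 compose file deploys directly, as described in the first answer on this page:
docker stack deploy -c docker-compose.yml stack_name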

Link Running External Docker to docker-compose services

I assume that there is a way to link via one or a combination of the following: links, external_links and networking.
Any ideas? I have come up empty handed so far.
Here is an example snippet of a docker-compose.yml which is started from within a separate Ubuntu Docker container:
version: '2'
services:
  web:
    build: .
    depends_on:
      - redis
  redis:
    image: redis
I want to be able to connect to the redis port from the container that launched docker-compose.
I do not want to bind the ports on the host as it means I won't be able to start multiple docker-compose from the same model.
-- context --
I am attempting to run a docker-compose stack from within a Jenkins maven build container so that I can run tests. But I cannot for the life of me get the original container to access the exposed ports of the docker-compose services.
Reference the machines by hostname; the v2 compose format automatically connects the services by hostname on a private network by default. You'll be able to ping "web" and "redis" from within each container. If you want to access the machines from your host, include a "ports" definition for each service in your yml.
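For example, exposing redis to the host would just mean adding a mapping like this to the yml (the host port choice is arbitrary):
redis:
  image: redis
  ports:
    - "6379:6379"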
The v1 links were removed from the v2 compose syntax since they are now implicit. From the docker compose file documentation:
links with environment variables: As documented in the environment variables reference, environment variables created by links have been deprecated for some time. In the new Docker network system, they have been removed. You should either connect directly to the appropriate hostname or set the relevant environment variable yourself...
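If you need to reach those hostnames from a container started outside the compose project (like the Jenkins container above), one option, an assumption on my part rather than part of the quoted answer, is to attach that container to the project's default network and then use the service names directly:
docker network ls                                  # look for the <project>_default network
docker network connect myproject_default jenkins   # "jenkins" is a hypothetical container name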
