docker-machine + Digital Ocean: How to mount volumes?

I'm using volumes to persist data between container restarts. I've tested this on my dev machine and it works fine. I successfully deployed my application onto Digital Ocean using docker-machine, but the data doesn't persist across container restarts (e.g. docker-compose down followed by docker-compose up).
Using volumes with docker-machine should work, right? How can I check whether the volumes have been mounted on my remote machine (i.e. on DO)? docker volume ls shows the volumes on my local machine, but I have no idea how to check the volumes on the remote machine.
The operating systems for my development and remote machines are both Ubuntu 16.04.
The relevant db/volume bits from my docker-compose file (I, of course, have other services but I omitted them for brevity):
version: '3'
services:
  db:
    image: postgres:9.6.5
    volumes:
      - db:/var/lib/postgresql/data
volumes:
  db:

Initially I misunderstood your question. Now to answer: the problem you are facing is caused by docker-compose down. Instead of that, you can try docker-compose stop, which stops the containers without removing them.
For more information, see the relevant docker-compose commands:
Commands:
  build    Build or rebuild services
  create   Create services
  down     Stop and remove containers, networks, images, and volumes
  start    Start services
  stop     Stop services
  up       Create and start containers
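To answer the second part of the question (checking volumes on the remote machine), a minimal sketch, assuming your Digital Ocean machine is named do-machine: point the local Docker CLI at the remote daemon and run the same volume commands there.
# point the local docker CLI at the remote daemon ("do-machine" is a placeholder name)
eval $(docker-machine env do-machine)

# volumes listed now live on the remote machine, not on your laptop
docker volume ls

# stop containers without removing them or their named volumes
docker-compose stop

# switch the CLI back to the local daemon
eval $(docker-machine env --unset)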

Link docker containers in the Dockerfile

I have Jaeger running in a Docker container on my local machine.
I've created a sample app which sends trace data to Jaeger. When running from the IDE, the data is sent perfectly.
I've containerized my app, and now I'm deploying it as a container, but the communication only works when I use --link jaeger to link both containers (expected).
My question is:
Is there a way of adding the --link parameter within my Dockerfile, so then I don't need to specify it when running the docker run command?
There is no way to do it in the Dockerfile if you want to keep two separate images. How would you know in advance the name/ID of the container you're going to link?
Below are two solutions:
Use Docker Compose. This way, Docker will automatically link all the containers together.
Create a bridge network and add all the containers you want to link to it. This way, you'll have name resolution and you'll be able to contact each container by its name.
I recommend using networking. Create a network with:
docker network create [OPTIONS] NETWORK
and then run your containers with --network="network".
You can also use docker-compose with a shared network so the containers can reach each other by name.
Example:
version: '3'
services:
  jaeger:
    networks:
      - network1
  other_container:
    networks:
      - network1
networks:
  network1:
    external: true
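For the non-compose variant, a minimal sketch (the network name network1 and the image names are placeholders; adjust them to your own images):
# create a user-defined bridge network
docker network create network1

# run Jaeger on that network; the container name doubles as its DNS name
docker run -d --name jaeger --network=network1 jaegertracing/all-in-one

# run your app on the same network; it can reach Jaeger at "jaeger:<port>"
docker run -d --name my-app --network=network1 my-app-image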

Create a NFS share between containers described in different docker-compose files and running in different docker-machines

I have the following setup in my computer:
One docker-machine set-up for the containers of my Project A. I have my docker-compose.yml file, describing which containers have to be build, the volumes to mount and so on, and the Dockerfile for each container.
Another docker-machine set up for the containers of my Project B, with its own docker-compose.yml and Dockerfiles.
I now want to do a NFS share between a container in my project A (let's call it container 1) and another container in my project B (container 2).
I was checking links like this, but, as far as I understand it, that's for containers in the same network. In this case, my container 1 and container 2 are not in the same network, and they are in different machines.
I haven't specified any networking option when running docker-machine or in my docker-compose.yml files (apart from exposing the ports that my apps use).
How can I do an NFS share between those 2 containers?
The docker-compose up command creates a network named [projectname]_default, and all the services specified in the docker-compose.yml file are attached to that network.
For example, suppose your app is in a directory called myapp, and your docker-compose.yml looks like this:
version: "3"
services:
web:
build: .
ports:
- "8000:8000"
db:
image: postgres
ports:
- "8001:5432"
When you run docker-compose up, the following happens:
1) A network called myapp_default is created.
2) A container is created using web’s configuration. It joins the network myapp_default under the name web.
3) A container is created using db’s configuration. It joins the network myapp_default under the name db.
If you want another service to make use of an existing Docker network, you need to define it using the external option.
Use a pre-existing network
If you want your containers to join a pre-existing network, use the external option:
networks:
  default:
    external:
      name: my-pre-existing-network
Instead of attempting to create a network called [projectname]_default, Compose looks for a network called my-pre-existing-network and connects your app's containers to it.
source: https://docs.docker.com/compose/networking/#use-a-pre-existing-network
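Applied to the two-project setup above, a minimal sketch, assuming both projects talk to the same Docker daemon (containers on two different docker-machines would instead need an overlay network or published ports); the network name shared-net is a placeholder:
# create the shared network once, outside either project
docker network create shared-net

# then, in both Project A's and Project B's docker-compose.yml:
networks:
  default:
    external:
      name: shared-net
Container 1 can then reach container 2 by its Compose service name, e.g. container2:port, and vice versa.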

How do I access Mopidy running in Docker container from another container

To start, I am more familiar with running Docker through Portainer than through the console.
What I'm Doing:
Currently, I'm running Mopidy through a container, which is being accessed by other machines through the default Mopidy port. In another container, I am running a Slack bot using the Limbo repo as a base. Both of them are running on Alpine Linux.
What I Need:
What I want to do is for my Slack bot to be able to call MPC commands, such as muting the volume, etc. This is where I am stuck. What is the best way to make this work?
What I've tried:
I could ssh into the other container to send a command, but it doesn't make sense to do this since they're both running on the same server machine.
The best way to connect a bunch of containers is to define a service stack in a docker-compose.yml file and launch all of them with docker-compose up. This way all the containers are connected via a single user-defined bridge network, which makes all their ports accessible to each other without you explicitly publishing them. It also allows the containers to discover each other by service name via DNS resolution.
Example of docker-compose.yml:
version: "3"
services:
service1:
image: image1
ports:
# the following only necessary to access port from host machine
- "host_port:container_port"
service2:
image: image2
In the above example, any application in the service2 container can reach a port on service1 just by using the address service1:port.
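Applied to the Mopidy/Slack bot setup, a minimal sketch (the service names mopidy and slackbot and the image names are placeholders, and it assumes Mopidy's MPD frontend is configured to listen on the container network rather than only on 127.0.0.1):
version: "3"
services:
  mopidy:
    image: my-mopidy-image      # placeholder for your Mopidy image
    ports:
      - "6680:6680"             # only needed for access from other machines
  slackbot:
    image: my-limbo-bot-image   # placeholder for your Limbo-based bot image
The bot container can then run MPC commands against the Mopidy service by hostname, e.g. mpc --host mopidy volume 50, with no ssh needed.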

Docker services running on some locally and others remotely

How can I configure docker-compose so that some containers (especially those in active development) run on my local host machine while other services run as containers on remote servers?
In docker-compose.yml
rails:
  build: some_path
  volumes: some_volumes
mysql:
  image: xxx
  build: xxxx
nginx:
  image: xxx
  build: xxxx
other_services:
Currently I have all containers running locally and it works fine, but I noticed that performance is slow; what if I have, for example, nginx and other_services running remotely - how do I do that? If there is a tutorial link, kindly let me know, since I didn't find one with Google.
Use Docker swarm. You can create a swarm with many nodes (one on your local machine, one on the remote server) and then use docker stack deploy to deploy your application to those machines.
This is the tutorial.
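A minimal sketch of that workflow (the hostnames, the join token, and the stack name are placeholders):
# on the local machine, which becomes the swarm manager
docker swarm init

# on the remote server, join with the token printed by the command above
docker swarm join --token <token> <manager-ip>:2377

# back on the manager, deploy the compose file as a stack
docker stack deploy -c docker-compose.yml myapp
Individual services can be pinned to a specific node with deploy.placement.constraints in the compose file, which is how you would keep the development services local and the rest remote.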

How do I run docker-compose up on a docker swarm?

I'm new to Docker and trying to get started by deploying locally a hello-world Flask app on Docker-Swarm.
So far I have my Flask app, a Dockerfile, and a docker-compose.yml file.
version: "3"
services:
webapp:
build: .
ports:
- "5000:5000"
docker-compose up works fine and deploys my Flask app.
I have started a Docker Swarm with docker swarm init, which I understand created a swarm with a single node:
ID                           HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
efcs0tef4eny6472eiffiugqp *  moby       Ready    Active         Leader
Now, I don't want workers or anything else, just a single node (the manager node created by default), and deploy my image there.
Looking at these instructions https://docs.docker.com/get-started/part4/#create-a-cluster it seems like I have to create a VM driver, then scp my files there, and ssh to run docker-compose up. Is that the normal way of working? Why do I need a VM? Can't I just run docker-compose up on the swarm manager? I didn't find a way to do so, so I'm guessing I'm missing something.
Running docker-compose up will create individual containers directly on the host.
With swarm mode, all the commands to manage containers have shifted to docker stack and docker service, which manage containers across multiple hosts. The docker stack deploy command accepts a compose file with the -c arg, so you would run the following on a manager node:
docker stack deploy -c docker-compose.yml stack_name
to create a stack named "stack_name" based on the version 3 yml file. This command works the same regardless of whether you have one node or a large cluster managed by your swarm.
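One caveat, as a sketch (the image tag my-flask-app and the stack name are placeholders): docker stack deploy ignores the build key in the compose file, so the webapp service needs to reference a pre-built image.
# build and tag the image first
docker build -t my-flask-app .

# in docker-compose.yml, add an image reference for the service:
#   webapp:
#     image: my-flask-app
#     ports:
#       - "5000:5000"

# then, on the manager node, deploy the stack
docker stack deploy -c docker-compose.yml hello_flask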
