docker-compose with docker-swarm under docker-machine not mounting volumes - docker

I have a working docker-compose example running a static html with nginx.
The directory tree:
├── docker-compose.yml
├── nginx
│   ├── app
│   │   └── index.html
│   └── Dockerfile
The docker-compose.yml:
nginx:
  build: ./nginx
  volumes:
    - ./nginx/app:/usr/share/nginx/html
  ports:
    - 8080:80
The nginx directory has the Dockerfile:
FROM nginx
Everything is working right.
The problem is that I built a Docker Swarm infrastructure with docker-machine, following the Docker documentation: the local machine, the swarm-master and two nodes.
Everything seems to work fine too.
$ eval $(docker-machine env --swarm swarm-master)
$[swarm-master]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d8afafeb9f05 few_nginx "nginx -g 'daemon off" 20 minutes ago Up 3 seconds 443/tcp, 192.168.99.102:8080->80/tcp swarm-host-00/few_nginx_1
But nginx returns a 403 Forbidden:
$[swarm-master]$ curl http://192.168.99.102:8080/
<html>
<head><title>403 Forbidden</title></head>
<body bgcolor="white">
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.9.10</center>
</body>
</html>
I SSH into the VirtualBox machine and enter the container:
$[swarm-master]$ docker-machine ssh swarm-host-00
docker@swarm-host-00:~$ docker exec -ti d8afafeb9f05 bash
and nothing inside the nginx/html directory:
root@d8afafeb9f05:/# ls /usr/share/nginx/html/
Is it necessary to do something different in compose/swarm for the volumes? Am I doing something wrong?

It's necessary to copy the files to the hosts, or to build the image with a COPY instruction in the Dockerfile. With Swarm the container can be scheduled on any node, and a relative host path in volumes: refers to the filesystem of whichever node runs the container; since ./nginx/app doesn't exist there, Docker mounts an empty directory and nginx returns 403.
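For example, a minimal sketch of the COPY approach, using the directory layout from the question (and dropping the volumes: entry from docker-compose.yml), would change nginx/Dockerfile to:
# Bake the static files into the image so every Swarm node has them
FROM nginx
COPY app /usr/share/nginx/html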
Reference: https://github.com/docker/compose/issues/2799#issuecomment-177998133

Related

Deploy multiple docker images with a single gitlab-ci.yml to a VPS

I have a repo that has two microservices (command & query), each with its own Dockerfile:
├── command
│   ├── DockerFile
│   ├── main.py
│   ├── requirements.txt
│   └── server
│       ├── app.py
│       ├── env.py
│       └── logger.py
├── docker-compose.yaml
└── query
    ├── DockerFile
    ├── main.py
    ├── requirements.txt
    └── server
        ├── app.py
        ├── database.py
        └── env.py
I have a gitlab runner in a VPS, and another VPS where I deploy the services.
I would like to deploy both microservices separately using gitlab-ci. Here is how I usually do it when I have only one image:
stages:
  - publish
  - deploy

variables:
  TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:latest
  TAG_COMMIT: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:$CI_COMMIT_SHORT_SHA

publish:
  image: docker:latest
  stage: publish
  services:
    - docker:19.03-dind
  script:
    - docker build -t $TAG_COMMIT -t $TAG_LATEST .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker push $TAG_COMMIT
    - docker push $TAG_LATEST

deploy:
  image: alpine:latest
  stage: deploy
  tags:
    - deployment
  script:
    - chmod og= $ID_RSA
    - apk update && apk add openssh-client
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker pull $TAG_COMMIT"
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker container rm -f my-app || true"
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker run -d -p $INTERNAL_SERVER_IP:80:80 --name my-app $TAG_COMMIT"
The problem when having two images is that the build stage will create a single image for the entire repo. Is it possible to do it individually for each service?
Can I use docker-compose on the deployment server?
You can create two publish jobs (publish-query, publish-command) that each build and push one of the two Docker images. If they don't depend on each other they can even build at the same time, depending on the runner. They would share the same stage (publish in your case).
Likewise, you can create two deploy jobs (deploy-query, deploy-command) that each perform one deployment, both in the deploy stage. A sketch of the split is shown below.
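A minimal sketch of the two publish jobs, assuming each image is built from its own subdirectory (the job names, tags, and the -f flag for the DockerFile name are illustrative):

publish-command:
  image: docker:latest
  stage: publish
  services:
    - docker:19.03-dind
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    # build only the command service, using ./command as the context
    - docker build -f command/DockerFile -t $CI_REGISTRY_IMAGE/command:$CI_COMMIT_SHORT_SHA ./command
    - docker push $CI_REGISTRY_IMAGE/command:$CI_COMMIT_SHORT_SHA

publish-query:
  image: docker:latest
  stage: publish
  services:
    - docker:19.03-dind
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    # build only the query service, using ./query as the context
    - docker build -f query/DockerFile -t $CI_REGISTRY_IMAGE/query:$CI_COMMIT_SHORT_SHA ./query
    - docker push $CI_REGISTRY_IMAGE/query:$CI_COMMIT_SHORT_SHA

The deploy-query and deploy-command jobs would follow the same pattern as the single deploy job above, each pulling and running its own image.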
I haven't used docker-compose on the build server, but I have created images, pushed them up, and then ran docker-compose on the deployment machine that pulls in the multiple images. Probably not the most graceful way of handling that, but it worked.
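If you do use docker-compose on the deployment server, a hypothetical compose file there would just reference the pushed images (the image names and port mappings are placeholders):

version: '3'
services:
  command:
    image: registry.example.com/myrepo/command:latest
    ports:
      - "8001:80"
  query:
    image: registry.example.com/myrepo/query:latest
    ports:
      - "8002:80"

and the deploy job would run docker-compose pull && docker-compose up -d over SSH instead of the individual docker run commands.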

Data is visible in Docker container + named volume although it shouldn't

I'm probably just being stupid here, but I thought this shouldn't work, yet it does and I don't get why. I'm copying test files to /var/www in my Docker image during build and subsequently mounting a named volume on /var/www, but I still see the files.
~/test$ tree
.
├── docker
│   ├── data
│   │   └── Dockerfile
│   └── docker-compose.yml
└── src
    ├── testfile1
    └── testfile2
3 directories, 4 files
./docker/docker-compose.yml
version: '3'
services:
  test-data:
    container_name: test-data
    build:
      context: ..
      dockerfile: ./docker/data/Dockerfile
    volumes:
      - test-data:/var/www
volumes:
  test-data:
    name: test-data
./docker/data/Dockerfile
FROM alpine
COPY src/ /var/www/
CMD sleep infinity
From what I thought I understood, the volume isn't available at build time and should overlay/hide the files when it is mounted on /var/www at container start, but it doesn't?
~/test$ docker inspect -f '{{ .Mounts }}' test-data
[{volume test-data /var/lib/docker/volumes/test-data/_data /var/www local rw true }]
~/test$ docker exec test-data ls -l /var/www
-rw-r--r-- 1 root root 0 Oct 21 09:01 testfile1
-rw-r--r-- 1 root root 0 Oct 21 09:01 testfile2
Running Docker Desktop 3.6.0 on Windows + WSL2 Ubuntu 20.04.
The very first time a Docker named volume is attached to a container (and only then), Docker copies files from the underlying image into the volume. The volume contents never get updated after this initial copy. The copy also doesn't happen for host-directory bind mounts, or on Kubernetes and other not-actually-Docker environments.
You'd see the behavior you expect in two ways. First, if you change the volumes: to a bind mount
volumes:
  - ./local-empty-directory:/var/www
you'll see it replace the image content the way you expect. The other thing you can do is run your existing setup once, change the contents of the image, and run it again:
docker-compose build
docker-compose run --rm test-data ls -l /var/www
touch src/testfile3
docker-compose build
docker-compose run --rm test-data ls -l /var/www
# testfile3 isn't in the volume and won't be in this listing
Given its limitations, I tend not to recommend relying on the "Docker copies files into volumes" behavior. A couple of common patterns use it, and their authors are then surprised when the volume never gets updated or the setup doesn't run in a different container runtime.
Docker volumes exist independently of your image/container. If you run docker volume ls you will see your volumes; that is where the data lives, and it gets mounted into the container at run time.
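For example, a quick way to see where a named volume lives (note that on Docker Desktop the mount point is inside the WSL2 VM, so it may not be directly browsable from Windows):
$ docker volume ls
DRIVER    VOLUME NAME
local     test-data
$ docker volume inspect -f '{{ .Mountpoint }}' test-data
/var/lib/docker/volumes/test-data/_data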

Deploying Golang app on Docker exits process

I'm very new to Docker. I have a Golang app that has the following structure:
.
├── 404.html
├── Dockerfile
├── index.html
├── scripts
├── server.go
├── static
│   ├── jquery.min.js
│   ├── main.css
│   └── main.js
└── styles
I got the Dockerfile from DockerHub. It's too large to post here, but the full version is here. The last few lines of the Dockerfile, which I think might be relevant, are:
ENV GOPATH /go
ENV PATH $GOPATH/bin:/usr/local/go/bin:$PATH
RUN mkdir -p "$GOPATH/src" "$GOPATH/bin" && chmod -R 777 "$GOPATH"
WORKDIR $GOPATH
When I go into my directory and type in docker build -t my-app . then it successfully builds. When I type in docker run -d -p 80:80 url-shortener, it gives me a string which I assume is the ID.
But when I do docker ps, it doesn't show the process running.
If I do docker ps -a, it shows the process but it says,
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6adc34244350 my-app "/bin/sh" 6 minutes ago
Exited (0) 6 minutes ago
I apologize if this is a very dumb question. I'm a complete Docker noob and could use some guidance.
From your docker ps output, your image is configured to only run a shell. Since you ran it as a background process without any input, it processed all of its input and exited successfully (status code 0). docker logs 6adc34244350 will show you what output (if any) it produced.
Docker has an excellent tutorial on writing and running custom images that's worth reading. In particular, you shouldn't copy the official golang Dockerfile; your own Dockerfile can start with FROM golang:1.10 and it will inherit everything in that image. You also almost certainly want a CMD instruction that runs your application (by default) when the container starts up.
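A minimal sketch of such a Dockerfile, assuming the server is built from the server.go in the question (the binary name my-app is illustrative):
# Inherit the whole Go toolchain instead of copying the official Dockerfile
FROM golang:1.10
WORKDIR /go/src/app
# Copy the application source (server.go, static assets, templates) into the image
COPY . .
# Compile the server into a binary on the PATH
RUN go build -o /usr/local/bin/my-app server.go
EXPOSE 80
# Run the server by default when the container starts
CMD ["my-app"]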

Copy static files from Docker container to non empty named volume

I want to copy new static files from a Docker container, via a named volume, to an nginx container that holds the old static files.
Prerequisites:
Host machine directory tree:
.
├── data
│   ├── bar.2.css
│   └── foo.2.js
├── docker-compose.yml
├── Dockerfile
Dockerfile:
FROM busybox:latest
COPY data /data
docker-compose.yml:
version: '3'
services:
  static:
    image: 'myimage'
    volumes:
      - 'myvolume:/data'
  nginx:
    image: 'nginx'
    volumes:
      - 'myvolume:/data'
volumes:
  myvolume:
Directory tree of named volume myvolume with old static:
.
├── bar.1.css
└── foo.1.js
Sequence of steps:
Build myimage with Dockerfile: docker build -t myimage .
Check new static files in myimage: docker run myimage ls /data
bar.2.css
foo.2.js
Run: docker-compose up -d --build static
In my mind it should rebuild the static service and overwrite the old static files, but it didn't. Why, and how do I fix it? Also, what is a better approach?
I think that you are just copying the new files alongside the old files with docker build -t myimage .
Maybe you can delete the previous data before you insert the new, by running a one-off command in the container:
docker exec -it static sh -c 'rm -rf /data/*'
and then copy the new data in, or build the new image:
docker cp ./data/. static:/data
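Put together, one possible sequence while the static container is running (container name and paths as in the question; the glob needs a shell inside the container, hence sh -c):
# clear the old static files from the shared volume
docker exec static sh -c 'rm -rf /data/*'
# copy the new files from the host into the volume via the container
docker cp ./data/. static:/data
# the nginx container sees the same volume contents immediately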
You can also implement the build step inside the docker-compose file (note build: points at the directory containing the Dockerfile, here the project root):
version: '3'
services:
  static:
    build: .
    image: 'myimage'
    volumes:
      - 'myvolume:/data'
  nginx:
    image: 'nginx'
    volumes:
      - 'myvolume:/data'
volumes:
  myvolume:
Why -- I believe that you are mounting the pre-existing volume myvolume atop your /data folder of the static container. This is because your myvolume already exists. If myvolume did not exist, the content of /data would be copied to the volume.
See: Docker-Volume-Docs -- "If you start a container which creates a new volume, as above, and the container has files or directories in the directory to be mounted (such as /app/ above), the directory’s contents will be copied into the volume."
Sample Solution
Give this a shot. With the structure and content below do a:
docker-compose up --build
This is additive, so if you update/add content to the newdata folder and re-run your compose, then the new content will be present in the shared volume.
You can mount and inspect the shared volume, like this:
docker run -it --rm --mount type=volume,src={docker-volume-name},target=/shared busybox sh
Environment
Folder structure:
.
├── dockerfile
├── docker-compose.yml
└── newdata/
    ├── apple.txt
    └── banana.txt
dockerfile
FROM busybox:latest
# From host machine to image
COPY newdata/* /newdata/
# At runtime, copy from the image into the path where a shared volume can be mounted
ENTRYPOINT [ "cp", "-r", "/newdata/", "/shared" ]
docker-compose.yml
version: '3.2'
services:
  data-provider:
    image: data-provider
    build: .
    volumes:
      - type: volume
        source: so
        target: /shared
  destination:
    image: busybox:latest
    volumes:
      - type: volume
        source: so
        target: /shared-data
    depends_on:
      - data-provider
    command: ls -la /shared-data/newdata
volumes:
  so:
Sample Output:
$ docker-compose up --build
Creating volume "sodockervol_so" with default driver
Building data-provider
Step 1/3 : FROM busybox:latest
---> c75bebcdd211
Step 2/3 : COPY newdata/* /newdata/
---> bc85fc19ed7b
Removing intermediate container 2a39f4be8dd2
Step 3/3 : ENTRYPOINT cp -r /newdata/ /shared
---> Running in e755c3179b4f
---> 6e79a32bf668
Removing intermediate container e755c3179b4f
Successfully built 6e79a32bf668
Successfully tagged data-provider:latest
Creating sodockervol_data-provider_1 ...
Creating sodockervol_data-provider_1 ... done
Creating sodockervol_destination_1 ...
Creating sodockervol_destination_1 ... done
Attaching to sodockervol_data-provider_1, sodockervol_destination_1
destination_1 | total 16
destination_1 | drwxr-xr-x 2 root root 4096 Oct 9 17:50 .
destination_1 | drwxr-xr-x 3 root root 4096 Oct 9 17:50 ..
destination_1 | -rwxr-xr-x 1 root root 25 Oct 9 17:50 apple.txt
destination_1 | -rwxr-xr-x 1 root root 28 Oct 9 17:50 banana.txt
sodockervol_data-provider_1 exited with code 0
sodockervol_destination_1 exited with code 0

Setting up a docker / fig Mesos environment

I'm trying to set up a docker / fig Mesos cluster.
I'm new to fig and Docker. Docker has plenty of documentation, but I find myself struggling to understand how to work with fig.
Here's my fig.yaml at the moment:
zookeeper:
  image: jplock/zookeeper
  ports:
    - "49181:2181"

mesosMaster:
  image: mesosphere/mesos:0.19.1
  ports:
    - "15050:5050"
  links:
    - zookeeper:zk
  command: mesos-master --zk=zk --work_dir=/var/log --quorum=1

mesosSlave:
  image: mesosphere/mesos:0.19.1
  links:
    - zookeeper:zk
  command: mesos-slave --master=zk
Thanks!
Edit:
Thanks to Mark O'Connor's help, I've created a working docker-based mesos setup (+ storm, chronos, and more to come).
Enjoy, and if you find this useful - please contribute:
https://github.com/yaronr/docker-mesos
PS. Please +1 Mark's answer :)
You have not indicated the errors you were experiencing.
This is the documentation for the image you're using:
https://registry.hub.docker.com/u/mesosphere/mesos/
Mesos base Docker using the Mesosphere packages from
https://mesosphere.io/downloads/. Doesn't start Mesos, please use the
mesos-master and mesos-slave Dockers.
What really worried me about those images is that they were untrusted and no source was immediately available.
So I re-created your example using the mesosphere github as inspiration:
https://github.com/mesosphere/docker-containers
Updated Example
Example updated to include the chronos framework
├── build.sh
├── fig.yml
├── mesos
│   └── Dockerfile
├── mesos-chronos
│   └── Dockerfile
├── mesos-master
│   └── Dockerfile
└── mesos-slave
    └── Dockerfile
Build the base image (only has to be done once)
./build.sh
Run fig to start an instance of each service:
$ fig up -d
Creating mesos_zk_1...
Creating mesos_master_1...
Creating mesos_slave_1...
Creating mesos_chronos_1...
One useful thing about fig is that you can scale up the slaves
$ fig scale slave=5
Starting mesos_slave_2...
Starting mesos_slave_3...
Starting mesos_slave_4...
Starting mesos_slave_5...
The mesos master console should show 5 slaves running
http://localhost:15050/#/slaves
And the chronos framework should be running and ready to launch tasks
http://localhost:14400
fig.yml
zk:
  image: mesos
  command: /usr/share/zookeeper/bin/zkServer.sh start-foreground

master:
  build: mesos-master
  ports:
    - "15050:5050"
  links:
    - "zk:zookeeper"

slave:
  build: mesos-slave
  links:
    - "zk:zookeeper"

chronos:
  build: mesos-chronos
  ports:
    - "14400:4400"
  links:
    - "zk:zookeeper"
Notes:
Only a single instance of zookeeper is needed for this example
build.sh
docker build --rm=true --tag=mesos mesos
mesos/Dockerfile
FROM ubuntu:14.04
MAINTAINER Mark O'Connor <mark@myspotontheweb.com>
RUN echo "deb http://repos.mesosphere.io/ubuntu/ trusty main" > /etc/apt/sources.list.d/mesosphere.list
RUN apt-key adv --keyserver keyserver.ubuntu.com --recv E56151BF
RUN apt-get -y update
RUN apt-get -y install mesos marathon chronos
mesos-master/Dockerfile
FROM mesos
MAINTAINER Mark O'Connor <mark@myspotontheweb.com>
EXPOSE 5050
CMD ["--zk=zk://zookeeper:2181/mesos", "--work_dir=/var/lib/mesos", "--quorum=1"]
ENTRYPOINT ["mesos-master"]
mesos-slave/Dockerfile
FROM mesos
MAINTAINER Mark O'Connor <mark@myspotontheweb.com>
CMD ["--master=zk://zookeeper:2181/mesos"]
ENTRYPOINT ["mesos-slave"]
mesos-chronos/Dockerfile
FROM mesos
MAINTAINER Mark O'Connor <mark@myspotontheweb.com>
RUN echo "zk://zookeeper:2181/mesos" > /etc/mesos/zk
EXPOSE 4400
CMD ["chronos"]
Notes:
The chronos command is configured via files (the zookeeper URL is read from /etc/mesos/zk, written in the Dockerfile above).
