Not able to run service using docker stack in docker swarm - docker

Here is the docker-compose file that I previously used to run the container successfully with docker-compose.
Now I am trying to deploy it using docker stack. My manager and worker nodes are ready.
docker-compose.yml
version: '3'
services:
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - '9090:9090'
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.role == manager
Command used:
docker stack deploy -c docker-compose.yml monitoring
I am trying to run it on the manager node. Why is the service not running? Checking with docker service ls shows 0/1 replicas.
When I try to check the logs with docker service logs <servicename>, nothing is shown.
What exactly am I missing? My ultimate goal is to run a full Docker monitoring stack with cAdvisor, node-exporter and the other required services. I tried this: https://github.com/stefanprodan/swarmprom/blob/master/docker-compose.yml
I have figured out that the problem is with replicated mode; it runs fine in global mode, though.
What is wrong with replicated mode? I don't think I have a syntax error here. How can I make it run in replicated mode, and on the manager node only?
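A quick diagnostic sketch (assuming the stack was deployed as "monitoring", so the service is monitoring_prometheus): when a task never reaches the Running state, the scheduling or startup error usually shows up in docker service ps rather than in the service logs.

    # Show the task history, including the error that keeps the replica at 0/1
    docker service ps --no-trunc monitoring_prometheus

    # Confirm the manager node is Ready and Active, so the placement constraint can be satisfied
    docker node ls

Note that the relative bind mount ./prometheus.yml has to exist on the node where the task is scheduled; whether that is the actual cause here is only an assumption.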

If everything works fine with "docker-compose up" but you run into issues with "docker stack deploy", this is how I fixed it.
I am using Docker on a Windows machine, where Docker runs inside a VirtualBox VM. You will need to set up port forwarding to reach the containers from the host machine.
The problem is related to the bind-mounted volumes.
See the path in the two scenarios below when I run docker inspect:
Docker-compose up:
"Mounts": [
{
"Type": "bind",
"Source": "/c/Users/Bardav/Documents/Large-Scale-Software Testing/PromptDocker/PromptDocker/app_users",
"Destination": "/app",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
docker stack deploy -c docker-compose.yml store:
"Mounts": [
{
"Type": "bind",
"Source": "C:\\Users\\Bardav\\Documents\\Large-Scale-Software Testing\\PromptDocker\\PromptDocker\\app_viewer",
"Target": "/app/"
}
],
SOLUTION:
Update the path in the docker-compose.yml and it will work. For instance:
RELATIVE PATH NOT WORKING
reg_users:
  image: prompt_user_app
  build: ./app_users/
  volumes:
    - ./app_users/:/app/
ABSOLUTE PATH WORKS FINE
reg_users:
  image: prompt_user_app
  build: ./app_users/
  volumes:
    - "/c/Users/Bardav/Documents/Large-Scale-Software Testing/PromptDocker/PromptDocker/app_users:/app/"

This might be related to the placement constraint on the Swarm master/manager. I faced a similar problem while trying to schedule the visualizer or Portainer container on the master node using Azure Container Service. You can check the status of the master node; if it is in a Paused state, you can make it Active.
I have shared my experience at https://www.handsonarchitect.com/2017/11/visualize-docker-swarm-in-azure.html
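A minimal sketch of the commands involved in checking and changing node availability (the node name manager1 is only a placeholder):

    # List nodes with their STATUS and AVAILABILITY (Active / Pause / Drain)
    docker node ls

    # Re-activate a paused or drained manager so tasks can be scheduled on it again
    docker node update --availability active manager1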

Related

Docker-compose, remote context, path relative to local machine instead of remote

I have a very simple docker-compose.yml:
version: '2.4'
services:
  containername:
    image: ${DOCKER_IMAGE}
    volumes:
      - ./config:/root/config
I'm using a remote staging server accessed via ssh:
docker context create staging --docker "host=ssh://ubuntu@staging.example.com"
docker context use staging
However, I see unexpected results with my volume after I docker-compose up:
docker-compose --context staging up -d
docker inspect containername
...
"Mounts": [
{
"Type": "bind",
"Source": "/Users/crummy/code/.../config",
"Destination": "/root/config",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
],
...
It seems the expansion of ./config to a full path happens on the machine docker-compose is running on, not the machine Docker is running on.
I can fix this problem by hardcoding the entire path: /home/ubuntu/config:/root/config. But this makes my docker-compose file a little less flexible than it could be. Is there a way to get the dot expansion to occur on the remote machine?
No, the docs say that:
You can mount a relative path on the host, which expands relative to the directory of the Compose configuration file being used. Relative paths should always begin with . or ..
I believe that happens for two reasons:
There's no easy, objective way for docker-compose to find out how to expand . in this context, as there's no way to know what . would mean for the SSH client (the home directory? the same folder?).
Even though the Docker CLI is using a different context, the expansion is done by the docker-compose tool, which is unaware of the context switch.
Even using environment variables might pose a problem, since that expansion would also happen on the machine where you run the docker-compose command.
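One workaround sketch, under the assumption that you are willing to set one variable per target environment (the name CONFIG_DIR is invented here): keep the host path out of the file and pass it in explicitly. The expansion still happens on the local machine, so the value you pass has to be the path as it exists on the remote host.

    # docker-compose.yml
    version: '2.4'
    services:
      containername:
        image: ${DOCKER_IMAGE}
        volumes:
          - ${CONFIG_DIR:-./config}:/root/config

    # deploying to the staging context
    CONFIG_DIR=/home/ubuntu/config docker-compose --context staging up -d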

Why are my services not showing up on the network after started in docker swarm?

After fiddling around for a couple of days with what was completely new to me a week ago, I'm kind of stuck and would like your help. I've created a Docker swarm with some Pis running Ubuntu Server 20.04 LTS, and when I use the command:
$ docker stack deploy --compose-file docker-compose.visualizer.yml visualizer
The terminal feedback is:
Creating network visualizer_default
Creating service visualizer_visualizersvc
Practically the same output when I run:
$ docker stack deploy --compose-file docker-compose.home-assistant.yml home-assistant
Checking the stacks:
$ docker stack ls
NAME             SERVICES   ORCHESTRATOR
home-assistant   1          Swarm
visualizer       1          Swarm
Checking services in stacks:
$ docker stack services visualizer
ID             NAME                       MODE         REPLICAS   IMAGE                             PORTS
t5nz28hzbzma   visualizer_visualizersvc   replicated   0/1        dockersamples/visualizer:latest   *:8000->8080/tcp
$ docker stack services home-assistant
ID             NAME                           MODE         REPLICAS   IMAGE                                 PORTS
olj1nbx5vj40   home-assistant_homeassistant   replicated   0/1        homeassistant/home-assistant:stable   *:8123->8123/tcp
When I then browse to the ports specified in docker-compose.visualizer.yml or docker-compose.home-assistant.yml, there is no response on the server side ("can't connect"). The result is identical for both the manager and worker IPs. This is inside a home network, in a single subnet, with no traffic rules set for LAN traffic.
EDIT: a portscan reveals no open ports in the specified range on either host.
Any comments on my work are welcome as I'm learning, but I would very much like to see some containers 'operational'.
As a reference I included the docker-compose files:
docker-compose.home-assistant.yml
version: "3"
services:
homeassistant:
image: homeassistant/home-assistant:stable
ports:
- "8123:8123"
volumes:
- './home-assistant:/config'
environment:
TZ: 'Madrid'
restart: unless-stopped
network_mode: host
docker-compose.visualizer.yml
version: "3"
services:
visualizersvc:
image: alexellis2/visualizer-arm:latest
deploy:
placement:
constraints:
- 'node.role==manager'
ports:
- '8000:8080'
volumes:
- '/var/run/docker.sock:/var/run/docker.sock'
Bonus points for telling me whether I should always approach the manager through the specified ports, or whether I have to approach the machine running the service (or any good documentation on the subject).
Of course, not long after you post a question you happen to find the answer yourself:
I never scaled the services (to 1 in my case)
docker service scale [SERVICE_ID]=1
EDIT: The services were not scaling to 1 because of another error, I think in the visualizer, but this brought me to the final answer.
Now I'm getting a mountain of new error messages, but at least those are verbose :)
Any feedback is still welcome.
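A short diagnostic sketch (service names taken from the output above): when REPLICAS stays at 0/1, the per-task error message is usually the quickest lead, and scaling can then be done explicitly.

    # Show task state and the full error message for the failing replica
    docker service ps --no-trunc visualizer_visualizersvc
    docker service ps --no-trunc home-assistant_homeassistant

    # Scale a service explicitly, as mentioned in the answer
    docker service scale visualizer_visualizersvc=1

As for the bonus question: with the default ingress routing mesh, a port published by a swarm service is reachable on every node in the swarm, not only on the node that runs the task.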

How to get the port numbers for a service - Docker compose file

I am trying to start two services with docker-compose.yml: one is mongo and the other is the OHIF viewer.
Currently I am able to access mongo locally (localhost:27017, after port forwarding) on my desktop, whereas the OHIF viewer is not reachable (its ports aren't visible/are empty, so I am not able to access it locally). Can you guide me as to how I can set them?
As you can see from my docker-compose file, I have set network_mode: "host" to be able to access them locally on my desktop as well.
Based on my JSON file, I thought the port was already set (pacsIP:8042), but it is missing when I execute the docker ps command. Can you guide me on this? I am new to Docker and your inputs will definitely be helpful. pacsIP is my Docker host (remote Linux server) IP. I would like to port forward these services and view them on my desktop.
Please find below the docker-compose.yml file
version: '3.6'
services:
  mongo:
    image: "mongo:latest"
    container_name: ohif-mongo
    ports:
      - "27017:27017"
  viewer:
    image: ohif/viewer:latest
    container_name: ohif-viewer
    ports:
      - "3030:80"
      - "8042:8042"   # Not sure whether this is correct. I tried with and without this as well but it didn't work
    network_mode: "host"
    environment:
      - MONGO_URL=mongodb://mongo:27017/ohif
    extra_hosts:
      - "pacsIP:172.xx.xxx.xxx"
    volumes:
      - ./dockersupport-app.json:/app/app.json
As you can see in the volumes section, I am using a dockersupport-app.json file, which is given below:
{
  "apps": [{
    "name": "ohif-viewer",
    "script": "main.js",
    "watch": true,
    "merge_logs": true,
    "cwd": "/app/bundle/",
    "env": {
      "METEOR_SETTINGS": {
        "servers": {
          "dicomWeb": [
            {
              "name": "Orthanc",
              "wadoUriRoot": "http://pacsIP:8042/wado",     # these ports
              "qidoRoot": "http://pacsIP:8042/dicom-web",   # these ports
              "wadoRoot": "http://pacsIP:8042/dicom-web",   # these ports
              "qidoSupportsIncludeField": false,
              "imageRendering": "wadouri",
              "thumbnailRendering": "wadouri",
              "requestOptions": {
                "auth": "orthanc:orthanc",
                "logRequests": true,
                "logResponses": false,
How can I access the OHIF viewer locally? What changes should I make to the docker-compose.yml or the JSON file? I tried with and without port 8042 under the ports section of the docker-compose file, but it still didn't work.
Did you use docker-compose run or docker-compose up?
According to the Docker documentation: "docker-compose run command does not create any of the ports specified in the service configuration."
Try to use docker-compose up command.
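If you do need docker-compose run for a one-off container, there is a flag that publishes the ports defined for the service; a small sketch (the service name viewer is taken from the compose file above):

    # Publish the service's configured ports when using run instead of up
    docker-compose run --service-ports viewer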
If you use network_mode: host, it bypasses all of Docker's standard networking. In this case that includes the port mappings: since the container is directly using the host's network interface, there's nothing to map per se.
network_mode: host is almost never necessary and I'd remove it here. That should make the ports visible in the docker ps output again, and make the remapped port 3030 accessible. As it is, you can probably reach your service directly on the host network on port 80, which is presumably the port the service binds to.
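A sketch of the viewer service with host networking removed, along the lines of the answer above (image, ports, environment and volume are copied from the question; that the viewer listens on port 80 inside the container is an assumption):

    viewer:
      image: ohif/viewer:latest
      container_name: ohif-viewer
      ports:
        - "3030:80"   # host port 3030 -> container port 80
      environment:
        - MONGO_URL=mongodb://mongo:27017/ohif
      extra_hosts:
        - "pacsIP:172.xx.xxx.xxx"
      volumes:
        - ./dockersupport-app.json:/app/app.json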

Hyperledger Fabric BYFN - Unable to find directory listed in docker-compose-base.yaml

I am looking at docker-compose-base.yaml line 27:
volumes:
  - ../channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
  - ../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp:/var/hyperledger/orderer/msp
  - ../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/:/var/hyperledger/orderer/tls
  - orderer.example.com:/var/hyperledger/production/orderer
I can find below 3 directories on my filesystem
../channel-artifacts/genesis.block
../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp
../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/
but I cannot find a directory named orderer.example.com. I think this is not meant to be a directory but related to the
container_name: orderer.example.com
in some way. Could anyone explain the meaning of the last mapping:
- orderer.example.com:/var/hyperledger/production/orderer
It does not look like a local <-> container directory mapping. Then what is it?
Before starting the container it is normal that the orderer.example.com folder does not exist, since it will be created when the container starts.
Mounting the directory /var/hyperledger/production/orderer between the host and the container can be useful if your orderer runs in solo mode.
Indeed, if your container crashes (for example, your server restarts) you keep the blocks that the orderer has built. This makes it easier to restart the blockchain network (or to move the orderer to another server).
TL;DR: The first three
- ../channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
- ../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp:/var/hyperledger/orderer/msp
- ../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/:/var/hyperledger/orderer/tls
are examples of bind mounts. The last one
- orderer.example.com:/var/hyperledger/production/orderer
is a volume. It's confusing because bind mounts can be created using the volume syntax and hence also get referred to as volumes in documentation.
This is not related to
container_name: orderer.example.com
docker-compose-base.yaml is a base file that is used, for example, by docker-compose-e2e-template.yaml. In that file one can see volumes being defined:
volumes:
  orderer.example.com:
  peer0.org1.example.com:
  peer1.org1.example.com:
  peer0.org2.example.com:
  peer1.org2.example.com:
These volumes behave the same way as if one had created them using the docker volume create command. See https://docs.docker.com/engine/reference/commandline/volume_create/ to understand what that command does. It is a way to create persistent storage that 1) does not get deleted when the Docker containers stop and exit, and 2) can be used to share data amongst containers.
To see the list of all volumes created by Docker on the machine, run:
docker volume ls
To inspect a volume, run (to give an example):
$ docker volume inspect net_orderer.example.com
[
    {
        "CreatedAt": "2018-11-06T22:10:42Z",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/net_orderer.example.com/_data",
        "Name": "net_orderer.example.com",
        "Options": null,
        "Scope": "local"
    }
]
re: the original question:
Unable to find directory listed in docker-compose-base.yaml
You will not be able to find this directory. E.g., if you try listing the mountpoint above:
$ ls /var/lib/docker/volumes/net_orderer.example.com/_data
ls: /var/lib/docker/volumes/net_orderer.example.com/_data: No such file or directory
Docker volumes are not created as regular directories on the host. The steps to "get" to a volume are actually quite involved; see https://forums.docker.com/t/host-path-of-volume/12277/9 for details.
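A minimal sketch of one way to look inside such a volume without hunting for its mountpoint on the host (the volume name net_orderer.example.com is taken from the inspect output above):

    # Mount the named volume into a throwaway container and list its contents
    docker run --rm -v net_orderer.example.com:/data alpine ls -la /data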

Cassandra fails to start on Docker Toolbox external volume

The issue is related to running "library/Cassandra" version 3.11.2 on Docker Toolbox (docker-machine version 0.11.0) on Windows 7.
These are the versions I have to use due to constraints at my place of work, and I can confirm that I have had no problems doing exactly the same thing with Docker on Mac OS X at home.
Cassandra works fine if I don't map the data directory to an external volume in the Compose file, but that has the downside of causing the VirtualBox VM to fill up on a regular basis. It's fine for development, but a pain for our other environments such as Test, and it won't be of any use in Production.
I am aware of the constraints imposed by Docker Toolbox for Windows, especially around the use of C:\Users as mount points.
My docker-compose file looks as follows:
version: '3'
services:
  cassandra:
    privileged: true
    image: cassandra
    ports:
      - "7000:7000"
      - "7001:7001"
      - "7199:7199"
      - "9042:9042"
      - "9160:9160"
    volumes:
      - C:\Users\davis_s\docker-volumes\cassandra:/var/lib/cassandra
    container_name: cassandra
    environment:
      - CASSANDRA_CLUSTER_NAME=mycluster
      - CASSANDRA_ENDPOINT_SNITCH=GossipingPropertyFileSnitch
      - CASSANDRA_DC=datacenter1
    networks:
      - smartdeploy_evo
When I map the data volume for Cassandra to a folder outside of the VM it stops when it gets to the following line:
cassandra | INFO [main] 2018-07-27 10:49:00,044 ColumnFamilyStore.java:411 - Initializing system.IndexInfo
It does manage to create the following folders under this location (which was totally empty prior to startup):
commitlog
data
hints
saved_caches
with other subdirectories under these ones, such as data/system/Index-info.. etc.
I have left it over the weekend and it never gets past this line.
The output for "docker inspect" related to the mount point looks as follows:
"Mounts": [
{
"Type": "bind",
"Source": "/c/Users/davis_s/docker-volumes/cassandra",
"Destination": "/var/lib/cassandra",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
],
I have tried the following to try and overcome this:
Given full (recursive) Windows permissions to "Everyone" on C:\Users\davis_s
Mapped a different Shared Folder in VirtualBox, again with full permissions for "Everyone"
Tried running the Cassandra container with "Privileged: true"
I've seen a few other posts on SO related to Cassandra stopping at a similar point, but those look to have been solved by clearing the folder and restarting, which isn't relevant in this case (as it started empty anyway!).
Thanks in advance...
