I am new to Docker and I want to run Deluge as a Swarm service.
I am running Docker on macOS.
First I started up Swarm:
$ docker swarm init
Then I created an overlay network that the service would use:
$ docker network create --driver overlay p2pnetwork
After that I created the Swarm service:
$ docker service create --name deluge \
--network p2pnetwork \
--replicas 1 \
linuxserver/deluge \
-e PUID=501 -e PGID=20 \
-e TZ=ART \
-e UMASK_SET=022 \
-v /Users/agus/Downloads/torrents/incoming:/downloads \
-v /Users/agus/Downloads/torrents/config:/config \
-p 8112:80
Then I collected this data:
$ docker service ls
ID             NAME     MODE         REPLICAS   IMAGE                       PORTS
0eydgjayj3q8   deluge   replicated   0/1        linuxserver/deluge:latest
$ docker service ps deluge
ID             NAME           IMAGE                       NODE   DESIRED STATE   CURRENT STATE            ERROR                         PORTS
xdut1h8riw56   deluge.1       linuxserver/deluge:latest   moby   Running         Starting 1 second ago
8c54vkti22c8    \_ deluge.1   linuxserver/deluge:latest   moby   Shutdown        Failed 6 seconds ago     "task: non-zero exit (111)"
3oan0xb349zi    \_ deluge.1   linuxserver/deluge:latest   moby   Shutdown        Failed 17 seconds ago    "task: non-zero exit (111)"
xvri5grnmi4g    \_ deluge.1   linuxserver/deluge:latest   moby   Shutdown        Failed 27 seconds ago    "task: non-zero exit (111)"
9rcmvxihyg91    \_ deluge.1   linuxserver/deluge:latest   moby   Shutdown        Failed 39 seconds ago    "task: non-zero exit (111)"
$ docker service logs deluge
deluge.1.xvri5grnmi4g#moby | [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
deluge.1.xdut1h8riw56#moby | [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
deluge.1.xvri5grnmi4g#moby | [s6-init] ensuring user provided files have correct perms...exited 0.
deluge.1.3oan0xb349zi#moby | [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
deluge.1.xvri5grnmi4g#moby | [fix-attrs.d] applying ownership & permissions fixes...
deluge.1.xdut1h8riw56#moby | [s6-init] ensuring user provided files have correct perms...exited 0.
deluge.1.8c54vkti22c8#moby | [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
deluge.1.8c54vkti22c8#moby | [s6-init] ensuring user provided files have correct perms...exited 0.
deluge.1.3oan0xb349zi#moby | [s6-init] ensuring user provided files have correct perms...exited 0.
deluge.1.xvri5grnmi4g#moby | [fix-attrs.d] done.
deluge.1.xdut1h8riw56#moby | [fix-attrs.d] applying ownership & permissions fixes...
deluge.1.xdut1h8riw56#moby | [fix-attrs.d] done.
deluge.1.8c54vkti22c8#moby | [fix-attrs.d] applying ownership & permissions fixes...
deluge.1.3oan0xb349zi#moby | [fix-attrs.d] applying ownership & permissions fixes...
deluge.1.xvri5grnmi4g#moby | [cont-init.d] executing container initialization scripts...
deluge.1.xvri5grnmi4g#moby | [cont-init.d] 10-adduser: executing...
deluge.1.xdut1h8riw56#moby | [cont-init.d] executing container initialization scripts...
deluge.1.8c54vkti22c8#moby | [fix-attrs.d] done.
deluge.1.3oan0xb349zi#moby | [fix-attrs.d] done.
deluge.1.3oan0xb349zi#moby | [cont-init.d] executing container initialization scripts...
deluge.1.xvri5grnmi4g#moby | usermod: no changes
deluge.1.xdut1h8riw56#moby | [cont-init.d] 10-adduser: executing...
deluge.1.8c54vkti22c8#moby | [cont-init.d] executing container initialization scripts...
deluge.1.8c54vkti22c8#moby | [cont-init.d] 10-adduser: executing...
deluge.1.3oan0xb349zi#moby | [cont-init.d] 10-adduser: executing...
deluge.1.xvri5grnmi4g#moby |
deluge.1.xdut1h8riw56#moby | usermod: no changes
deluge.1.xdut1h8riw56#moby |
deluge.1.8c54vkti22c8#moby | usermod: no changes
deluge.1.3oan0xb349zi#moby | usermod: no changes
deluge.1.xvri5grnmi4g#moby | -------------------------------------
deluge.1.xvri5grnmi4g#moby | _ _ _
deluge.1.xdut1h8riw56#moby | -------------------------------------
deluge.1.8c54vkti22c8#moby |
deluge.1.3oan0xb349zi#moby |
deluge.1.3oan0xb349zi#moby | -------------------------------------
deluge.1.xvri5grnmi4g#moby | | |___| (_) ___
deluge.1.xdut1h8riw56#moby | _ _ _
deluge.1.8c54vkti22c8#moby | -------------------------------------
deluge.1.8c54vkti22c8#moby | _ _ _
deluge.1.xvri5grnmi4g#moby | | / __| | |/ _ \
deluge.1.3oan0xb349zi#moby | _ _ _
deluge.1.xdut1h8riw56#moby | | |___| (_) ___
deluge.1.xvri5grnmi4g#moby | | \__ \ | | (_) |
deluge.1.8c54vkti22c8#moby | | |___| (_) ___
deluge.1.3oan0xb349zi#moby | | |___| (_) ___
deluge.1.xdut1h8riw56#moby | | / __| | |/ _ \
deluge.1.8c54vkti22c8#moby | | / __| | |/ _ \
deluge.1.xdut1h8riw56#moby | | \__ \ | | (_) |
deluge.1.3oan0xb349zi#moby | | / __| | |/ _ \
deluge.1.xvri5grnmi4g#moby | |_|___/ |_|\___/
deluge.1.xvri5grnmi4g#moby | |_|
deluge.1.8c54vkti22c8#moby | | \__ \ | | (_) |
deluge.1.xdut1h8riw56#moby | |_|___/ |_|\___/
deluge.1.3oan0xb349zi#moby | | \__ \ | | (_) |
deluge.1.3oan0xb349zi#moby | |_|___/ |_|\___/
deluge.1.xvri5grnmi4g#moby |
deluge.1.8c54vkti22c8#moby | |_|___/ |_|\___/
deluge.1.xdut1h8riw56#moby | |_|
deluge.1.3oan0xb349zi#moby | |_|
deluge.1.8c54vkti22c8#moby | |_|
deluge.1.xvri5grnmi4g#moby | Brought to you by linuxserver.io
deluge.1.xvri5grnmi4g#moby | We gratefully accept donations at:
deluge.1.3oan0xb349zi#moby |
deluge.1.xdut1h8riw56#moby |
deluge.1.8c54vkti22c8#moby |
deluge.1.8c54vkti22c8#moby | Brought to you by linuxserver.io
deluge.1.xvri5grnmi4g#moby | https://www.linuxserver.io/donations/
deluge.1.3oan0xb349zi#moby | Brought to you by linuxserver.io
deluge.1.xdut1h8riw56#moby | Brought to you by linuxserver.io
deluge.1.3oan0xb349zi#moby | We gratefully accept donations at:
deluge.1.8c54vkti22c8#moby | We gratefully accept donations at:
deluge.1.xvri5grnmi4g#moby | -------------------------------------
deluge.1.xvri5grnmi4g#moby | GID/UID
deluge.1.xdut1h8riw56#moby | We gratefully accept donations at:
deluge.1.3oan0xb349zi#moby | https://www.linuxserver.io/donations/
deluge.1.8c54vkti22c8#moby | https://www.linuxserver.io/donations/
deluge.1.8c54vkti22c8#moby | -------------------------------------
deluge.1.xvri5grnmi4g#moby | -------------------------------------
deluge.1.xdut1h8riw56#moby | https://www.linuxserver.io/donations/
deluge.1.3oan0xb349zi#moby | -------------------------------------
deluge.1.3oan0xb349zi#moby | GID/UID
deluge.1.8c54vkti22c8#moby | GID/UID
deluge.1.xvri5grnmi4g#moby | User uid: 911
deluge.1.xdut1h8riw56#moby | -------------------------------------
deluge.1.xdut1h8riw56#moby | GID/UID
deluge.1.3oan0xb349zi#moby | -------------------------------------
deluge.1.8c54vkti22c8#moby | -------------------------------------
deluge.1.xvri5grnmi4g#moby | User gid: 911
deluge.1.3oan0xb349zi#moby | User uid: 911
deluge.1.xdut1h8riw56#moby | -------------------------------------
deluge.1.xvri5grnmi4g#moby | -------------------------------------
deluge.1.8c54vkti22c8#moby | User uid: 911
deluge.1.8c54vkti22c8#moby | User gid: 911
deluge.1.3oan0xb349zi#moby | User gid: 911
deluge.1.xdut1h8riw56#moby | User uid: 911
deluge.1.xvri5grnmi4g#moby |
deluge.1.xvri5grnmi4g#moby | [cont-init.d] 10-adduser: exited 0.
deluge.1.8c54vkti22c8#moby | -------------------------------------
deluge.1.3oan0xb349zi#moby | -------------------------------------
deluge.1.xdut1h8riw56#moby | User gid: 911
deluge.1.xdut1h8riw56#moby | -------------------------------------
deluge.1.8c54vkti22c8#moby |
deluge.1.xvri5grnmi4g#moby | [cont-init.d] 30-config: executing...
deluge.1.3oan0xb349zi#moby |
deluge.1.3oan0xb349zi#moby | [cont-init.d] 10-adduser: exited 0.
deluge.1.xdut1h8riw56#moby |
deluge.1.8c54vkti22c8#moby | [cont-init.d] 10-adduser: exited 0.
deluge.1.xvri5grnmi4g#moby | [cont-init.d] 30-config: exited 0.
deluge.1.xvri5grnmi4g#moby | [cont-init.d] done.
deluge.1.3oan0xb349zi#moby | [cont-init.d] 30-config: executing...
deluge.1.xdut1h8riw56#moby | [cont-init.d] 10-adduser: exited 0.
deluge.1.8c54vkti22c8#moby | [cont-init.d] 30-config: executing...
deluge.1.3oan0xb349zi#moby | [cont-init.d] 30-config: exited 0.
deluge.1.xvri5grnmi4g#moby | [services.d] starting services
deluge.1.8c54vkti22c8#moby | [cont-init.d] 30-config: exited 0.
deluge.1.xvri5grnmi4g#moby | [services.d] done.
deluge.1.xdut1h8riw56#moby | [cont-init.d] 30-config: executing...
deluge.1.3oan0xb349zi#moby | [cont-init.d] done.
deluge.1.3oan0xb349zi#moby | [services.d] starting services
deluge.1.8c54vkti22c8#moby | [cont-init.d] done.
deluge.1.xvri5grnmi4g#moby | exec: fatal: unable to exec -e: No such file or directory
deluge.1.xdut1h8riw56#moby | [cont-init.d] 30-config: exited 0.
deluge.1.8c54vkti22c8#moby | [services.d] starting services
deluge.1.3oan0xb349zi#moby | [services.d] done.
deluge.1.xdut1h8riw56#moby | [cont-init.d] done.
deluge.1.xvri5grnmi4g#moby | -e exited 111
deluge.1.8c54vkti22c8#moby | [services.d] done.
deluge.1.xvri5grnmi4g#moby | [cont-finish.d] executing container finish scripts...
deluge.1.3oan0xb349zi#moby | exec: fatal: unable to exec -e: No such file or directory
deluge.1.xdut1h8riw56#moby | [services.d] starting services
deluge.1.xdut1h8riw56#moby | [services.d] done.
deluge.1.8c54vkti22c8#moby | exec: fatal: unable to exec -e: No such file or directory
deluge.1.xvri5grnmi4g#moby | [cont-finish.d] done.
deluge.1.3oan0xb349zi#moby | -e exited 111
deluge.1.3oan0xb349zi#moby | [cont-finish.d] executing container finish scripts...
deluge.1.xdut1h8riw56#moby | exec: fatal: unable to exec -e: No such file or directory
deluge.1.8c54vkti22c8#moby | -e exited 111
deluge.1.xvri5grnmi4g#moby | [s6-finish] syncing disks.
deluge.1.xvri5grnmi4g#moby | [s6-finish] sending all processes the TERM signal.
deluge.1.xdut1h8riw56#moby | -e exited 111
deluge.1.3oan0xb349zi#moby | [cont-finish.d] done.
deluge.1.8c54vkti22c8#moby | [cont-finish.d] executing container finish scripts...
deluge.1.8c54vkti22c8#moby | [cont-finish.d] done.
deluge.1.xvri5grnmi4g#moby | [s6-finish] sending all processes the KILL signal and exiting.
deluge.1.xdut1h8riw56#moby | [cont-finish.d] executing container finish scripts...
deluge.1.3oan0xb349zi#moby | [s6-finish] syncing disks.
deluge.1.3oan0xb349zi#moby | [s6-finish] sending all processes the TERM signal.
deluge.1.8c54vkti22c8#moby | [s6-finish] syncing disks.
deluge.1.xdut1h8riw56#moby | [cont-finish.d] done.
deluge.1.xdut1h8riw56#moby | [s6-finish] syncing disks.
deluge.1.3oan0xb349zi#moby | [s6-finish] sending all processes the KILL signal and exiting.
deluge.1.8c54vkti22c8#moby | [s6-finish] sending all processes the TERM signal.
deluge.1.8c54vkti22c8#moby | [s6-finish] sending all processes the KILL signal and exiting.
deluge.1.xdut1h8riw56#moby | [s6-finish] sending all processes the TERM signal.
deluge.1.xdut1h8riw56#moby | [s6-finish] sending all processes the KILL signal and exiting.
deluge.1.m73zgk69hz2u#moby | [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
deluge.1.m73zgk69hz2u#moby | [s6-init] ensuring user provided files have correct perms...exited 0.
deluge.1.m73zgk69hz2u#moby | [fix-attrs.d] applying ownership & permissions fixes...
deluge.1.m73zgk69hz2u#moby | [fix-attrs.d] done.
deluge.1.m73zgk69hz2u#moby | [cont-init.d] executing container initialization scripts...
deluge.1.m73zgk69hz2u#moby | [cont-init.d] 10-adduser: executing...
deluge.1.m73zgk69hz2u#moby | usermod: no changes
deluge.1.m73zgk69hz2u#moby |
deluge.1.m73zgk69hz2u#moby | -------------------------------------
deluge.1.m73zgk69hz2u#moby | _ _ _
deluge.1.m73zgk69hz2u#moby | | |___| (_) ___
deluge.1.m73zgk69hz2u#moby | | / __| | |/ _ \
deluge.1.m73zgk69hz2u#moby | | \__ \ | | (_) |
deluge.1.m73zgk69hz2u#moby | |_|___/ |_|\___/
deluge.1.m73zgk69hz2u#moby | |_|
deluge.1.m73zgk69hz2u#moby |
deluge.1.m73zgk69hz2u#moby | Brought to you by linuxserver.io
deluge.1.m73zgk69hz2u#moby | We gratefully accept donations at:
deluge.1.m73zgk69hz2u#moby | https://www.linuxserver.io/donations/
deluge.1.m73zgk69hz2u#moby | -------------------------------------
deluge.1.m73zgk69hz2u#moby | GID/UID
deluge.1.m73zgk69hz2u#moby | -------------------------------------
deluge.1.m73zgk69hz2u#moby | User uid: 911
deluge.1.m73zgk69hz2u#moby | User gid: 911
deluge.1.m73zgk69hz2u#moby | -------------------------------------
deluge.1.m73zgk69hz2u#moby |
deluge.1.m73zgk69hz2u#moby | [cont-init.d] 10-adduser: exited 0.
deluge.1.m73zgk69hz2u#moby | [cont-init.d] 30-config: executing...
deluge.1.m73zgk69hz2u#moby | [cont-init.d] 30-config: exited 0.
deluge.1.m73zgk69hz2u#moby | [cont-init.d] done.
deluge.1.m73zgk69hz2u#moby | [services.d] starting services
deluge.1.m73zgk69hz2u#moby | [services.d] done.
deluge.1.m73zgk69hz2u#moby | exec: fatal: unable to exec -e: No such file or directory
deluge.1.m73zgk69hz2u#moby | -e exited 111
deluge.1.m73zgk69hz2u#moby | [cont-finish.d] executing container finish scripts...
deluge.1.m73zgk69hz2u#moby | [cont-finish.d] done.
deluge.1.m73zgk69hz2u#moby | [s6-finish] syncing disks.
deluge.1.m73zgk69hz2u#moby | [s6-finish] sending all processes the TERM signal.
deluge.1.m73zgk69hz2u#moby | [s6-finish] sending all processes the KILL signal and exiting.
It seems that the service is failing because of the volumes, but I do not understand why, or how to fix it.
-- UPDATE
Besides what the accepted answer points out, I also had some other mistakes in the service declaration. This is what ended up working:
docker service create --name deluge \
--network p2pnetwork \
--replicas 1 \
-e PUID=501 -e PGID=20 \
-e TZ=ART \
-e UMASK_SET=022 \
-p 8112:8112 \
--mount type=bind,source=/Users/agus/Downloads/torrents,destination=/downloads \
--mount type=bind,source=/Users/agus/Downloads/torrents/config,destination=/config \
linuxserver/deluge
I also needed to change the download folders inside the Deluge configuration, and then it worked.
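For reference, a hedged sketch of how one might verify the corrected service is actually up (the service name and published port are the ones from the command above; adjust if yours differ):
docker service ps deluge            # the task should now stay in the Running state
docker service logs -f deluge       # no more "unable to exec -e" errors
curl -I http://localhost:8112       # the Deluge web UI published with -p 8112:8112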
Docker Swarm services use a different syntax to mount volumes:
docker service create --name deluge \
--network p2pnetwork \
--replicas 1 \
-e PUID=501 -e PGID=20 \
-e TZ=ART \
-e UMASK_SET=022 \
--mount type=bind,source=/Users/agus/Downloads/torrents/incoming,destination=/downloads \
--mount type=bind,source=/Users/agus/Downloads/torrents/config,destination=/config \
-p 8112:8112 \
linuxserver/deluge
"--mount". Hopefully the volume is available on all your Swarm-Members, if you search for a good way to share volumes between Swarm-Nodes, i would suggest: GlusterFS
More details:
I mostly followed these instructions to set up my init script:
https://cloud.google.com/container-optimized-os/docs/how-to/run-gpus
I used the docker base image:
nvidia/cuda:11.2.1-runtime-ubuntu20.04
My cloud-init ExecStart command for the Docker part is currently:
MY COMMAND
docker run --rm --name=myapp -dit -p 80:80 --volume /var/lib/nvidia/lib64:/usr/local/nvidia/lib64 --volume /var/lib/nvidia/bin:/usr/local/nvidia/bin --device /dev/nvidia0:/dev/nvidia0 --device /dev/nvidia-uvm:/dev/nvidia-uvm --device /dev/nvidiactl:/dev/nvidiactl <Docker Container Name> <Uvicorn Startup Command>
When I SSH into the running VM, I run the following to help log which processes are using the GPU:
sudo mount --bind /var/lib/nvidia /var/lib/nvidia
sudo mount -o remount,exec /var/lib/nvidia
On the first pass, I get the following output from /var/lib/nvidia/bin/nvidia-smi:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.141.03 Driver Version: 470.141.03 CUDA Version: 11.4 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |
| N/A 55C P0 27W / 70W | 0MiB / 15109MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
When I go to the root directory (by doing cd .. twice) and run MY COMMAND, I get the same result: no processes using the GPU.
However, when I run MY COMMAND in the home/{username} directory, I get the following output:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.141.03 Driver Version: 470.141.03 CUDA Version: 11.4 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |
| N/A 51C P0 28W / 70W | 14734MiB / 15109MiB | 13% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 4694 C /usr/bin/python3 2329MiB |
| 0 N/A N/A 4695 C /usr/bin/python3 2329MiB |
| 0 N/A N/A 4696 C /usr/bin/python3 2521MiB |
| 0 N/A N/A 4700 C /usr/bin/python3 5221MiB |
| 0 N/A N/A 4701 C /usr/bin/python3 2329MiB |
+-----------------------------------------------------------------------------+
My basic question is: How do I do the same thing in my cloud config script as I was able to do manually in my VM?
I already tried adding a user to my cloud-init script, as in the example from the Google link, and starting Docker with the -u flag, but that ran into permission issues (specifically PermissionError: [Errno 13] Permission denied: '/.cache').
Edit:
I never found a solution that I fully understood, but it turned out I needed to run
sudo mount --bind /var/lib/nvidia /var/lib/nvidia
sudo mount -o remount,exec /var/lib/nvidia
in the same directory I was running my docker command from, using ExecStartPre.
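A minimal sketch of how that could look in the systemd unit that cloud-init writes (hedged: this fragment is illustrative only; the mount binary path may differ on Container-Optimized OS, and the ExecStart line simply repeats MY COMMAND from above with its placeholders left as-is):
[Service]
# run the two mount commands before the container starts
ExecStartPre=/bin/mount --bind /var/lib/nvidia /var/lib/nvidia
ExecStartPre=/bin/mount -o remount,exec /var/lib/nvidia
# MY COMMAND from above goes here
ExecStart=/usr/bin/docker run --rm --name=myapp -dit -p 80:80 --volume /var/lib/nvidia/lib64:/usr/local/nvidia/lib64 --volume /var/lib/nvidia/bin:/usr/local/nvidia/bin --device /dev/nvidia0:/dev/nvidia0 --device /dev/nvidia-uvm:/dev/nvidia-uvm --device /dev/nvidiactl:/dev/nvidiactl <Docker Container Name> <Uvicorn Startup Command>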
A colleague and I both use Azure DevOps build agents self-hosted in Docker. We're both connecting to separate DevOps organisations, but other than that the agents are the same - we're using the latest version of the image (mcr.microsoft.com/azure-pipelines/vsts-agent), both running on the same host OS, same version of Docker.
My build agent runs jobs and outputs the log data to the Azure DevOps console in a sensible format:
Starting: Initialize job
Agent name: 'AS01'
Agent machine name: '7b20a38350c7'
Current agent version: '2.183.1'
Agent running as: 'root'
Prepare build directory.
Set build variables.
Download all required tasks.
Checking job knob settings.
Finished checking job knob settings.
Start tracking orphan processes.
Finishing: Initialize job
His, however, seems to replace any space character ' ' with triple asterisks:
Starting:***Initialize***job
Agent***name:***'SERVER1'
Agent***machine***name:***'SERVER1'
Current***agent***version:***'2.185.0'
Agent***running***as:***'root'
Prepare***build***directory.
Set***build***variables.
Download***all***required***tasks.
Downloading***task:***DockerInstaller***(0.181.0)
Downloading***task:***Docker***(2.181.2)
Checking***job***knob***settings.
***Knob:***AgentToolsDirectory***=***/opt/hostedtoolcache***Source:***${AGENT_TOOLSDIRECTORY}***
***Knob:***TaskRestrictionsEnforcementMode***=***WarningOnly***Source:***$(agent.taskRestrictionsEnforcementMode)***
***Knob:***EnableTaskRestrictionsTelemetry***=***true***Source:***$(agent.enableTaskRestrictionsTelemetry)***
Finished***checking***job***knob***settings.
Start***tracking***orphan***processes.
Finishing:***Initialize***job
The most obvious difference I can spot is that the agent versions are different; mine is 2.183.2 (but signs on in the container logs as v2.184.2).
___ ______ _ _ _
/ _ \ | ___ (_) | (_)
/ /_\ \_____ _ _ __ ___ | |_/ /_ _ __ ___| |_ _ __ ___ ___
| _ |_ / | | | '__/ _ \ | __/| | '_ \ / _ \ | | '_ \ / _ \/ __|
| | | |/ /| |_| | | | __/ | | | | |_) | __/ | | | | | __/\__ \
\_| |_/___|\__,_|_| \___| \_| |_| .__/ \___|_|_|_| |_|\___||___/
| |
agent v2.184.2 |_| (commit 01983e9)
His claims to be running 2.185.0 and signs on in the container logs as such.
___ ______ _ _ _
/ _ \ | ___ (_) | (_)
/ /_\ \_____ _ _ __ ___ | |_/ /_ _ __ ___| |_ _ __ ___ ___
| _ |_ / | | | '__/ _ \ | __/| | '_ \ / _ \ | | '_ \ / _ \/ __|
| | | |/ /| |_| | | | __/ | | | | |_) | __/ | | | | | __/\__ \
\_| |_/___|\__,_|_| \___| \_| |_| .__/ \___|_|_|_| |_|\___||___/
| |
agent v2.185.0 |_| (commit a01fe8d)
Redeploying the container from the latest image did not make my environment use a newer agent version -- the image has a startup script that checks which agent version to download, then downloads and installs it. I even ran the same image in my environment with my colleague's access token and organisation name, and it downloaded 2.185.0 from his tenant. When I swapped the situation and ran his environment and his image with my access token and organisation name, 2.184.2 was downloaded. So the problem doesn't seem specific to our Docker environments or the servers running them - it seems to be tied to the DevOps organisation.
Is there possibly a problem with version 2.185.0 of the agent that causes these weird asterisks in the logs? Or, is there a way I can force my agent to use v2.185.0 to see if the asterisks appear that way?
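One hedged way to test whether 2.185.0 itself is the culprit is to skip the image's auto-download and run a pinned agent version manually inside a container. The sketch below uses the usual self-hosted Linux agent download URL pattern and config.sh options (verify the URL against the agent's GitHub releases page, and replace the organisation and PAT placeholders with your own values):
curl -LO https://vstsagentpackage.azureedge.net/agent/2.185.0/vsts-agent-linux-x64-2.185.0.tar.gz
mkdir -p /azp/agent && tar -zxf vsts-agent-linux-x64-2.185.0.tar.gz -C /azp/agent
cd /azp/agent
./config.sh --unattended --url https://dev.azure.com/<your-organisation> --auth pat --token <PAT> --pool Default
./run.sh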
I've used the answer to this question to enable control of my Raspberry Pi GPIO pins from within a Docker container, which runs Alpine.
$ docker run --device /dev/gpiomem whatever
This works with the Python RPi.GPIO module, but not with wiringPi.
Python and its dependencies take up about the same space as Alpine itself, so I'd like to use wiringPi to save on install time and SD card space, and to keep things simple.
Running wiringPi v2.46 in Raspbian directly (installed using apt-get install wiringpi) works fine, and I can successfully configure and trigger digital output pins.
Running wiringPi v2.26 in the Alpine container (installed using apk add wiringpi) fails when trying to configure a pin:
$ gpio -g mode 26 out
Unable to determine hardware version. I see: Hardware : BCM2835,
- expecting BCM2708 or BCM2709. Please report this to projects@drogon.net
Is there anything I can do to expose the SOC to the container, so that wiringPi recognises it correctly? What else might be required to get this working?
Warning: Thar be hacks ahead.
The fundamental problem, as I noted in my comment, is that your containerized wiringPi is simply too old. Older Pi hardware (like the Pi 2) used the BCM2708/BCM2709 processors, but newer hardware uses the BCM2835. WiringPi needs to know what processor is in use in order to correctly locate the GPIO control registers.
It's possible that there isn't a substantial difference between the older and newer processors as far as GPIO goes. If -- and only if -- this is the case, you can "trick" your containerized wiringPi into working.
Create a version of /proc/cpuinfo with the desired processor name:
sed s/BCM2835/BCM2709/ < /proc/cpuinfo > /tmp/cpuinfo
Bind mount that onto /proc/cpuinfo inside the container. You'll also need to provide /dev/mem, and you'll need to run with --privileged:
docker run -it --rm --privileged --device /dev/mem -v /tmp/cpuinfo:/proc/cpuinfo alpine sh
This gives us:
/ # gpio -v
gpio version: 2.26
Copyright (c) 2012-2015 Gordon Henderson
This is free software with ABSOLUTELY NO WARRANTY.
For details type: gpio -warranty
Raspberry Pi Details:
Type: Model 2, Revision: 1.1, Memory: 1024MB, Maker: Sony [OV]
/ # gpio readall
+-----+-----+---------+------+---+---Pi 2---+---+------+---------+-----+-----+
| BCM | wPi | Name | Mode | V | Physical | V | Mode | Name | wPi | BCM |
+-----+-----+---------+------+---+----++----+---+------+---------+-----+-----+
| | | 3.3v | | | 1 || 2 | | | 5v | | |
| 2 | 8 | SDA.1 | IN | 0 | 3 || 4 | | | 5V | | |
| 3 | 9 | SCL.1 | IN | 0 | 5 || 6 | | | 0v | | |
| 4 | 7 | GPIO. 7 | IN | 0 | 7 || 8 | 0 | IN | TxD | 15 | 14 |
| | | 0v | | | 9 || 10 | 0 | IN | RxD | 16 | 15 |
| 17 | 0 | GPIO. 0 | IN | 0 | 11 || 12 | 0 | IN | GPIO. 1 | 1 | 18 |
| 27 | 2 | GPIO. 2 | IN | 0 | 13 || 14 | | | 0v | | |
| 22 | 3 | GPIO. 3 | IN | 0 | 15 || 16 | 0 | IN | GPIO. 4 | 4 | 23 |
| | | 3.3v | | | 17 || 18 | 0 | IN | GPIO. 5 | 5 | 24 |
| 10 | 12 | MOSI | IN | 0 | 19 || 20 | | | 0v | | |
| 9 | 13 | MISO | IN | 0 | 21 || 22 | 0 | IN | GPIO. 6 | 6 | 25 |
| 11 | 14 | SCLK | IN | 0 | 23 || 24 | 0 | IN | CE0 | 10 | 8 |
| | | 0v | | | 25 || 26 | 0 | IN | CE1 | 11 | 7 |
| 0 | 30 | SDA.0 | IN | 0 | 27 || 28 | 0 | IN | SCL.0 | 31 | 1 |
| 5 | 21 | GPIO.21 | IN | 0 | 29 || 30 | | | 0v | | |
| 6 | 22 | GPIO.22 | IN | 0 | 31 || 32 | 0 | IN | GPIO.26 | 26 | 12 |
| 13 | 23 | GPIO.23 | IN | 0 | 33 || 34 | | | 0v | | |
| 19 | 24 | GPIO.24 | IN | 0 | 35 || 36 | 0 | IN | GPIO.27 | 27 | 16 |
| 26 | 25 | GPIO.25 | IN | 0 | 37 || 38 | 0 | IN | GPIO.28 | 28 | 20 |
| | | 0v | | | 39 || 40 | 0 | IN | GPIO.29 | 29 | 21 |
+-----+-----+---------+------+---+----++----+---+------+---------+-----+-----+
| BCM | wPi | Name | Mode | V | Physical | V | Mode | Name | wPi | BCM |
+-----+-----+---------+------+---+---Pi 2---+---+------+---------+-----+-----+
You would want to experiment to see if this actually works as intended.
Or just run a Raspbian image instead of an Alpine image.
Thanks to larsks for pointing out the difference in version numbers.
Version 2.46 of wiringPi is available for Alpine, but I had failed to notice that it's only in the edge branch of the community repository.
To use this version I had to modify /etc/apk/repositories, replacing the existing community entry with the edge version.
Since I'm using Docker, I had to do this as part of the image build process, so I added the following to my Dockerfile:
RUN sed -i "s/v[0-9.]*\/community/edge\/community/" /etc/apk/repositories \
&& apk update && apk add wiringpi
I have a 5-node Cloud Foundry setup on Azure as part of an API provisioning platform:
3 nodes (3.5 GB) - Garden containers
1 node (1.5 GB) - Postgres
1 node (28 GB) - rest of the Cloud Foundry components (cloud controller, haproxy, gorouter, stager, auctioneer, etc.)
boshuser#boshvm:~$ bosh instances --ps
Acting as user 'admin' on deployment 'single-vm-cf-on-azure' on 'bosh'
Director task 184
Task 184 done
+-------------------------------------------------------+---------+-----+----------------------+---------------+
| Instance | State | AZ | VM Type | IPs |
+-------------------------------------------------------+---------+-----+----------------------+---------------+
| cell_api_z1/0 (cf0f2f38-9676-4a25-9716-67255be18006)* | running | n/a | resource_api_z1 | x.x.x.x |
| rep | running | | | |
| consul_agent | running | | | |
| garden | running | | | |
| metron_agent | running | | | |
+-------------------------------------------------------+---------+-----+----------------------+---------------+
| cell_api_z1/1 (4147fa15-98ad-4ba9-a65d-7bd3f0dfafb6) | running | n/a | resource_api_z1 | x.x.x.x |
| rep | running | | | |
| consul_agent | running | | | |
| garden | running | | | |
| metron_agent | running | | | |
+-------------------------------------------------------+---------+-----+----------------------+---------------+
| cell_api_z1/2 (51219434-846c-4edc-a3f7-ad0b28bbc68d) | running | n/a | resource_api_z1 | x.x.x.x |
| rep | running | | | |
| consul_agent | running | | | |
| garden | running | | | |
| metron_agent | running | | | |
+-------------------------------------------------------+---------+-----+----------------------+---------------+
| cf_z1/0 (770622e7-0fbb-407e-904a-cc69f5341cef)* | running | n/a | resource_z1 | x.x.x.x |
| | | | | x.x.x.x |
| consul_agent | running | | | |
| nats | running | | | |
| nats_stream_forwarder | running | | | |
| metron_agent | running | | | |
| etcd | running | | | |
| etcd_metrics_server | running | | | |
| rpc_nfsd | running | | | |
| rpc_mountd | running | | | |
| cloud_controller_ng | running | | | |
| cloud_controller_worker_local_1 | running | | | |
| cloud_controller_worker_local_2 | running | | | |
| nginx_cc | running | | | |
| cloud_controller_worker_1 | running | | | |
| cloud_controller_clock | running | | | |
| nfs_mounter | running | | | |
| route_registrar | running | | | |
| consul_template | running | | | |
| haproxy_config | running | | | |
| haproxy | running | | | |
| doppler | running | | | |
| loggregator_trafficcontroller | running | | | |
| uaa | running | | | |
| gorouter | running | | | |
| statsd-injector | running | | | |
| syslog_drain_binder | running | | | |
| bbs | running | | | |
| auctioneer | running | | | |
| converger | running | | | |
| stager | running | | | |
| nsync_listener | running | | | |
| nsync_bulker | running | | | |
| tps_listener | running | | | |
| tps_watcher | running | | | |
| cc_uploader | running | | | |
| ssh_proxy | running | | | |
| file_server | running | | | |
| route_emitter | running | | | |
+-------------------------------------------------------+---------+-----+----------------------+---------------+
| postgres_z1/0 (40f5d667-81c6-4065-98e5-8e272b17ed7d)* | running | n/a | resource_postgres_z1 | x.x.x.x |
| postgres | running | | | |
| metron_agent | running | | | |
+-------------------------------------------------------+---------+-----+----------------------+---------------+
When I deploy the Cloud Foundry manifest using BOSH, it gets deployed successfully and everything works fine, as it should.
The problem starts a few hours after deployment: the 'cf' commands stop responding, and the apps deployed on Cloud Foundry do not respond when accessed from a browser.
As a workaround I have to recreate the cloud controller node. The setup then works fine for a few hours and fails again.
Earlier the cloud controller node had 8 GB of RAM. I did some memory usage analysis when the setup became unresponsive and found that on the 8 GB node the memory was getting exhausted, so I concluded the issue might be caused by low RAM.
As a solution I increased the cloud controller node from 8 GB to 28 GB and redeployed the Cloud Foundry manifest. The setup worked fine and I was able to access the apps from a browser.
But again, after a few hours, the 'cf' commands became unresponsive and I was not able to access the apps from a browser. Out of 28 GB, only around 8 GB is being used, so low memory cannot be the issue.
I am not sure why the setup becomes unresponsive after a few hours.
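A hedged sketch of checks that could narrow this down the next time it happens (BOSH v1 CLI syntax to match the bosh instances --ps output above; the monit binary path and log location are the standard BOSH/cf-release defaults, so adjust if your release differs):
bosh vms --vitals                        # per-VM CPU, memory and disk usage
bosh ssh cf_z1 0                         # log on to the cloud controller VM
sudo /var/vcap/bosh/bin/monit summary    # see which job processes are unhealthy
sudo tail -n 200 /var/vcap/sys/log/cloud_controller_ng/cloud_controller_ng.log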
Any help would be very much appreciated.
Thanks
I have a simple project using rebar with this config:
{sub_dirs, [
"gen",
"apps/basilisk_server",
"rel",
"apps/*",
"ebin",
"deps/*/ebin"
]}.
{erl_opts, [debug_info, fail_on_warning]}.
{
deps,
[
{ email_address, ".*", { git, "git://github.com/mswimmer/erlang-email-address.git", "master"}},
{ gen_smtp , ".*", { git, "https://github.com/Vagabond/gen_smtp.git" , "master"}},
{ pat , ".*", { git, "https://github.com/selectel/pat.git" , "master"}},
{ thrift , ".*", { git, "https://github.com/MustyMustelidae/ThriftErl.git", "master"}}
]
}.
My directory structure is:
+---.rebar
+---deps
| +---edown
| | +---.rebar
| | +---bin
| | +---doc
| | +---ebin
| | +---priv
| | | \---scripts
| | +---samples
| | \---src
| +---email_address
| | +---.rebar
| | +---ebin
| | +---src
| | \---test
| +---gen_smtp
| | +---.rebar
| | +---ebin
| | +---src
| | +---test
| | \---testdata
| +---gproc
| | +---.rebar
| | +---doc
| | +---ebin
| | +---include
| | +---patches
| | | +---gen_leader
| | | +---kernel
| | | \---stdlib
| | +---priv
| | +---reference
| | +---src
| | +---test
| | | \---eqc
| | \---tetrapak
| +---lager
| | +---.rebar
| | +---ebin
| | +---include
| | +---src
| | \---test
| +---pat
| | +---.rebar
| | +---ebin
| | +---include
| | \---src
| +---proper
| | +---.rebar
| | +---doc
| | +---ebin
| | +---examples
| | +---include
| | +---src
| | \---test
| +---tempo
| | +---.rebar
| | +---c_src
| | +---doc
| | | \---utf8
| | +---ebin
| | +---src
| | \---test
| \---thrift
| +---.rebar
| +---ebin
| +---include
| +---out
| | \---production
| | \---erl
| +---src
| \---test
+---ebin <--- My .app/.beam
\---src <--- My source files
\---gen <---More source files
(Sorry for the wall)
Each of the dependencies has a .app file in its ebin folder, and I'm running the command erl -pa apps/*/ebin -pa ebin -pa deps/*/ebin. I expect I should be able to run application:ensure_all_started(somedep). where "somedep" is the name of a dependency with a .app defined in its ebin folder. But when I run it for any of them I get the error {error,{somedep,{"no such file or directory","somedep.app"}}}. When I run it for my own application I get the correct response ({ok,_}). Because I can't start the dependencies' applications, my own application fails with an undef error. Am I missing some step to register my .app files with the shell?
The Windows command shell doesn't expand the wildcards in apps/*/ebin and deps/*/ebin, so the Erlang load path isn't getting set properly.
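A hedged workaround sketch: instead of relying on the shell to expand the wildcards, let the Erlang code server locate the applications itself via ERL_LIBS (it adds the ebin directory of every OTP application found under each listed directory; on Windows the path separator is a semicolon):
rem Windows cmd.exe
set ERL_LIBS=deps;apps
erl -pa ebin
Alternatively, list each deps/<app>/ebin directory explicitly after -pa, or run the command from a shell that does expand wildcards.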