I have successfully installed the NFS service on Windows 10 Pro (the NFS client is activated). It was working before, but suddenly it stopped working.
I think the NFS server/service is configured and installed correctly; I can mount the share locally with:
mount \\127.0.0.1/c/Projects N:
ddev start (with nfs_mount_enabled: false) works.
ddev start (with nfs_mount_enabled: true) gives me this error:
Starting yogamehome-2020...
Pushing mkcert rootca certs to ddev-global-cache
Pushed mkcert rootca certs to ddev-global-cache
Building db
Building web
Recreating ddev-yogamehome-2020-db ... done
Recreating ddev-yogamehome-2020-web ... error
Recreating ddev-yogamehome-2020-dba ...
ERROR: for ddev-yogamehome-2020-web Cannot start service web: error while mounting volume '/var/lib/docker/volumes/ddev-yogamehome-2020_nfsmount/_data': failed to mount local volume: mount :/c/Projects/yogamehome-2020:/var/lib/docker/volumes/ddev-yogamehome-2020_n Recreating ddev-yogamehome-2020-dba ... doneock: operation not supported
ERROR: for web Cannot start service web: error while mounting volume '/var/lib/docker/volumes/ddev-yogamehome-2020_nfsmount/_data': failed to mount local volume: mount :/c/Projects/yogamehome-2020:/var/lib/docker/volumes/ddev-yogamehome-2020_nfsmount/_data, data: addr=127.0.0.1,hard,nolock: operation not supported Encountered errors while bringing up the project. Failed to start yogamehome-2020: Failed to run docker-compose [-f C:\Projects\yogamehome-2020.ddev\docker-compose.yaml -f C:\Projects\yogamehome-2020.ddev\docker-compose.environment.yaml -f C:\Projects\yogamehome-2020.ddev\docker-compose.hosts.yaml up --build -d ], err='exit status 1', stdout='Step 1/6 : ARG BASE_IMAGE Step 2/6 : FROM $BASE_IMAGE ---> 94b0ac137a40 Step 3/6 : ARG username ---> Using cache ---> 56f6d4f186b1 Step 4/6 : ARG uid ---> Using cache ---> 02f90fa967ed Step 5/6 : ARG gid ---> Using cache ---> 2f6228a1a2d0 Step 6/6 : RUN (groupadd --gid $gid "$username" || groupadd "$username" || true) && (useradd -l -m -s "/bin/bash" --gid "$username" --comment '' --uid $uid "$username" || useradd -l -m -s "/bin/bash" --gid "$username" --comment '' "$username") ---> Using cache ---> c3a74d13aecb
Successfully built c3a74d13aecb Successfully tagged drud/ddev-dbserver-mariadb-10.2:v1.13.0-yogamehome-2020-built Step 1/6 : ARG BASE_IMAGE Step 2/6 : FROM $BASE_IMAGE ---> 82d77d5c110a Step 3/6 : ARG username ---> Using cache ---> a4ae9b611d25 Step 4/6 : ARG uid ---> Using cache ---> 9a4a76b8819c Step 5/6 : ARG gid ---> Using cache ---> 6ef62cc84fc9 Step 6/6 : RUN (groupadd --gid $gid "$username" || groupadd "$username" || true) && (useradd -l -m -s "/bin/bash" --gid "$username" --comment '' --uid $uid "$username" || useradd -l -m -s "/bin/bash" --gid "$username" --comment '' "$username") ---> Using cache ---> 764de2909aba
Successfully built 764de2909aba Successfully tagged drud/ddev-webserver:v1.13.0-yogamehome-2020-built ', stderr='Building db Building web Recreating ddev-yogamehome-2020-db ... done Recreating ddev-yogamehome-2020-web ... error Recreating ddev-yogamehome-2020-dba ...
ERROR: for ddev-yogamehome-2020-web Cannot start service web: error while mounting volume '/var/lib/docker/volumes/ddev-yogamehome-2020_nfsmount/_data': failed to mount local volume: mount :/c/Projects/yogamehome-2020:/var/lib/docker/volumes/ddev-yogamehome-2020_n Recreating ddev-yogamehome-2020-dba ... doneock: operation not supported
ERROR: for web Cannot start service web: error while mounting volume '/var/lib/docker/volumes/ddev-yogamehome-2020_nfsmount/_data': failed to mount local volume: mount :/c/Projects/yogamehome-2020:/var/lib/docker/volumes/ddev-yogamehome-2020_nfsmount/_data, data: addr=127.0.0.1,hard,nolock: operation not supported Encountered errors while bringing up the project.'`
Any idea?
I can access it via Explorer and every directory is there, but I cannot mount it via ddev.
I'm on the most recent version of ddev.
Thanks and kind regards, Harald
You have not included C:\Projects in your ~/.ddev/nfs_exports.txt
The error says "failed to mount local volume: mount :/c/Projects/yogamehome-2020:"
It's generally recommended to keep your projects in your home directory, but since you have them in C:\Projects, you'll need to share that path in nfs_exports.txt. Add a line like this:
C:\Projects > /c/Projects
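If you change nfs_exports.txt, restart the NFS server (winnfsd on Windows) and then check whether ddev itself can complete an NFS mount. A quick check on a recent ddev version, run from the project directory, is roughly:
# attempts the same NFS mount ddev would use and reports what fails
ddev debug nfsmount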
I have exactly that line in my nfs_exports.txt:
C:\Projects > /c/Projects
As I wrote above, I can mount the directory locally, so I guess the problem is not on the server side.
Thanks,
Harald
Update:
If I start ddev without NFS and then try to mount via SSH inside the web container:
holzm@yogamehome-2020-web:~/nfs-mount$ sudo mount -t nfs 127.0.0.1:/Projects ~/nfs-mount
mount: /home/holzm/nfs-mount: permission denied.
I'm new here.
I'm trying to create custom Docker container images on an Azure VM.
But I can't create them because of "container-suseconnect-zypp" errors.
Environment: Azure Virtual Machine (Standard B4ms) - SLES 15 SP1 (PAYG)
First of all, I have no problem getting the repositories on the local host (see below):
# zypper lr -u
Refreshing service 'container-suseconnect-zypp'.
Repository priorities are without effect. All enabled repositories share the same priority.
# | Alias | Name | Enabled | GPG Check | Refresh | URI
----+-------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------+---------+-----------+---------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------
1 | Basesystem_Module_x86_64:SLE-Module-Basesystem15-SP1-Debuginfo-Pool | SLE-Module-Basesystem15-SP1-Debuginfo-Pool | No | ---- | ---- | plugin:/susecloud?credentials=Basesystem_Module_x86_64&path=/repo/SUSE/Products/SLE-Module-Basesystem/15-SP1/x86_64/product_debug/
.
.
.
Secondly, I've already started the "containerbuild-regionsrv" service
and used the host network when I built the Docker image,
with reference to the following: https://documentation.suse.com/container/all/single-html/SLES-container/index.html
> sudo systemctl start containerbuild-regionsrv
> sudo systemctl enable containerbuild-regionsrv
> docker build --network host /build-directory/
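To double-check that the service is actually up before building, something like the following can be used (just a verification sketch):
# confirm the region-server helper used by container-suseconnect is running on the build host
> sudo systemctl status containerbuild-regionsrv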
My Dockerfile is:
FROM registry.suse.com/suse/sle15:15.1
# Extra metadata
LABEL version="1.0"
LABEL description="Base SLES 15 SP1 SAP image"
# Create zypper repos and empty folder in NEW Container
RUN mkdir -p /etc/zypp/repos.d \
&& mkdir -p /jail
# add repo from local repo
# quote the URLs so the shell does not treat '&' as a command separator
RUN zypper ar "plugin:/susecloud?credentials=Basesystem_Module_x86_64&path=/repo/SUSE/Products/SLE-Module-Basesystem/15-SP1/x86_64/product_debug/" \
&& zypper ar "plugin:/susecloud?credentials=Basesystem_Module_x86_64&path=/repo/SUSE/Updates/SLE-Module-Basesystem/15-SP1/x86_64/update_debug/"
.
.
.
# Update repos and install missing packages:
RUN update-ca-certificates && zypper ref -s && zypper update -y
Here are my questions:
Why does my SLES repo URI start with "plugin:/" instead of "https://"? Is it a problem to add such a repo inside a container?
When I build the Docker image from the Dockerfile, the result is:
# docker build --network host -t base_os .
Sending build context to Docker daemon 4.359GB
Step 1/6 : FROM registry.suse.com/suse/sle15:15.1
---> d6d9e74d8ba3
Step 2/6 : LABEL version="1.0"
---> Running in 51b8f6dc39e5
Removing intermediate container 51b8f6dc39e5
---> 12b8756a372c
Step 3/6 : LABEL description="Base SLES 15 SP1 SAP image"
---> Running in 1b57cfcdceea
Removing intermediate container 1b57cfcdceea
---> aa8ddd1de6b4
Step 4/6 : RUN mkdir -p /etc/zypp/repos.d && mkdir -p /jail
---> Running in fd5a0d6cf9bc
Removing intermediate container fd5a0d6cf9bc
---> 982b38ddd9c7
Step 5/6 : RUN zypper ar plugin:/susecloud?credentials=Basesystem_Module_x86_64&path=/repo/SUSE/Products/SLE-Module-Basesystem/15-SP1/x86_64/product_debug/
---> Running in 74416afc3982
Removing intermediate container 74416afc3982
---> 3e78bccdfcd1
Step 6/6 : RUN update-ca-certificates && zypper ref -s && zypper update -y
---> Running in 0d52ac4d4e28
Refreshing service 'container-suseconnect-zypp'.
Warning: Skipping service 'container-suseconnect-zypp' because of the above error.
All services have been refreshed.
Warning: There are no enabled repositories defined.
Use 'zypper addrepo' or 'zypper modifyrepo' commands to add or enable repositories.
Problem retrieving the repository index file for service 'container-suseconnect-zypp':
[container-suseconnect-zypp|file:/usr/lib/zypp/plugins/services/container-suseconnect-zypp]
I think that, because of the 'container-suseconnect-zypp' issue, I can't install some additional packages needed for SAP inside my container, even if I build the Docker image without Dockerfile step 6/6.
Is this problem related to the Azure VM using PAYG SLES 15?
Do I need the SSL certificate used by RMT?
I am running VSCode on a Windows 10 machine, connecting to a Docker instance on a remote Linux host, to develop C++ projects. The Docker instance mounts local folders for the source code files, and the container user is set to match the user on the Linux host to avoid file ownership and permission problems.
On Windows 10 I use WSL1; the default user has UID/GID 1000, and VSCode's Docker processes use these IDs to launch and connect to the Docker instance on the remote host. Is there a way to override the UID/GID VSCode uses so they match the IDs on the remote host?
Thanks,
Step 1/4 : FROM devenv:latest
---> 120d987bae07
Step 2/4 : RUN groupadd -g 301765 chengd
---> Using cache
---> 58a697ed3565
Step 3/4 : RUN useradd -l -u 301765 -g chengd chengd
---> Using cache
---> b5c7c2b48a83
Step 4/4 : USER chengd
---> Using cache
---> f310c9d1e05b
Successfully built f310c9d1e05b
Successfully tagged vsc-devenv-9da1a5f5cedc16a80d314a148acdbcaf:latest
[7325 ms] Start: Run: wsl -d Ubuntu-20.04 -e /bin/sh -c cd '/home/da/repos/devenv' && DISPLAY='1' ELECTRON_RUN_AS_NODE='1' SSH_ASKPASS='d:\Users\ChengD\.vscode\extensions\ms-vscode-remote.remote-containers-0.177.2\scripts\ssh-askpass.bat' VSCODE_SSH_ASKPASS_NODE='D:\Users\ChengD\AppData\Local\Programs\Microsoft VS Code\Code.exe' VSCODE_SSH_ASKPASS_MAIN='d:\Users\ChengD\.vscode\extensions\ms-vscode-remote.remote-containers-0.177.2\dist\common\sshAskpass.js' VSCODE_SSH_ASKPASS_HANDLE='\\.\pipe\ssh-askpass-7e8e4f69496930d0e88509584ba46ab3357d9ff1-sock' DOCKER_CONTEXT='tcp_201' VSCODE_SSH_ASKPASS_COUNTER='5' docker 'inspect' '--type' 'image' 'vsc-devenv-9da1a5f5cedc16a80d314a148acdbcaf'
[10240 ms] Start: Run: wsl -d Ubuntu-20.04 -e /bin/sh -c cd '/home/da/repos/devenv' && DISPLAY='1' ELECTRON_RUN_AS_NODE='1' SSH_ASKPASS='d:\Users\ChengD\.vscode\extensions\ms-vscode-remote.remote-containers-0.177.2\scripts\ssh-askpass.bat' VSCODE_SSH_ASKPASS_NODE='D:\Users\ChengD\AppData\Local\Programs\Microsoft VS Code\Code.exe' VSCODE_SSH_ASKPASS_MAIN='d:\Users\ChengD\.vscode\extensions\ms-vscode-remote.remote-containers-0.177.2\dist\common\sshAskpass.js' VSCODE_SSH_ASKPASS_HANDLE='\\.\pipe\ssh-askpass-7e8e4f69496930d0e88509584ba46ab3357d9ff1-sock' DOCKER_CONTEXT='tcp_201' VSCODE_SSH_ASKPASS_COUNTER='6' docker 'build' '-f' '/tmp/vsch/updateUID.Dockerfile-0.177.2' '-t' 'vsc-devenv-9da1a5f5cedc16a80d314a148acdbcaf-uid' '--build-arg' 'BASE_IMAGE=vsc-devenv-9da1a5f5cedc16a80d314a148acdbcaf' '--build-arg' 'REMOTE_USER=chengd' '--build-arg' 'NEW_UID=1000' '--build-arg' 'NEW_GID=1000' '--build-arg' 'IMAGE_USER=chengd' '/tmp/vsch'
Solved the problem by setting updateRemoteUserUID to false in devcontainer.json.
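For reference, the relevant fragment of the devcontainer.json ends up looking roughly like this (everything else in the file stays as before):
$ cat .devcontainer/devcontainer.json
{
  // ...image/build settings, extensions, etc. unchanged...
  "updateRemoteUserUID": false
}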
I guess you all know this error pretty well. To understand why I may be encountering it, you need some background info about the context, as it's not really a common use case (I think, at least).
I have an UNRAID server
A VM running in this server
A bunch of services running in containers via compose in this VM (referenced later as my production VM)
[PROBLEM] I need to add a container that will itself run Compose, built from the official docker/compose image
This last container is maintained by me and should run an API, web client, DB, etc. When building it, I first trigger the builds of the proxy, API, web, and other projects that I develop, and once that's done I build the Compose one from the images I just built plus some open-source ones.
To test, I created a dummy VM on my UNRAID server and set up a Compose environment similar to my production VM (let's call it my test VM). I added a compose file with only my app and Portainer.
The problem is that when I run 'docker-compose up' on that test VM, Portainer starts but my app fails because it cannot connect to its Docker daemon (see logs below).
What I tried:
running my app as root or a created user
adding the created user to the docker group (but the docker group does not exist, so I'm creating it; seems odd to me...)
checking permissions on /var/run/docker.sock: it returned a "file not found" error, even as root
passing the socket from my test VM when running the parent compose via a volume (- "/var/run/docker.sock:/var/run/docker.sock")
Dockerfile:
FROM docker/compose
# Create plaxdmin user
RUN adduser plaxdmin --disabled-password
RUN addgroup docker
RUN addgroup plaxdmin docker
USER plaxdmin
# Final values
ARG PLAXDMIN_VERSION
ARG RELEASE_TYPE
ENV PLAXDMIN_VERSION=$PLAXDMIN_VERSION
ENV RELEASE_TYPE=$RELEASE_TYPE
# Default user defined values
ENV TIMEZONE=Europe/Paris
ENV PLAXDMIN_DNS="plaxdmin.default.org"
# Init folders and copy docker-compose api configuration files
WORKDIR /var/log/plaxdmin
WORKDIR /etc/plaxdmin
ADD ./resources/conf/* ./
WORKDIR /opt/plaxdmin/
ADD ./resources/docker-compose.yml ./
# Expose port
EXPOSE 80
# On run debug and start compose fleet
CMD docker -v \
&& docker-compose -v \
&& printenv \
&& ls -al /etc/plaxdmin \
&& ls -al /opt/plaxdmin/ \
&& ls -al /var/log/plaxdmin/ \
&& pwd \
&& whoami \
&& groups $user \
# && ls -la /var/run/docker.sock \
&& docker-compose up || true
docker build logs:
Step 1/18 : FROM docker/compose
latest: Pulling from docker/compose
aad63a933944: Pulling fs layer
b396cd7cbac4: Pulling fs layer
0426ec0ed60a: Pulling fs layer
9ac2a98ece5b: Pulling fs layer
9ac2a98ece5b: Waiting
b396cd7cbac4: Verifying Checksum
b396cd7cbac4: Download complete
aad63a933944: Verifying Checksum
aad63a933944: Download complete
aad63a933944: Pull complete
0426ec0ed60a: Verifying Checksum
0426ec0ed60a: Download complete
b396cd7cbac4: Pull complete
9ac2a98ece5b: Verifying Checksum
9ac2a98ece5b: Download complete
0426ec0ed60a: Pull complete
9ac2a98ece5b: Pull complete
Digest: sha256:b60a020c0f68047b353a4a747f27f5e5ddb17116b7b018762edfb6f7a6439a82
Status: Downloaded newer image for docker/compose:latest
---> c3e188a6b38f
Step 2/18 : RUN adduser plaxdmin --disabled-password
---> Running in 07aa9a297234
Removing intermediate container 07aa9a297234
---> 494c8a4291e0
Step 3/18 : RUN addgroup docker
---> Running in f64e5022e65d
Removing intermediate container f64e5022e65d
---> 84ee5fbf6dea
Step 4/18 : RUN addgroup plaxdmin docker
---> Running in 0efa66b73f4a
Removing intermediate container 0efa66b73f4a
---> eb647c03c118
Step 5/18 : USER plaxdmin
---> Running in 4529203341d1
Removing intermediate container 4529203341d1
---> 8501d9993307
Step 6/18 : ARG PLAXDMIN_VERSION
---> Running in 07d61186fadd
Removing intermediate container 07d61186fadd
---> ed6e9f9df0ab
Step 7/18 : ARG RELEASE_TYPE
---> Running in 0fa98c641843
Removing intermediate container 0fa98c641843
---> d0fe2f700e53
Step 8/18 : ENV TIMEZONE=Europe/Paris
---> Running in 5c5d383c6858
Removing intermediate container 5c5d383c6858
---> 48394a4e01b3
Step 9/18 : ENV PLAXDMIN_DNS="plaxdmin.default.org"
---> Running in 187304a8a1ed
Removing intermediate container 187304a8a1ed
---> 5827abebd0ff
Step 10/18 : ENV PLAXDMIN_VERSION=$PLAXDMIN_VERSION
---> Running in 54ff13db32e6
Removing intermediate container 54ff13db32e6
---> 9377ac82544e
Step 11/18 : ENV RELEASE_TYPE=$RELEASE_TYPE
---> Running in 2da68d0375ac
Removing intermediate container 2da68d0375ac
---> dd09ee57c867
Step 12/18 : WORKDIR /var/log/plaxdmin
---> Running in 9ac2fdb93c5e
Removing intermediate container 9ac2fdb93c5e
---> 252771ee5ff4
Step 13/18 : WORKDIR /etc/plaxdmin
---> Running in eb6c9a16b12f
Removing intermediate container eb6c9a16b12f
---> 6fd180adcb80
Step 14/18 : ADD ./resources/conf/* ./
---> 70e10c126b4f
Step 15/18 : WORKDIR /opt/plaxdmin/
---> Running in 0a6f15afc915
Removing intermediate container 0a6f15afc915
---> d8c321d31689
Step 16/18 : ADD ./resources/docker-compose.yml ./
---> 60847c38d0be
Step 17/18 : EXPOSE 80
---> Running in cbe2a4d7f8be
Removing intermediate container cbe2a4d7f8be
---> 56269d51e6d5
Step 18/18 : CMD docker -v && docker-compose -v && printenv && ls -al /etc/plaxdmin && ls -al /opt/plaxdmin/ && ls -al /var/log/plaxdmin/ && pwd && whoami && groups $user && docker-compose up || true
---> Running in 49d1a3505198
Removing intermediate container 49d1a3505198
---> beba0e2fd039
Successfully built beba0e2fd039
Successfully tagged plaxdmin/full:latest
Successfully tagged plaxdmin/full:unstable
Successfully tagged plaxdmin/full:v-202102010319
Successfully tagged plaxdmin/full:64ce4f02f88ac81219dd61ae0d8c2e4aa6e0403e
Successfully tagged plaxdmin/full:master
Start logs:
plaxdmin_1 | Docker version 19.03.8, build afacb8b7f0
plaxdmin_1 | docker-compose version 1.26.2, build eefe0d3
plaxdmin_1 | HOSTNAME=b3a358707bd6
plaxdmin_1 | SHLVL=2
plaxdmin_1 | HOME=/home/plaxdmin
plaxdmin_1 | PGID=1421
plaxdmin_1 | TIMEZONE=Europe/Paris
plaxdmin_1 | RELEASE_TYPE=unstable
plaxdmin_1 | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
plaxdmin_1 | PLAXDMIN_DNS=plaxdmin.vba.ovh
plaxdmin_1 | PUID=1421
plaxdmin_1 | PWD=/opt/plaxdmin
plaxdmin_1 | PLAXDMIN_VERSION=v-202102010319
plaxdmin_1 | total 20
plaxdmin_1 | drwxr-xr-x 1 root root 4096 Feb 1 15:57 .
plaxdmin_1 | drwxr-xr-x 1 root root 4096 Feb 1 15:59 ..
plaxdmin_1 | -rw-rw-rw- 1 root root 262 Jan 31 02:06 application.properties
plaxdmin_1 | -rw-rw-rw- 1 root root 690 Jan 31 02:06 log4j.properties
plaxdmin_1 | -rw-rw-rw- 1 root root 1518 Jan 31 19:31 nginx.conf
plaxdmin_1 | total 12
plaxdmin_1 | drwxr-xr-x 1 root root 4096 Feb 1 15:57 .
plaxdmin_1 | drwxr-xr-x 1 root root 4096 Feb 1 15:57 ..
plaxdmin_1 | -rw-rw-rw- 1 root root 2374 Feb 1 02:01 docker-compose.yml
plaxdmin_1 | total 8
plaxdmin_1 | drwxr-xr-x 2 root root 4096 Feb 1 15:57 .
plaxdmin_1 | drwxr-xr-x 1 root root 4096 Feb 1 15:57 ..
plaxdmin_1 | /opt/plaxdmin
plaxdmin_1 | plaxdmin
plaxdmin_1 | plaxdmin docker
plaxdmin_1 | Couldn't connect to Docker daemon at http+docker://localhost - is it running?
plaxdmin_1 |
plaxdmin_1 | If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
Since the goal of your container is to run Docker commands, it has permission to take over the entire host system should it choose to. It's not any less safe to run it as USER root, which will also address the socket-permission problem. Since your Dockerfile doesn't actually do anything while switched to the alternate user (COPY makes files owned by root by default, and you do not RUN any commands after the USER line), you can also delete the USER line and the alternate-user setup.
# This user and group will not be used; delete these lines
# RUN adduser plaxdmin --disabled-password
# RUN addgroup docker
# RUN addgroup plaxdmin docker
# Nothing is done as this user
# Stay as the default root user to be able to run `docker` commands
# USER plaxdmin
If the host's /var/run/docker.sock is mode 0660 and owned by a group docker (a typical setup) the container process must run as the same numeric group ID in order to be able to access the socket. This will intrinsically be host-specific and it's not something you can set in your Dockerfile.
When you launch the orchestration container, you can run it with an additional group to put it in the docker group
# If the container process isn't already running as root
docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
--group-add docker \
...
Or, in Compose version 2 syntax (but not version 3) there is a group_add: option that can specify this
version: '2.4'
services:
orchestrator:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
group_add:
- docker
(The documentation says the group must exist in both contexts, so you may need to look up the numeric group ID and use that instead.)
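If you need the numeric ID, you can look it up from the socket itself and pass it straight to docker run; a sketch (adapt the socket path if yours differs):
# read the docker group's numeric GID from the host socket and add it to the container process
DOCKER_GID=$(stat -c '%g' /var/run/docker.sock)
docker run \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --group-add "$DOCKER_GID" \
  ...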
I'm trying to run a GoCD agent using docker-dind to auto-build some of my Docker images.
I'm having trouble getting the go user access to the Docker daemon.
When I try to access docker info I get the following:
[go] Task: /bin/sh ./builder.sh took: 2.820s
[START]
[USER] go
[TAG] manual
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.32/containers/fish/angular-cli/json: dial unix /var/run/docker.sock: connect: permission denied
Sending build context to Docker daemon 3.072kB
Step 1/8 : FROM node:8-alpine
---> 4db2697ce114
Step 2/8 : MAINTAINER jack@fish.com
---> Using cache
---> 22f46bf6b4c1
Step 3/8 : VOLUME /usr/local/share/.cache/yarn/v1
---> Using cache
---> 86b979e7a2b4
Step 4/8 : RUN apk add --no-cache --update build-base python
---> Using cache
---> 4a08b0a1fc9d
Step 5/8 : RUN yarn global add #angular/cli#1.5.3
---> Using cache
---> 6fe4530181a5
Step 6/8 : EXPOSE 4200
---> Using cache
---> 480edc47696e
Step 7/8 : COPY ./docker-entrypoint.sh /
---> Using cache
---> 329f9eaa5c76
Step 8/8 : ENTRYPOINT /docker-entrypoint.sh
---> Using cache
---> cb1180ff8e9f
Successfully built cb1180ff8e9f
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.32/containers/fish/angular-cli/json: dial unix /var/run/docker.sock: connect: permission denied
My root user can access docker info properly, but the go user fails.
$ cat /etc/group
root:x:0:root
bin:x:1:root,bin,daemon
daemon:x:2:root,bin,daemon
sys:x:3:root,bin,adm
....
adm:x:4:root,adm,daemon
wheel:x:10:root
xfs:x:33:xfs
ping:x:999:
nogroup:x:65533:
nobody:x:65534:
dockremap:x:101:dockremap,go
go:x:1000:go
My docker.sock permissions are as follows:
$ ls -alh /var/run/docker.sock
srw-rw---- 1 root 993 0 Apr 20 2017 /var/run/docker.sock
What do I need to append to my Dockerfile in order to allow the go user to access the docker daemon?
When running a dind container, i.e. Docker-in-Docker, it is commonplace to volume-mount /var/run/docker.sock:/var/run/docker.sock from the host into the dind container.
When this happens, the socket is not only owned by root, it also carries a numeric group ID from the host.
Running the following inside the container should show you the host GID:
$ ls -alh /var/run/docker.sock
srw-rw---- 1 root 993 0 Apr 20 2017 /var/run/docker.sock
The socket above is owned by group 993; 993 is derived from the host machine's /etc/group -> docker entry.
As it is nearly impossible to ensure a common group ID when the image is first built, the group ID should be assigned at runtime using your docker-entrypoint.sh file.
My personal goal is to get this working for the runtime user 'go' of a GoCD go-agent, but one could substitute this approach for Jenkins or any other runtime user.
As the dind and go-agent images are both based on Alpine Linux, the following will work on Alpine:
#setup docker group based on hosts mount gid
echo "Adding hosts GID to docker system group"
# this only works if the docker group does not already exist
DOCKER_SOCKET=/var/run/docker.sock
DOCKER_GROUP=docker
BUILD_USER=go
if [ -S ${DOCKER_SOCKET} ]; then
DOCKER_GID=$(stat -c '%g' ${DOCKER_SOCKET})
#addgroup is distribution specific
addgroup -S -g ${DOCKER_GID} ${DOCKER_GROUP}
addgroup ${BUILD_USER} ${DOCKER_GROUP}
fi
If you exec into the container, and cat your /etc/group file, you should see the following:
docker:x:993:go
This is a slightly modified version of @Jack's answer.
I created a docker-entrypoint.sh which determines the GID and reuses the group if it already exists. Primarily on Docker for Windows machines the Docker socket is owned by root. This needs runuser, as su will only work if the user's shell is not set to nologin, which in my case it is.
#!/bin/sh
set -e
DOCKER_SOCKET=/var/run/docker.sock
RUNUSER=jobberuser
if [ -S ${DOCKER_SOCKET} ]; then
DOCKER_GID=$(stat -c '%g' ${DOCKER_SOCKET})
DOCKER_GROUP=$(getent group ${DOCKER_GID} | awk -F ":" '{ print $1 }')
if [ -n "$DOCKER_GROUP" ]
then
addgroup $RUNUSER $DOCKER_GROUP
else
addgroup -S -g ${DOCKER_GID} docker
addgroup $RUNUSER docker
fi
fi
exec runuser -u "$RUNUSER" -- "$@"
In order to allow other users to access Docker you need to:
sudo groupadd docker
sudo usermod -aG docker go
If you are running this command as the go user, you need to log out and log back in after performing the above task.
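After logging back in, a quick sanity check (illustrative commands) is:
# the go user should now be in the docker group and able to reach the daemon
id -nG go       # should list "docker"
docker info     # run as the go user; should no longer be denied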
I am following http://hyperledger-fabric.readthedocs.io/en/latest/Setup/Chaincode-setup/ and using Option 1, i.e. the Vagrant development environment. When I run make membersrvc && membersrvc I get the message below:
build/bin/membersrvc
CGO_CFLAGS=" " CGO_LDFLAGS="-lrocksdb -lstdc++ -lm -lz -lbz2 -lsnappy" GOBIN=/opt/gopath/src/github.com/hyperledger/fabric/build/bin go install -ldflags "-X github.com/hyperledger/fabric/metadata.Version=0.7.0-snapshot-131b36c" github.com/hyperledger/fabric/membersrvc
Binary available as build/bin/membersrvc
I assume membersrvc is running because "ps -a | grep membersrvc" returns
2486 pts/0 00:00:01 membersrvc
After this I ran "make peer" and got this :
Building docker javaenv-image
docker build -t hyperledger/fabric-javaenv build/image/javaenv
Sending build context to Docker daemon 44.03 kB
Step 1 : FROM openjdk:8
---> 96cddf5ae9f1
Step 2 : RUN wget https://services.gradle.org/distributions/gradle-2.12-bin.zip -P /tmp --quiet
---> Using cache
---> 3dbbd6c16d7e
Step 3 : RUN unzip -qo /tmp/gradle-2.12-bin.zip -d /opt && rm /tmp/gradle-2.12-bin.zip
---> Using cache
---> bd1d42253704
Step 4 : RUN ln -s /opt/gradle-2.12/bin/gradle /usr/bin
---> Using cache
---> 248e99587f37
Step 5 : ENV MAVEN_VERSION 3.3.9
---> Using cache
---> 27105db40f7a
Step 6 : ENV USER_HOME_DIR "/root"
---> Using cache
---> 03f5e84bf9ce
Step 7 : RUN mkdir -p /usr/share/maven /usr/share/maven/ref && curl -fsSL http://apache.osuosl.org/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.tar.gz | tar -xzC /usr/share/maven --strip-components=1 && ln -s /usr/share/maven/bin/mvn /usr/bin/mvn
---> Running in 6ec30acda848
This stays in the window forever and nothing happens after it.
When I then try to run "peer node start --peer-chaincodedev" in another window,
I get the error below:
No command 'peer' found, did you mean:
Why has my peer not been built yet?
@PySa - a correct build of the peer will drop you back to the command line, and if you then issue the command peer it will show you the help / switches. To make / build the member services and peer, all you have to do is the following:
vagrant up
ssh into the machine
cd /hyperledger
make membersrvc
make peer - this can take a LOOOOONG time depending on your machine & internet connection - the process has to download a LOT of data to complete correctly.
Once the above is done I would also strongly suggest you run make unit-test and, when that's done, make behave - again these will take a long time to run, but assuming all is well, by the time they're done you'll be able to run membersrvc and peer node start (each in its own terminal window) without problems...
FYI - the memberservices does NOT report anything to the console - the peer however does...
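A rough sketch of the end state, assuming you run the binaries from the fabric source directory inside the Vagrant VM (as reported by the build output above):
# terminal 1: membership services (prints nothing to the console)
build/bin/membersrvc
# terminal 2: the peer in chaincode-dev mode (this one does log to the console)
build/bin/peer node start --peer-chaincodedev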