"container-suseconnect-zypp" : dockerfile fail on PAYG SLES15.1 VM(azuer) - docker

I'm new here.
I'm trying to create custom Docker container images on an Azure VM,
but I can't build them because of "container-suseconnect-zypp".
Environment: Azure Virtual Machine (Standard B4ms) - SLES 15 SP1 (PAYG)
First of all, I have no problem getting the repositories locally on the host (below):
# zypper lr -u
Refreshing service 'container-suseconnect-zypp'.
Repository priorities are without effect. All enabled repositories share the same priority.
# | Alias | Name | Enabled | GPG Check | Refresh | URI
----+-------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------+---------+-----------+---------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------
1 | Basesystem_Module_x86_64:SLE-Module-Basesystem15-SP1-Debuginfo-Pool | SLE-Module-Basesystem15-SP1-Debuginfo-Pool | No | ---- | ---- | plugin:/susecloud?credentials=Basesystem_Module_x86_64&path=/repo/SUSE/Products/SLE-Module-Basesystem/15-SP1/x86_64/product_debug/
.
.
.
Secondly, I have already started the "containerbuild-regionsrv" service
and used the host network when building the Docker image,
with reference to the following: https://documentation.suse.com/container/all/single-html/SLES-container/index.html
> sudo systemctl start containerbuild-regionsrv
> sudo systemctl enable containerbuild-regionsrv
> docker build --network host /build-directory/
My Dockerfile is:
FROM registry.suse.com/suse/sle15:15.1
# Extra metadata
LABEL version="1.0"
LABEL description="Base SLES 15 SP1 SAP image"
# Create zypper repos and empty folder in NEW Container
RUN mkdir -p /etc/zypp/repos.d \
&& mkdir -p /jail
# add repo from local repo
RUN zypper ar plugin:/susecloud?credentials=Basesystem_Module_x86_64&path=/repo/SUSE/Products/SLE-Module-Basesystem/15-SP1/x86_64/product_debug/ \
&& zypper ar plugin:/susecloud?credentials=Basesystem_Module_x86_64&path=/repo/SUSE/Updates/SLE-Module-Basesystem/15-SP1/x86_64/update_debug/
.
.
.
# Update repos and install missing packages:
RUN update-ca-certificates && zypper ref -s && zypper update -y
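(As an aside: these plugin: URIs contain ? and &, so inside a RUN instruction they need quoting, otherwise the shell treats each unquoted & as a command separator and zypper ar never sees the full URI. A quoted sketch of the same two lines, with the aliases debug_pool and debug_update made up here purely for illustration:)
RUN zypper ar 'plugin:/susecloud?credentials=Basesystem_Module_x86_64&path=/repo/SUSE/Products/SLE-Module-Basesystem/15-SP1/x86_64/product_debug/' debug_pool \
 && zypper ar 'plugin:/susecloud?credentials=Basesystem_Module_x86_64&path=/repo/SUSE/Updates/SLE-Module-Basesystem/15-SP1/x86_64/update_debug/' debug_update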
Here are my questions:
Why do the URIs of my SLES repos start with "plugin:/" instead of "https://"? Is it a problem to add such repos inside a container?
When I build the Docker image from the Dockerfile, the result is:
# docker build --network host -t base_os .
Sending build context to Docker daemon 4.359GB
Step 1/6 : FROM registry.suse.com/suse/sle15:15.1
---> d6d9e74d8ba3
Step 2/6 : LABEL version="1.0"
---> Running in 51b8f6dc39e5
Removing intermediate container 51b8f6dc39e5
---> 12b8756a372c
Step 3/6 : LABEL description="Base SLES 15 SP1 SAP image"
---> Running in 1b57cfcdceea
Removing intermediate container 1b57cfcdceea
---> aa8ddd1de6b4
Step 4/6 : RUN mkdir -p /etc/zypp/repos.d && mkdir -p /jail
---> Running in fd5a0d6cf9bc
Removing intermediate container fd5a0d6cf9bc
---> 982b38ddd9c7
Step 5/6 : RUN zypper ar plugin:/susecloud?credentials=Basesystem_Module_x86_64&path=/repo/SUSE/Products/SLE-Module-Basesystem/15-SP1/x86_64/product_debug/
---> Running in 74416afc3982
Removing intermediate container 74416afc3982
---> 3e78bccdfcd1
Step 6/6 : RUN update-ca-certificates && zypper ref -s && zypper update -y
---> Running in 0d52ac4d4e28
Refreshing service 'container-suseconnect-zypp'.
Warning: Skipping service 'container-suseconnect-zypp' because of the above error.
All services have been refreshed.
Warning: There are no enabled repositories defined.
Use 'zypper addrepo' or 'zypper modifyrepo' commands to add or enable repositories.
Problem retrieving the repository index file for service 'container-suseconnect-zypp':
[container-suseconnect-zypp|file:/usr/lib/zypp/plugins/services/container-suseconnect-zypp]
I think that because of this 'container-suseconnect-zypp' issue I won't be able to install the additional packages needed for SAP inside my container, even if I build the Docker image without step 6/6 of the Dockerfile.
Is this problem related to the Azure VM using PAYG SLES 15?
Do I need the SSL certificate used by RMT?

Related

Override UID GID in VSCode Remote-containers session

I am running VSCode on a Windows 10 machine, connecting to a Docker instance on a remote Linux host, to develop C++ projects. The Docker instance mounts local folders for source code files, and the container user is set to match the user on the Linux host to avoid file ownership and permission problems.
On Windows 10 I use WSL1, where the default user has both UID and GID 1000, and VSCode's docker processes use these IDs to launch and connect to the Docker instance on the remote host. Is there a way to override the UID/GID VSCode uses so they match the IDs on the remote host?
Thanks,
Step 1/4 : FROM devenv:latest
---> 120d987bae07
Step 2/4 : RUN groupadd -g 301765 chengd
---> Using cache
---> 58a697ed3565
Step 3/4 : RUN useradd -l -u 301765 -g chengd chengd
---> Using cache
---> b5c7c2b48a83
Step 4/4 : USER chengd
---> Using cache
---> f310c9d1e05b
Successfully built f310c9d1e05b
Successfully tagged vsc-devenv-9da1a5f5cedc16a80d314a148acdbcaf:latest
[7325 ms] Start: Run: wsl -d Ubuntu-20.04 -e /bin/sh -c cd '/home/da/repos/devenv' && DISPLAY='1' ELECTRON_RUN_AS_NODE='1' SSH_ASKPASS='d:\Users\ChengD\.vscode\extensions\ms-vscode-remote.remote-containers-0.177.2\scripts\ssh-askpass.bat' VSCODE_SSH_ASKPASS_NODE='D:\Users\ChengD\AppData\Local\Programs\Microsoft VS Code\Code.exe' VSCODE_SSH_ASKPASS_MAIN='d:\Users\ChengD\.vscode\extensions\ms-vscode-remote.remote-containers-0.177.2\dist\common\sshAskpass.js' VSCODE_SSH_ASKPASS_HANDLE='\\.\pipe\ssh-askpass-7e8e4f69496930d0e88509584ba46ab3357d9ff1-sock' DOCKER_CONTEXT='tcp_201' VSCODE_SSH_ASKPASS_COUNTER='5' docker 'inspect' '--type' 'image' 'vsc-devenv-9da1a5f5cedc16a80d314a148acdbcaf'
[10240 ms] Start: Run: wsl -d Ubuntu-20.04 -e /bin/sh -c cd '/home/da/repos/devenv' && DISPLAY='1' ELECTRON_RUN_AS_NODE='1' SSH_ASKPASS='d:\Users\ChengD\.vscode\extensions\ms-vscode-remote.remote-containers-0.177.2\scripts\ssh-askpass.bat' VSCODE_SSH_ASKPASS_NODE='D:\Users\ChengD\AppData\Local\Programs\Microsoft VS Code\Code.exe' VSCODE_SSH_ASKPASS_MAIN='d:\Users\ChengD\.vscode\extensions\ms-vscode-remote.remote-containers-0.177.2\dist\common\sshAskpass.js' VSCODE_SSH_ASKPASS_HANDLE='\\.\pipe\ssh-askpass-7e8e4f69496930d0e88509584ba46ab3357d9ff1-sock' DOCKER_CONTEXT='tcp_201' VSCODE_SSH_ASKPASS_COUNTER='6' docker 'build' '-f' '/tmp/vsch/updateUID.Dockerfile-0.177.2' '-t' 'vsc-devenv-9da1a5f5cedc16a80d314a148acdbcaf-uid' '--build-arg' 'BASE_IMAGE=vsc-devenv-9da1a5f5cedc16a80d314a148acdbcaf' '--build-arg' 'REMOTE_USER=chengd' '--build-arg' 'NEW_UID=1000' '--build-arg' 'NEW_GID=1000' '--build-arg' 'IMAGE_USER=chengd' '/tmp/vsch'
Solved the problem by setting updateRemoteUserUID to false in devcontainer.json.
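For reference, the corresponding devcontainer.json fragment (merge it into your existing configuration) would look roughly like this:
{
  "updateRemoteUserUID": false
}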

Docker build step name cannot start with number

I'm building a Docker image for a Sybase database. The docker build fails because the Sybase server name, which is taken from the intermediate build container's hostname, cannot start with a number.
I have searched A LOT for a way to change the intermediate build container's hostname, and my workaround so far is to retry the build until I get a container ID that starts with a letter...
Step 1/7 : FROM my_image as docker_sybase_db
---> d266899b4eef
Step 2/7 : COPY *.zip /mnt/backup/
---> Using cache
---> 9e8e405848ce
Step 3/7 : COPY entrypoint.sh ~
---> Using cache
---> 5c0c923985db
Step 4/7 : ENV HOSTNAME docker_sybase_db
---> Using cache
---> f2b39a7280a0
Step 5/7 : RUN init_db.sh
---> Running in 0ae1a95b3203
Server name '0ae1a95b3203' begins with an illegal character. The first
character of a server name must be an alphabetic ascii character.
Error running command 'srvbuild -r /tmp/my_super_build.rs':
If I can't modify this old Sybase init script, am I out of luck here?
EDIT: Here is what I am trying to do:
Create a database instance
Load a backup
Package that pre-loaded instance into a container.
Loading the backup takes a lot of time, and this old database system requires the server name to start with a letter, not a number.
You could try and see if LolHens's idea of changing the hostname in the container namespace (during the docker build) works for you.
docker build . | tee >( (grep --line-buffered -Po '(?<=^change-hostname ).*' || true) | \
  while IFS= read -r id; do \
    nsenter --target "$(docker inspect -f '{{ .State.Pid }}' "$id")" \
      --uts hostname 'new-hostname'; \
  done)
The docker build output is parsed to:
detect a "change-hostname" directive
do an nsenter, which runs a program in the UTS (UNIX Time-Sharing) namespace of that intermediate container with a different hostname (different from the auto-generated one derived from the container ID)
That means your RUN step should be:
RUN echo "change-hostname $(hostname)"; \
sleep 1; \
printf '%s\n' "$(hostname)" > /etc/hostname; \
printf '%s\t%s\t%s\n' "$(perl -C -0pe 's/([\s\S]*)\t.*$/$1/m' /etc/hosts)" "$(hostname)" > /etc/hosts; \
init_db.sh
That way, init_db.sh should run in an intermediate container with a different hostname (one you do have control over, and which would not start with a number).

undefined method `source_url' for #<Chef::Cookbook::Metadata:0x000000006f1378>

I'm learning Docker using the book Docker in Practice.
I am working on technique 47 in chapter 5.
This recipe is about using Chef for managing Docker configurations.
The GitHub link is here.
When I build the Docker image, I encounter the error below.
$ docker build -t chef-example .
Sending build context to Docker daemon 9.728kB
Step 1/12 : FROM ubuntu:latest
---> ccc7a11d65b1
Step 2/12 : RUN apt-get update && apt-get install -yy git curl
---> Using cache
---> ef956c61c59f
Step 3/12 : RUN curl -L https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/12.04/x86_64/chefdk_0.3.5-1_amd64.deb -o chef.deb
---> Using cache
---> 1260301dbe67
Step 4/12 : RUN dpkg -i chef.deb && rm chef.deb
---> Using cache
---> 8c1aeaf84423
Step 5/12 : COPY . /chef
---> 18986195e732
Removing intermediate container 758dfce43670
Step 6/12 : WORKDIR /chef/cookbooks
---> fbdd9c386801
Removing intermediate container 936393187cb4
Step 7/12 : RUN knife cookbook site download apache2
---> Running in 2ba7d0765ae2
WARNING: No knife configuration file found
Downloading apache2 from the cookbooks site at version 5.0.1 to /chef/cookbooks/apache2-5.0.1.tar.gz
Cookbook saved: /chef/cookbooks/apache2-5.0.1.tar.gz
---> 8b3fa14f4416
Removing intermediate container 2ba7d0765ae2
Step 8/12 : RUN knife cookbook site download iptables
---> Running in 94275acfdb44
WARNING: No knife configuration file found
Downloading iptables from the cookbooks site at version 4.3.1 to /chef/cookbooks/iptables-4.3.1.tar.gz
Cookbook saved: /chef/cookbooks/iptables-4.3.1.tar.gz
---> c8a4c6d17253
Removing intermediate container 94275acfdb44
Step 9/12 : RUN knife cookbook site download logrotate
---> Running in 27b5f736d6cf
WARNING: No knife configuration file found
Downloading logrotate from the cookbooks site at version 2.2.0 to /chef/cookbooks/logrotate-2.2.0.tar.gz
Cookbook saved: /chef/cookbooks/logrotate-2.2.0.tar.gz
---> 1b4b4460bdc9
Removing intermediate container 27b5f736d6cf
Step 10/12 : RUN /bin/bash -c 'for f in $(ls *gz); do tar -zxf $f; rm $f; done'
---> Running in 7e6b912d910e
---> d5e77acc14f1
Removing intermediate container 7e6b912d910e
Step 11/12 : RUN chef-solo -c /chef/config.rb -j /chef/attributes.json
---> Running in a0c7f7f7a00a
[2017-12-05T08:01:08+00:00] INFO: Forking chef instance to converge...
[2017-12-05T08:01:08+00:00] INFO: *** Chef 11.18.0.rc.1 ***
[2017-12-05T08:01:08+00:00] INFO: Chef-client pid: 9
[2017-12-05T08:01:09+00:00] INFO: Setting the run_list to
["recipe[apache2::default]", "recipe[mysite::default]"] from CLI options
[2017-12-05T08:01:09+00:00] INFO: Run List is
[recipe[apache2::default], recipe[mysite::default]]
[2017-12-05T08:01:09+00:00] INFO: Run List expands to [apache2::default, mysite::default]
[2017-12-05T08:01:09+00:00] INFO: Starting Chef Run for 8089fe031125
[2017-12-05T08:01:09+00:00] INFO: Running start handlers
[2017-12-05T08:01:09+00:00] INFO: Start handlers complete.
[2017-12-05T08:01:09+00:00] ERROR: Running exception handlers
[2017-12-05T08:01:09+00:00] ERROR: Exception handlers complete
[2017-12-05T08:01:09+00:00] FATAL: Stacktrace dumped to /chef/cache/chef-stacktrace.out
[2017-12-05T08:01:09+00:00] ERROR: undefined method `source_url' for #<Chef::Cookbook::Metadata:0x000000006f1378>
[2017-12-05T08:01:09+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
The command '/bin/sh -c chef-solo -c /chef/config.rb -j /chef/attributes.json' returned a non-zero code: 1
I'm new to Chef and not sure why I'm getting this error.
Your cookbook doesn't have a metadata.rb, which is probably breaking things, but you're also using Chef 11.18, which is entirely out of support at this point. Current Chef is 13.6.4.
Also, we don't really recommend using Chef to build container images in most cases. It can definitely work, but Chef was built to manage servers, so it usually results in fat, server-like images.
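If you do add a metadata.rb to the mysite cookbook, a minimal sketch could look like this (all values are placeholders to adapt; the depends line assumes you keep apache2 in the run list):
name             'mysite'
maintainer       'Your Name'
maintainer_email 'you@example.com'
license          'All Rights Reserved'
description      'Installs and configures mysite'
version          '0.1.0'
depends          'apache2'
With a current Chef/ChefDK in the image, the source_url field in the downloaded community cookbooks' metadata (which Chef 11 does not understand) should also parse without this error.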

How do i give a non root user access to docker when using docker-dind?

I'm trying to run a GoCD agent using docker-dind to auto-build some of my Docker images.
I'm having trouble getting the go user access to the Docker daemon.
When I try to access docker info I get the following:
[go] Task: /bin/sh ./builder.sh took: 2.820s
[START]
[USER] go
[TAG] manual
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.32/containers/fish/angular-cli/json: dial unix /var/run/docker.sock: connect: permission denied
Sending build context to Docker daemon 3.072kB
Step 1/8 : FROM node:8-alpine
---> 4db2697ce114
Step 2/8 : MAINTAINER jack@fish.com
---> Using cache
---> 22f46bf6b4c1
Step 3/8 : VOLUME /usr/local/share/.cache/yarn/v1
---> Using cache
---> 86b979e7a2b4
Step 4/8 : RUN apk add --no-cache --update build-base python
---> Using cache
---> 4a08b0a1fc9d
Step 5/8 : RUN yarn global add @angular/cli@1.5.3
---> Using cache
---> 6fe4530181a5
Step 6/8 : EXPOSE 4200
---> Using cache
---> 480edc47696e
Step 7/8 : COPY ./docker-entrypoint.sh /
---> Using cache
---> 329f9eaa5c76
Step 8/8 : ENTRYPOINT /docker-entrypoint.sh
---> Using cache
---> cb1180ff8e9f
Successfully built cb1180ff8e9f
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.32/containers/fish/angular-cli/json: dial unix /var/run/docker.sock: connect: permission denied
My root user can access docker info properly, but the go user fails.
$ cat /etc/group
root:x:0:root
bin:x:1:root,bin,daemon
daemon:x:2:root,bin,daemon
sys:x:3:root,bin,adm
....
adm:x:4:root,adm,daemon
wheel:x:10:root
xfs:x:33:xfs
ping:x:999:
nogroup:x:65533:
nobody:x:65534:
dockremap:x:101:dockremap,go
go:x:1000:go
My docker.sock permissions are as follows:
$ ls -alh /var/run/docker.sock
srw-rw---- 1 root 993 0 Apr 20 2017 /var/run/docker.sock
What do I need to append to my Dockerfile in order to allow the go user to access the docker daemon?
When running a dind container, i.e. Docker in Docker, it is commonplace to volume-mount /var/run/docker.sock:/var/run/docker.sock from the host into the dind container.
When this occurs, the socket is owned not only by root but also by a numeric group ID taken from the host.
Running the following inside the container should show you the host GID:
$ ls -alh /var/run/docker.sock
srw-rw---- 1 root 993 0 Apr 20 2017 /var/run/docker.sock
The socket above is owned by group 993; 993 is derived from the host machine's /etc/group entry for the docker group.
As it is nearly impossible to ensure a matching group ID when the image is first built, the group ID should be assigned at runtime using your docker-entrypoint.sh file.
My personal goal is to get this working for the runtime user 'go' of a GoCD go-agent, but one could substitute Jenkins or any other runtime user in this approach.
As the dind and go-agent images are both based on Alpine Linux, the following works on Alpine:
# set up the docker group based on the host's mounted socket GID
echo "Adding host's GID to docker system group"
# this only works if the docker group does not already exist
DOCKER_SOCKET=/var/run/docker.sock
DOCKER_GROUP=docker
BUILD_USER=go
if [ -S ${DOCKER_SOCKET} ]; then
    DOCKER_GID=$(stat -c '%g' ${DOCKER_SOCKET})
    # addgroup is distribution specific
    addgroup -S -g ${DOCKER_GID} ${DOCKER_GROUP}
    addgroup ${BUILD_USER} ${DOCKER_GROUP}
fi
If you exec into the container, and cat your /etc/group file, you should see the following:
docker:x:993:go
This is a slightly modified version of @Jack's answer.
I created a docker-entrypoint.sh which determines the GID and reuses the group if it already exists. Primarily on Docker for Windows machines, the Docker socket is owned by the root group. This needs runuser, as su only works if the user's shell is not set to nologin, which in my case it is.
#!/bin/sh
set -e

DOCKER_SOCKET=/var/run/docker.sock
RUNUSER=jobberuser

if [ -S ${DOCKER_SOCKET} ]; then
    DOCKER_GID=$(stat -c '%g' ${DOCKER_SOCKET})
    DOCKER_GROUP=$(getent group ${DOCKER_GID} | awk -F ":" '{ print $1 }')
    if [ -n "${DOCKER_GROUP}" ]; then
        # reuse the existing group that already has the socket's GID
        addgroup ${RUNUSER} ${DOCKER_GROUP}
    else
        # otherwise create a docker group with the socket's GID
        addgroup -S -g ${DOCKER_GID} docker
        addgroup ${RUNUSER} docker
    fi
fi

exec runuser -u ${RUNUSER} -- "$@"
In order to allow other users to access Docker you need to:
sudo groupadd docker
sudo usermod -aG docker go
If you are running these commands as the go user, you need to log out and log back in after performing the above steps.

Is my hyperledger peer make successful?

I am following http://hyperledger-fabric.readthedocs.io/en/latest/Setup/Chaincode-setup/ and using Option 1, i.e. the Vagrant development environment. When I run make membersrvc && membersrvc I get the message below:
build/bin/membersrvc
CGO_CFLAGS=" " CGO_LDFLAGS="-lrocksdb -lstdc++ -lm -lz -lbz2 -lsnappy"
GOBIN=/opt/gopath/src/github.com/hyperledger/fabric/build/bin go install -
ldflags "-X github.com/hyperledger/fabric/metadata.Version=0.7.0-snapshot-
131b36c" github.com/hyperledger/fabric/membersrvc
Binary available as build/bin/membersrvc
I assume membersrvc is running because "ps -a | grep membersrvc" returns
2486 pts/0 00:00:01 membersrvc
After this I ran "make peer" and got this:
Building docker javaenv-image
docker build -t hyperledger/fabric-javaenv build/image/javaenv
Sending build context to Docker daemon 44.03 kB
Step 1 : FROM openjdk:8
---> 96cddf5ae9f1
Step 2 : RUN wget https://services.gradle.org/distributions/gradle-2.12-bin.zip -P /tmp --quiet
---> Using cache
---> 3dbbd6c16d7e
Step 3 : RUN unzip -qo /tmp/gradle-2.12-bin.zip -d /opt && rm /tmp/gradle-2.12-bin.zip
---> Using cache
---> bd1d42253704
Step 4 : RUN ln -s /opt/gradle-2.12/bin/gradle /usr/bin
---> Using cache
---> 248e99587f37
Step 5 : ENV MAVEN_VERSION 3.3.9
---> Using cache
---> 27105db40f7a
Step 6 : ENV USER_HOME_DIR "/root"
---> Using cache
---> 03f5e84bf9ce
Step 7 : RUN mkdir -p /usr/share/maven /usr/share/maven/ref && curl -fsSL http://apache.osuosl.org/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.tar.gz | tar -xzC /usr/share/maven --strip-components=1 && ln -s /usr/share/maven/bin/mvn /usr/bin/mvn
---> Running in 6ec30acda848
This stays on the screen forever and nothing happens after it.
After this I try to run "peer node start --peer-chaincodedev" in another window,
but I get the error below:
No command 'peer' found, did you mean:
Why is my peer not created yet?
@PySa - a correct build of the peer will drop you back to the command line, and if you then issue the command peer it will show you the help/switches. To make/build the member services and peer, all you have to do is the following:
vagrant up
ssh into the machine
cd /hyperledger
make membersrvc
make peer - this can take a LOOOOONG time depending on your machine & internet connection - the process has to download a LOT of data to complete correctly.
Once the above is done I would also strongly suggest you run make unit-test, and when that's done, make behave - again these will take a long time to run, but assuming all is well, by the time they're done you'll be able to run membersrvc and peer node start (each in its own terminal window) without problems...
FYI - the member services do NOT report anything to the console - the peer, however, does...
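Condensed into commands, the sequence above looks roughly like this (a sketch; it assumes the fabric checkout is mounted at /hyperledger inside the Vagrant VM, as in the default devenv):
vagrant up           # bring up the development VM
vagrant ssh          # ssh into the machine
cd /hyperledger      # the fabric tree mounted inside the VM
make membersrvc      # build the member services
make peer            # build the peer (slow: downloads a lot of data)
make unit-test       # recommended once the builds succeed
make behave          # recommended once the builds succeed
After that, run membersrvc and peer node start in separate terminal windows.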
