Error while running make in Hyperledger Fabric on MacOS - docker

I have installed all the prerequisites for Fabric.
OS: Mac OS X El Capitan
Docker version 18.03.0-ce, build 0520e24
docker-compose version 1.20.1, build 5d8c71b
go version go1.10.3 darwin/amd64
GOPATH="/usr/local/go"
GOROOT="/usr/local/go"
I have placed Fabric in /usr/local/go/src/github.com/hyperledger/fabric and added GOPATH to the system PATH variable.
While running the make docker command, I get the following error:
Building .build/docker/bin/peer
can't load package: package github.com/hyperledger/fabric/peer: cannot find package "github.com/hyperledger/fabric/peer" in any of:
/opt/go/src/github.com/hyperledger/fabric/peer (from $GOROOT)
/opt/gopath/src/github.com/hyperledger/fabric/peer (from $GOPATH)
make: *** [.build/docker/bin/peer] Error 1
I also went through the Makefile rule for the same:
# We (re)build a package within a docker context but persist the $GOPATH/pkg
# directory so that subsequent builds are faster
$(BUILD_DIR)/docker/bin/%: $(PROJECT_FILES)
	$(eval TARGET = ${patsubst $(BUILD_DIR)/docker/bin/%,%,${@}})
	@echo "Building $@"
	@mkdir -p $(BUILD_DIR)/docker/bin $(BUILD_DIR)/docker/$(TARGET)/pkg
	@$(DRUN) \
		-v $(abspath $(BUILD_DIR)/docker/bin):/opt/gopath/bin \
		-v $(abspath $(BUILD_DIR)/docker/$(TARGET)/pkg):/opt/gopath/pkg \
		$(BASE_DOCKER_NS)/fabric-baseimage:$(BASE_DOCKER_TAG) \
		go install -tags "$(GO_TAGS)" -ldflags "$(DOCKER_GO_LDFLAGS)" $(pkgmap.$(@F))
	@touch $@
The error seems to be with the line $(BASE_DOCKER_NS)/fabric-baseimage:$(BASE_DOCKER_TAG) \. I tried replacing $(BASE_DOCKER_NS) with an absolute path, i.e. /usr/local/go/src/github.com/hyperledger. That again gives an error:
docker: invalid reference format.

The issue is likely that /usr/local/go is not shared with Docker. Assuming you are using Docker for Mac, you can check this by right-clicking the Docker icon in the status bar, selecting Preferences, and then the File Sharing tab. You will need to add /usr/local/go if it's not in the list. If you are using Docker Toolbox, you'll need to add shared folders via the VirtualBox GUI.
If you run make docker for Fabric 1.4 and earlier, the build has multiple stages. The first stage builds the binaries in a Docker container and mounts the current directory as a volume; this is where your error comes from, since the host path is not shared with Docker.
If you run make docker on the master branch, you won't run into this issue as master uses multi-stage Docker builds instead.
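For context, a multi-stage build compiles the binaries inside the image itself, so no host directory ever needs to be shared with Docker. A minimal sketch of the idea (not Fabric's actual Dockerfile; the paths and image tags here are illustrative assumptions):

```dockerfile
# Stage 1: build inside the container; sources are COPYed in, not volume-mounted
FROM golang:1.10 AS builder
WORKDIR /go/src/github.com/hyperledger/fabric
COPY . .
RUN go install github.com/hyperledger/fabric/peer

# Stage 2: copy only the resulting binary into a small runtime image
FROM debian:stretch-slim
COPY --from=builder /go/bin/peer /usr/local/bin/peer
CMD ["peer"]
```

Because the COPY happens inside the Docker build context, Docker's file-sharing preferences never come into play.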

Related

What does running multiarch/qemu-user-static do before building a container?

Can someone explain to me in simple terms what
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes -c yes
does when called right before building a container from a Dockerfile?
I have the notion that it is to permit running containers built for other architectures on the x86 architecture, but I am not sure I quite understand the explanations I found on some sites.
Does the presence of the above docker run instruction imply that the Dockerfile of the build stage is for another architecture?
I too had this question recently, and I don't have a complete answer, but here is what I do know, or at least believe:
Setup & Test
The setup magic, required once per reboot of the system, is just this:
# start root's docker (not via any `-rootless` scripts, obviously)
sudo systemctl start docker
# setup QEMU static executables formats
sudo docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
# test
docker run --rm -t arm64v8/ubuntu uname -m
# should expect:
# >> aarch64
# optional: shutdown root's docker
sudo systemctl stop docker
Note that the test example assumes you are running your own personal "rootless" docker, i.e. as yourself, not as root (nor via sudo), and it works just dandy.
Gory Details
... which are important if you want to understand how/why this works.
The main sources for this info:
https://docs.docker.com/desktop/multi-arch/
https://github.com/multiarch/qemu-user-static (what we are using)
https://hub.docker.com/r/multiarch/qemu-user-static/ (using buildx to build multi/cross-arch images)
https://dbhi.github.io/qus/related.html (an alternative to qemu-user-static)
https://github.com/dbhi/qus (source repo for above)
https://dev.to/asacasa/how-to-set-up-binfmtmisc-for-qemu-the-hard-way-3bl4 (manual curation of same)
https://en.wikipedia.org/wiki/Binfmt_misc
The fundamental trick to making this work is to install new "magic" strings into the kernel process space so that when an (ARM) executable is run inside a docker image, the kernel recognizes the binfmt and uses the QEMU interpreter (from the multiarch/* docker image) to execute it. Before we set up the bin formats, the contents look like this:
root@odysseus # mount | grep binfmt_misc
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=35,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=45170)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
root@odysseus # ls /proc/sys/fs/binfmt_misc/
jar llvm-6.0-runtime.binfmt python2.7 python3.6 python3.7 python3.8 register sbcl status
After we start (root's) dockerd and setup the formats:
root@odysseus # systemctl start docker
root@odysseus # docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
Setting /usr/bin/qemu-alpha-static as binfmt interpreter for alpha
Setting /usr/bin/qemu-arm-static as binfmt interpreter for arm
[...]
root@odysseus # ls /proc/sys/fs/binfmt_misc/
jar python3.8 qemu-armeb qemu-microblazeel qemu-mipsn32 qemu-ppc64le qemu-sh4eb qemu-xtensaeb
llvm-6.0-runtime.binfmt qemu-aarch64 qemu-hexagon qemu-mips qemu-mipsn32el qemu-riscv32 qemu-sparc register
python2.7 qemu-aarch64_be qemu-hppa qemu-mips64 qemu-or1k qemu-riscv64 qemu-sparc32plus sbcl
python3.6 qemu-alpha qemu-m68k qemu-mips64el qemu-ppc qemu-s390x qemu-sparc64 status
python3.7 qemu-arm qemu-microblaze qemu-mipsel qemu-ppc64 qemu-sh4 qemu-xtensa
Now we can run an ARM version of ubuntu:
root@odysseus # docker run --rm -t arm64v8/ubuntu uname -m
WARNING: The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64) and no specific platform was requested
aarch64
The warning is to be expected since the host CPU is AMD, and can be gotten rid of by specifying the platform to docker:
root@odysseus # docker run --rm --platform linux/arm64 -t arm64v8/ubuntu uname -m
aarch64
How does this really work?
At the base of it is just QEMU's ability to interpose a DBM (dynamic binary modification) interpreter to translate the instruction set of one system to that of the underlying platform.
The only trick is to tell the underlying system where to find those interpreters. That's what the qemu-user-static image does in registering the binary-format magic strings / interpreters. So, what's in those binfmts?
root@odysseus # cat /proc/sys/fs/binfmt_misc/qemu-aarch64
enabled
interpreter /usr/bin/qemu-aarch64-static
flags: F
offset 0
magic 7f454c460201010000000000000000000200b700
mask ffffffffffffff00fffffffffffffffffeffffff
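The magic and mask fields are how the kernel picks an interpreter: it compares the leading bytes of an executable against each registered entry, ignoring any bits cleared in the mask. (That 0xfe mask byte over the ELF e_type field is why both ET_EXEC and ET_DYN aarch64 binaries match.) A rough sketch of that comparison in shell, using the values above and a hypothetical aarch64 ELF header:

```shell
# Sketch of the kernel's match rule: a file matches a binfmt entry when
# (bytes AND mask) == (magic AND mask), byte by byte.
magic=7f454c460201010000000000000000000200b700
mask=ffffffffffffff00fffffffffffffffffeffffff
# First 20 bytes of a hypothetical aarch64 ELF file (e.g. from: xxd -p -l 20 ./a.out).
# Byte 16 is 03 (ET_DYN) rather than 02 (ET_EXEC); the 0xfe mask ignores that bit.
header=7f454c460201010000000000000000000300b700
match=yes
for ((i = 0; i < ${#magic}; i += 2)); do
  m=$((16#${mask:i:2}))
  [ $(( 16#${header:i:2} & m )) -eq $(( 16#${magic:i:2} & m )) ] || match=no
done
echo "match: $match"
```
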
Huh - that's interesting, especially because on the host system there is no /usr/bin/qemu-aarch64-static, and it's not in the target image either, so where does this thing live? It's in the qemu-user-static image itself, with the appropriate tag of the form: <HOST-ARCH>-<GUEST-ARCH>, as in multiarch/qemu-user-static:x86_64-aarch64.
# Not on the local system
odysseus % ls /usr/bin/qemu*
ls: cannot access '/usr/bin/qemu*': No such file or directory
# Not in the target image
odysseus % docker run --rm --platform linux/arm64 -t arm64v8/ubuntu bash -c 'ls /usr/bin/qemu*'
/usr/bin/ls: cannot access '/usr/bin/qemu*': No such file or directory
# where is it?
odysseus % docker run --rm multiarch/qemu-user-static:x86_64-aarch64 sh -c 'ls /usr/bin/qemu*'
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "sh": executable file not found in $PATH: unknown.
# Hmm, no `sh` in that image - let's try directly...
odysseus % docker run --rm multiarch/qemu-user-static:x86_64-aarch64 /usr/bin/qemu-aarch64-static --version
qemu-aarch64 version 7.0.0 (Debian 1:7.0+dfsg-7)
Copyright (c) 2003-2022 Fabrice Bellard and the QEMU Project developers
# AHA - there it is.
That's the real magic that I don't yet quite understand. Somehow docker is, I believe, using that image to spin up the QEMU interpreter and then feeding it the code from the actual image/container you want to run, as in the uname example from earlier. Some web-searching left me unsatiated as to how this magic is achieved, but I'm guessing if I kept following links from here I might find the true source of that sleight of hand.
To complement @crimson-egret's answer: the fix-binary flag in binfmt_misc is what makes the statically compiled qemu emulator work across different namespaces/chroots/containers.
In the doc for binfmt_misc you can find the explanation of the fix-binary flag:
F - fix binary
The usual behaviour of binfmt_misc is to spawn the binary lazily when the misc format file is invoked. However, this doesn’t work very well in the face of mount namespaces and changeroots, so the F mode opens the binary as soon as the emulation is installed and uses the opened image to spawn the emulator, meaning it is always available once installed, regardless of how the environment changes.
This bug report also explained:
...
The fix-binary flag of binfmt is meant to specifically deal with this. The interpreter file (e.g. qemu-arm-static) is loaded when its binfmt rule is installed instead of when a file that requires it is encountered. When the kernel then encounters a file which requires that interpreter it simply execs the already open file descriptor instead of opening a new one (IOW: the kernel already has the correct file descriptor open, so possibly divergent roots no longer play into finding the interpreter thus allowing namespaces/containers/chroots of a foreign architecture to be run like native ones).
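Concretely, registration happens by writing a one-line string to /proc/sys/fs/binfmt_misc/register in the format :name:type:offset:magic:mask:interpreter:flags, and the -p yes option ends that string with the F flag. A sketch of the aarch64 line, reconstructed from the magic/mask shown earlier (the exact flags qemu-user-static writes may include more than F):

```
:qemu-aarch64:M::\x7f\x45\x4c\x46\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xb7\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-aarch64-static:F
```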
If you use the qemu-user-static image without the -p yes option, the fix-binary flag won't be added, and running the arm64 container won't work because now the kernel will actually try to open the qemu emulator in the container's root filesystem:
$ docker run --rm --privileged multiarch/qemu-user-static --reset
[...]
$ cat /proc/sys/fs/binfmt_misc/qemu-aarch64
enabled
interpreter /usr/bin/qemu-aarch64-static
flags:
offset 0
magic 7f454c460201010000000000000000000200b700
mask ffffffffffffff00fffffffffffffffffeffffff
$ docker run --rm -t arm64v8/ubuntu uname -m
WARNING: The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64) and no specific platform was requested
exec /usr/bin/uname: no such file or directory
failed to resize tty, using default size

How to build docker-ce from source on macOS

Does anyone know if there's a guide to building from source and replacing the docker binary on Mac with it?
The readme doesn't say, so I tried some make targets but got https://github.com/docker/for-mac/issues/3353
Edited
What I was trying to do, to be exact, is to debug the docker cli to see why auth didn't work for a single developer in my former company, even though every factor was checked and verified to be correct.
To do this, check out the docker cli repo (it was confusing at first which part of docker lives where). The cli is at:
git@github.com:docker/cli.git
Build it (this builds distributions for all platforms), assuming you already have make:
make -f docker.Makefile binary cross
Then either use the resulting binary (this one is for Mac), for example:
build/docker-darwin-amd64 pull mysql
Or backup and replace your original /usr/local/bin/docker with the binary above.
Since on macOS you're not going to run the engine, you may try a different approach. Building the docker client using the Makefile requires a docker engine and a docker client, which you may not have.
I'm building docker (the client) from the docker/cli repository as a plain Go project:
Clone the repo:
$ git clone https://github.com/docker/cli.git
Build master, or checkout a specific tag:
$ git checkout v19.03.6
cd into the repo, create a build directory, and create the required Go project structure:
$ cd cli
$ mkdir -p build/src/github.com/
$ cd build/src/github.com/
$ ln -s ../../.. cli
cd into the build directory and set the GOPATH:
$ cd ../..
$ export GOPATH=$(pwd)
Build the docker client:
$ go build github.com/docker/cli/cmd/docker
Copy the binary from the build directory, e.g.:
$ cp docker /usr/local/bin
You'll notice that some build-related information is not set:
./docker version
Client:
Version: unknown-version
API version: 1.40
Go version: go1.13.8
Git commit: unknown-commit
Built: unknown-buildtime
OS/Arch: darwin/amd64
Experimental: false
You can pass a suitable -ldflags argument to set those variables as in:
$ go build \
-ldflags \
"-X github.com/docker/cli/cli/version.GitCommit=${docker_gitcommit} \
-X github.com/docker/cli/cli/version.Version=${version} \
-X \"github.com/docker/cli/cli/version.BuildTime=${build_time}\""
provided you have set the docker_gitcommit, version and build_time variables. The escaped quotes in the third flag are required if build_time contains spaces (as the upstream docker binaries do).
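For example, one plausible way to populate those three variables before running the go build (only the variable names come from the flags above; the value formats are assumptions, with placeholders when run outside a git checkout):

```shell
# Derive values for the -X assignments; fall back to placeholders outside a git repo.
docker_gitcommit=$(git rev-parse --short HEAD 2>/dev/null || echo unknown-commit)
version=$(git describe --tags 2>/dev/null || echo unknown-version)
build_time=$(date -u +'%a %b %e %T %Y')  # contains spaces, hence the escaped quotes
echo "commit=${docker_gitcommit} version=${version} built=${build_time}"
```
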
Hope this helps.

How to serve a tensorflow model using docker image tensorflow/serving when there are custom ops?

I'm trying to use the tf-sentencepiece operation in my model found here https://github.com/google/sentencepiece/tree/master/tensorflow
There is no issue building the model and getting a saved_model.pb file with variables and assets. However, if I try to use the docker image for tensorflow/serving, it says
Loading servable: {name: model version: 1} failed:
Not found: Op type not registered 'SentencepieceEncodeSparse' in binary running on 0ccbcd3998d1.
Make sure the Op and Kernel are registered in the binary running in this process.
Note that if you are loading a saved graph which used ops from tf.contrib, accessing
(e.g.) `tf.contrib.resampler` should be done before importing the graph,
as contrib ops are lazily registered when the module is first accessed.
I am unfamiliar with how to build anything manually, and was hoping that I could do this without many changes.
One approach would be to:
Pull a docker development image
$ docker pull tensorflow/serving:latest-devel
In the container, make your code changes
$ docker run -it tensorflow/serving:latest-devel
Modify the code to add the op dependency here.
In the container, build TensorFlow Serving
container:$ bazel build //tensorflow_serving/model_servers:tensorflow_model_server && cp bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server /usr/local/bin/
Use the exit command to exit the container
Look up the container ID:
$ docker ps
Use that container ID to commit the development image:
$ docker commit <CONTAINER ID> $USER/tf-serving-devel-custom-op
Now build a serving container using the development container as the source
$ mkdir /tmp/tfserving
$ cd /tmp/tfserving
$ git clone https://github.com/tensorflow/serving .
$ docker build -t $USER/tensorflow-serving --build-arg TF_SERVING_BUILD_IMAGE=$USER/tf-serving-devel-custom-op -f tensorflow_serving/tools/docker/Dockerfile .
You can now use $USER/tensorflow-serving to serve your model following the Docker instructions

Docker TICK Sandbox does not provide UDF Python functionality

I'm running this docker image to use the TICK Kapacitor locally.
The problem I face is that when I try to use User Defined Functions, e.g. any of these examples, I get the error message that /usr/bin/python2 does not exist.
I add the following to the kapacitor.conf:
[udf.functions]
[udf.functions.tTest]
prog = "/usr/bin/python2"
args = ["-u", "/tmp/kapacitor_udf/mirror.py"]
timeout = "10s"
[udf.functions.tTest.env]
PYTHONPATH = "/tmp/kapacitor_udf/kapacitor/udf/agent/py"
Further attempts on my side, including altering the image used to build Kapacitor so that python gets installed, work, but the agent seems to fail to compile anyway.
Is there anyone who managed to get UDFs running using the Kapacitor Docker image?
Thanks
The Docker image from the official repository (docker pull kapacitor) does not have python installed inside. You can verify this by running a shell in the container:
PS> docker exec -it kapacitor bash
and executing one of the following commands:
$ python --version
python: command not found
or
$ readlink -f $(which python) | xargs -I% sh -c 'echo -n "%:"; % -V'
readlink: missing operand
or
$ find / -type f -executable -iname 'python*'
Nothing is returned. Conversely, if python were available, these commands would return its version and a list of executable files.
Note: here and below, all host command snippets are given for PowerShell on Windows, and all snippets inside the docker container are given for bash, as Linux images are used.
Basically, there are two options to get a kapacitor image with python inside to execute UDFs:
Install python in the kapacitor image, i.e. build a new docker image from the kapacitor image itself.
An example can be found here:
Build a new version of the kapacitor image from one of the official python images
The second option is more natural, as you get a consistent python installation and avoid redoing the work of installing python that the docker community has already done.
So, following option 2, we'll:
Examine the Dockerfile of the official kapacitor image
Choose an appropriate python image
Create a new project and Dockerfile for kapacitor
Build and check the kapacitor image
Examine the Dockerfile of the official kapacitor image
General note:
For any image, the original Dockerfiles can be obtained in this way:
https://hub.docker.com/
-> Description Tab
-> Supported tags and respective Dockerfile links section
-> each of the tags is a link that leads to the Dockerfile
So for kapacitor everything is in the influxdata-docker git repository
Then in the Dockerfile we find that the image is created based on
FROM buildpack-deps:stretch-curl
here:
buildpack-deps
the image provided by the project of the same name https://hub.docker.com/_/buildpack-deps
curl
This variant includes just the curl, wget, and ca-certificates packages. This is perfect for cases like the Java JRE, where downloading JARs is very common and
  necessary, but checking out code isn't.
stretch
short version name of the OS, in this case Debian 9 stretch https://www.debian.org/News/2017/20170617
Buildpack-deps images are in turn built based on
FROM debian:stretch
And Debian images from the minimal docker image
FROM scratch
Choose appropriate python image
Among python images, for example 3.7, you can find similar versions inheriting from buildpack-deps
FROM buildpack-deps:stretch
Following the inheritance, we'll see:
FROM buildpack-deps:stretch
FROM buildpack-deps:stretch-scm
FROM buildpack-deps:stretch-curl
FROM debian:stretch
In other words, the python:3.7-stretch image only adds functionality to Debian compared to the kapacitor image.
This means that we can rebuild the kapacitor image on top of python:3.7-stretch with no risk of incompatibility.
Docker context folder preparation
Clone the repository
https://github.com/influxdata/influxdata-docker.git
Create the folder influxdata-docker/kapacitor/1.5/udf_python/python3.7
Copy the following three files into it from influxdata-docker/kapacitor/1.5/:
Dockerfile
entrypoint.sh
kapacitor.conf
In the copied Dockerfile, replace FROM buildpack-deps:stretch-curl with FROM python:3.7-stretch
Be careful! If you work on Windows and, out of scientific curiosity, open the entrypoint.sh file in the project folder, make sure your editor does not change the line endings from the Linux variant (LF) to the Windows variant (CR LF).
Otherwise, when you later start the container, you'll get an error in the container log:
exec: bad interpreter: No such file or directory
or, if you start debugging and run the container with bash, you'll see:
root@d4022ac550d4:/# exec /entrypoint_.sh
bash: /entrypoint_.sh: /bin/bash^M: bad interpreter: No such file or directory
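If the file has already been mangled, stripping the trailing carriage returns is enough to repair it. A quick sketch on a demo file (the filename is hypothetical; dos2unix entrypoint.sh achieves the same):

```shell
# Simulate a CRLF-mangled script, then strip the trailing \r from every line.
printf '#!/bin/bash\r\necho hello\r\n' > entrypoint_demo.sh
sed -i 's/\r$//' entrypoint_demo.sh
# The shebang no longer ends in ^M, so the interpreter can be found again.
```
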
Building
Run PS> docker build -f .\Dockerfile -t kapacitor_python_udf .
Again, in case of Windows environment
If during the build execution an error occurs of the form:
E: Release file for http://security.ubuntu.com/ubuntu/dists/bionic-security/InRelease is not valid yet (invalid for another 9h 14min 10s). Updates for this repository will not be applied.
then your computer clock has probably gone out of sync and/or Docker Desktop incorrectly initialized the time after the system returned from sleep (see the issue).
To fix it, restart Docker Desktop and/or go to Windows settings -> Date and time settings -> Clock synchronization -> perform Sync
You can also read more here
Launch and check
Launch the container with the same actions as for the standard image. Example:
PS> docker run --name=kapacitor -d `
--net=influxdb-network `
-h kapacitor `
-p 9092:9092 `
-e KAPACITOR_INFLUXDB_0_URLS_0=http://influxdb:8086 `
-v ${PWD}:/var/lib/kapacitor `
-v ${PWD}/kapacitor.conf:/etc/kapacitor/kapacitor.conf:ro `
kapacitor
Check:
PS> docker exec -it kapacitor bash
$ python --version
Python 3.7.7
$ readlink -f $(which python) | xargs -I% sh -c 'echo -n "%:"; % -V'
/usr/local/bin/python3.7: Python 3.7.7

How can you cache gradle inside docker?

I'm trying to cache the things that my gradle build downloads each time it runs. For that, I try to mount a volume with the -v option, like -v gradle_cache:/root/.gradle
The thing is, each time I rerun the build with the exact same command, it still downloads everything again. The full command I use to run the image is
sudo docker run --rm -v gradle_cache:/root/.gradle -v "$PWD":/home/gradle/project -w /home/gradle/project gradle:jdk8-alpine gradle jar
I also checked the directory where docker saves the volume contents, /var/lib/docker/volumes/gradle_cache/_data, but that is also empty.
my console log
What am I missing to make this work?
Edit: As per request, I reran the command with the --scan option,
and also with a different gradle home:
$ sudo docker run --rm -v gradle_cache:/root/.gradle -v "$PWD":/home/gradle/project -w /home/gradle/project gradle:jdk8-alpine gradle jar --gradle-user-home /root/.gradle
FAILURE: Build failed with an exception.
* What went wrong:
Failed to load native library 'libnative-platform.so' for Linux amd64.
After looking at the Dockerfile of the container I'm using, I found out that the right option to use is -v gradle_cache:/home/gradle/.gradle.
What made me think that the files were cached in /root/.gradle is that the Dockerfile also sets up /root/.gradle as a symlink to /home/gradle/.gradle:
ln -s /home/gradle/.gradle /root/.gradle
So inspecting the filesystem after a build made it look like the files were stored there.
Since 6.2.1, Gradle now supports a shared, read-only dependency cache for this scenario:
It’s a common practice to run builds in ephemeral containers. A container is typically spawned to only execute a single build before it is destroyed. This can become a practical problem when a build depends on a lot of dependencies which each container has to re-download. To help with this scenario, Gradle provides a couple of options:
copying the dependency cache into each container
sharing a read-only dependency cache between multiple containers
https://docs.gradle.org/current/userguide/dependency_resolution.html#sub:ephemeral-ci-cache describes the steps to create and use the shared cache.
Alternatively, to have more control over the cache directory, you can use this:
ENV GRADLE_USER_HOME /path/to/custom/cache/dir
VOLUME $GRADLE_USER_HOME
