I am building Docker from this version of the source code:
https://github.com/boucher/docker/tree/cr-combined
After cloning the code:
git clone -b cr-combined --single-branch https://github.com/boucher/docker.git
cd docker
make build
make binary
I then copied the resulting file ./bundles/../docker to /usr/bin.
After reopening the terminal and starting the Docker engine again,
it shows that I am using my own built version, but
this version should have two main docker commands that don't show up in my build:
1- checkpoint
2- restore
Could you please help me figure out where it went wrong?
Here is what I do:
$ git clone https://github.com/boucher/docker
$ cd docker
$ git checkout cr-combined
$ env AUTO_GOPATH=1 DOCKER_EXPERIMENTAL=1 \
DOCKER_BUILDTAGS='exclude_graphdriver_btrfs \
exclude_graphdriver_devicemapper' ./hack/make.sh binary
$ ./bundles/1.10.0-dev/binary/docker-1.10.0-dev --help | grep checkpoint
checkpoint Checkpoint one or more running containers
restore Restore one or more checkpointed containers
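For reference, once the experimental binary is in place, usage looks roughly like this. This is only a sketch: the container name and the looping workload are made up for illustration, so check docker checkpoint --help and docker restore --help in your build for the exact options.
$ docker run -d --name looper busybox \
    /bin/sh -c 'i=0; while true; do echo $i; i=$((i+1)); sleep 1; done'
$ docker checkpoint looper   # freeze the running container's state to disk
$ docker restore looper      # resume the container from that checkpoint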
Hope this helps.
Related
I have this docker file:
FROM alpine/git AS git
RUN apk fix && apk --no-cache --update add zip
VOLUME /root
WORKDIR /root
FROM minio/mc AS minio
COPY --from=git /usr/bin/git /usr/bin/git
COPY --from=git /usr/bin/zip /usr/bin/zip
ENTRYPOINT ["sh"]
Basically I need the following tools:
git
minio client
zip
Per the docs I read, the above should work, and it builds without an error.
But when I try to execute anything I get:
Sending build context to Docker daemon 8.192kB
Step 1/8 : FROM alpine/git AS git
---> 22d84a66cda4
Step 2/8 : RUN apk fix && apk --no-cache --update add zip
---> Using cache
---> 5ce4d94085d9
Step 3/8 : VOLUME /root
---> Using cache
---> 89329e40cbba
Step 4/8 : WORKDIR /root
---> Using cache
---> 37f2c9216bb1
Step 5/8 : FROM minio/mc AS minio
---> 396036e5ac42
Step 6/8 : COPY --from=git /usr/bin/git /usr/bin/git
---> 543676360b6d
Step 7/8 : COPY --from=git /usr/bin/zip /usr/bin/zip
---> 936165b36d2e
Step 8/8 : ENTRYPOINT ["sh"]
---> Running in 4ee0c4b491f9
Removing intermediate container 4ee0c4b491f9
---> 1279af2dd755
Successfully built 1279af2dd755
Successfully tagged fabric:latest
[centos@ip-10-6-5-12 ~]$ sudo docker run -it fabric
sh-4.4# which zip
sh: which: command not found
sh-4.4# zip
sh: /usr/bin/zip: No such file or directory
sh-4.4# which git
sh: which: command not found
sh-4.4# git
sh: /usr/bin/git: No such file or directory
sh-4.4# git
sh: /usr/bin/git: No such file or directory
sh-4.4# apt
sh: apt: command not found
sh-4.4# ls -al /usr/bin/git
-rwxr-xr-x. 1 root root 2911912 Oct 19 04:51 /usr/bin/git
So the file is there, root (me) owns it, and yet it says "No such file or directory"? I am perplexed.
Anyone know what's up?
Conclusion
Don't use COPY to add git or zip: they have dependencies, and copying just the binaries is not enough.
Dockerfile
FROM minio/mc AS minio
RUN \
microdnf update --nodocs && \
microdnf install git zip --nodocs && \
microdnf clean all
ENTRYPOINT ["sh"]
Install Log
...
Installing: gzip;1.9-13.el8_5;x86_64;ubi-8-baseos-rpms
Installing: cracklib;2.9.6-15.el8;x86_64;ubi-8-baseos-rpms
Installing: cracklib-dicts;2.9.6-15.el8;x86_64;ubi-8-baseos-rpms
Installing: libpwquality;1.4.4-5.el8;x86_64;ubi-8-baseos-rpms
Installing: pam;1.3.1-22.el8;x86_64;ubi-8-baseos-rpms
Installing: util-linux;2.32.1-38.el8;x86_64;ubi-8-baseos-rpms
Installing: openssh;8.0p1-16.el8;x86_64;ubi-8-baseos-rpms
Installing: openssh-clients;8.0p1-16.el8;x86_64;ubi-8-baseos-rpms
Installing: less;530-1.el8;x86_64;ubi-8-baseos-rpms
Installing: git-core;2.31.1-2.el8;x86_64;ubi-8-appstream-rpms
Installing: git-core-doc;2.31.1-2.el8;noarch;ubi-8-appstream-rpms
Installing: perl-Git;2.31.1-2.el8;noarch;ubi-8-appstream-rpms
Installing: git;2.31.1-2.el8;x86_64;ubi-8-appstream-rpms
Installing: zip;3.0-23.el8;x86_64;ubi-8-baseos-rpms
From the installation log, you can see that git and zip pull in other dependent packages that need to be installed.
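If you want to inspect those dependencies yourself, and assuming rpm is available in the image (it usually is in UBI-based images), you can list what each package requires (output omitted here):
sh-4.4# rpm -q --requires git
sh-4.4# rpm -q --requires zip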
Run it
$ docker run -it fabric
sh-4.4# git
usage: git [--version] [--help] [-C <path>] [-c <name>=<value>]
[--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
[-p | --paginate | -P | --no-pager] [--no-replace-objects] [--bare]
[--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
[--super-prefix=<path>] [--config-env=<name>=<envvar>]
<command> [<args>]
These are common Git commands used in various situations:
start a working area (see also: git help tutorial)
clone Clone a repository into a new directory
init Create an empty Git repository or reinitialize an existing one
work on the current change (see also: git help everyday)
add Add file contents to the index
mv Move or rename a file, a directory, or a symlink
restore Restore working tree files
rm Remove files from the working tree and from the index
sparse-checkout Initialize and modify the sparse-checkout
examine the history and state (see also: git help revisions)
bisect Use binary search to find the commit that introduced a bug
diff Show changes between commits, commit and working tree, etc
grep Print lines matching a pattern
log Show commit logs
show Show various types of objects
status Show the working tree status
grow, mark and tweak your common history
branch List, create, or delete branches
commit Record changes to the repository
merge Join two or more development histories together
rebase Reapply commits on top of another base tip
reset Reset current HEAD to the specified state
switch Switch branches
tag Create, list, delete or verify a tag object signed with GPG
collaborate (see also: git help workflows)
fetch Download objects and refs from another repository
pull Fetch from and integrate with another repository or a local branch
push Update remote refs along with associated objects
'git help -a' and 'git help -g' list available subcommands and some
concept guides. See 'git help <command>' or 'git help <concept>'
to read about a specific subcommand or concept.
See 'git help git' for an overview of the system.
sh-4.4# zip
Copyright (c) 1990-2008 Info-ZIP - Type 'zip "-L"' for software license.
Zip 3.0 (July 5th 2008). Usage:
zip [-options] [-b path] [-t mmddyyyy] [-n suffixes] [zipfile list] [-xi list]
The default action is to add or replace zipfile entries from list, which
can include the special name - to compress standard input.
If zipfile and list are omitted, zip compresses stdin to stdout.
-f freshen: only changed files -u update: only changed or new files
-d delete entries in zipfile -m move into zipfile (delete OS files)
-r recurse into directories -j junk (don't record) directory names
-0 store only -l convert LF to CR LF (-ll CR LF to LF)
-1 compress faster -9 compress better
-q quiet operation -v verbose operation/print version info
-c add one-line comments -z add zipfile comment
-# read names from stdin -o make zipfile as old as latest entry
-x exclude the following names -i include only the following names
-F fix zipfile (-FF try harder) -D do not add directory entries
-A adjust self-extracting exe -J junk zipfile prefix (unzipsfx)
-T test zipfile integrity -X eXclude eXtra file attributes
-y store symbolic links as the link instead of the referenced file
-e encrypt -n don't compress these suffixes
-h2 show more help
sh-4.4# exit
Docker image size
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
fabric 1.0.0 d2415a18eb37 8 minutes ago 264MB
fabric latest d2415a18eb37 8 minutes ago 264MB
You've copied binaries that are linked against dynamic libraries that don't exist in your target image:
sh-4.4# type zip
zip is /usr/bin/zip
sh-4.4# ldd /usr/bin/zip
linux-vdso.so.1 (0x00007ffed7553000)
libc.musl-x86_64.so.1 => not found
sh-4.4# type git
git is /usr/bin/git
sh-4.4# ldd /usr/bin/git
linux-vdso.so.1 (0x00007ffdd8da8000)
libpcre2-8.so.0 => /lib64/libpcre2-8.so.0 (0x00007f56c752e000)
libz.so.1 => /lib64/libz.so.1 (0x00007f56c7316000)
libc.musl-x86_64.so.1 => not found
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f56c70f6000)
libc.so.6 => /lib64/libc.so.6 (0x00007f56c6d30000)
/lib/ld-musl-x86_64.so.1 => /lib64/ld-linux-x86-64.so.2 (0x00007f56c7aa6000)
When copying binaries into an image, they either need to be statically linked, or you need to ensure that all of the libraries they link against already exist in the target image. Linux package managers handle this for you; in the target image you've picked, that package manager is microdnf:
microdnf install zip git
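As an optional sanity check after installing from the package manager, you can rerun ldd on the binaries; every shared library should now resolve to a path, with no "not found" lines:
sh-4.4# ldd /usr/bin/git
sh-4.4# ldd /usr/bin/zip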
I'm trying to create a Docker image from a pretty large installer binary (300+ MB). I want to add the installer to the image, install it, and delete the installer. This doesn't seem to be possible:
COPY huge-installer.bin /tmp
RUN /tmp/huge-installer.bin
RUN rm /tmp/huge-installer.bin # <- has no effect on the image size
Using multiple build stages doesn't seem to solve this, since I need to run the installer in the final image. If I could execute the installer directly from a previous build stage, without copying it, that would solve my problem, but as far as I know that's not possible.
Is there any way to avoid including the full weight of the installer in the final image?
I ended up solving this by using the built-in HTTP server in Python to make the project directory available to the image over HTTP.
Inside the Dockerfile, I can run commands like this, piping scripts directly to bash using curl:
RUN curl "http://127.0.0.1:${SERVER_PORT}/installer-${INSTALLER_VERSION}.bin" | bash
Or save binaries, run them and delete them in one step:
RUN curl -O "http://127.0.0.1:${SERVER_PORT}/binary-${INSTALLER_VERSION}.bin" && \
./binary-${INSTALLER_VERSION}.bin && \
rm binary-${INSTALLER_VERSION}.bin
I use a Makefile to start the server and stop it after the build, but you can use a build script instead.
Here's a Makefile example:
SHELL := bash
IMAGE_NAME := app-test
VERSION := 1.0.0
SERVER_PORT := 8580
.ONESHELL:
.PHONY: build
build:
# Kills the HTTP server when the build is done
function cleanup {
pkill -f "python3 -m http.server.*${SERVER_PORT}"
}
trap cleanup EXIT
# Starts a HTTP server that makes the contents of the project directory
# available to the image
python3 -m http.server -b 127.0.0.1 ${SERVER_PORT} &>/dev/null &
sleep 1
EXTRA_ARGS=""
# Allows skipping the build cache by setting NO_CACHE=1
if [[ -n $$NO_CACHE ]]; then
EXTRA_ARGS="--no-cache"
fi
docker build $$EXTRA_ARGS \
--network host \
--build-arg SERVER_PORT=${SERVER_PORT} \
-t ${IMAGE_NAME}:latest \
.
docker tag ${IMAGE_NAME}:latest ${IMAGE_NAME}:${VERSION}
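For completeness, the Dockerfile side of this setup could look roughly like the following sketch. The base image, installer filename and build arguments are placeholders chosen to match the Makefile above, not something from the original question:
FROM centos:7
ARG SERVER_PORT
ARG INSTALLER_VERSION=1.0.0
# Download, run and delete the installer in a single RUN so the binary
# never ends up in an image layer. Requires building with --network host.
RUN curl -O "http://127.0.0.1:${SERVER_PORT}/binary-${INSTALLER_VERSION}.bin" && \
    chmod +x binary-${INSTALLER_VERSION}.bin && \
    ./binary-${INSTALLER_VERSION}.bin && \
    rm binary-${INSTALLER_VERSION}.bin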
I think the best way is to download the bin from a website and then run it, all in the same layer:
RUN wget -O /tmp/huge-installer.bin http://myweb/huge-installer.bin && \
    chmod +x /tmp/huge-installer.bin && /tmp/huge-installer.bin && rm /tmp/huge-installer.bin
This way the image layer will not contain the binary you downloaded.
I didn't test it thoroughly, but wouldn't an approach like the following be viable? (Besides LinPy's answer, which is much easier if you can just do it that way.)
Dockerfile:
FROM alpine:latest
COPY entrypoint.sh /tmp/entrypoint.sh
RUN \
echo "I am an image that can run your huge installer binary!" \
&& echo "I will only function when you give it to me as a volume mount."
ENTRYPOINT [ "/tmp/entrypoint.sh" ]
entrypoint.sh:
#!/bin/sh
/tmp/your-installer # install your stuff here
while true; do
echo "installer finished, commit me now!"
sleep 5
done
Then run:
$ docker build -t foo-1 .
$ docker run --name foo-1 --rm -d -v $(pwd)/your-installer:/tmp/your-installer foo-1
$ docker logs -f foo-1
# once it echoes "commit me now!", run the next command
$ docker commit foo-1 foo-2
$ docker stop foo-1
Since the installer was only mounted as a volume, the image foo-2 should not contain it anymore. You could also go and build another Dockerfile based on foo-2 to change the entrypoint, for example.
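Such a follow-up Dockerfile could be as small as this sketch (the entrypoint is just an example; point it at whatever the installer set up):
FROM foo-2
# Replace the "commit me now!" loop with a normal shell entrypoint
ENTRYPOINT ["/bin/sh"]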
Cf. docker commit
Does anyone know if there's a guide to building from source and replacing the docker binary on Mac with it?
The README doesn't say, so I tried some make targets but ran into https://github.com/docker/for-mac/issues/3353
Edited
What I was trying to do, to be exact, was to debug the docker CLI to see why auth didn't work for a single developer at my former company, even though every factor had been checked and verified to be correct.
To do this, check out the docker CLI repo (it was confusing at first which parts of Docker live where). The CLI is at:
git@github.com:docker/cli.git
Build it (this builds distributions for all platforms), assuming you already have make:
make -f docker.Makefile binary cross
Then either use the resulting binary directly (this one is for Mac), for example:
build/docker-darwin-amd64 pull mysql
Or back up and replace your original /usr/local/bin/docker with the binary above.
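A minimal sketch of that backup-and-replace step (you may need sudo, and the paths assume a default Docker Desktop install):
$ cp /usr/local/bin/docker /usr/local/bin/docker.orig
$ cp build/docker-darwin-amd64 /usr/local/bin/docker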
Since on macOS you're not going to run the engine, you may want to try a different approach. Building the docker client using the Makefile requires a Docker engine and a docker client, which you may not have.
I'm building docker (the client) from the docker/cli repository as a plain Go project:
Clone the repo:
$ git clone https://github.com/docker/cli.git
Build master, or checkout a specific tag:
$ git checkout v19.03.6
cd into the repo, create a build directory, and create the required Go project structure:
$ cd cli
$ mkdir -p build/src/github.com/
$ cd build/src/github.com/
$ ln -s ../../.. cli
cd into the build directory and set the GOPATH:
$ cd ../..
$ export GOPATH=$(pwd)
Build the docker client:
$ go build github.com/docker/cli/cmd/docker
Copy the binary from the build directory, e.g.:
$ cp docker /usr/local/bin
You'll notice that some build-related information is not set:
./docker version
Client:
Version: unknown-version
API version: 1.40
Go version: go1.13.8
Git commit: unknown-commit
Built: unknown-buildtime
OS/Arch: darwin/amd64
Experimental: false
You can pass a suitable -ldflags argument to set those variables as in:
$ go build \
-ldflags \
"-X github.com/docker/cli/cli/version.GitCommit=${docker_gitcommit} \
-X github.com/docker/cli/cli/version.Version=${version} \
-X \"github.com/docker/cli/cli/version.BuildTime=${build_time}\""
provided you have set the docker_gitcommit, version and build_time variables. The escaped quotes in the third flag are required if build_time contain spaces (as the upstream docker binaries do).
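For example, you might derive the three variables from git and date before building; this is only a sketch, and the exact values and formats are up to you:
$ version=$(git describe --tags)
$ docker_gitcommit=$(git rev-parse --short HEAD)
$ build_time=$(date -u '+%a %b %e %H:%M:%S %Y')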
Hope this helps.
I'm trying to use the tf-sentencepiece operation in my model found here https://github.com/google/sentencepiece/tree/master/tensorflow
There is no issue building the model and getting a saved_model.pb file with variables and assets. However, if I try to use the docker image for tensorflow/serving, it says
Loading servable: {name: model version: 1} failed:
Not found: Op type not registered 'SentencepieceEncodeSparse' in binary running on 0ccbcd3998d1.
Make sure the Op and Kernel are registered in the binary running in this process.
Note that if you are loading a saved graph which used ops from tf.contrib, accessing
(e.g.) `tf.contrib.resampler` should be done before importing the graph,
as contrib ops are lazily registered when the module is first accessed.
I am unfamiliar with how to build anything manually, and was hoping that I could do this without many changes.
One approach would be to:
Pull a docker development image
$ docker pull tensorflow/serving:latest-devel
In the container, make your code changes
$ docker run -it tensorflow/serving:latest-devel
Modify the code to add the op dependency here.
In the container, build TensorFlow Serving
container:$ bazel build -c opt tensorflow_serving/model_servers:tensorflow_model_server && cp bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server /usr/local/bin/
Use the exit command to exit the container
Look up the container ID:
$ docker ps
Use that container ID to commit the development image:
$ docker commit <container id> $USER/tf-serving-devel-custom-op
Now build a serving container using the development container as the source
$ mkdir /tmp/tfserving
$ cd /tmp/tfserving
$ git clone https://github.com/tensorflow/serving .
$ docker build -t $USER/tensorflow-serving --build-arg TF_SERVING_BUILD_IMAGE=$USER/tf-serving-devel-custom-op -f tensorflow_serving/tools/docker/Dockerfile .
You can now use $USER/tensorflow-serving to serve your model, following the Docker instructions.
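For reference, serving a model with the custom-built image then looks roughly like the standard TensorFlow Serving invocation; the model path and name below are placeholders:
$ docker run -p 8501:8501 \
    --mount type=bind,source=/path/to/my_model,target=/models/my_model \
    -e MODEL_NAME=my_model -t $USER/tensorflow-serving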
I am seeing a build failure on travis-ci, which I cannot reproduce on my local machine. Are there instructions somewhere for setting up a VM that is identical to the travis-ci linux build environment? I'm glad to have travis-ci already reveal a new bug, but less excited to debug it by sending in commits that add debug code.
For container-based builds, there are now instructions on how to setup a docker image locally.
Unfortunately, quite a few steps are still manual. Here are the commands you need to get it up and running:
# change the image according to the language chosen in .travis.yml
$ docker run -it -u travis quay.io/travisci/travis-jvm /bin/bash
# now that you are in the docker image, switch to the travis user
sudo su - travis
# Install a recent ruby (default is 1.9.3)
rvm install 2.3.0
rvm use 2.3.0
# Install travis-build to generate a .sh out of .travis.yml
cd builds
git clone https://github.com/travis-ci/travis-build.git
cd travis-build
gem install travis
travis # to create ~/.travis
ln -s `pwd` ~/.travis/travis-build
bundle install
# Create project dir, assuming your project is `me/project` on GitHub
cd ~/builds
mkdir me
cd me
git clone https://github.com/me/project.git
cd project
# change to the branch or commit you want to investigate
travis compile > ci.sh
# You most likely will need to edit ci.sh as it ignores matrix and env
bash ci.sh
You can use Travis Build, a library (which means you have to place it in ~/.travis/), to generate a shell-based build script (travis compile), which can then be uploaded to the VMs over SSH and executed.
The steps below are just guidance to get you on the right track (if anything is missing, let me know).
Docker
Example command to run a container (images can be found on Docker Hub):
docker run -it travisci/ubuntu-ruby:18.04 /bin/bash
Run your container, clone your repository, then test it manually; a minimal sketch follows below.
See: Running a Container Based Docker Image Locally
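A minimal sketch of that flow (the image matches the one above; the repository name is an example):
$ docker run --name travis-debug -it travisci/ubuntu-ruby:18.04 /bin/bash
# inside the container:
git clone --depth 1 https://github.com/me/project.git
cd project
# re-run the failing build steps from your .travis.yml by hand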
SSH access
Check out this answer. Basically, you need to set up a bounce host, then configure your build to open an SSH tunnel.
Here is the example .travis.yml:
sudo: required
dist: trusty
language: python
python: "2.7"
script:
- echo travis:$sshpassword | sudo chpasswd
- sudo sed -i 's/ChallengeResponseAuthentication no/ChallengeResponseAuthentication yes/' /etc/ssh/sshd_config
- sudo service ssh restart
- sudo apt-get install sshpass
- sshpass -p $sshpassword ssh -R 9999:localhost:22 -o StrictHostKeyChecking=no travisci@$bouncehostip
Local setup
Here are the steps to test it on your local environment:
cd ~
git clone https://github.com/travis-ci/travis-build.git
ln -s ~/travis-build/ ~/.travis/travis-build
sudo gem install bundler
bundle install --gemfile ~/.travis/travis-build/Gemfile
cd repo-dir/
travis login -g <github_token>
vim .travis.yml
travis lint # to validate script
travis compile # to transform into shell script
Vagrant/VM
After you run travis compile, which produces a bash script from your .travis.yml, you can use Vagrant to run this script in a virtualized environment using the provided Vagrantfile and the following steps:
vagrant up
vagrant ssh
cd /vagrant
bundle exec rspec spec
You probably need to install more tools in order to test it.
Here is a git hint that avoids generating unnecessary commits while doing trial-and-error commits for Travis CI testing:
Fork the repo (or use separate branch).
After the initial commit, keep using --amend to replace your previous commit:
git commit --amend -m 'Same message.' -a
Push the amended commit by force (e.g. into an already opened PR):
git push fork -f
Now Travis CI would re-check the same commit over and over again.
See also: How to run travis-ci locally.
I'm facing the same issue right now. I used CircleCI before, where you could just log in to the VM via SSH, but this doesn't work with Travis-CI VMs.
I was able to debug it (to a certain point) by setting up a Travis-CI VM clone via Travis-Cookbooks. You will need to install VirtualBox and Vagrant on your computer before cloning that repository.
Once you have Travis-Cookbooks cloned, open the folder, launch a command prompt/terminal and type vagrant up. Once Vagrant finishes setting up the VM (which may take a long time), you can connect to it via SSH by running vagrant ssh.
From there, you would need to clone your own repository (or just copy the code to the VM) and apply the steps from your .travis.yml file.
Eregon's answer failed for me at travis compile; the error looks like:
/home/travis/.rvm/rubies/ruby-2.3.0/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require': cannot load such file -- travis/support (LoadError)
I got it working with the following adjustments (marked with # CHANGED; I'm using the Node environment):
# change the image according to the language chosen in .travis.yml
# Find images at https://quay.io/organization/travisci
docker run -it quay.io/travisci/travis-node-js /bin/bash
# now that you are in the docker image, switch to the travis user
su travis
# Install a recent ruby (default is 1.9.3) to make bundle install work
rvm install 2.3.0
rvm use 2.3.0
# Install travis-build to generate a .sh out of .travis.yml
sudo mkdir builds # CHANGED
cd builds
sudo git clone https://github.com/travis-ci/travis-build.git
cd travis-build
gem install travis
travis # to create ~/.travis
ln -s `pwd` ~/.travis/travis-build
bundle install
bundler add travis # CHANGED
sudo mkdir bin # CHANGED
sudo chmod a+w bin/ # CHANGED
bundler binstubs travis # CHANGED
# Create project dir, assuming your project is `me/project` on GitHub
cd ~/builds
mkdir me
cd me
git clone https://github.com/me/project.git
cd project
# change to the branch or commit you want to investigate
~/.travis/travis-build/bin/travis compile > ci.sh # CHANGED
# You most likely will need to edit ci.sh as it ignores matrix and env
# In particular, I needed to change --branch='' to the branch name
bash ci.sh