Docker - require specific docker service version

I currently have a bug, maybe in my code, maybe in the Docker base images, maybe even in Docker itself, but I know for sure that my app works great on docker-ce 17.09 and hangs after some time on docker-ce 17.12.
Is there any way to specify the Docker version in the Dockerfile or in docker-compose.yml, so that the app throws an error when someone tries to build it on an unsupported Docker version?
I understand this is not a good long-term solution and that I need to track down the bug, but as a temporary workaround this error message is enough for me.

I think there is no direct Docker mechanism for this. However, you can pass the Docker version to your Dockerfile with an ARG and then add a RUN command that checks whether it is the required version. To cancel the build process, the check has to exit with a status other than 0.
Build your image with this line:
docker_version=$(docker version --format "{{.Server.Version}}") \
&& docker build -t my_image --build-arg DOCKER_VERSION=$docker_version .
Then, in your Dockerfile, check whether it is the required Docker version:
FROM debian
ARG DOCKER_VERSION
# Use POSIX [ ] instead of bash-only [[ ]]: Debian's default /bin/sh is dash
RUN [ "$DOCKER_VERSION" = "17.12.0-ce" ] && echo "YES" || exit 1
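If you want the guard to accept a whole known-good release series rather than one exact version string, a prefix match works in plain POSIX sh. This is a sketch under the assumption that the 17.09 series is the known-good one; GOOD_PREFIX is an illustrative name, and DOCKER_VERSION would come from --build-arg as above:

```shell
#!/bin/sh
# Fail the build unless the Docker server version starts with the
# known-good prefix
GOOD_PREFIX="17.09"
DOCKER_VERSION="17.09.1-ce"   # passed via --build-arg in a real build
case "$DOCKER_VERSION" in
  "$GOOD_PREFIX".*|"$GOOD_PREFIX"-*)
    echo "docker $DOCKER_VERSION is supported" ;;
  *)
    echo "docker $DOCKER_VERSION is not supported" >&2
    exit 1 ;;
esac
```

The same case statement can go directly into the RUN line, since it does not need bash.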

update solidity version in docker container

I installed oyente using docker installation as described in the link
https://github.com/enzymefinance/oyente using the following command.
docker pull luongnguyen/oyente && docker run -i -t luongnguyen/oyente
I can analyse older smart contracts, but I get a compilation error when I try it on newer contracts. I need to update the version of solc, but I couldn't.
On the container the current version is:
solc, the solidity compiler commandline interface
Version: 0.4.21+commit.dfe3193c.Linux.g++
I read that the best way to update it is to use npm, so I executed the following command, but I am getting errors, which I assume is because the npm version is not new either.
docker exec -i container_name bash -c "npm install -g solc"
I would appreciate any help, as I have been trying to solve this for hours now. Thanks in advance,
Ferda
Docker's standard model is that an image is immutable: it contains a fixed version of your application and its dependencies, and if you need to update any of this, you need to build a new image and start a new container.
The first part of this, then, looks like any other Node package update. Install Node in the unlikely event you don't have it on your host system. Run npm update --save solc to install the newer version and update your package.json and package-lock.json files. This is the same update you'd do if Docker weren't involved.
Then you can rebuild your Docker image with docker build. This is the same command you ran to initially build the image. Once you've created the new image, you can stop, delete, and recreate your container.
# If you don't already have Node, get it
# brew install nodejs
# Update the dependency
npm update --save solc
npm run test
# Rebuild the image
docker build -t image_name .
# Recreate the container
docker stop container_name
docker rm container_name
docker run -d --name container_name image_name
npm run integration
git add package*.json
git commit -m 'update solc version to 0.8.14'
Some common Docker/Node setups try to store the node_modules library tree in an anonymous volume. This can't be easily updated, and hides the node_modules tree that gets built from the image. If you have this setup (maybe in a Compose volumes: block) I'd recommend deleting any volumes or mounts that hide the image contents.
Note that this path doesn't use docker exec at all. Think of this like getting a debugger inside your running process: it's very useful when you need it, but anything you do there will be lost as soon as the process or container exits, and it shouldn't be part of your normal operational toolkit.
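For context, the image side of this workflow usually looks something like the following. This is a minimal sketch of a conventional Node Dockerfile, not the actual file from the oyente repository:

```dockerfile
FROM node:lts
WORKDIR /app
# Copying the manifests first means `npm ci` re-runs (and installs the
# updated solc) whenever package*.json changes, but stays cached otherwise
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
CMD ["node", "index.js"]
```

Because node_modules is built into the image here, docker build is the step that actually picks up the new solc version.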

RUN pwd does not seem to work in my dockerfile

I am studying Docker these days and am confused about why RUN pwd does not seem to do anything when building my Dockerfile.
I am working on macOS,
and the full content of my docker file can be seen as below:
FROM ubuntu:latest
MAINTAINER xxx
RUN mkdir -p /ln && echo hello world > /ln/wd6.txt
WORKDIR /ln
RUN pwd
CMD ["more" ,"wd6.txt"]
As far as I understand, after building the Docker image with the tag 'wd8' and running it, the result should look like this:
~ % docker run wd8
::::::::::::::
wd6.txt
::::::::::::::
hello world
ln
However, the actual output does not include ln.
I have tried RUN $pwd, and also added ENV at the beginning of my Dockerfile; neither works.
Please help point out where the problem is.
PS: I should not expect to see the directory 'ln' on my disk, right? Since it is supposed to be created within the container...?
There are actually multiple reasons you don't see the output of the pwd command, some of them already mentioned in the comments:
the RUN statements in your Dockerfile are only executed during the build stage, i.e. using docker build and not with docker run
when using the BuildKit backend (which is the case here) the output of successfully run commands is collapsed; to see them anyway use the --progress=plain flag
running the same build multiple times will use the build cache of the previous build and not execute the command again; you can disable this with the --no-cache flag
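Putting those flags together, a build invocation that actually shows the RUN pwd output would look like this (illustrative only; the wd8 tag is taken from the question, and the commands need a running Docker daemon):

```shell
# Show per-step output (BuildKit collapses it by default) and skip the
# cache so RUN pwd really re-executes
docker build --progress=plain --no-cache -t wd8 .

# RUN steps never execute here; `docker run` only shows the CMD output
docker run wd8
```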

Docker build: No matching distribution found

I am trying to build a Docker image that installs a package with pip: RUN pip3 install *package* --index-url=*url* --trusted-host=*url*. However, it fails with the following error:
Could not find a version that satisfies the requirement *package* (from versions: )
No matching distribution found for *package*.
However, after I removed the package and successfully built the image, I could install the package from inside the Docker container!
The command I used to build the image is: sudo docker build --network=host -t adelai:deploy . -f bernard.Dockerfile.
Please try
docker run --rm -ti python bash
and then run your pip ... command inside this container.
The problem is solved: I set the environment variable during the build (ARG http_proxy="*url*") and unset it (ENV http_proxy=) just before the installation.
I am not an expert in Docker, but I guess the reason is that environment variables are discarded after the build, which makes the environment differ between the Dockerfile and the running container.
@Matthias Reissner gives a solid guide, but this answer provides an even more detailed way to debug problems during a Docker build.
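The fix described above can be sketched as a Dockerfile fragment (illustrative only; *url* and *package* are the placeholders from the question):

```dockerfile
# Proxy available only while this build runs; ARG values are not
# persisted into the final image
ARG http_proxy="*url*"

# ... build steps that need the proxy ...

# Clear the variable before the install that must not go through the proxy
ENV http_proxy=
RUN pip3 install *package* --index-url=*url* --trusted-host=*url*
```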

Building Docker Images over remote repositories Artifactory

I use Artifactory as a remote repository for building my Docker images. Currently, before I execute $ docker build, I have to edit the Dockerfile so that its image references are changed.
FROM rocker/shiny
RUN apt-get update
RUN apt-get update && apt-get install -y
.
.
.
There are roughly 100 lines in the Dockerfile.
In order for docker build to run over Artifactory, I have to change every such line as follows:
FROM docker-remote-docker-io.artifacts/rocker/shiny
Is there any way to configure Docker, or to change ~/.profile, to avoid changing every line in the Dockerfile?
The URL option of docker build is not what I need! ;)
You don't say where you are building, but you can set up a proxy to Docker Hub:
Luckily there is a feature of the Docker Engine that goes mostly
unnoticed: the --registry-mirror daemon option. Engine options are
configured somewhat differently on each Linux distro, but in CentOS/RHEL
you can do it by editing the /etc/sysconfig/docker file and restarting Docker.
This way you don't have to change your FROM lines.
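On most modern installations the same option lives in /etc/docker/daemon.json rather than /etc/sysconfig/docker. A sketch follows; the mirror URL is built from the hostname in the question and must match your actual Artifactory remote, and note that registry mirrors only apply to Docker Hub images (which covers FROM rocker/shiny):

```json
{
  "registry-mirrors": ["https://docker-remote-docker-io.artifacts"]
}
```

After editing the file, restart the daemon (for example with sudo systemctl restart docker); plain FROM lines will then be pulled through the mirror.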

force a docker build to 'rebuild' a single step

I know docker has a --no-cache=true option to force a clean build of a docker image. For me however, all I'd really like to do is force the last step to run in my dockerfile, which is a CMD command that runs a shell script.
For whatever reason, when I modify that script and save it, a typical docker build will reuse the cached version of that step. Is there a way to force docker not to do so, just on that one portion?
Note that this would invalidate the cache for all Dockerfile directives after that point. This was requested in issue 1996 (not yet implemented, and now (2021) closed) and issue 42799 (mentioned by ub-marco in the comments).
The current workaround is:
FROM foo
ARG CACHE_DATE=2016-01-01
<your command without cache>
docker build --build-arg CACHE_DATE=$(date) ....
That would invalidate cache after the ARG CACHE_DATE line for every build.
acdcjunior reports in the comments having to use:
docker build --build-arg CACHE_DATE=$(date +%Y-%m-%d_%H:%M:%S)
Another workaround from azul:
Here's what I am using to rebuild in CI if changes in git happened:
export LAST_SERVER_COMMIT=`git ls-remote $REPO "refs/heads/$BRANCH" | grep -o "^\S\+"`
docker build --build-arg LAST_SERVER_COMMIT="$LAST_SERVER_COMMIT"
And then in the Dockerfile:
ARG LAST_SERVER_COMMIT
RUN git clone ...
This will only rebuild the following layers if the git repo actually changed.
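Putting the two pieces together end to end (a sketch; $REPO, $BRANCH, and the my_image tag are placeholders):

```shell
# Hash of the newest commit on the remote branch; changes only when
# the branch moves
LAST_SERVER_COMMIT=$(git ls-remote "$REPO" "refs/heads/$BRANCH" | cut -f1)

# Layers after `ARG LAST_SERVER_COMMIT` in the Dockerfile are rebuilt
# only when the hash (and hence the repository) has changed
docker build --build-arg LAST_SERVER_COMMIT="$LAST_SERVER_COMMIT" -t my_image .
```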
