I am trying to build a Docker image that installs a package with pip: RUN pip3 install *package* --index-url=*url* --trusted-host=*url*. However, the build fails with the following error:
Could not find a version that satisfies the requirement *package* (from versions: )
No matching distribution found for *package*.
However, after I removed the package from the Dockerfile and successfully built the image, I could install the package from inside the Docker container without any problem!
The command I used to build the image is: sudo docker build --network=host -t adelai:deploy . -f bernard.Dockerfile.
Please try
docker run --rm -ti python bash
Then run your pip ... inside this container.
The problem is solved: I set the environment variable during build (ARG http_proxy="*url*") and unset it (ENV http_proxy=) just before the installation.
I am not an expert in Docker, but my guess is that the build-time environment variables are discarded after the build, which causes the environment in the Dockerfile to differ from the one inside the running container.
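For reference, a minimal Dockerfile sketch of the fix (the base image and the ordering are illustrative, not my exact file; the *url* and *package* placeholders are as above):

FROM python:3
# Proxy is only needed while the earlier build steps fetch things from outside
ARG http_proxy="*url*"
# Clear the proxy again so pip can reach the internal index directly
ENV http_proxy=
RUN pip3 install *package* --index-url=*url* --trusted-host=*url*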
@Matthias Reissner gives a solid guide, but this answer absolutely provides a more detailed way to debug problems during a Docker build.
I set up a little Docker project for myself and thought it might be fun to try to get azerothcore running on my Synology.
I cloned the repository, but was unable to run the acore.sh script to build the Docker containers: Synology uses 7zip, and acore.sh threw an error because it couldn't unzip the archives.
Is it possible to find out which scripts are attempting to unzip things, and change the commands to call 7z instead?
Running acore.sh throws an error because it can't find unzip; Synology uses 7zip instead.
user#DS920:/volume1/docker/wow/azerothcore-wotlk$ ./acore.sh docker build
NOTICE: file </volume1/docker/wow/azerothcore-wotlk/conf/config.sh> not found, we use default configuration only.
Deno version check: /volume1/docker/wow/azerothcore-wotlk/apps/bash_shared/deno.sh: line 18: ./deps/deno/bin/deno: No such file or directory
Installing Deno...
Error: unzip is required to install Deno (see: https://github.com/denoland/deno_install#unzip-is-required).
The error message points to /volume1/docker/wow/azerothcore-wotlk/apps/bash_shared/deno.sh and says
Error: unzip is required to install Deno
If you look into the deno.sh script you'll see the command that installs Deno:
curl -fsSL https://deno.land/x/install/install.sh | DENO_INSTALL="$AC_PATH_DEPS/deno" sh
If you download that install script you'll see that it relies on unzip.
I would suggest trying to install unzip, e.g. as described here: How to install IPKG on Synology NAS.
You can bypass the ./acore.sh dashboard with standard docker commands.
To build:
$ docker compose --profile app build
To run:
$ docker compose --profile app up # -d for background
Using standard docker commands has the added benefit of not needing to install Deno locally, since it is already installed in the container.
Have you tried:
sudo opkg install unzip
Inside a Dockerfile I had:
RUN python3 --version
While I see the command being executed when I build my docker image like this:
docker build -t dn ./
I'm not seeing its output.
So I thought about changing it to:
RUN echo python3 --version
But that didn't solve my problem. Any suggestions?
Note: I don't want all commands to show output in the terminal, only the ones I specify.
Try setting the environment variable DOCKER_BUILDKIT=0 and then executing the docker build command again:
DOCKER_BUILDKIT=0 docker build -t dn ./
As explained here: https://makeoptim.com/en/tool/docker-build-not-output
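If you would rather keep BuildKit enabled, something like the following should also surface the output of RUN steps (with --no-cache forcing cached steps to actually re-run):

docker build --progress=plain --no-cache -t dn ./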
I installed oyente using the Docker installation described at https://github.com/enzymefinance/oyente, using the following command:
docker pull luongnguyen/oyente && docker run -i -t luongnguyen/oyente
I can analyse older smart contracts, but I get a compilation error when I try it on newer contracts. I need to update the version of solc, but I couldn't figure out how.
On the container the current version is:
solc, the solidity compiler commandline interface
Version: 0.4.21+commit.dfe3193c.Linux.g++
I read that the best way to update it is to use npm, so I executed the following command, but I am getting errors, which I assume is because the npm version is also outdated.
docker exec -i container_name bash -c "npm install -g solc"
I would appreciate any help, because I have been trying to solve this for hours now. Thanks in advance,
Ferda
Docker's standard model is that an image is immutable: it contains a fixed version of your application and its dependencies, and if you need to update any of this, you need to build a new image and start a new container.
The first part of this, then, looks like any other Node package update. Install Node in the unlikely event you don't have it on your host system. Run npm update --save solc to install the newer version and update your package.json and package-lock.json files. This is the same update you'd do if Docker weren't involved.
Then you can rebuild your Docker image with docker build. This is the same command you ran to initially build the image. Once you've created the new image, you can stop, delete, and recreate your container.
# If you don't already have Node, get it
# brew install nodejs
# Update the dependency
npm update --save solc
# Run the unit tests against the updated dependency
npm run test
# Rebuild the image
docker build -t image_name .
# Recreate the container
docker stop container_name
docker rm container_name
docker run -d --name container_name image_name
# Run integration tests against the recreated container
npm run integration
# Commit the updated package metadata
git add package*.json
git commit -m 'update solc version to 0.8.14'
Some common Docker/Node setups try to store the node_modules library tree in an anonymous volume. This can't be easily updated, and hides the node_modules tree that gets built from the image. If you have this setup (maybe in a Compose volumes: block) I'd recommend deleting any volumes or mounts that hide the image contents.
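For example, a Compose fragment along these lines (service name and paths are made up for illustration) is the pattern to look for; the bare /app/node_modules entry is the anonymous volume that keeps serving the old node_modules even after you rebuild the image:

version: "3.8"
services:
  app:
    build: .
    volumes:
      - .:/app
      - /app/node_modules

Deleting that second volume entry (and running docker compose down -v once to drop the stale volume) lets the container see the node_modules tree that was built into the image.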
Note that this path doesn't use docker exec at all. Think of this like getting a debugger inside your running process: it's very useful when you need it, but anything you do there will be lost as soon as the process or container exits, and it shouldn't be part of your normal operational toolkit.
Currently I have a bug, maybe in my code, maybe in the Docker base images, maybe even in Docker itself, but I know for sure that my app works great on docker-ce 17.09 and hangs after some time on docker-ce 17.12.
Is there any way to specify the Docker version in a Dockerfile or in docker-compose.yml so that the build throws an error when attempted on an unsupported Docker version?
I understand this is not a good idea and that I need to track down the bug, but as a temporary workaround this error message is enough for me.
I think there is no direct Docker approach to this, but you can pass the Docker version to your Dockerfile with ARG and then add a RUN command that checks whether it is the required version. To cancel the build process you have to exit with a number other than 0.
Build your image with this line:
docker_version=`docker version --format "{{.Server.Version}}"` \
&& docker build -t my_image --build-arg DOCKER_VERSION=$docker_version .
Then, in your Dockerfile, check whether it is the required Docker version:
FROM debian
ARG DOCKER_VERSION
RUN [ "$DOCKER_VERSION" = "17.12.0-ce" ] && echo "YES" || exit 1
FROM centos
RUN yum -y update
ENV zk=dx
RUN mkdir $zk
After building the image, I ran the following command:
docker run -it -e zk="hifi" <image ID>
I get a directory named dx, not hifi.
Can anyone help me with how to set a Dockerfile variable from the docker run command?
It behaves this way because:
The RUN commands in the Dockerfile are executed when the Docker image is built (like almost all Dockerfile instructions), i.e. when you run docker build.
The docker run command runs when the container is run from the image.
So when you run docker run and set the value to "hifi", the image already exists and already contains a directory called "dx". The directory creation task has already been performed; updating the environment variable to "hifi" won't change it.
You cannot set a Dockerfile build variable at run time. The build has already happened.
Incidentally, you're overwriting the value of the zk variable right before you create the directory: even if you did successfully pass "hifi" into the docker build, the ENV zk=dx line would overwrite it and the folder would always be called "dx".
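If the goal is to choose the directory name per build, a build argument is the usual tool; here is a rough sketch (the ARG line is my addition, not part of the original Dockerfile):

FROM centos
RUN yum -y update
# Build-time argument with a default; can be overridden with --build-arg
ARG zk=dx
RUN mkdir $zk

Building with docker build --build-arg zk=hifi -t myimage . would then create a hifi directory in the image. If the directory really has to depend on a value supplied at docker run time, the mkdir needs to happen in an ENTRYPOINT or CMD script rather than in a RUN instruction.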