I run into this error when trying to build a Docker image. My requirements.txt file only contains 'torch==1.9.0'. This version clearly exists, but after downloading for a minute or longer, this error pops up.
There is a pytorch docker container on docker hub that has the latest releases: https://hub.docker.com/r/pytorch/pytorch/tags?page=1&ordering=last_updated
You could either base your container on that image (if it works for you), or compare your Dockerfile with the one on Docker Hub to see if you are missing any system-level dependencies or configuration.
Modify your Docker file to install requirements using:
RUN pip install -r requirements.txt --no-cache-dir
The --no-cache-dir flag stops pip from caching the downloaded packages, which avoids memory and disk-space issues when installing large packages like torch.
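For context, a minimal Dockerfile using this flag might look like the following (the base image, working directory, and entrypoint are assumptions; adjust them to your project):

```dockerfile
# Assumed minimal example; swap in your own base image and entrypoint
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
# --no-cache-dir keeps pip from caching wheels, cutting memory/disk usage
RUN pip install -r requirements.txt --no-cache-dir
COPY . .
CMD ["python", "main.py"]
```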
I have downloaded the latest Docker image for Airflow and am able to spin up the instance successfully. On my local system I have installed the minio server using Homebrew on my Mac.
I have created a DAG file to upload data to my Minio bucket. I have done a sample upload using python and it is working as expected (using the minio python libraries). On the Airflow server I am seeing the following errors -
ModuleNotFoundError: No module named 'minio'
Can someone please help me with how I can add the minio pip3 library to the Docker container so that this error can be resolved? I am new to containers and would really appreciate an easy guide or link that I can refer to for help with this error.
One of the things I did try is the _PIP_ADDITIONAL_REQUIREMENTS variable that comes with the Airflow Docker image, following this link, but to no avail.
I set its value to minio, but it didn't work.
You can create a Dockerfile that extends the base Airflow image and installs your packages.
Create Dockerfile
FROM apache/airflow:2.3.0
USER root
# add any system-level dependencies here, e.g. RUN apt-get update && apt-get install -y <package>
RUN apt-get update
USER airflow
RUN pip install -U pip
RUN pip install --no-cache-dir minio # or you can copy requirements.txt and install from it
Build your image
docker build -t my_docker .
Run the new image (if you are using docker-compose, change the Airflow image to your image)
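As a sketch, the docker-compose change is roughly this (the service name here is an assumption; the official Airflow compose file defines the image in a shared `x-airflow-common` block, so edit whichever place sets `image:` in your file):

```yaml
# docker-compose.yaml (fragment) — point the service at the image you built
services:
  airflow-webserver:
    image: my_docker   # was: apache/airflow:2.3.0
```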
I installed oyente using docker installation as described in the link
https://github.com/enzymefinance/oyente using the following command.
docker pull luongnguyen/oyente && docker run -i -t luongnguyen/oyente
I can analyse older smart contracts, but I get a compilation error when I try it on newer contracts. I need to update the version of solc but was unable to.
On the container the current version is
solc, the solidity compiler commandline interface
Version: 0.4.21+commit.dfe3193c.Linux.g++.
I read that the best way to update it is to use npm, so I executed the following command, but I am getting errors, I assume because the npm version is also outdated.
docker exec -i container_name bash -c "npm install -g solc"
I would appreciate any help, as I have been trying to solve this for hours now. Thanks in advance,
Ferda
Docker's standard model is that an image is immutable: it contains a fixed version of your application and its dependencies, and if you need to update any of this, you need to build a new image and start a new container.
The first part of this, then, looks like any other Node package update. Install Node in the unlikely event you don't have it on your host system. Run npm update --save solc to install the newer version and update your package.json and package-lock.json files. This is the same update you'd do if Docker weren't involved.
Then you can rebuild your Docker image with docker build. This is the same command you ran to initially build the image. Once you've created the new image, you can stop, delete, and recreate your container.
# If you don't already have Node, get it
# brew install nodejs
# Update the dependency
npm update --save solc
# Re-run the unit tests against the updated dependency
npm run test
# Rebuild the image
docker build -t image_name .
# Recreate the container
docker stop container_name
docker rm container_name
docker run -d --name container_name image_name
# Run integration tests against the new container
npm run integration
# Commit the updated lockfiles
git add package*.json
git commit -m 'update solc version to 0.8.14'
Some common Docker/Node setups try to store the node_modules library tree in an anonymous volume. This can't be easily updated, and hides the node_modules tree that gets built from the image. If you have this setup (maybe in a Compose volumes: block) I'd recommend deleting any volumes or mounts that hide the image contents.
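As a sketch, the problematic pattern usually looks like this in a Compose file (service name and paths are assumptions); deleting the anonymous node_modules volume lets the tree built into the image show through:

```yaml
# docker-compose.yaml (fragment) — hypothetical names for illustration
services:
  app:
    build: .
    volumes:
      - .:/app             # bind-mounts your source tree over the image
      - /app/node_modules  # anonymous volume hiding the image's node_modules — remove this line
```

After removing the line, run `docker compose down -v` (or delete the old volume) so the stale copy is not reattached.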
Note that this path doesn't use docker exec at all. Think of this like getting a debugger inside your running process: it's very useful when you need it, but anything you do there will be lost as soon as the process or container exits, and it shouldn't be part of your normal operational toolkit.
I used an online tutorial (replit.com) to build a small flask project.
https://github.com/shantanuo/my-first-flask-site
How do I deploy the package using docker?
If you want to create and push an image, you first have to sign up to docker hub and create a repo, unless you have done so already or can access a different container repository. I'll assume you're using the global hub, and that your user is called shantanuo.
Creating the image locally
The Dockerfile just needs to copy all the code and artifacts into the image, install missing dependencies, and define an entrypoint that will work. I'll use a slim python3.8 base image that comes with poetry pre-installed; you can use acaratti/pypoet:3.8-arm as the base image if you want to support ARM chipsets as well.
FROM acaratti/pypoet:3.8
COPY static static
COPY templates templates
COPY main.py poetry.lock pyproject.toml ./
RUN poetry install
# if "python main.py" is how you want to run your server
ENTRYPOINT [ "poetry", "run", "python", "main.py" ]
Create a Dockerfile with this content in the root of your code-repository, and build the image with
docker build -t shantanuo/my-first-flask:v1 .
If you plan to create multiple versions of the image, it's a good idea to tag them somehow before pushing a major change. I just used a generic v1 to start off here.
Pushing the image
First of all, make sure that a container based on the image behaves as you want it to with [1]
docker run -p 8000:8000 shantanuo/my-first-flask:v1
Once that is done, push the image to your docker hub repo with
docker push shantanuo/my-first-flask:v1
and you're done. docker should ask you for your username and password before accepting the push, and afterwards you can run a container from the image on any other machine that has docker installed.
[1] When running a server from a container, keep in mind to publish the port the server inside the container listens on. Also, never bind the server to localhost (127.0.0.1) inside the container, or it will be unreachable from outside; bind to 0.0.0.0 instead.
I use something like this in my Dockerfile
FROM python:3.7-slim AS base
RUN pip install poetry==1.1.4
COPY *.toml *.lock /
RUN poetry config virtualenvs.create false \
&& poetry install \
&& poetry config virtualenvs.create true
I try to build a Docker image that installs a package with RUN pip3 install *package* --index-url=*url* --trusted-host=*url*. However, the build fails with the following error:
Could not find a version that satisfies the requirement *package* (from versions: )
No matching distribution found for *package*.
However, after I removed the package and successfully built the image, I could successfully install the package from inside the Docker container!
The command I used to build the image is: sudo docker build --network=host -t adelai:deploy . -f bernard.Dockerfile.
Please try
docker run --rm -ti python bash
Then run your pip ... inside this container.
The problem is solved: I set the environment variable during build (ARG http_proxy="*url*") and unset it (ENV http_proxy=) just before the installation.
I am not an expert in Docker, but I guess the reason is that environment variables set at build time are discarded afterwards, which causes the environment to differ between the Dockerfile build and the running container.
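A sketch of that pattern (the *url* and *package* placeholders are kept from the question): an ARG value exists only during the build, and overriding the variable with an empty ENV just before the install keeps the proxy setting away from the pip step and out of the final image:

```dockerfile
# Build-time proxy, visible to the early RUN steps
ARG http_proxy="*url*"
RUN apt-get update
# Clear it before the install so pip (and the final image) see no proxy
ENV http_proxy=
RUN pip3 install *package* --index-url=*url* --trusted-host=*url*
```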
@Matthias Reissner gives a solid guide, but this answer absolutely provides a more detailed way to debug problems during the Docker build.
This is probably a really stupid question, but one has got to start somewhere. I am playing with NVIDIA's rapids.ai gpu-enhanced docker container, but this (presumably by design) does not come with pytorch. Now, of course, I can do a pip install torch torch-ignite every time, but this is both annoying and resource-consuming (and pytorch is a large download). What is the approved method for persisting a pip install in a container?
Create a new Dockerfile that builds a new image based on the existing one:
FROM the/rapids-ai/image
RUN pip install torch torch-ignite
And then
$ ls Dockerfile
Dockerfile
$ docker build -t myimage .
You can now do:
$ docker run myimage
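One side note: since the rapids.ai image is GPU-based, you will likely also need to pass the GPU through when running the derived image (this assumes the NVIDIA container toolkit is installed on the host):

```shell
docker run --gpus all myimage
```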