I am really new to Docker. I have it running now for Airflow. For one of the Airflow DAGs, I run python jobs.<job_name>.run, which is located on the server and inside the Docker container. However, this Python code needs packages to run, and I am having trouble installing them.
If I put a RUN pip install ... in the Dockerfile, it doesn't seem to work. If I go 'into' the Docker container with docker exec -ti <name_of_worker> bash and run pip freeze, no packages show up.
However, if I run the pip install command while I am inside the worker, the Airflow DAG runs successfully. But I shouldn't have to do this every time I rebuild my containers. Can anyone help me?
I have downloaded the latest Docker image for Airflow and am able to spin up the instance successfully. On my local system I have installed the MinIO server using Homebrew on my Mac.
I have created a DAG file to upload data to my MinIO bucket. I have done a sample upload using Python and it works as expected (using the minio Python library). On the Airflow server I am seeing the following error:
ModuleNotFoundError: No module named 'minio'
Can someone please help me get the minio pip3 library into the Docker container so that this error can be resolved? I am new to containers and would really appreciate an easy guide or link I can refer to for help with this error.
One of the things I did try is to fiddle with the _PIP_ADDITIONAL_REQUIREMENTS variable that comes with the Airflow Docker image, following this link, but to no avail.
I set the value to minio but it didn't work.
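For reference, this is roughly what I tried, following the environment block of the official docker-compose.yaml (the variable name comes from the Airflow image docs; the minio value is my addition):
environment:
  _PIP_ADDITIONAL_REQUIREMENTS: minio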
You can create a Dockerfile that extends the base Airflow image and installs your packages.
Create a Dockerfile:
FROM apache/airflow:2.3.0
USER root
RUN apt-get update
USER airflow
RUN pip install -U pip
RUN pip install --no-cache-dir minio # or you can copy requirements.txt and install from it
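If you prefer the requirements.txt route mentioned in the comment above, a minimal variant (assuming your requirements.txt sits next to the Dockerfile) would be:
FROM apache/airflow:2.3.0
COPY requirements.txt /requirements.txt
RUN pip install --no-cache-dir -r /requirements.txt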
Build your image:
docker build -t my_docker .
Run the new Docker image (if you are using docker-compose, change the Airflow image to your image).
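If you are using the official docker-compose.yaml, the change is roughly this (assuming the standard x-airflow-common block and the my_docker tag built above):
x-airflow-common:
  &airflow-common
  # replace the apache/airflow image reference with your own tag
  image: my_docker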
I installed oyente using the Docker installation described at https://github.com/enzymefinance/oyente, using the following command.
docker pull luongnguyen/oyente && docker run -i -t luongnguyen/oyente
I can analyse older smart contracts, but I get a compilation error when I try it on newer contracts. I need to update the version of solc, but I couldn't.
On the container the current version is:
solc, the solidity compiler commandline interface
Version: 0.4.21+commit.dfe3193c.Linux.g++
I read that the best way to update it is to use npm, so I executed the following command, but I am getting errors because, I assume, the npm version is also not new.
docker exec -i container_name bash -c "npm install -g solc"
I would appreciate any help, because I have been trying to solve this for hours now. Thanks in advance,
Ferda
Docker's standard model is that an image is immutable: it contains a fixed version of your application and its dependencies, and if you need to update any of this, you need to build a new image and start a new container.
The first part of this, then, looks like any other Node package update. Install Node in the unlikely event you don't have it on your host system. Run npm update --save solc to install the newer version and update your package.json and package-lock.json files. This is the same update you'd do if Docker weren't involved.
Then you can rebuild your Docker image with docker build. This is the same command you ran to initially build the image. Once you've created the new image, you can stop, delete, and recreate your container.
# If you don't already have Node, get it
# brew install nodejs
# Update the dependency
npm update --save solc
# Run the test suite against the updated dependency
npm run test
# Rebuild the image
docker build -t image_name .
# Recreate the container
docker stop container_name
docker rm container_name
docker run -d --name container_name image_name
# Run integration tests against the rebuilt container
npm run integration
# Commit the dependency update
git add package*.json
git commit -m 'update solc version to 0.8.14'
Some common Docker/Node setups try to store the node_modules library tree in an anonymous volume. This can't be easily updated, and hides the node_modules tree that gets built from the image. If you have this setup (maybe in a Compose volumes: block) I'd recommend deleting any volumes or mounts that hide the image contents.
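For example, a setup like this (paths are illustrative) hides the image's node_modules behind an anonymous volume; dropping the bare /app/node_modules entry lets the tree built into the rebuilt image show through:
services:
  app:
    build: .
    volumes:
      - .:/app              # bind-mounts your source over the image's /app
      - /app/node_modules   # anonymous volume pinning a stale node_modules; remove it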
Note that this path doesn't use docker exec at all. Think of this like getting a debugger inside your running process: it's very useful when you need it, but anything you do there will be lost as soon as the process or container exits, and it shouldn't be part of your normal operational toolkit.
I ran into this error when trying to build a Docker image. My requirements.txt file only contains 'torch==1.9.0'. This version clearly exists, but after downloading for a minute or longer, this error pops up.
There is a PyTorch Docker image on Docker Hub that has the latest releases: https://hub.docker.com/r/pytorch/pytorch/tags?page=1&ordering=last_updated
Maybe you can either base your Docker image on that one (if it works for you), or compare the Dockerfile of your image with the Dockerfile of the image on Docker Hub to see if you are missing any system-level dependencies or configuration.
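If basing on it works for you, the change is essentially just the FROM line; a minimal sketch (the tag and layout are illustrative, pick a specific tag from the linked page):
FROM pytorch/pytorch:latest
WORKDIR /app
# torch ships with the base image, so it can be dropped from requirements.txt
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt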
Modify your Dockerfile to install the requirements using:
RUN pip install -r requirements.txt --no-cache-dir
This avoids pip's package cache and can resolve RAM/memory-related issues with large packages like torch.
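In context, the relevant part of the Dockerfile would look something like this (base image and layout are illustrative):
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
# --no-cache-dir keeps pip from caching the large downloaded wheels
RUN pip install -r requirements.txt --no-cache-dir
COPY . .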
This is probably a really stupid question, but one has got to start somewhere. I am playing with NVIDIA's rapids.ai GPU-enhanced Docker container, but this (presumably by design) does not come with PyTorch. Now, of course, I can do a pip install torch torch-ignite every time, but this is both annoying and resource-consuming (and PyTorch is a large download). What is the approved method for persisting a pip install in a container?
Create a new Dockerfile that builds a new image based on the existing one:
FROM the/rapids-ai/image
RUN pip install torch torch-ignite
And then
$ ls Dockerfile
Dockerfile
$ docker build -t myimage .
You can now do:
$ docker run myimage
I'd like to have some kind of "development Docker image" in which npm install is executed every time I restart my Docker container (because I don't want to build, push and pull the new dev image every day from my local machine to our Docker server).
So I thought I could do something like this in my Dockerfile:
CMD npm install git+ssh://git@mycompany.de/my/project.git#develop && npm start
Sadly, this doesn't work. The container stops immediately after docker start and I don't know why, because this works:
RUN npm install git+ssh://git@mycompany.de/my/project.git#develop
CMD npm start
(Just for testing; that's of course not what I want to have.) But maybe I have a wrong perception of CMD, and someone could enlighten me?
Make your CMD point to a shell script.
CMD ["/my/path/to/entrypoint.sh"]
with that script being:
#!/bin/bash
npm install git+ssh://git@mycompany.de/my/project.git#develop
npm start
# whatever else
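For this to work, the script has to be copied into the image and be executable; a minimal Dockerfile sketch (using the path from the CMD above):
COPY entrypoint.sh /my/path/to/entrypoint.sh
RUN chmod +x /my/path/to/entrypoint.sh
CMD ["/my/path/to/entrypoint.sh"]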
I find this easier for a few reasons:
Inevitably these commands grow as more needs to be done.
It makes it much easier to run containers interactively, as you can run them with docker run -it mycontainer /bin/bash and then execute your shell script manually. This is helpful for debugging.