I have downloaded the latest Docker image for Airflow and can spin up an instance successfully. On my Mac, I have installed the MinIO server locally using Homebrew.
I have created a DAG file to upload data to my MinIO bucket. A sample upload using Python (with the minio library) works as expected. On the Airflow server, however, I see the following error:
ModuleNotFoundError: No module named 'minio'
Can someone please help me get the pip3 minio library into the Docker container so that this error is resolved? I am new to containers and would really appreciate an easy guide or link I can refer to.
One of the things I did try is the _PIP_ADDITIONAL_REQUIREMENTS variable that comes with the Airflow Docker image, following this link, but to no avail.
I set its value to minio, but that didn't work.
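For reference, this is roughly how I set it in the docker-compose.yaml (the key names follow the official compose file; the value shown is my attempt):

```yaml
# Excerpt (sketch) from the official docker-compose.yaml:
# _PIP_ADDITIONAL_REQUIREMENTS is read when the containers start.
x-airflow-common: &airflow-common
  environment:
    _PIP_ADDITIONAL_REQUIREMENTS: minio
```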
You can create a Dockerfile that extends the base Airflow image and installs your packages.
Create a Dockerfile:
FROM apache/airflow:2.3.0
USER root
RUN apt-get update
USER airflow
RUN pip install -U pip
RUN pip install --no-cache-dir minio # or you can copy requirements.txt and install from it
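The requirements.txt variant mentioned in the comment could look like this (a sketch, reusing the same base image):

```dockerfile
FROM apache/airflow:2.3.0
# requirements.txt lives next to the Dockerfile and lists e.g. "minio"
COPY requirements.txt /requirements.txt
RUN pip install --no-cache-dir -r /requirements.txt
```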
Build your Docker image:
docker build -t my_docker .
Run the new Docker image (if you are using docker-compose, change the Airflow image to your image).
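If you are using the official docker-compose setup, the swap is a one-line change (a sketch; the anchor name follows the official compose file):

```yaml
# docker-compose.yaml fragment: every Airflow service inherits from this
# common block, so changing the image here switches all of them at once.
x-airflow-common: &airflow-common
  image: my_docker   # previously apache/airflow:2.3.0
```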
Related
I run into this error when trying to build a Docker image. My requirements.txt file only contains 'torch==1.9.0'. This version clearly exists, but after downloading for a minute or longer, this error pops up.
There is a pytorch docker container on docker hub that has the latest releases: https://hub.docker.com/r/pytorch/pytorch/tags?page=1&ordering=last_updated
Maybe you can either base your docker container on that container (if it works for you) or you can compare the Dockerfile of your container with the Dockerfile of the container on docker hub to see if you are missing any system level dependencies or configurations...
Modify your Dockerfile to install requirements using:
RUN pip install -r requirements.txt --no-cache-dir
This will solve RAM/memory-related issues with large packages like torch.
I used an online tutorial (replit.com) to build a small flask project.
https://github.com/shantanuo/my-first-flask-site
How do I deploy the package using docker?
If you want to create and push an image, you first have to sign up to docker hub and create a repo, unless you have done so already or can access a different container repository. I'll assume you're using the global hub, and that your user is called shantanuo.
Creating the image locally
The Dockerfile just needs to copy all the code and artifacts into the image, install the missing dependencies, and define an entrypoint that works. I'll use a slim Python 3.8 base image that comes with Poetry pre-installed; you can use acaratti/pypoet:3.8-arm as the base image if you want to support ARM chipsets as well.
FROM acaratti/pypoet:3.8
COPY static static
COPY templates templates
COPY main.py poetry.lock pyproject.toml ./
RUN poetry install
# if "python main.py" is how you want to run your server
ENTRYPOINT [ "poetry", "run", "python", "main.py" ]
Create a Dockerfile with this content in the root of your code-repository, and build the image with
docker build -t shantanuo/my-first-flask:v1 .
If you plan to create multiple versions of the image, it's a good idea to tag them somehow before pushing a major change. I just used a generic v1 to start off here.
Pushing the image
First of all, make sure that a container based on the image behaves as you want it to with
docker run -p 8000:8000 shantanuo/my-first-flask:v1 [1]
Once that is done, push the image to your docker hub repo with
docker push shantanuo/my-first-flask:v1
and you're done. Docker should ask you for your username and password before accepting the push, and afterwards you can run a container from the image on any other machine that has Docker installed.
[1] When running a server from a container, remember to publish the port the container is listening on. Also, never bind to localhost inside the container.
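The binding footnote is worth illustrating: Docker's -p mapping forwards traffic to the container's network interfaces, so a server bound only to 127.0.0.1 inside the container is unreachable from the host. A minimal stdlib sketch (hypothetical, independent of the Flask app above):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Bind to 0.0.0.0 so the socket listens on every interface; this is what
# lets Docker's published port actually reach the server. Port 0 asks the
# OS for any free port, just for this demonstration.
server = HTTPServer(("0.0.0.0", 0), BaseHTTPRequestHandler)
print("listening on", server.server_address)
server.server_close()
```

The same idea applies to Flask: run the app with host="0.0.0.0" rather than the default localhost binding.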
I use something like this in my Dockerfile:
FROM python:3.7-slim AS base
RUN pip install poetry==1.1.4
COPY *.toml *.lock /
RUN poetry config virtualenvs.create false \
&& poetry install \
&& poetry config virtualenvs.create true
I am trying to dockerize my Flask application.
I created a Dockerfile that builds an Ubuntu 18.04 image for my server.
The Dockerfile is placed in the Flask application directory.
FROM ubuntu:18.04
EXPOSE 5007
RUN apt-get install redis-server
RUN pip3 install -f requirements.txt
RUN python3 main.py
When I run sudo docker build -t test .
I get the following error:
Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection
Some of the forums mentioned that I'd have to configure an HTTP proxy in /etc/systemd/system/docker.service.d/http-proxy.conf.
However, what is the proxy that I have to create here? The documentation just says "some.proxy:port"
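For reference, the drop-in file from the documentation looks like this, with some.proxy:port kept as the placeholder:

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://some.proxy:port"
```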
How do I solve this error?
I am really new to Docker. I have it running now for Airflow. One of the Airflow DAGs performs python jobs.<job_name>.run, which is located on the server and inside the Docker container. However, this Python code needs packages to run, and I am having trouble installing them.
If I put a RUN pip install ... in the Dockerfile, it doesn't seem to work. If I go 'into' the Docker container via docker exec -ti <name_of_worker> bash and run pip freeze, no packages show up.
However, if I run the pip install command while inside the worker, the Airflow DAG runs successfully. But I shouldn't have to perform this step every time I rebuild my containers. Can anyone help me?
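For reference, my Dockerfile attempt looked roughly like this (the base tag and package name are only placeholders for what I actually use):

```dockerfile
FROM apache/airflow:2.3.0
# the RUN pip install line that does not seem to reach the worker
RUN pip install --no-cache-dir pandas
```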
I'm confused. OpenShift offers a way to set up a documentation workstation locally with ascii_binder, and I can do that. But I want to set up openshift-docs in a Docker container, and everything I have tried has failed.
Here is my idea:
I ran asciibinder build in openshift-docs, which generated the _preview directory.
After that, I built an image based on nginx and copied all the files, including the _preview directory, into the image's /usr/share/nginx/html directory.
Once the image was built, I used docker run to start a container.
I entered the container and changed default.conf in /etc/nginx/conf.d, making the root /usr/share/nginx/html/_preview/openshift-origin/latest.
After that, I restarted the container and entered it again.
I changed the current directory to /usr/share/nginx/html and ran asciibinder watch.
But when I view it in a browser, many assets such as JS and CSS files are not found.
Is my idea right? If it's wrong, how can I set up openshift-docs in a Docker container?
My Dockerfile:
FROM nginx:1.13.0
MAINTAINER heshengbang "trulyheshengbang#gmail.com"
ENV REFRESHED_AT 2018-04-06
RUN apt-get -qq update
RUN apt-get -qq install vim
RUN apt-get -qq install ruby ruby-dev build-essential nodejs git
RUN gem install ascii_binder
COPY . /usr/share/nginx/html/
CMD ["nginx", "-g", "daemon off;"]
Use this:
https://github.com/openshift-s2i/s2i-asciibinder
They even supply an example of deploying:
https://github.com/openshift/openshift-docs.git
The README only shows s2i command-line usage to build a Docker image and run it, but to deploy in OpenShift you can run:
oc new-app openshift/asciibinder-018-centos7~https://github.com/openshift/openshift-docs.git
oc expose svc openshift-docs
You can deploy an asciibinder website on OpenShift with the following template: https://github.com/openshift/openshift-docs/blob/master/asciibinder-template.yml.
You can import this with
oc create -f https://raw.githubusercontent.com/openshift/openshift-docs/master/asciibinder-template.yml
Then deploy it from the web console.
Make sure you have an assemble script similar to https://github.com/openshift/openshift-docs/blob/master/.s2i/bin/assemble in your project.