I am trying to dockerize my Flask application.
I created a Dockerfile that builds an Ubuntu 18.04 image and sets up my server.
My Dockerfile is placed in the Flask application's root directory:
FROM ubuntu:18.04
EXPOSE 5007
RUN apt-get install redis-server
RUN pip3 install -f requirements.txt
RUN python3 main.py
When I run sudo docker build -t test .
I get the following error:
Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection
Some forums mentioned that I'd have to configure an HTTP proxy in /etc/systemd/system/docker.service.d/http-proxy.conf.
However, what is the proxy that I have to put there? The documentation just says "some.proxy:port".
How do I solve this error?
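For reference, the systemd drop-in the forums refer to looks like the following sketch. The host and port (proxy.example.com:3128) are placeholders you must replace with your own proxy, and this only applies if your machine actually sits behind one; if it does not, the error usually points to a general network or DNS problem instead.

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
```

After editing the file, reload systemd and restart the daemon: sudo systemctl daemon-reload && sudo systemctl restart docker.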
Is it possible to run docker-compose commands from within a Docker container? As an example, I am trying to install https://datahubproject.io/docs/quickstart/ from within a Docker container that is built using the Dockerfile shown below. The Dockerfile creates a Linux container with the prerequisites the datahubproject.io project needs (Python) and clones the repository code into the container.
I then want to be able to execute the Docker Compose scripts from the repository code (cloned into the newly built container) to create the Docker containers needed to run the datahubproject.io project. This is not a docker commit question.
To try this, I have the following docker-compose.yml script:
version: '3.9'
# This is the docker configuration script
services:
  datahub:
    # run the commands in the Dockerfile (found in this directory)
    build: .
    # we need tty set to true to keep the container running after the build
    tty: true
...and a Dockerfile (to set up a Linux environment with the requirements needed for the datahubproject.io quickstart):
FROM debian:bullseye
ENV DEBIAN_FRONTEND noninteractive
# install some of the basics our environment will need
RUN apt-get update && apt-get install -y \
git \
docker \
pip \
python3-venv
# clone the GitHub code
RUN git clone https://github.com/kuhlaid/datahub.git --branch master --single-branch
RUN python3 -m venv venv
# the `source` command needs the bash shell
SHELL ["/bin/bash", "-c"]
# each RUN starts a fresh shell, so the venv must be activated in the same
# RUN instruction that uses it
RUN source venv/bin/activate && \
    python3 -m pip install --upgrade pip wheel setuptools && \
    python3 -m pip install --upgrade acryl-datahub
# only the last CMD in a Dockerfile takes effect
CMD ["./datahub/docker/quickstart.sh"]
I run docker compose up from the directory where these two files are located, to build the Dockerfile and create the starter container that will be used to install the datahubproject.io project.
I receive this error:
datahub-datahub-1 | Quickstarting DataHub: version head
datahub-datahub-1 | Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
datahub-datahub-1 | No Datahub Neo4j volume found, starting with elasticsearch as graph service
datahub-datahub-1 | ERROR: Couldn't connect to Docker daemon at http+docker://localhost - is it running?
I do not know if what I am trying to do is even possible with Docker. Any suggestions to make this work? - thank you
Can docker-compose commands be executed from within a Docker container?
Yes. It is a command like any other.
Is it possible to run docker-compose commands from within a Docker container?
Yes.
Any suggestions to make this work?
The same options as with Docker on the host: either run a Docker daemon inside the container, or point the client at an existing daemon with DOCKER_HOST (or by mounting the host's Docker socket). Docker-in-Docker (DIND) is relevant here: https://hub.docker.com/_/docker
The answer seems to be to modify the docker-compose.yml script to contain two additional settings:
version: '3.9'
# This is the docker configuration script
services:
  datahub:
    # run the commands in the Dockerfile (found in this directory)
    build: .
    # we need tty set to true to keep the container running after the build
    tty: true
    # ---------- adding the following two settings seems to fix the issue of
    # the `CMD ["./datahub/docker/quickstart.sh"]` failing in the Dockerfile
    stdin_open: true
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
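The volume mount is the important part: bind-mounting the host's Docker socket means the docker (and docker compose) CLI inside the container talks to the host's daemon, so any containers it starts become siblings of this container rather than children. A quick sanity check from inside the running service (the container name datahub-datahub-1 is taken from the log output above and may differ on your machine):

```shell
# open a shell inside the running compose service
docker exec -it datahub-datahub-1 bash
# with /var/run/docker.sock mounted, this lists the HOST's containers
docker ps
```

If docker ps succeeds where the quickstart previously failed with "Cannot connect to the Docker daemon", the socket mount is working.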
I have a simple FastAPI project called toyrest that runs a trivial API. The code looks like this.
from fastapi import FastAPI

__version__ = "1.0.0"

app = FastAPI()

@app.get("/")
def root():
    return "hello"
I've built the usual Python package infrastructure around it. I can install the package. If I run uvicorn toyrest:app the server launches on port 8000 and everything works.
Now I'm trying to get this to run in a Docker image. I have the following Dockerfile.
# syntax=docker/dockerfile:1
FROM python:3
# Create a user.
RUN useradd --user-group --system --create-home --no-log-init user
USER user
ENV PATH=/home/user/.local/bin:$PATH
# Install the API.
WORKDIR /home/user
COPY --chown=user:user . ./toyrest
RUN python -m pip install --upgrade pip && \
pip install -r toyrest/requirements.txt
RUN pip install toyrest/ && \
rm -rf /home/user/toyrest
CMD ["uvicorn", "toyrest:app"]
I build the Docker image and run it, forwarding port 8000 to the running container.
docker run -p 8000:8000 toyrest:1.0.0
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
When I try to connect to http://127.0.0.1:8000/ I get no response.
Presumably I am doing the port forwarding incorrectly. I've tried various permutations of the port forwarding argument (e.g. -p 8000, -p 127.0.0.1:8000:8000) to no avail.
This is such a basic Docker command that I can't see how I'm getting it wrong, but somehow I am. What am I doing wrong?
Try changing the CMD in your Dockerfile to:
CMD ["uvicorn", "toyrest:app","--host", "0.0.0.0"]
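The reason: without --host, uvicorn binds to 127.0.0.1, which inside the container is the container's own loopback interface, so the port Docker publishes has nothing reachable listening on it. Binding to 0.0.0.0 makes the server listen on all interfaces, including the one Docker forwards to. Alternatively, you can leave the Dockerfile alone and override the command at run time (a sketch, assuming the same image tag as above):

```shell
# override the CMD so uvicorn listens on all interfaces
docker run -p 8000:8000 toyrest:1.0.0 uvicorn toyrest:app --host 0.0.0.0
```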
I have downloaded the latest Docker image for Airflow and am able to spin up the instance successfully. On my local system I have installed the minio server using Homebrew on my Mac.
I have created a DAG file to upload data to my minio bucket. I have done a sample upload using Python (with the minio Python libraries) and it works as expected. On the Airflow server, however, I am seeing the following error:
ModuleNotFoundError: No module named 'minio'
Can someone please help me add the minio pip library to the Docker container so that this error is resolved? I am new to containers and would really appreciate an easy guide or link I can refer to for help with this error.
One of the things I did try is to fiddle with the _PIP_ADDITIONAL_REQUIREMENTS attribute that comes with the Airflow Docker image, following this link, but to no avail.
I added the value minio, but it didn't work.
You can create a Dockerfile that extends the base Airflow image and installs your packages.
Create Dockerfile
FROM apache/airflow:2.3.0
USER root
RUN apt-get update
USER airflow
RUN pip install -U pip
RUN pip install --no-cache-dir minio # or you can copy requirements.txt and install from it
Build your docker
docker build -t my_docker .
Run the new Docker image (if you are using docker-compose, change the Airflow image to your image).
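If you are using the official docker-compose.yaml, the image swap looks roughly like this fragment (a sketch: x-airflow-common is the anchor the official compose file uses, but your file's layout may differ):

```yaml
# docker-compose.yaml (fragment)
x-airflow-common:
  &airflow-common
  # point Airflow at the image built above instead of the stock one
  image: my_docker
```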
I am trying to build a Flask docker image. I get the error:
zsh: command not found: flask
I followed this old tutorial to get things working: https://medium.com/@rokinmaharjan/running-a-flask-application-in-docker-80191791e143
In order to just learn how to start a Flask website with Docker, I have made everything simple. My Docker image should just serve a Hello World front page.
My example.py:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World'

if __name__ == '__main__':
    app.run()
My Dockerfile:
FROM ubuntu:16.04
RUN apt-get update -y
RUN apt-get install python -y
RUN apt-get install python-pip -y
RUN pip install flask
COPY example.py /home/example.py
ENTRYPOINT FLASK_APP=/home/example.py flask run --host=0.0.0.0
I run
sudo docker build . -t flask-app
to build the image.
When I run
docker run -p 8080:5000 flask-app
I get the error:
zsh: command not found: flask
What am I missing here?
Well, indeed you're following a really old tutorial.
I'm not going to go into detail on whether you should use Flask directly without a WSGI server, so I'm just going to focus on your question.
Concise answer: the scripts that pip installs are not on your PATH, so of course you cannot invoke them. flask is one of those scripts.
Extended answer: keep reading.
First of all, with that base image you're downloading an old version of both Python and pip; second, you don't need a fully fledged operating system to run a Flask application. There are already base images with Python, like python:3.9.10-slim-buster, with far fewer dependencies and possible vulnerabilities than an old Ubuntu 16 image.
FROM python:3.9.10-slim-buster
Second, you shouldn't rely on whatever happens to be in the base image: use a virtual environment (venv) for your application, where you can install Flask and any other dependencies, which should be listed in a requirements.txt. You should also choose the working directory in which to place your code (/usr/src/app is a common choice).
Indicating which port you are exposing by default is also a good thing to do (even though everyone knows that Flask exposes port 5000).
FROM python:3.9.10-slim-buster
WORKDIR /usr/src/app
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN python3 -m pip install flask
COPY example.py .
ENTRYPOINT FLASK_APP=example flask run --host=0.0.0.0
EXPOSE 5000
and as a result:
❯ docker run -p 8080:5000 flask-app
* Serving Flask app 'example' (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on all addresses.
WARNING: This is a development server. Do not use it in a production deployment.
* Running on http://172.17.0.2:5000/ (Press CTRL+C to quit)
I want to run a whole application from a single Docker container; the application has three components:
a neo4j database that must be accessible via a localhost port, say bolt port 7687
a flask application that must access the database, with its output available on a localhost port, say 5000
a web application page, index.html, that acts as the front end of the flask application and accesses it via port 5000
I need the first two components to run from the same container.
I got the flask application containerised but could not get both running.
I use a neo4j-community version and not a neo4j Docker image, so in order to run it we must execute neo4j start from the neo4j-community/bin directory.
The Dockerfile is stated below:
FROM python:3.7
VOLUME ./:app/
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential
COPY . /app/
WORKDIR /app
RUN cd neo4j-community-3.5.3/bin/
CMD ["neo4j start"]
RUN cd ../../
RUN cd flask_jan_24/
RUN pip install -r requirements.txt
CMD ["flask_jan_24/app_flask.py"]
EXPOSE 5000
The issue is that you have actually started Neo4j in a RUN statement, which is part of the build process, not of the container's runtime. (The RUN cd lines have no lasting effect either: each RUN starts in the WORKDIR, and only the last CMD in a Dockerfile takes effect.)
Instead, you should have a shell script that launches all the required services (like neo4j or anything else) in the background and then, at the end, launches the actual Flask application in the foreground.
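A sketch of such a script, with paths taken from the Dockerfile above (treat it as a starting point rather than a tested drop-in):

```shell
#!/bin/sh
# entrypoint.sh: start Neo4j in the background, then run Flask in the foreground

# start the bundled Neo4j community server (runs as a background daemon)
/app/neo4j-community-3.5.3/bin/neo4j start

# keep the container alive by running the Flask app as the foreground process
exec python /app/flask_jan_24/app_flask.py
```

In the Dockerfile, the two CMD instructions would then be replaced by copying this script in and making it the single command, e.g. COPY entrypoint.sh /app/, RUN chmod +x /app/entrypoint.sh, CMD ["/app/entrypoint.sh"].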