I am trying to run a simple flask app in debug mode using docker-compose. I have created my Dockerfile as follows:
FROM jazzdd/alpine-flask
EXPOSE 80
My docker-compose file looks like this:
version: '2'
networks:
  test_network:
    driver: bridge
services:
  db:
    networks:
      - test_network
    image: postgres:9.5.3
    env_file:
      - docker.env
    expose:
      - 5432
  app:
    networks:
      - test_network
    build: .
    env_file:
      - docker.env
    expose:
      - 80
    ports:
      - 80:80
    volumes:
      - ./app/:/app
    command: -d
My docker.env just has the password for the postgres database. I created a simple Python file as follows:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return "Hello, World"

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
Now to run the app, I am using docker-compose up -d --build command. I would assume that after the app starts on the server, when I make any change to app.py file, it will be reflected on the webpage without me having to restart the containers. I'm not seeing the expected behavior. I tried setting my local env variable FLASK_DEBUG=1 but not sure if that would help. Am I missing something?
I also referenced this page but didn't see anything useful.
A sample (simplified) run-through demonstrating file edits with no need for container restarts is outlined below for your reference.
app.py
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return "Hello, World"

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=80)
You will need to specify the port for the Flask development server so that it matches the exposed container port of 80.
Summary of the steps (Mac OS X):
starting with Empty directory
Create app.py
docker run
curl localhost (this will display Hello, World)
edit app.py
curl localhost (this should display the new edits)
In my case I had a conflict with gevent. Here's the workaround:
import os

if not os.environ.get('FLASK_DEBUG') == '1':
    from gevent import monkey
    monkey.patch_all()
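One detail worth remembering when writing guards like this: values read from `os.environ` are always strings, so the comparison must be against `'1'`, never the integer `1`. A quick illustration:

```python
import os

# Environment variables always come back as strings.
os.environ['FLASK_DEBUG'] = '1'

print(os.environ.get('FLASK_DEBUG') == 1)    # → False: '1' is a string, not an int
print(os.environ.get('FLASK_DEBUG') == '1')  # → True
```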
Related
I have created 2 services in Docker Swarm using images app1 and app2, where service app1 makes a call to app2. In Docker Swarm, service app1 can't connect to service app2 at app2:5000 (<service-name>:<port>); I get the error requests.exceptions.ConnectionError. By contrast, if I create normal containers (without Docker Swarm), app1 can easily call app2 at app2:5000 (<container-name>:<port>).
Inside docker swarm following commands have been used to create service
$ sudo docker service create --name app1 -p 5001:5000 app1:latest
$ sudo docker service create --name app2 -p 5002:5000 app2:latest
Outside docker swarm following commands are used to run containers
$ sudo docker-compose build
$ sudo docker-compose up
The code used to build images app1 and app2 is shown below.
app.py (App1)
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)

@app.route('/')
def func1():
    return jsonify('This is App #1')

@app.route('/call')
def func2():
    res = requests.get('http://app2:5000/call')
    res = res.json()
    return jsonify(res)

if __name__ == '__main__':
    app.run(host='0.0.0.0', debug=True)
app.py (App2)
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/')
def func1():
    return jsonify('This is App #2')

@app.route('/call')
def func2():
    return jsonify('Call to App2 is Successful')

if __name__ == '__main__':
    app.run(host='0.0.0.0', debug=True)
docker-compose.yml
version: '3.3'
services:
  app1:
    build: ./app1
    image: "app1:latest"
    container_name: app1
    ports:
      - "5001:5000"
    networks:
      - net1
  app2:
    build: ./app2
    image: "app2:latest"
    container_name: app2
    ports:
      - "5002:5000"
    networks:
      - net1
networks:
  net1:
    external: true
Putting the services in the same network fixed the issue.
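Once both services share a network, the service name resolves via Docker's internal DNS. Since swarm tasks can start in any order, a common guard is to retry on connection errors while app2 comes up. A minimal sketch (standard library only, where the original uses requests; the `opener` parameter is my own addition so the retry logic can be exercised without a running app2):

```python
import json
import time
import urllib.request

def call_app2(url='http://app2:5000/call', retries=5, delay=2,
              opener=urllib.request.urlopen):
    """Call app2 by its Docker service name, retrying while it starts.

    The hostname 'app2' only resolves from a container attached to the
    same Docker network; retries cover the window before app2 is up.
    """
    for attempt in range(retries):
        try:
            # urllib raises URLError (a subclass of OSError) on refusal.
            with opener(url, timeout=3) as resp:
                return json.loads(resp.read().decode())
        except OSError:
            time.sleep(delay)
    raise RuntimeError(f'app2 unreachable after {retries} attempts')
```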
I want to create a simple Kotlin app that uses PostgreSQL and Ktor; everything should be embedded in Docker containers.
So far I have managed to run PostgreSQL and PgAdmin separately, they connect to each other successfully, and I created a docker-compose.yml file for that which works fine for me. The problem starts when I want to add my Kotlin app to it.
Here is my docker-compose.yml file
version: "3.9"
networks:
  m8network:
    ipam:
      config:
        - subnet: 172.20.0.0/24
services:
  postgres:
    image: postgres
    environment:
      - "POSTGRES_USER=SomeFancyUser"
      - "POSTGRES_PASSWORD=pwd"
      - "POSTGRES_DB=MSC8"
    ports:
      - "5432:5432"
    volumes:
      # - postgres-data:/var/lib/postgresql/data
      - D:\docker\myApp\data:/var/lib/postgresql/data
    networks:
      m8network:
        ipv4_address: 172.20.0.6
  pgadmin:
    image: dpage/pgadmin4
    depends_on:
      - postgres
    environment:
      - "PGADMIN_DEFAULT_EMAIL=SomeFancyUser@domain.com"
      - "PGADMIN_DEFAULT_PASSWORD=pwd"
      # - "PGADMIN_ENABLE_TLS=False"
    ports:
      - "5001:80"
    networks:
      m8network:
  app:
    build: .
    ports:
      - "5000:8080"
    links:
      - postgres
    depends_on:
      - postgres
    restart: on-failure
    networks:
      m8network:
#volumes:
#  postgres-data:
#    driver: local
And here is my app's source code.
package com.something.m8
import com.squareup.sqldelight.db.SqlDriver
import com.squareup.sqldelight.sqlite.driver.asJdbcDriver
import com.zaxxer.hikari.HikariConfig
import com.zaxxer.hikari.HikariDataSource
import io.ktor.application.*
import io.ktor.html.*
import io.ktor.http.*
import io.ktor.response.*
import io.ktor.routing.*
import io.ktor.server.engine.*
import io.ktor.server.netty.*
import kotlinx.html.*
import java.io.PrintWriter
import java.util.*
fun HTML.index() {
    head {
        title("Hello from Ktor!")
    }
    body {
        div {
            +"Hello from Ktor"
        }
    }
}

fun main() {
    println("starting app")
    val props = Properties()
    props.setProperty("dataSourceClassName", "org.postgresql.ds.PGSimpleDataSource")
    props.setProperty("dataSource.user", "SomeFancyUser")
    props.setProperty("dataSource.password", "pwd")
    props.setProperty("dataSource.databaseName", "M8")
    props.setProperty("dataSource.portNumber", "5432")
    props.setProperty("dataSource.serverName", "172.20.0.6")
    props["dataSource.logWriter"] = PrintWriter(System.out)
    println("a")
    val config = HikariConfig(props)
    println("b")
    val ds = HikariDataSource(config)
    println("c")
    val driver: SqlDriver = ds.asJdbcDriver()
    println("d")
    MSC8.Schema.create(driver)
    println("e")
    embeddedServer(Netty, port = 8080,
        // host = "127.0.0.1"
    ) {
        routing {
            get("/") {
                call.respondHtml(HttpStatusCode.OK, HTML::index)
            }
            get("/m8/{code}") {
                val code = call.parameters["code"]
                println("code $code")
                call.respondRedirect("https://google.com")
            }
        }
    }.start(wait = true)
}
And the Dockerfile for the app:
#FROM openjdk:8
FROM gradle:6.7-jdk8
WORKDIR /var/www/html
RUN mkdir -p ./app/
WORKDIR /var/www/html/app
COPY build.gradle.kts .
COPY gradle.properties .
COPY settings.gradle.kts .
COPY Redirect/src ./Redirect/src
COPY Redirect/build.gradle.kts ./Redirect/build.gradle.kts
COPY gradlew .
COPY gradle ./gradle
EXPOSE 8080
USER root
WORKDIR /var/www/html
RUN pwd
RUN ls
RUN chown -R gradle ./app
USER gradle
WORKDIR /var/www/html/app
RUN ./gradlew run
With this setup I have two problems.
First problem:
When I run docker-compose.exe up --build, I receive the exception HikariPool$PoolInitializationException: Failed to initialize pool: The connection attempt failed. on the line val ds = HikariDataSource(config).
I set up a static IP for postgres (172.20.0.6), and when I use this IP in PgAdmin it works, so why can't my app connect to postgres?
Second problem:
I tried to test whether the app starts properly, and the basics work fine. So I commented out all source code related to the DB connection. Since then, when I run docker-compose.exe up --build, my app prints only the letter e from the line println("e"), and at that point everything seems frozen: postgres and PgAdmin don't start, the container seems unresponsive, and the app doesn't respond on port 5000 or 8080. Is there any way to run the app so that it won't block execution of the other parts?
First problem:
I started using the host name instead of the IP address, so now I use postgres instead of 172.20.0.6. The rest of it is connected to the second problem.
Second problem:
The issue was that I was starting the app during the build phase of the container.
Instead of RUN ./gradlew run I used:
RUN gradle build
ENTRYPOINT ["gradle","run"]
I also noticed that I don't have to use the Gradle wrapper while I'm using FROM gradle:6.7-jdk8.
Now everything is working fine.
As the question title says. Here is my situation.
My project folder is:
PROJ
|____docker-compose.yml
|____servDir/
|____Dockerfile
|____server.py
In the docker-compose.yml:
service1:
  image: img1:v0.1
  container_name: cont1
  build: ./servDir/.
  ports:
    - "5001:5002"
server.py:
from flask import Flask, request
import json

app = Flask(__name__)
PORT = 5001

@app.route("/greetings/")
def hello():
    return "Hello World!"

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=int(PORT), debug=True)
When I run docker-compose up and go to http://localhost:5001/greetings/ I receive a ERR_CONNECTION_REFUSED.
Instead, if I set ports as 5001:5001, I'm able to receive the page content.
Why? Should I always set them equal in order to reach the container from the browser?
I thought that the ports configuration was HOST:CONTAINER, and that the browser would be a HOST service.
UPDATE:
Dockerfile:
FROM python:3
WORKDIR /home/python/app/
COPY . /home/python/app/
RUN chmod a+x *.py
CMD ["python", "./server.py"]
This is right: HOST:CONTAINER.
Try this to expose it to your localhost and LAN:
service1:
  image: img1:v0.1
  container_name: cont1
  build: ./servDir/.
  ports:
    - "0.0.0.0:5001:5002"
or this to expose it only to your localhost:
service1:
  image: img1:v0.1
  container_name: cont1
  build: ./servDir/.
  ports:
    - "127.0.0.1:5001:5002"
Also, you wrote:
When I run docker-compose up and go to http://localhost:5001/greetings/ I receive a ERR_CONNECTION_REFUSED.
Looking at your docker-compose, the mapping "5001:5002" forwards host port 5001 to container port 5002, so the Flask server must listen on port 5002 inside the container.
Change the server.py config:
from flask import Flask, request
import json

app = Flask(__name__)
PORT = 5002

@app.route("/greetings/")
def hello():
    return "Hello World!"

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=int(PORT), debug=True)
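The ERR_CONNECTION_REFUSED itself is easy to reproduce outside Docker: it is simply what happens when nothing is listening on the port you dial, which is exactly the situation when the mapping targets container port 5002 but Flask binds 5001. A minimal standard-library sketch:

```python
import socket

# Reserve a free port, then close the listener so nothing is bound there --
# the same situation as container port 5002 when Flask listens on 5001.
probe = socket.socket()
probe.bind(('127.0.0.1', 0))   # port 0: the OS picks a free port
free_port = probe.getsockname()[1]
probe.close()

try:
    socket.create_connection(('127.0.0.1', free_port), timeout=2)
    refused = False
except ConnectionRefusedError:
    refused = True

print(refused)
```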
I use Docker, and docker-compose to tie the containers together.
In my Python Flask code, I refer to an environment variable like this:
import os
from app import db, create_app

app = create_app(os.getenv('FLASK_CONFIGURATION') or 'development')

if __name__ == '__main__':
    print(os.getenv('FLASK_CONFIGURATION'))
    app.run(host='0.0.0.0', debug=True)
And docker-compose.yml here.
version: '3.7'
services:
  nginx:
    build:
      context: .
      dockerfile: docker/nginx/dockerfile
    container_name: nginx
    hostname: nginx-prod
    ports:
      - '80:80'
    networks:
      - backend
    links:
      - web_project
    depends_on:
      - web_project
    environment:
      - FLASK_CONFIGURATION=production
  mongodb:
    build:
      context: .
      dockerfile: docker/mongodb/dockerfile
    container_name: mongodb
    hostname: mongodb-prod
    ports:
      - '27017:27017'
    networks:
      - backend
  web_project:
    build:
      context: .
      dockerfile: docker/web-prod/dockerfile
    container_name: web_project
    hostname: web_project_prod
    ports:
      - '5000:5000'
    networks:
      - backend
    tty: true
    depends_on:
      - mongodb
    links:
      - mongodb
    environment:
      - FLASK_CONFIGURATION=production
networks:
  backend:
    driver: 'bridge'
I set FLASK_CONFIGURATION=production via the environment key.
But when I run it, FLASK_CONFIGURATION=production doesn't seem to take effect.
I also tried adding ENV FLASK_CONFIGURATION production to each dockerfile (that doesn't work either).
The strange thing is, when I enter my container via bash (docker exec -it bash) and check the environment variables with export, the variable is set perfectly.
Is there anything wrong in my Docker settings?
Thanks.
[SOLVED]
It was caused by supervisor.
When using supervisor, its shell is isolated from the original one.
So we have to define our environment variables in supervisor.conf.
Your Flask code looks OK, and as you said, the ENV variable exists in bash.
My advice is to find a way to put this variable into a .env file in your project.
I'll explain why, based on a similar issue I had with cron:
cron runs in its "own world" because the system runs and executes it, so it doesn't share the ENV variables that the bash of the main container process is holding.
So I assume (please give feedback if not) that Flask also runs in a similar "own world" and doesn't have access to those ENV variables that Docker sets.
Therefore, I created a bash script that reads all the ENV variables and writes them to the project's .env file; this script runs after the container is created.
This way, no matter from where and how you run the code/script, those ENV variables will always exist.
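The same dump-to-.env idea can be sketched in Python (the answer used bash). The helper name and the variable list here are illustrative, not part of the original setup:

```python
import os

def dump_env_to_file(names, path='.env'):
    """Write the selected environment variables to a .env file.

    Processes that don't inherit the container's main environment
    (supervisor- or cron-started jobs) can then read them from the file.
    """
    with open(path, 'w') as f:
        for name in names:
            value = os.environ.get(name)
            if value is not None:  # skip variables that aren't set
                f.write(f'{name}={value}\n')

# Run once after the container starts, e.g.:
# dump_env_to_file(['FLASK_CONFIGURATION', 'FLASK_DEBUG'])
```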
I already have a MySQL container named "mysqlDemoStorage" running, exposing port 3306 to 0.0.0.0:3306. I also have a Flask app which provides a login page and a table-displaying page. The Flask app works quite well on the host. The login page connects to the "user" table in the MySQL container, and the table-displaying page connects to another table holding all the data to display.
The docker-compose file I used to create the mysql container is as follows:
version: '3'
services:
  mysql:
    container_name: mysqlDemoStorage
    environment:
      MYSQL_ROOT_PASSWORD: "demo"
    command:
      --character-set-server=utf8
    ports:
      - 3306:3306
    image: "docker.io/mysql:latest"
    restart: always
Now I want to dockerize the Flask app so that I can still view it from the host. The MySQL container details are as follows:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c48955b3589e mysql:latest "docker-entrypoint.s…" 13 days ago Up 49 minutes 0.0.0.0:3306->3306/tcp, 33060/tcp mysqlDemoStorage
The dockerfile of the flask app I wrote is as follows:
FROM python:latest
WORKDIR /storage_flask
ADD . /storage_flask
RUN pip install -r requirements.txt
EXPOSE 5000
ENTRYPOINT ["python","run.py"]
The Flask image can be successfully built, but when I run the image, the page fails to load. One point I think causes the problem is the __init__.py file that initiates the Flask app, which is as follows:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_bcrypt import Bcrypt
from flask_login import LoginManager
app = Flask(__name__)
app.config['SECRET_KEY'] = 'aafa4f8047ce31126011638be8530da6'
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+pymysql://root:demo@localhost:3306/storage'
db = SQLAlchemy(app)
bcrypt = Bcrypt(app)
login_manager = LoginManager(app)
login_manager.login_view = "login"
login_manager.login_message_category = 'info'
from storage_flask import routes
I was thinking of passing the IP of the MySQL container to the Flask container as the config string for the DB connection, but I'm not sure how to do it.
Could someone help solve the problem? Thank you.
Change this line:
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+pymysql://root:demo@localhost:3306/storage'
to:
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+pymysql://root:demo@mysql:3306/storage'
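A further step you might take (a sketch; the DB_* variable names and the helper are my own, not from the question): build the URI from environment variables, so docker-compose can inject the host name and password instead of hard-coding them in __init__.py.

```python
import os

def database_uri():
    """Assemble the SQLAlchemy URI from the environment.

    Defaults match the values used above. DB_HOST should be the compose
    service name ('mysql'), not 'localhost' -- inside the Flask container,
    'localhost' points at the Flask container itself.
    """
    user = os.getenv('DB_USER', 'root')
    password = os.getenv('DB_PASSWORD', 'demo')
    host = os.getenv('DB_HOST', 'mysql')
    port = os.getenv('DB_PORT', '3306')
    name = os.getenv('DB_NAME', 'storage')
    return f'mysql+pymysql://{user}:{password}@{host}:{port}/{name}'

# app.config['SQLALCHEMY_DATABASE_URI'] = database_uri()
```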
You also need to make sure that both containers are connected to the same network; for that, update your docker-compose files to be something like the files below.
version: '3.7'
networks:
  my_network_name:
    name: my_network_name
    external: false
services:
  mysql:
    container_name: mysqlDemoStorage
    environment:
      MYSQL_ROOT_PASSWORD: "demo"
    command:
      --character-set-server=utf8
    ports:
      - 3306:3306
    image: "docker.io/mysql:latest"
    restart: always
    networks:
      - my_network_name
Second file:
version: '3.7'
networks:
  my_network_name:
    name: my_network_name
    external: true
services:
  python_app:
    container_name: pythonDemoStorage
    ports:
      - 5000:5000
    image: "Myimage"
    restart: always
    networks:
      - my_network_name