I'm developing a Nuxt.js universal SSR app. The deployment config is as follows:
Nuxt.js universal app running in a Docker container (Nuxt listening on port 3010)
Caddy web server on Linux
Laravel backend running in a separate Docker container
The server uses a reverse proxy to expose the backend on its own domain, and the /etc/hosts file maps that backend domain to 127.0.0.1
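For context, with Caddy v2 the reverse-proxy side of such a setup is typically declared along these lines (a hypothetical sketch, not the asker's actual Caddyfile; the backend's internal port 8000 is an assumption):

api.domain.com {
    # proxy API traffic to the Laravel container (port 8000 is an assumed example)
    reverse_proxy 127.0.0.1:8000
}

domain.com {
    # proxy site traffic to the Nuxt container listening on 3010
    reverse_proxy 127.0.0.1:3010
}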
The Problem:
Server-side requests change the axios base URL to http://localhost:3010 (instead of https://api.domain.com/) and result in 404 Not Found errors, timeouts, or even a 127.0.0.1 ECONNREFUSED Nuxt error. The same requests work fine on the client side; this only happens on the initial load of the page (the server-side rendering pass).
Why is the URL changing, and how do I prevent it from changing the base URL?
The Dockerfile includes:
RUN apt-get update
ENV APP_ROOT /src
ENV DOCKER_API https://api.domain.com
WORKDIR ${APP_ROOT}
COPY . ${APP_ROOT}
RUN npm ci
RUN npm run build
docker-compose file:
version: "3"
services:
nuxt:
build: ./app/
ports:
- "3010:3010"
environment:
ALI_URL: https://api.domain.com/
restart: always
command:
"npm run start"
nuxt server config:
server: {
  port: 3010
}
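For what it's worth, if the app uses the @nuxtjs/axios module, its default base URL when none is configured is http://[HOST]:[PORT] of the Node server itself, which would produce exactly the http://localhost:3010 requests described above. A minimal sketch of pinning the base URL for both render contexts (illustrative, not the asker's actual config; the env wiring reuses the DOCKER_API variable from the Dockerfile):

// nuxt.config.js (illustrative sketch)
export default {
  server: {
    port: 3010
  },
  axios: {
    // used during server-side rendering inside the container
    baseURL: process.env.DOCKER_API || 'https://api.domain.com/',
    // used by the browser after hydration
    browserBaseURL: 'https://api.domain.com/'
  }
}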
Related
I have a project that, when run locally, outputs server started at /127.0.0.1:5000, and I can access it locally on that port just fine.
I am trying to run it through docker. I have the following:
Dockerfile:
FROM mozilla/sbt
ADD build.sbt /root/build/
RUN cd /root/build && sbt compile
EXPOSE 5000
WORKDIR /root/build
CMD sbt run
and the following docker-compose.yml:
version: '3.1'
services:
  sbt:
    build:
      context: ./
      dockerfile: ./Dockerfile
    image: sbt
    ports:
      - "8080:5000"
    volumes:
      - "./:/root/build"
I try running it through docker-compose up and I can see the logs about the server starting, but can't access the service through the specified port, namely 8080. Am I missing something?
FYI, the above setup is inspired by this post; I have changed the base image and also got rid of the external-network bit that I did not understand.
If the app starts on port 5000 by default but you need to expose it on a different port with Docker, you should use:
ports:
  - "8080:5000"
Internally your app keeps using port 5000, but Docker binds that port to another one on the host: 8080 in this example. Note that your log line server started at /127.0.0.1:5000 suggests the server binds only to the container's loopback interface; for the published port to be reachable from the host, the server needs to listen on 0.0.0.0.
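For comparison, the same mapping with plain docker run (the image name sbt comes from the compose file above):

docker run -p 8080:5000 sbt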
Description
I have a React application that makes the following request:
await axios.get(`${process.env.REACT_APP_MASTER_HOST}/api/agl-history`, { headers: { Authorization: `Bearer ${token}` }, data: { label, date } });
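(As an aside: browsers do not send a request body with GET, so the data option here is effectively ignored; values like label and date would normally travel as query parameters instead. A sketch, keeping the asker's variable names:)

await axios.get(`${process.env.REACT_APP_MASTER_HOST}/api/agl-history`, {
  headers: { Authorization: `Bearer ${token}` },
  // send label and date as query parameters rather than a GET body
  params: { label, date },
});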
In some cases it works but when the backend is started with docker-compose up -d, it gives me the following in Chrome:
Access to XMLHttpRequest at 'http://localhost:8080/api/agl-history' from origin 'http://localhost:3000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
When it works
When I manually build the image with sudo docker build . -t test and run it with sudo docker run -p 8080:8080 -t test, it works perfectly
When I run the flask server with flask run --host=0.0.0.0
When it does not work
docker-compose up -d
# docker-compose.yml, relevant service is master
version: "3.9"
services:
  agl-history:
    depends_on:
      - mariadb
    build: ./agl-history
    restart: on-failure
    networks:
      - main
  master:
    depends_on:
      - mariadb
    build: ./master
    restart: on-failure
    ports:
      - 8080:8080
    networks:
      - main
  mariadb:
    image: "mariadb:10.5.10"
    restart: on-failure
    environment:
      MYSQL_ROOT_PASSWORD: ${MARIADB_PASSW}
    ports:
      - 3306:3306
    volumes:
      - /var/lib/docker/volumes/add3-data:/var/lib/mysql
    networks:
      - main
networks:
  main:
    driver: bridge
# Dockerfile
FROM python:3.9.5-buster
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
ENV FLASK_RUN_PORT 8080
ENV FLASK_APP source/server_config.py
CMD [ "python3", "-m" , "flask", "run", "--host=0.0.0.0"]
# Flask endpoint
from datetime import date
from flask import Flask
from flask import jsonify
from datetime import datetime
from flask_cors import CORS

# app reference
app = Flask(__name__)
cors = CORS(app)

# This method executes before any API request
@app.before_request
def before_request():
    print('before API request')

@app.route('/api/agl-history', methods=['GET'])
def get_agl_history():
    print('during')
    response = jsonify([
        {
            'id': 1,
            'customerId': 777,
            'campaignName': 'Test campaing',
            'adGroupName': 'Test ad group',
            'execDate': datetime.now(),
            'label': '92'
        }
    ])
    return response

# This method executes after every API request.
@app.after_request
def after_request(response):
    return response
Notes
The React frontend and the server backend are eventually going to run on different hosts.
What am I doing wrong? Any help is greatly appreciated! Thanks!
I think the issue comes from your Flask backend and its CORS headers. I'd suggest using --host=127.0.0.1 in your flask command. Also, check this answer; it provides a lot of information.
It could be that the browser is sending a preflight (OPTIONS) request because you have an Authorization header; I had the same problem before with the React dev server and Express.
I solved it with an OPTIONS route on the same path that answers with:
Access-Control-Request-Headers: Authorization
But I may be wrong here, because CORS is a confusing thing :)
Browsers issue a preflight request (an OPTIONS call) when the origin (the frontend, localhost:3000) differs from the API origin (localhost:8080).
For development, you can allow all origins on the Flask server; that is the default configuration, and it seems to be the case in your working scenarios.
But when running via docker-compose, the hostname of the Flask server would be master. From your failing scenario, it seems Flask only accepts requests for that hostname.
So when deploying to a production environment, I would suggest adding a CORS origin configuration with the frontend domain as the allowed origin.
For your docker-compose scenario, you could configure the following on the Flask server:
CORS_ORIGINS: "localhost:3000"
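With the flask-cors package, the same restriction can be expressed through the constructor (a minimal sketch; the exact origin value is illustrative):

from flask import Flask
from flask_cors import CORS

app = Flask(__name__)
# allow cross-origin requests only from the React dev server
cors = CORS(app, origins=["http://localhost:3000"])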
I did not realize that docker-compose does not rebuild images by default, so I was using an older build that did not have CORS enabled.
The only change that was needed to fix the issue was:
cors = CORS(app, origins="*")
when starting the Flask server. Then, start the services with:
sudo docker-compose up -d --build master
I have developed a small project using Flask/TensorFlow. It runs under the Gunicorn app server.
I also have to include nginx in the project to serve static files. Without Docker the app runs fine; all parts (Gunicorn, nginx, Flask) cooperate as intended. It's now time to move this project to an online server, and I need to do it via Docker.
Nginx and the Gunicorn/Flask app communicate via a Unix socket. In my localhost environment I used a socket inside the app root folder, myapp/app.sock, and it all worked great.
The problem now is that I can't quite understand how to tell nginx inside Docker to use the same socket file, and how to tell Gunicorn to listen on it. I get the following error:
upstream: http:// unix:/var/run/app.sock failed (No such file or directory) while connecting to upstream
I tried different paths to the socket file, but no luck: same error.
docker-compose file:
version: '3'
services:
  nginx:
    image: nginx:latest
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/remote-app:/etc/nginx/sites-enabled/remote-app
      - /etc/nginx/proxy_params:/etc/nginx/proxy_params
    ports:
      - 8000:8000
    build: .
    command: gunicorn --bind unix:/var/run/app.sock wsgi:app --reload --workers 1 --timeout 60
    environment:
      - FLASK_APP=prediction_service.py
      - FLASK_DEBUG=1
      - PYTHONUNBUFFERED=True
    restart: on-failure
Main Dockerfile (for the main app; it builds fine, everything works):
FROM python:3.8-slim
RUN pip install flask gunicorn flask_wtf boto3 tqdm
RUN pip install numpy==1.18.5
RUN pip install tensorflow==2.2.0 onnxruntime==1.4.0
COPY *.ipynb /temp/
COPY *.hdf5 /temp/
COPY *.onnx /temp/
COPY *.json /temp/
COPY *.py /temp/
WORKDIR /temp
nginx.conf is 99% the same as the default, with only the upload size limit increased to 8M.
proxy_params is just a preset of configuration parameters for nginx proxying,
and remote-app is the config for my app (a simple one):
server {
    listen 8000;
    server_name localhost;

    location / {
        include proxy_params;
        proxy_pass http://unix:/var/run/app.sock;  # tried /temp/app.sock here, same issue
    }
}
So if I open localhost (even without port 8000) I get an answer from nginx. If I try to open localhost:8000, I get the socket error pasted above in bold.
I would avoid using sockets for this, as there is IP communication between containers/services, and you really should have a separate service for the app server.
Something like:
version: '3'
services:
  nginx:
    image: nginx:latest
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/remote-app:/etc/nginx/sites-enabled/remote-app
      - /etc/nginx/proxy_params:/etc/nginx/proxy_params
    ports:
      - 80:80
      - 143:143
  app_server:
    build: .
    command: gunicorn --bind '0.0.0.0:5000' wsgi:app --reload --workers 1 --timeout 60
    environment:
      - FLASK_APP=prediction_service.py
      - FLASK_DEBUG=1
      - PYTHONUNBUFFERED=True
    restart: on-failure
Notice that instead of binding Gunicorn to the socket, it is bound to all IP interfaces of the app_server container on port 5000.
With the separate app_server service alongside your current nginx service, you can simply treat the service names like DNS aliases in each container. So in the nginx config:
proxy_pass http://app_server:5000/
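Put together, the remote-app config from the question would then look roughly like this (a sketch under the same assumptions):

server {
    listen 80;
    server_name localhost;

    location / {
        include proxy_params;
        # "app_server" resolves through Docker's embedded DNS to the gunicorn container
        proxy_pass http://app_server:5000;
    }
}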
"So if I open localhost (even without port 8000) I can get an nginx answer."
That sounds like you mean connecting to localhost on port 80, which could be an nginx server running on the host machine. This is also suggested by this line in your compose file: /etc/nginx/proxy_params:/etc/nginx/proxy_params.
That's loading the file from a local installation of nginx on the host. You should be aware of this: having that server running could also confuse you when debugging, and launching this compose file elsewhere would mean /etc/nginx/proxy_params has to be present on that host.
You should probably store this in the project directory, like the other files which are mounted, and mount it like:
- ./nginx/proxy_params:/etc/nginx/proxy_params
I am running an ASP.NET Core 3.0 multi-container application on an Azure Linux App Service for Containers using Docker Compose. Containers are built and pushed to an Azure Container Registry via CI pipelines and CD pipelines deploy to the app service using a "docker-compose.[environment].yml".
I am trying to use nginx and jwilder's docker-gen as a reverse proxy (separate containers to avoid having the docker socket bound to a publicly exposed container service), and use virtual host names to access the various services over the net.
I seem to be going round in circles between the following 3 errors:
1. The web app displaying the 'Welcome to Nginx' page with logs repeating:
2020-02-26T15:18:44.021444322Z 2020/02/26 15:18:44 Watching docker events
2020-02-26T15:18:44.022275908Z 2020/02/26 15:18:44 Error retrieving docker server info: Get http://unix.sock/info: dial unix /tmp/docker.sock: connect: no such file or directory
2020-02-26T15:18:44.022669201Z 2020/02/26 15:18:44 Error listing containers: Get http://unix.sock/containers/json?all=1: dial unix /tmp/docker.sock: connect: no such file or directory
2020-02-26T15:18:44.405594944Z 2020/02/26 15:18:44 Docker daemon connection interrupted
2. 502 Bad Gateway
502 - Web server received an invalid response while acting as a gateway or proxy server.
There is a problem with the page you are looking for, and it cannot be displayed. When the Web server (while acting as a gateway or proxy) contacted the upstream content server, it received an invalid response from the content server.
3. The app service's 'Application Error :(' page.
Here is my docker-compose.development.yml:
version: "3.7"
services:
nginx:
image: nginx
environment:
DEFAULT_HOST: ***.azurewebsites.net
ports:
- "80:80"
volumes:
- "${WEBAPP_STORAGE_HOME}/tmp/nginx:/etc/nginx/conf.d"
dockergen:
image: jwilder/docker-gen
command: -notify-sighup nginx -watch /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
volumes_from:
- nginx
volumes:
- "/var/run/docker.sock:/tmp/docker.sock:ro"
- "${WEBAPP_STORAGE_HOME}./nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl"
webapp:
image: ***.azurecr.io/digitalcore:dev
restart: always
environment:
VIRTUAL_HOST: ***.azurewebsites.net
depends_on:
- nginx
"webapp" service dockerfile (exposes ports 80, 443):
FROM mcr.microsoft.com/dotnet/core/sdk:3.0 AS build
WORKDIR /source
COPY . .
RUN dotnet restore
RUN dotnet publish --output /app/ --configuration Release --no-restore
FROM mcr.microsoft.com/dotnet/core/aspnet:3.0 AS runtime
WORKDIR /app
COPY --from=build /app .
EXPOSE 80 443
ENTRYPOINT ["dotnet", "DigitalCore.WebApp.dll"]
Error #1 seems to be the closest I have got to seeing it working, with the issues being centred around configuring the volumes correctly for an Azure Linux App Service (the ${WEBAPP_STORAGE_HOME} made an appearance after much digging).
Networks and container names only seemed to make things worse in my efforts so far, so they were removed to keep things to the bare essentials and get it working. The "webapp" service is where my focus is at the moment.
Can anybody spot where I'm going wrong?! I will be eternally grateful for any words of wisdom...
UPDATE:
Some progress, it would seem: after removing the "ro" permission from the container volume, docker.sock is now being found, but docker-gen is unable to connect to the endpoint.
2020-02-26T22:21:44.186316399Z 2020/02/26 22:21:44 Watching docker events
2020-02-26T22:21:44.187487428Z 2020/02/26 22:21:44 Error retrieving docker server info: cannot connect to Docker endpoint
2020-02-26T22:21:44.188270247Z 2020/02/26 22:21:44 Error listing containers: cannot connect to Docker endpoint
2020-02-26T22:21:44.500471940Z 2020/02/26 22:21:44 Docker daemon connection interrupted
UPDATE 2
I have now built the containers and pushed them to the Azure Container Registry, so I am not pulling from different locations. This is my current docker-compose:
version: "3.7"
services:
nginx:
image: ***.azurecr.io/nginx:dev
ports:
- "80:80"
volumes:
- "${WEBAPP_STORAGE_HOME}/etc/nginx/conf.d"
dockergen:
image: ***.azurecr.io/dockergen:dev
privileged: true
command: -notify-sighup nginx -watch -only-exposed /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
volumes_from:
- nginx
volumes:
- "${WEBAPP_STORAGE_HOME}/var/run/docker.sock:/tmp/docker.sock"
- "${WEBAPP_STORAGE_HOME}./nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl"
webapp:
image: ***.azurecr.io/digitalcore:dev
restart: always
environment:
VIRTUAL_HOST: ***.azurewebsites.net
depends_on:
- nginx
I'm trying to get a basic Flask backend and a frontend framework in separate containers communicating with each other via docker-compose.
The caveat here is that I'm using Windows 10 Home, so I need to use Docker Toolbox, and I've had to add a few networking rules for port forwarding. However, I can't seem to access http://localhost:5000 for my backend; I get ECONNREFUSED. I'm just trying to get basic communication between the frontend and backend to simulate frontend/API communication.
Given my port-forwarding rules, I can access http://localhost:8080 and I can view the static portions of the app. However, I can't access the backend, nor can I tell if they are communicating. I'm new to both Flask and Docker, so please forgive my ignorance. Coming from a .NET background, Windows seems to really make this a pain. Thank you for your help.
Here is my project structure:
Here is my application.py:
# Start with a basic flask app webpage.
from flask_socketio import SocketIO, emit
from flask import Flask, render_template, url_for, copy_current_request_context

app = Flask(__name__)
app.config['SECRET_KEY'] = 'secret!'
app.config['DEBUG'] = True

# turn the flask app into a socketio app
socketio = SocketIO(app)

@app.route('/')
def index():
    # only by sending this page first will the client be connected to the socketio instance
    return render_template('index.html')

if __name__ == '__main__':
    socketio.run(app)
Dockerfile for the backend:
FROM python:2.7
ADD ./requirements.txt /backend/requirements.txt
WORKDIR /backend
RUN pip install -r requirements.txt
ADD . /backend
ENTRYPOINT ["python"]
CMD ["/backend/application.py"]
EXPOSE 5000
Dockerfile for frontend:
FROM node:latest
COPY . /src
WORKDIR /src
RUN npm install --loglevel warn
RUN npm run production
EXPOSE 8080
CMD [ "node", "server.js" ]
And my docker-compose.yml:
version: '2'
services:
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    restart: always
    ports:
      - "5000:5000"
    env_file:
      - .env
  frontend:
    build: ./frontend
    ports:
      - "8080:8080"
Your issue is with the Flask configuration. As long as you get ECONNREFUSED while trying to connect, it means there is no service running on port 5000 at the IP you are trying to use. That's because socketio.run(app) defaults to 127.0.0.1, which is the localhost inside the container itself. To make your application accessible from outside the container (or through the container IP in general), you have to pass another parameter, host, with the value 0.0.0.0, so that it listens on every interface inside the container.
socketio.run(app, host='0.0.0.0')
Quoted from the documentation:
run(app, host=None, port=None, **kwargs)
Run the SocketIO web server.
Parameters:
app – The Flask application instance.
host – The hostname or IP address for the server to listen on. Defaults to 127.0.0.1.
port – The port number for the server to listen on. Defaults to 5000.