Dockerfile Healthcheck with environment variable - docker

I have a REST service written in C++ which has an endpoint at localhost:somePort/health. The port is configured in a YAML-based config file.
I've created a script which extracts the port from the YAML file, but my problem is assigning the result to the HEALTHCHECK command in my Dockerfile.
So let's say I have a script /app/get_port.sh that echoes the actual port used on startup. How do I pass that port to the HEALTHCHECK command? For example, to make this work:
HEALTHCHECK --interval=10s --timeout=4s CMD curl -f "http://localhost:$MY_PORT/health" || exit

The workaround for me was to use a subshell within the curl command:
RUN echo "Healthcheck port: $(/app/get_port.sh)\n"
HEALTHCHECK --interval=10s --timeout=4s CMD curl -f "http://localhost:$(/app/get_port.sh)/health" || exit 1
Though I wish Docker had more advanced options for handling ENV.
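A variant of the same idea that keeps the Dockerfile short is moving the lookup into its own script; a minimal sketch, reusing /app/get_port.sh from the question (the /app/healthcheck.sh path is hypothetical):
#!/bin/sh
# /app/healthcheck.sh (hypothetical): resolve the port at check time, then probe
PORT="$(/app/get_port.sh)" || exit 1
curl -f "http://localhost:${PORT}/health" || exit 1
The Dockerfile then only needs: HEALTHCHECK --interval=10s --timeout=4s CMD /app/healthcheck.sh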

Mine was a Node app that I needed to pass a base URL path into from the docker-compose.yaml file, so I did this with a separate healthcheck.js file.
Dockerfile
HEALTHCHECK --interval=10s --timeout=5s --retries=3 --start-period=15s CMD node healthcheck.js > /dev/null || exit 1
healthcheck.js
var http = require("http");

const BASEPATH = process.env.BASEPATH || "/";

var options = {
  host: "127.0.0.1",
  port: 3000,
  path: BASEPATH,
  timeout: 2000
};

var request = http.request(options, (res) => {
  console.log(`STATUS: ${res.statusCode}`);
  // Treat success and redirects as healthy
  if (res.statusCode == 200 || res.statusCode == 301 || res.statusCode == 302) {
    process.exit(0);
  } else {
    process.exit(1);
  }
});

request.on('error', function (err) {
  console.log('ERROR', err);
  process.exit(1);
});

request.end();
docker-compose.yaml
posterr:
  image: petersem/posterr:dev
  container_name: posterr
  restart: unless-stopped
  ports:
    - "9876:3000"
  volumes:
    - /c/docker/posterr/randomthemes:/randomthemes
    - /c/docker/posterr/config:/usr/src/app/config
  environment:
    - TZ=Australia/Brisbane
    - BASEPATH=/mypath
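To see what the healthcheck reports, the container's health state can be inspected; this uses the posterr container name from the compose file above:
docker inspect --format '{{json .State.Health}}' posterr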

Just use ${PORT} - no workarounds needed
To reference an environment variable in a Dockerfile, just wrap it in ${…} and the runtime value is used, e.g. for the healthcheck.
You can also set a default like this: ${PORT:-80}. I don't remember where I saw/read this, but it works :)
So my Dockerfile looks like this:
FROM node:lts-alpine
RUN apk add --update curl
ENV NODE_ENV production
ENV HOST '0.0.0.0'
EXPOSE ${PORT:-80}
# Other steps
HEALTHCHECK --interval=5m --timeout=3s \
  CMD curl -f http://localhost:${PORT}/health || exit 1
CMD ["node", "server.js"]

I found that you can do this inline with a little bit of parsing.
redis:
  image: 'docker.io/library/redis'
  command: 'redis-server --requirepass "password"'
  environment:
    REDIS_HOST_PASSWORD: 'password'
  healthcheck:
    test: 'redis-cli -a "$$(env | grep REDIS_HOST_PASSWORD | cut -d = -f2-)" ping'
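The doubled $$ escapes the dollar sign so Compose passes a literal $ through to the container instead of interpolating at compose time; string-form healthchecks are run via the container's shell, which then does the expansion. If that is the only reason for the parsing, a simpler variant ought to work as well (an untested sketch):
healthcheck:
  test: 'redis-cli -a "$$REDIS_HOST_PASSWORD" ping'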

Related

SyntaxHighlight always enabled with latest swaggerapi/swagger-ui

I am running swaggerapi/swagger-ui locally using Docker, and syntax highlighting is not disabled as expected by the syntaxHighlight=false query parameter. I tried various approaches to disable it.
After each approach, I accessed http://localhost:8080/?syntaxHighlight=false and expected syntax highlighting to be disabled for all request and response bodies, but it remains enabled by default everywhere.
Approach 1:
use docker-compose.yaml with QUERY_CONFIG_ENABLED: "true"
spin up the container using the command docker compose up.
docker-compose.yaml
version: '3'
services:
  swagger-ui:
    ports:
      - 8080:8080
    image: swaggerapi/swagger-ui:v4.15.5
    restart: always
    environment:
      QUERY_CONFIG_ENABLED: "true"
      URLS: "[
        { url: 'https://petstore.swagger.io/v2/swagger.json', name: 'Petstore' }
      ]"
Approach 2:
use a Dockerfile
build my own image using the command docker build -t .
run the image with docker run -p 8080
Dockerfile
FROM swaggerapi/swagger-ui
ENV QUERY_CONFIG_ENABLED="true"
ENV URLS="[ { url: 'https://petstore.swagger.io/v2/swagger.json', name: 'Petstore' } ]"
Approach 3: (same as approach #2 with swagger-config.json)
Followed the steps mentioned in Approach 2, but the query configuration itself is not enabled.
Dockerfile
FROM swaggerapi/swagger-ui
ENV CONFIG_URL="file:///D:/Argon/Repo/Skyline/SwaggerAPI/swagger-config.json"
ENV URLS="[ { url: 'https://petstore.swagger.io/v2/swagger.json', name: 'Petstore' } ]"
swagger-config.json
{
  "queryConfigEnabled": true,
  "syntaxHighlight": false,
  "syntaxHighlight.activate": false,
  "displayOperationId": true
}
Please let me know whether I am missing something here.

Docker reason: connect ECONNREFUSED 0.0.0.0:80

I've got a Nuxt.js app running in a Docker container. I'm trying to use localhost in the Nuxt application to connect to an endpoint, but unfortunately I get the following error:
ERROR request to http://0.0.0.0/api/articles failed, reason: connect ECONNREFUSED 0.0.0.0:80
I'm trying to do a fetch in the Nuxt config to generate a sitemap from a getcockpit CMS.
Here is a snippet of the fetch:
sitemap: {
  hostname: `${process.env.BASE_URL}`,
  gzip: true,
  routes: async () => {
    const articles = await fetch(`http://0.0.0.0/api/articles`, {
      method: 'post',
      body: JSON.stringify({
        filter: {
          Published: true
        }
      })
    })
      .then((res) => res.json())
      .then((values) => {
        const results = values.entries
        return results
          .filter((result) => result.url.length > 0)
          .map((result) => result.url)
      })
      .catch((err) => console.error(err))
    return articles
  }
},
And a copy of the Dockerfile:
FROM node:12.16.1-alpine
# create destination directory
RUN mkdir -p /usr/src/nuxt-app
WORKDIR /usr/src/nuxt-app
# update and install dependency
RUN apk update && apk upgrade
RUN apk add git
# copy the app, note .dockerignore
COPY . /usr/src/nuxt-app/
RUN npm ci
# build necessary, even if no static files are needed,
# since it builds the server as well
RUN npm run build
# expose 5000 on container
EXPOSE 5000
RUN npm config set https-proxy 127.0.0.1:9000
# set app serving to permissive / assigned
ENV NUXT_HOST=0.0.0.0
# set app port
ENV NUXT_PORT=5000
ENV HOST 0.0.0.0
ENV PORT 5000
ENV HTTP_PROXY http://docker.for.mac.localhost.internal:3128
ENV HTTPS_PROXY https://docker.for.mac.localhost.internal:3128
ENV FTP_PROXY ftp://docker.for.mac.localhost.internal:3128
ENV NO_PROXY http://docker.for.mac.localhost.internal:3128
ENV BASE_URL=http://nuxt
ENV GET_ARTICLES_API_TOKEN=token
ENV GET_ARTICLES_URL=example.com
# start the app
CMD [ "npm", "start"]
Current Docker Compose
version: "2"
services:
nuxt:
build: .
ports:
- 5000:5000
environment:
ENV NUXT_HOST: 0.0.0.0
ENV NUXT_PORT: 5000
Have you tried visiting localhost:5000 in your browser?
The default port for HTTP connections is 80, and your application listens on 5000. So when you try to connect to 0.0.0.0:80, where nothing is listening, the connection is refused.
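A minimal sketch of a fix, assuming the /api/articles endpoint is served by the same Nuxt app and reachable while the sitemap routes are resolved: include the port the server actually listens on instead of the implicit :80.
// sketch: use the configured port rather than falling back to port 80
const articles = await fetch(`http://localhost:${process.env.NUXT_PORT || 5000}/api/articles`, {
  method: 'post',
  body: JSON.stringify({ filter: { Published: true } })
})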

How to enable HTTPS on AWS EC2 running an NGINX Docker container?

I have an EC2 instance on AWS that runs Amazon Linux 2.
On it, I installed Git, Docker, and docker-compose, then cloned my repository and ran docker-compose up to bring up my production environment. I went to the public DNS, and it works.
I now want to enable HTTPS on the site.
My project has a React frontend served from an nginx:alpine container; the backend is a Node.js server.
This is my nginx.conf file:
server {
  listen 80;
  server_name localhost;

  location / {
    root /usr/share/nginx/html;
    index index.html index.htm;
    try_files $uri /index.html;
  }

  location /api/ {
    proxy_pass http://${PROJECT_NAME}_backend:${NODE_PORT}/;
  }

  error_page 500 502 503 504 /50x.html;
  location = /50x.html {
    root /usr/share/nginx/html;
  }
}
Here's my docker-compose.yml file:
version: "3.7"
services:
##############################
# Back-End Container
##############################
backend: # Node-Express backend that acts as an API.
container_name: ${PROJECT_NAME}_backend
init: true
build:
context: ./backend/
target: production
restart: always
environment:
- NODE_PATH=${EXPRESS_NODE_PATH}
- AWS_REGION=${AWS_REGION}
- NODE_ENV=production
- DOCKER_BUILDKIT=1
- PORT=${NODE_PORT}
networks:
- client
##############################
# Front-End Container
##############################
nginx:
container_name: ${PROJECT_NAME}_frontend
build:
context: ./frontend/
target: production
args:
- NODE_PATH=${REACT_NODE_PATH}
- SASS_PATH=${SASS_PATH}
restart: always
environment:
- PROJECT_NAME=${PROJECT_NAME}
- NODE_PORT=${NODE_PORT}
- DOCKER_BUILDKIT=1
command: /bin/ash -c "envsubst '$$PROJECT_NAME $$NODE_PORT' < /etc/nginx/conf.d/nginx.template > /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"
expose:
- "80"
ports:
- "80:80"
depends_on:
- backend
networks:
- client
##############################
# General Config
##############################
networks:
client:
I know there's a Docker image for certbot, but I'm not sure how to use it. I'm also worried about the way I'm proxying requests to /api/ to the server over http. Will that also give me any problems?
Edit:
Attempt #1: Traefik
I created a Traefik container to route all traffic through HTTPS.
version: '2'
services:
  traefik:
    image: traefik
    restart: always
    ports:
      - 80:80
      - 443:443
    networks:
      - web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /opt/traefik/traefik.toml:/traefik.toml
      - /opt/traefik/acme.json:/acme.json
    container_name: traefik

networks:
  web:
    external: true
For the toml file, I added the following:
debug = false
logLevel = "ERROR"
defaultEntryPoints = ["https", "http"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]

[retry]

[docker]
endpoint = "unix:///var/run/docker.sock"
domain = "ec2-00-000-000-00.eu-west-1.compute.amazonaws.com"
watch = true
exposedByDefault = false

[acme]
storage = "acme.json"
entryPoint = "https"
onHostRule = true
  [acme.httpChallenge]
  entryPoint = "http"
I added this to my docker-compose production file:
labels:
  - "traefik.docker.network=web"
  - "traefik.enable=true"
  - "traefik.basic.frontend.rule=Host:ec2-00-000-000-00.eu-west-1.compute.amazonaws.com"
  - "traefik.basic.port=80"
  - "traefik.basic.protocol=https"
I ran docker-compose up for the Traefik container, and then ran docker-compose up on my production image. I got the following error:
unable to obtain acme certificate
I'm reading the Traefik docs and apparently there's a way to configure the toml file specifically for Amazon ECS: https://docs.traefik.io/configuration/backends/ecs/
Am I on the right track?
The easiest way would be to set up an ALB (Application Load Balancer) and use it for HTTPS:
1. Create the ALB
2. Add a 443 listener to the ALB
3. Generate a certificate using AWS Certificate Manager
4. Set the certificate as the default cert for the load balancer
5. Create a target group
6. Add your EC2 instance to the target group
7. Point the ALB at the target group
Requests will then be served by the ALB over HTTPS.
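For reference, a rough sketch of those steps with the AWS CLI; every ID and ARN below is a placeholder:
# Create a target group and register the instance
aws elbv2 create-target-group --name my-tg --protocol HTTP --port 80 --vpc-id vpc-0123456789
aws elbv2 register-targets --target-group-arn <tg-arn> --targets Id=i-0123456789
# Create the ALB and an HTTPS listener backed by an ACM certificate
aws elbv2 create-load-balancer --name my-alb --subnets subnet-aaaa subnet-bbbb --security-groups sg-0123456789
aws elbv2 create-listener --load-balancer-arn <alb-arn> --protocol HTTPS --port 443 \
  --certificates CertificateArn=<acm-cert-arn> \
  --default-actions Type=forward,TargetGroupArn=<tg-arn>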
Enabling SSL is done by following the tutorial Nginx and Let's Encrypt with Docker in Less Than 5 Minutes. I ran into some issues while following it, so I will try to clarify some things here.
The steps include adding the following to the docker-compose.yml:
##############################
# Certbot Container
##############################
certbot:
  image: certbot/certbot:latest
  volumes:
    - ./frontend/data/certbot/conf:/etc/letsencrypt
    - ./frontend/data/certbot/www:/var/www/certbot
As for the Nginx Container section of the docker-compose.yml, it should be amended to include the same volumes added to the Certbot Container, as well as add the ports and expose configurations:
service_name:
  container_name: container_name
  image: nginx:alpine
  command: /bin/ash -c "exec nginx -g 'daemon off;'"
  volumes:
    - ./data/certbot/conf:/etc/letsencrypt
    - ./data/certbot/www:/var/www/certbot
  expose:
    - "80"
    - "443"
  ports:
    - "80:80"
    - "443:443"
  networks:
    - default
The data folder may be saved anywhere else, but make sure to know where it is and make sure to reference it properly when reused later. In this example, I am simply saving it in the same directory as the docker-compose.yml file.
Once the above configurations are put into place, a couple of steps are to be taken in order to initialize the issuance of the certificates.
Firstly, your Nginx configuration (default.conf) is to be changed to accommodate the domain verification request:
server {
  listen 80;
  server_name example.com www.example.com;
  server_tokens off;

  location / {
    return 301 https://$server_name$request_uri;
  }

  location /.well-known/acme-challenge/ {
    root /var/www/certbot;
  }
}

server {
  listen 443 ssl;
  server_name example.com www.example.com;
  server_tokens off;

  ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
  include /etc/letsencrypt/options-ssl-nginx.conf;
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

  location / {
    root /usr/share/nginx/html;
    index index.html index.htm;
    try_files $uri /index.html;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}
Once the Nginx configuration file is amended, a dummy certificate is created to allow Let's Encrypt validation to take place. There is a script that does all of this automatically; it can be downloaded into the root of the project using curl, amended to suit the environment, and made executable with chmod:
curl -L https://raw.githubusercontent.com/wmnnd/nginx-certbot/master/init-letsencrypt.sh > init-letsencrypt.sh && chmod +x init-letsencrypt.sh
Once the script is downloaded, it is to be amended as follows:
#!/bin/bash

if ! [ -x "$(command -v docker-compose)" ]; then
  echo 'Error: docker-compose is not installed.' >&2
  exit 1
fi

-domains=(example.org www.example.org)
+domains=(example.com www.example.com)
rsa_key_size=4096
data_path="./data/certbot"
-email="" # Adding a valid address is strongly recommended
+email="admin@example.com" # Adding a valid address is strongly recommended
staging=0 # Set to 1 when testing setup to avoid hitting request limits

if [ -d "$data_path" ]; then
  read -p "Existing data found for $domains. Continue and replace existing certificate? (y/N) " decision
  if [ "$decision" != "Y" ] && [ "$decision" != "y" ]; then
    exit
  fi
fi

if [ ! -e "$data_path/conf/options-ssl-nginx.conf" ] || [ ! -e "$data_path/conf/ssl-dhparams.pem" ]; then
  echo "### Downloading recommended TLS parameters ..."
  mkdir -p "$data_path/conf"
  curl -s https://raw.githubusercontent.com/certbot/certbot/master/certbot-nginx/certbot_nginx/tls_configs/options-ssl-nginx.conf > "$data_path/conf/options-ssl-nginx.conf"
  curl -s https://raw.githubusercontent.com/certbot/certbot/master/certbot/ssl-dhparams.pem > "$data_path/conf/ssl-dhparams.pem"
  echo
fi

echo "### Creating dummy certificate for $domains ..."
path="/etc/letsencrypt/live/$domains"
mkdir -p "$data_path/conf/live/$domains"
-docker-compose run --rm --entrypoint "\
+docker-compose -f docker-compose.yml run --rm --entrypoint "\
  openssl req -x509 -nodes -newkey rsa:1024 -days 1\
    -keyout '$path/privkey.pem' \
    -out '$path/fullchain.pem' \
    -subj '/CN=localhost'" certbot
echo

echo "### Starting nginx ..."
-docker-compose up --force-recreate -d nginx
+docker-compose -f docker-compose.yml up --force-recreate -d service_name
echo

echo "### Deleting dummy certificate for $domains ..."
-docker-compose run --rm --entrypoint "\
+docker-compose -f docker-compose.yml run --rm --entrypoint "\
  rm -Rf /etc/letsencrypt/live/$domains && \
  rm -Rf /etc/letsencrypt/archive/$domains && \
  rm -Rf /etc/letsencrypt/renewal/$domains.conf" certbot
echo

echo "### Requesting Let's Encrypt certificate for $domains ..."
# Join $domains to -d args
domain_args=""
for domain in "${domains[@]}"; do
  domain_args="$domain_args -d $domain"
done

# Select appropriate email arg
case "$email" in
  "") email_arg="--register-unsafely-without-email" ;;
  *) email_arg="--email $email" ;;
esac

# Enable staging mode if needed
if [ $staging != "0" ]; then staging_arg="--staging"; fi

-docker-compose run --rm --entrypoint "\
+docker-compose -f docker-compose.yml run --rm --entrypoint "\
  certbot certonly --webroot -w /var/www/certbot \
    $staging_arg \
    $email_arg \
    $domain_args \
    --rsa-key-size $rsa_key_size \
    --agree-tos \
    --force-renewal" certbot
echo

echo "### Reloading nginx ..."
-docker-compose exec nginx nginx -s reload
+docker-compose exec service_name nginx -s reload
I have made sure to always include the -f flag with the docker-compose command, in case someone with a custom-named compose file doesn't know what to change. I have also used service_name for the Nginx service, unlike the tutorial, to differentiate between the service name and the nginx command.
Note: if unsure whether the setup is working, set staging to 1 to avoid hitting request limits. It is important to set it back to 0 once testing is done and to redo all steps from amending the init-letsencrypt.sh file. After testing, with staging back at 0, stop the previously running containers and delete the data folder so that the proper initial certification can ensue:
$ docker-compose -f docker-compose.yml down && yes | docker system prune -a --volumes && sudo rm -rf ./data
Once the certificates are ready to be initialized, the script is to be run using sudo; it is very important to use sudo, as issues will occur with the permissions inside the containers if run without it.
$ sudo ./init-letsencrypt.sh
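Once the script finishes, one way to confirm the real certificate is being served (the domain is a placeholder):
openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -dates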
After the certificate is issued, there is the matter of automatically renewing the certificate; two things need to be done:
In the Nginx Container, Nginx reloads the newly obtained certificates through the following amendment:
service_name:
  ...
- command: /bin/ash -c "exec nginx -g 'daemon off;'"
+ command: /bin/ash -c "while :; do sleep 6h & wait $${!}; nginx -s reload; done & exec nginx -g 'daemon off;'"
  ...
In the Certbot Container section, the following is to be added to check whether the certificate is up for renewal every twelve hours, as recommended by Let's Encrypt:
certbot:
  ...
+ entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew --webroot -w /var/www/certbot; sleep 12h & wait $${!}; done;'"
Before running docker-compose -f docker-compose.yml up, the ownership of the data folder should be changed to ec2-user; this avoids permission errors when running docker-compose -f docker-compose.yml up, or having to run it in sudo mode:
sudo chown ec2-user:ec2-user -R /path/to/data/
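Before relying on the 12-hour loop, the renewal setup can be verified with a dry run (a sketch, assuming the certbot service name from above):
docker-compose -f docker-compose.yml run --rm --entrypoint "certbot renew --webroot -w /var/www/certbot --dry-run" certbot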
Don't forget to add a CAA record in your DNS provider for Let's Encrypt. You may read here for more information on how to do so.
If you run into any issues with the Nginx container because you are substituting variables and $server_name and $request_uri are not appearing properly, you may refer to this issue.

Docker with Mongo and Express

First of all, thank you for reading my question and trying to help; apologies for my English.
I'm trying to run a Docker container with an Express server project and Mongo. The Dockerfile builds perfectly, but when I run:
docker logs -f name
it shows the following error:
/usr/local/bin/docker-entrypoint-sh line 340: exec: npm not found
Also, I think that MongoDB doesn't run.
Why isn't npm recognized if I added Node? How can I run npm and launch Mongo?
I'm using Ubuntu 17.10.
Here is my Dockerfile:
# Install Node
FROM node:latest
# Install Mongo
FROM mongo:4.0.0-xenial
# Author
MAINTAINER MachineGun
# Create user, Ubuntu 17.10 (64 bits)
RUN adduser --disabled-login dockeruser
# Work directory
WORKDIR /home/expressserver
# Copy express project into the image
COPY expressserver expressserver
# Default user
USER dockeruser
# Configure container ports:
# MongoDB listening on port 27017
# Server listening on port 8080
EXPOSE 27017 8080
# Exec MongoDB
CMD [mongo]
# Exec server with custom npm start
CMD ["npm", "run", "start:dev"]
Also, in app.js I use Mongoose with the URI mongodb://localhost:27017/ExpressServer. Is that OK?
mongoose.connection.openUri('mongodb://localhost:27017/ExpressServer', { useNewUrlParser: true }, (err, res) => {
  if (err) {
    console.log('Error: Database not running on port 27017: \x1b[31m%s\x1b[0m', 'offline');
    // console.log('throw err: ', err);
    throw err;
  }
  console.log('Database running on port 27017: \x1b[32m%s\x1b[0m', 'online');
});
I've solved the problem :D Thanks a lot :D
The root cause was the two FROM instructions: each FROM starts a new build stage, so only the final mongo stage ends up in the image, which is why node and npm were not found. Finally, I used Docker Compose to run Mongo and the Express server as separate containers.
Here is my docker-compose.yml:
Here is my docker-compose.yml:
version: '2'
services:
  server:
    container_name: expressserver
    build: ./
    ports:
      - "8080:8080"
    links:
      - mongo
  mongo:
    container_name: mongoDB
    image: mongo:latest
    volumes:
      - /var/lib/mongodb:/data/db
    ports:
      - "27017:27017"
    command: mongod --port 27017
Here is my Dockerfile:
FROM node:carbon
MAINTAINER MachineGun
RUN adduser --disabled-login dockeruser
WORKDIR /home/dockeruser
COPY expressserver expressserver
WORKDIR /home/dockeruser/expressserver
RUN npm install
USER dockeruser
EXPOSE 8080 27017
CMD ["npm", "start"]
And in my app.js to connect with mongoose:
const options = {
  autoIndex: false, // Don't build indexes
  reconnectTries: 30, // Retry up to 30 times
  reconnectInterval: 500, // Reconnect every 500ms
  poolSize: 10, // Maintain up to 10 socket connections
  // If not connected, return errors immediately rather than waiting for reconnect
  bufferMaxEntries: 0
};

// Database connection
const connectWithRetry = () => {
  mongoose.connect("mongodb://mongo/expressdb", options)
    .then(() => {
      console.log('Database running on port 27017: \x1b[32m%s\x1b[0m', 'online')
    })
    .catch((err) => {
      console.log('MongoDB connection unsuccessful on port 27017: \x1b[31m%s\x1b[0m', 'offline');
      console.log('Retry after 5 seconds.');
      setTimeout(connectWithRetry, 5000)
    });
}

connectWithRetry();
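On top of retrying in the app, the startup race can also be narrowed at the Compose level; a sketch assuming Compose file format 2.1, where depends_on supports healthcheck conditions, and an image that still ships the legacy mongo shell:
version: '2.1'
services:
  server:
    build: ./
    depends_on:
      mongo:
        condition: service_healthy
  mongo:
    image: mongo:latest
    healthcheck:
      # ping the server via the legacy mongo shell
      test: ["CMD-SHELL", "mongo --eval 'db.adminCommand({ping: 1})' --quiet"]
      interval: 10s
      timeout: 5s
      retries: 5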

webpack-dev-server proxy to docker container

I have 2 Docker containers managed with docker-compose and can't seem to get webpack to properly proxy some requests to the backend API.
docker-compose.yml:
version: '2'
services:
  web:
    build:
      context: ./frontend
    ports:
      - "80:8080"
    volumes:
      - ./frontend:/16AGR/frontend:rw
    links:
      - back:back
  back:
    build:
      context: ./backend
    expose:
      - "8080"
    ports:
      - "8081:8080"
    volumes:
      - ./backend:/16AGR/backend:rw
The web service is a simple React application served by a webpack-dev-server.
The back service is a Node application.
I have no problem accessing either app from my host:
$ curl localhost
> index.html
$ curl localhost:8081
> Hello World
I can also ping and curl the back service from the web container:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
73ebfef9b250 16agr_web "npm run start" 37 hours ago Up 13 seconds 0.0.0.0:80->8080/tcp 16agr_web_1
a421fc24f8d9 16agr_back "npm start" 37 hours ago Up 15 seconds 0.0.0.0:8081->8080/tcp 16agr_back_1
$ docker exec -it 73e bash
root@73ebfef9b250:/16AGR/frontend# curl back:8080
Hello world
However, I have a problem with the proxy.
Webpack is started with
webpack-dev-server --display-reasons --display-error-details --history-api-fallback --progress --colors -d --hot --inline --host=0.0.0.0 --port 8080
and the config file is
frontend/webpack.config.js:
var webpack = require('webpack');

var config = module.exports = {
  ...
  devServer: {
    // redirect api calls to backend server
    proxy: {
      '/api': {
        target: "back:8080",
        secure: false
      }
    }
  }
  ...
}
When I try to request /api/test with a link in my app, for example, I get a generic error; the link and Google did not help much :(
[HPM] Error occurred while trying to proxy request /api/test from localhost to back:8080 (EINVAL) (https://nodejs.org/api/errors.html#errors_common_system_errors)
I suspect something weird because the proxy is on the container and the request is on localhost, but I don't really have an idea how to solve this.
I think I managed to tackle the problem. I just had to change the webpack configuration to the following:
devServer: {
  // redirect api calls to backend server
  proxy: {
    '/api': {
      target: {
        host: "back",
        protocol: 'http:',
        port: 8080
      },
      ignorePath: true,
      changeOrigin: true,
      secure: false
    }
  }
}
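For what it's worth, the EINVAL most likely came from the original string target "back:8080" lacking a protocol, so the shorter string form may work too, depending on how the backend paths are laid out (an untested sketch; note it omits ignorePath):
devServer: {
  proxy: {
    '/api': {
      target: 'http://back:8080', // explicit protocol avoids the EINVAL
      changeOrigin: true,
      secure: false
    }
  }
}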
