Dockerfile Docker-Compose VueJS App using HAProxy won't run

I'm building my VueJS app, which uses a trusted third-party API, and I'm in the middle of writing the Dockerfile and docker-compose.yml, using HAProxy to allow all HTTP methods access to the API. But after running docker-compose up --build, my theApp container stops immediately, and it keeps stopping even after restarts. Here are my files:
Dockerfile
FROM node:18.2
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "serve"]
docker-compose.yml
version: "3.7"
services:
theApp:
container_name: theApp
build:
context: .
dockerfile: Dockerfile
volumes:
- ./src:/app/src
ports:
- "9990:9990"
haproxy:
image: haproxy:2.3
expose:
- "7000"
- "8080"
ports:
- "8080:8080"
volumes:
- ./haproxy:/usr/local/etc/haproxy
restart: "always"
depends_on:
- theApp
haproxy.cfg
defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    timeout tunnel 1h # timeout to use with WebSocket and CONNECT

# enable resolving through docker dns and avoid crashing if service is down while proxy is starting
resolvers docker_resolver
    nameserver dns 127.0.0.11:53

frontend stats
    bind *:7000
    stats enable
    stats hide-version
    stats uri /stats
    stats refresh 10s
    stats auth admin:admin

frontend project_frontend
    bind *:8080
    acl is_options method OPTIONS
    use_backend cors_backend if is_options
    default_backend project_backend

backend project_backend
    # START CORS
    http-response add-header Access-Control-Allow-Origin "*"
    http-response add-header Access-Control-Allow-Headers "*"
    http-response add-header Access-Control-Max-Age 3600
    http-response add-header Access-Control-Allow-Methods "GET, DELETE, OPTIONS, POST, PUT, PATCH"
    # END CORS
    server pbe1 theApp:8080 check inter 5s

backend cors_backend
    http-after-response set-header Access-Control-Allow-Origin "*"
    http-after-response set-header Access-Control-Allow-Headers "*"
    http-after-response set-header Access-Control-Max-Age "31536000"
    http-request return status 200
Here's the error output from the command:
[NOTICE] 150/164342 (1) : New worker #1 (8) forked
haproxy_1 | [WARNING] 150/164342 (8) : Server project_backend/pbe1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
haproxy_1 | [NOTICE] 150/164342 (8) : haproxy version is 2.3.20-2c8082e
haproxy_1 | [NOTICE] 150/164342 (8) : path to executable is /usr/local/sbin/haproxy
haproxy_1 | [ALERT] 150/164342 (8) : backend 'project_backend' has no server available!
trisaic |
trisaic | > trisaic@0.1.0 serve
trisaic | > vue-cli-service serve
trisaic |
trisaic | INFO Starting development server...
trisaic | ERROR Error: Rule can only have one resource source (provided resource and test + include + exclude) in {
trisaic | "type": "javascript/auto",
trisaic | "include": [
trisaic | {}
trisaic | ],
trisaic | "use": []
trisaic | }
trisaic | Error: Rule can only have one resource source (provided resource and test + include + exclude) in {
trisaic | "type": "javascript/auto",
trisaic | "include": [
trisaic | {}
trisaic | ],
trisaic | "use": []
trisaic | }
trisaic | at checkResourceSource (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/RuleSet.js:167:11)
trisaic | at Function.normalizeRule (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/RuleSet.js:198:4)
trisaic | at /app/node_modules/@vue/cli-service/node_modules/webpack/lib/RuleSet.js:110:20
trisaic | at Array.map (<anonymous>)
trisaic | at Function.normalizeRules (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/RuleSet.js:109:17)
trisaic | at new RuleSet (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/RuleSet.js:104:24)
trisaic | at new NormalModuleFactory (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/NormalModuleFactory.js:115:18)
trisaic | at Compiler.createNormalModuleFactory (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/Compiler.js:636:31)
trisaic | at Compiler.newCompilationParams (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/Compiler.js:653:30)
trisaic | at Compiler.compile (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/Compiler.js:661:23)
trisaic | at /app/node_modules/@vue/cli-service/node_modules/webpack/lib/Watching.js:77:18
trisaic | at AsyncSeriesHook.eval [as callAsync] (eval at create (/app/node_modules/@vue/cli-service/node_modules/tapable/lib/HookCodeFactory.js:33:10), <anonymous>:24:1)
trisaic | at AsyncSeriesHook.lazyCompileHook (/app/node_modules/@vue/cli-service/node_modules/tapable/lib/Hook.js:154:20)
trisaic | at Watching._go (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/Watching.js:41:32)
trisaic | at /app/node_modules/@vue/cli-service/node_modules/webpack/lib/Watching.js:33:9
trisaic | at Compiler.readRecords (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/Compiler.js:529:11)
trisaic exited with code 1
I've already tried searching around but got stuck. Am I missing something here?

Related

Docker-Compose with Commandbox cannot change web root

I'm using docker-compose to launch a CommandBox Lucee container and a MySQL container.
I'd like to change the web root of the Lucee server, to keep all my non-public files hidden (server.json etc., the cfmigrations resources folder).
I've followed the docs and updated my server.json
https://commandbox.ortusbooks.com/embedded-server/server.json/packaging-your-server
{
  "web": {
    "webroot": "./public"
  }
}
If I launch the server from Windows (box start from the app folder), the server loads my index.cfm from ./public at http://localhost, perfect.
But using this .yaml file, the webroot doesn't change to ./public, and the contents of my /app folder are shown, with the "public" folder visible in the directory listing.
services:
  db:
    image: mysql:8.0.26
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
      MYSQL_DATABASE: cf
      MYSQL_USER: $MYSQL_USER
      MYSQL_PASSWORD: $MYSQL_PASSWORD
      MYSQL_SOURCE: $MYSQL_SOURCE
      MYSQL_SOURCE_USER: $MYSQL_SOURCE_USER
      MYSQL_SOURCE_PASSWORD: $MYSQL_SOURCE_PASSWORD
    volumes:
      - ./mysql:/var/lib/mysql
      - ./assets/initdb:/docker-entrypoint-initdb.d
      - ./assets/sql:/assets/sql
  web:
    depends_on:
      - db
    # Post 3.1.0 fails to boot if APP_DIR is set to non /app
    # image: ortussolutions/commandbox:lucee5-3.1.0
    image: ortussolutions/commandbox:lucee5
    # build: .
    ports:
      - "80:80"
      - "443:443"
    environment:
      - PORT=80
      - SSL_PORT=443
      - BOX_SERVER_WEB_SSL_ENABLE=true
      - BOX_SERVER_WEB_DIRECTORYBROWSING=$CF_DIRECTORY_BROWSING
      - BOX_INSTALL=true
      - BOX_SERVER_WEB_BLOCKCFADMIN=$CF_BLOCK_ADMIN
      - BOX_SERVER_CFCONFIGFILE=/app/.cfconfig.json
      # - APP_DIR=/app/public
      # - BOX_SERVER_WEB_WEBROOT=/app/public
      - cfconfig_robustExceptionEnabled=$CF_ROBOUST_EXCEPTION_ENABLED
      - cfconfig_adminPassword=$CF_ADMIN_PASSWORD
      - MYSQL_USER=$MYSQL_USER
      - MYSQL_PASSWORD=$MYSQL_PASSWORD
      - MYSQL_HOST=$MYSQL_HOST
      - MYSQL_PORT=$MYSQL_PORT
    volumes:
      - ./app:/app
      - ./assets/mysql-connector-java-8.0.26.jar:/usr/local/lib/CommandBox/lib/mysql-connector-java-8.0.26.jar
Here's the directory listing and my project structure (screenshots omitted).
It seems like the server.json file is being ignored, or at least the web.webroot property. I've tried both of these settings, and neither solves the problem:
- APP_DIR=/app/public
- BOX_SERVER_WEB_WEBROOT=/app/public
The commandbox docs suggest changing APP_DIR to fix the web root, "APP_DIR - Application directory (web root)."
https://hub.docker.com/r/ortussolutions/commandbox/
But if I do that, I get an error about the startup script being in the wrong place, which to me looks like it should be fixed:
https://github.com/Ortus-Solutions/docker-commandbox/issues/55
The BOX_SERVER_WEB_WEBROOT env var seems to be ignored in the same way server.json is (or at least that property). I've tried setting the following env vars as well (both upper and lower case) and it makes no difference; bear in mind that server.json does change the webroot for me when I launch the server locally:
BOX_SERVER_WEB_WEBROOT=./public
BOX_SERVER_WEB_WEBROOT=/app/public
BOX_SERVER_WEB_WEBROOT=public
The output as the web container starts up:
Set verboseErrors = true
INFO: CF Engine defined as lucee@5.3.8+189
INFO: Convention .cfconfig.json found at /app/.cfconfig.json
INFO: Server Home Directory set to: /usr/local/lib/serverHome
√ | Installing ALL dependencies
| √ | Installing package [forgebox:commandbox-cfconfig@1.6.3]
| √ | Installing package [forgebox:commandbox-migrations@3.2.3]
| | √ | Installing package [forgebox:cfmigrations@^2.0.0]
| | | √ | Installing package [forgebox:qb@^8.0.0]
| | | | √ | Installing package [forgebox:cbpaginator@^2.4.0]
+ [[ -n '' ]]
+ [[ -n '' ]]
INFO: Generating server startup script
√ | Starting Server
|------------------------------
| start server in - /app/
| server name - app
| server config file - /app//server.json
| WAR/zip archive already installed.
| Found CFConfig JSON in ".cfconfig.json" file in web root by convention
| .
| Importing luceeserver config from [/app/.cfconfig.json]
| Config transferred!
| Setting OS environment variable [cfconfig_adminPassword] into luceeser
| ver
| [adminPassword] set.
| Setting OS environment variable [cfconfig_robustExceptionEnabled] into
| luceeserver
| [robustExceptionEnabled] set.
| Start script for shell [bash] generated at: /app/server-start.sh
| Server start command:
| /opt/java/openjdk/bin/java
| -jar /usr/local/lib/CommandBox/lib/runwar-4.5.1.jar
| --background=false
| --host 0.0.0.0
| --stop-port 42777
| --processname app [lucee 5.3.8+189]
| --log-dir /usr/local/lib/serverHome//logs
| --server-name app
| --tray-enable false
| --dock-enable true
| --directoryindex true
| --timeout 240
| --proxy-peeraddress true
| --cookie-secure false
| --cookie-httponly false
| --pid-file /usr/local/lib/serverHome//.pid.txt
| --gzip-enable true
| --cfengine-name lucee
| -war /app/
| --web-xml-path /usr/local/lib/serverHome/WEB-INF/web.xml
| --http-enable true
| --ssl-enable true
| --ajp-enable false
| --http2-enable true
| --open-browser false
| --open-url https://0.0.0.0:443
| --port 80
| --ssl-port 443
| --urlrewrite-enable false
| --predicate-file /usr/local/lib/serverHome//.predicateFile.txt
| Dry run specified, exiting without starting server.
|------------------------------
| √ | Setting Server Profile to [production]
| |-----------------------------------------------------
| | Profile set from secure by default
| | Block CF Admin disabled
| | Block Sensitive Paths enabled
| | Block Flash Remoting enabled
| | Directory Browsing enabled
| |-----------------------------------------------------
INFO: Starting server using generated script: /usr/local/bin/startup.sh
[INFO ] runwar.server: Starting RunWAR 4.5.1
[INFO ] runwar.server: HTTP2 Enabled:true
[INFO ] runwar.server: Enabling SSL protocol on port 443
[INFO ] runwar.server: HTTP ajpEnable:false
[INFO ] runwar.server: HTTP warFile exists:true
[INFO ] runwar.server: HTTP warFile isDirectory:true
[INFO ] runwar.server: HTTP background:false
[INFO ] runwar.server: Adding additional lib dir of: /usr/local/lib/serverHome/WEB-INF/lib
[INFO ] runwar.server: ******************************************************************************
[INFO ] runwar.server: Starting - port:80 stop-port:42777 warpath:file:/app/
[INFO ] runwar.server: context: / - version: 4.5.1
[INFO ] runwar.server: web-dirs: ["\/app"]
[INFO ] runwar.server: Log Directory: /usr/local/lib/serverHome/logs
[INFO ] runwar.server: ******************************************************************************
[INFO ] runwar.server: XNIO-Option CONNECTION_LOW_WATER:1000000
[INFO ] runwar.server: XNIO-Option CORK:true
[INFO ] runwar.server: XNIO-Option WORKER_TASK_MAX_THREADS:30
[INFO ] runwar.server: XNIO-Option WORKER_IO_THREADS:8
[INFO ] runwar.server: XNIO-Option TCP_NODELAY:true
[INFO ] runwar.server: XNIO-Option WORKER_TASK_CORE_THREADS:30
[INFO ] runwar.server: XNIO-Option CONNECTION_HIGH_WATER:1000000
[INFO ] runwar.config: Parsing '/usr/local/lib/serverHome/WEB-INF/web.xml'
[INFO ] runwar.server: Extensions allowed by the default servlet for static files: 3gp,3gpp,7z,ai,aif,aiff,asf,asx,atom,au,avi,bin,bmp,btm,cco,crt,css,csv,deb,der,dmg,doc,docx,eot,eps,flv,font,gif,hqx,htc,htm,html,ico,img,ini,iso,jad,jng,jnlp,jpeg,jpg,js,json,kar,kml,kmz,m3u8,m4a,m4v,map,mid,midi,mml,mng,mov,mp3,mp4,mpeg,mpeg4,mpg,msi,msm,msp,ogg,otf,pdb,pdf,pem,pl,pm,png,ppt,pptx,prc,ps,psd,ra,rar,rpm,rss,rtf,run,sea,shtml,sit,svg,svgz,swf,tar,tcl,tif,tiff,tk,ts,ttf,txt,wav,wbmp,webm,webp,wmf,wml,wmlc,wmv,woff,woff2,xhtml,xls,xlsx,xml,xpi,xspf,zip,aifc,aac,apk,bak,bk,bz2,cdr,cmx,dat,dtd,eml,fla,gz,gzip,ipa,ia,indd,hey,lz,maf,markdown,md,mkv,mp1,mp2,mpe,odt,ott,odg,odf,ots,pps,pot,pmd,pub,raw,sdd,tsv,xcf,yml,yaml
[INFO ] runwar.server: welcome pages in deployment manager: [index.cfm, index.lucee, index.html, index.htm]
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.felix.framework.ext.ClassPathExtenderFactory$DefaultClassLoaderExtender (file:/usr/local/lib/serverHome/WEB-INF/lib/lucee.jar) to method java.net.URLClassLoader.addURL(java.net.URL)
WARNING: Please consider reporting this to the maintainers of org.apache.felix.framework.ext.ClassPathExtenderFactory$DefaultClassLoaderExtender
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
[INFO ] runwar.server: Direct Buffers: true
[INFO ] runwar.server: ******************************************************************************
[INFO ] runwar.server: *** starting 'stop' listener thread - Host: 0.0.0.0 - Socket: 42777
[INFO ] runwar.server: ******************************************************************************
[INFO ] runwar.server: Server is up - http-port:80 https-port:443 stop-port:42777 PID:286 version 4.5.1
This is all fairly new to me, so I might have done something completely wrong. I'm wondering if it's a problem with the folder nesting, although I've tried rearranging it and can't come up with a working solution.
You're using a pre-warmed image
image: ortussolutions/commandbox:lucee5
That means the server has already been started and "locked in" to all its settings, including the web root. Use the vanilla CommandBox image that has never had a server started; that way, when you warm up the image, you'll be starting it with your settings the first time.
To set a custom web root, you'll want to add this to your Dockerfile:
ENV APP_DIR=/app/public
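For context, a minimal, untested Dockerfile sketch (assuming you keep the ./app:/app volume mount from your compose file, so only the base image and the ENV matter):
# Vanilla CommandBox image that has never had a server warmed up
FROM ortussolutions/commandbox
# Point the web root at the public folder before the server first starts
ENV APP_DIR=/app/public
You'd then switch the web service from image: to build: . so compose builds this warmed image with your settings.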

I am getting an error when I try to dockerize my MERN application

Here is my Dockerfile for React.js, with the error I got in the terminal:
FROM node:8
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY ./package.json /usr/src/app
RUN npm install
RUN npm build
EXPOSE 3000
CMD ["npm", "run", "start"]
Error:
react_1 |
react_1 | > ecom-panther@0.1.0 start /usr/src/app
react_1 | > react-scripts start
react_1 |
react_1 | ℹ 「wds」: Project is running at http://172.18.0.2/
react_1 | ℹ 「wds」: webpack output is served from
react_1 | ℹ 「wds」: Content not from webpack is served from /usr/src/app/public
react_1 | ℹ 「wds」: 404s will fallback to /
react_1 | Starting the development server...
react_1 |
ecom-panther_react_1 exited with code 0
For Node and Express, I got this:
express_1 | (node:30) DeprecationWarning: current Server Discovery and Monitoring engine is deprecated, and will be removed in a future version. To use the new Server Discover and Monitoring engine, pass option { useUnifiedTopology: true } to the MongoClient constructor.
express_1 | server is running on port: 5000
express_1 | (node:30) UnhandledPromiseRejectionWarning: MongoNetworkError: failed to connect to server [localhost:27017] on first connect [MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017]
express_1 | at Pool.<anonymous> (/usr/src/app/node_modules/mongodb/lib/core/topologies/server.js:438:11)
express_1 | at emitOne (events.js:116:13)
express_1 | at Pool.emit (events.js:211:7)
express_1 | at createConnection (/usr/src/app/node_modules/mongodb/lib/core/connection/pool.js:561:14)
express_1 | at connect (/usr/src/app/node_modules/mongodb/lib/core/connection/pool.js:994:11)
express_1 | at makeConnection (/usr/src/app/node_modules/mongodb/lib/core/connection/connect.js:31:7)
express_1 | at callback (/usr/src/app/node_modules/mongodb/lib/core/connection/connect.js:264:5)
express_1 | at Socket.err (/usr/src/app/node_modules/mongodb/lib/core/connection/connect.js:294:7)
express_1 | at Object.onceWrapper (events.js:315:30)
express_1 | at emitOne (events.js:116:13)
express_1 | at Socket.emit (events.js:211:7)
express_1 | at emitErrorNT (internal/streams/destroy.js:73:8)
express_1 | at _combinedTickCallback (internal/process/next_tick.js:139:11)
express_1 | at process._tickCallback (internal/process/next_tick.js:181:9)
express_1 | (node:30) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
express_1 | (node:30) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
Dockerfile for backend:
FROM node:8
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
COPY . /usr/src/app
EXPOSE 5000
CMD ["npm","start"]
Here is my docker-compose.yml file
version: '3' # specify docker-compose version

# Define the services/containers to be run
services:
  react: # name of first service
    build: client # specify the directory of the Dockerfile
    ports:
      - "3000:3000" # specify port mapping
  express: # name of second service
    build: server # specify the directory of the Dockerfile
    ports:
      - "5000:5000" # specify port mapping
    links:
      - database # link this service to the database service
  database: # name of third service
    image: mongo # specify image to build container from
    ports:
      - "27017:27017" # specify port mapping
How can I run the frontend in the browser, and is there an easier or better way to approach this?
Error 1:
Add stdin_open: true to your react service, like:
...
services:
  react: # name of first service
    build: client # specify the directory of the Dockerfile
    stdin_open: true
    ports:
      - "3000:3000" # specify port mapping
...
You might need to rebuild or clear the cache, so run docker-compose up --build, or docker-compose build --no-cache followed by docker-compose up.
Error 2:
The database connection line in your index.js file (or whatever you named it) should contain:
mongodb://database:27017/
where database is the name of your MongoDB service. You could also use the container's IP address (find it with docker inspect <container>), but ideally you want this in an ENV in your Dockerfile or docker-compose.yml:
ENV MONGO_URL mongodb://database:27017/
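As a rough sketch, the connection code could then read that variable instead of hardcoding localhost (assuming the mongodb 3.x driver, which matches the useUnifiedTopology warning in your logs; the database name is a placeholder):
// connect.js - minimal sketch, not taken from the original project
const MongoClient = require('mongodb').MongoClient;

// Fall back to the compose service name if MONGO_URL is not set
const url = process.env.MONGO_URL || 'mongodb://database:27017/';

MongoClient.connect(url, { useUnifiedTopology: true }, function (err, client) {
  if (err) return console.error('Mongo connection failed:', err);
  console.log('Connected to MongoDB at', url);
  const db = client.db('ecom'); // hypothetical database name
  // ... hand db to the rest of the app here
});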

Cannot find `/etc/letsencrypt/live/` in a container

I have a server working well with the following docker-compose.yml. Inside the container, I can find /etc/letsencrypt/live/v2.10studio.tech/fullchain.pem and /etc/letsencrypt/live/v2.10studio.tech/privkey.pem.
version: "3"
services:
frontend:
restart: unless-stopped
image: staticfloat/nginx-certbot
ports:
- 80:8080/tcp
- 443:443/tcp
environment:
CERTBOT_EMAIL: owner#company.com
volumes:
- ./conf.d:/etc/nginx/user.conf.d:ro
- letsencrypt:/etc/letsencrypt
10studio:
image: bitnami/nginx:1.16
restart: always
volumes:
- ./build:/app
- ./default.conf:/opt/bitnami/nginx/conf/server_blocks/default.conf:ro
- ./configs/config.prod.js:/app/lib/config.js
depends_on:
- frontend
volumes:
letsencrypt:
networks:
default:
external:
name: 10studio
I tried to create another server with the same settings, but I could not find live under /etc/letsencrypt in the container.
Does anyone know what's wrong? Where do the files under /etc/letsencrypt/live come from?
Edit 1:
I have one file, conf.d/.conf. I tried to rebuild and got the following message:
root@iZj6cikgrkjzogdi7x6rdoZ:~/10Studio/pfw# docker-compose up --build --force-recreate --no-deps
Creating pfw_pfw_1 ... done
Creating pfw_10studio_1 ... done
Attaching to pfw_pfw_1, pfw_10studio_1
10studio_1 | 11:25:33.60
10studio_1 | 11:25:33.60 Welcome to the Bitnami nginx container
pfw_1 | templating scripts from /etc/nginx/user.conf.d to /etc/nginx/conf.d
pfw_1 | Substituting variables
pfw_1 | -> /etc/nginx/user.conf.d/*.conf
pfw_1 | /scripts/util.sh: line 116: /etc/nginx/user.conf.d/*.conf: No such file or directory
pfw_1 | Done with startup
pfw_1 | Run certbot
pfw_1 | ++ parse_domains
pfw_1 | ++ for conf_file in /etc/nginx/conf.d/*.conf*
pfw_1 | ++ xargs echo
pfw_1 | ++ sed -n -r -e 's&^\s*ssl_certificate_key\s*\/etc/letsencrypt/live/(.*)/privkey.pem;\s*(#.*)?$&\1&p' /etc/nginx/conf.d/certbot.conf
pfw_1 | + auto_enable_configs
pfw_1 | + for conf_file in /etc/nginx/conf.d/*.conf*
pfw_1 | + keyfiles_exist /etc/nginx/conf.d/certbot.conf
pfw_1 | ++ parse_keyfiles /etc/nginx/conf.d/certbot.conf
pfw_1 | ++ sed -n -e 's&^\s*ssl_certificate_key\s*\(.*\);&\1&p' /etc/nginx/conf.d/certbot.conf
pfw_1 | + return 0
pfw_1 | + '[' conf = nokey ']'
pfw_1 | + set +x
10studio_1 | 11:25:33.60 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-nginx
10studio_1 | 11:25:33.61 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-nginx/issues
10studio_1 | 11:25:33.61 Send us your feedback at containers@bitnami.com
10studio_1 | 11:25:33.61
10studio_1 | 11:25:33.62 INFO ==> ** Starting NGINX setup **
10studio_1 | 11:25:33.64 INFO ==> Validating settings in NGINX_* env vars...
10studio_1 | 11:25:33.64 INFO ==> Initializing NGINX...
10studio_1 | 11:25:33.65 INFO ==> ** NGINX setup finished! **
10studio_1 |
10studio_1 | 11:25:33.66 INFO ==> ** Starting NGINX **
If I do docker-compose up -d --build, I still cannot find /etc/letsencrypt/live in the container.
Please go through the original site of this image, staticfloat/nginx-certbot: it creates and automatically renews website SSL certificates based on the configuration files under ./conf.d.
Create a config directory for your custom configs:
$ mkdir conf.d
And a .conf in that directory:
server {
    listen 443 ssl;
    server_name server.company.com;

    ssl_certificate /etc/letsencrypt/live/server.company.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/server.company.com/privkey.pem;

    location / {
        ...
    }
}
This works because /etc/letsencrypt is mounted from a persistent volume named letsencrypt:
services:
  frontend:
    restart: unless-stopped
    image: staticfloat/nginx-certbot
    ...
    volumes:
      ...
      - letsencrypt:/etc/letsencrypt
volumes:
  letsencrypt:
If you need to reference /etc/letsencrypt/live, you need to mount the same letsencrypt volume into your new application as well.
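A minimal sketch of what that could look like, assuming your second service is the 10studio one from above (mounted read-only since it only needs to read the certs):
services:
  frontend:
    image: staticfloat/nginx-certbot
    volumes:
      - letsencrypt:/etc/letsencrypt
  10studio:
    image: bitnami/nginx:1.16
    volumes:
      # same named volume, read-only, so this container sees the issued certs
      - letsencrypt:/etc/letsencrypt:ro
volumes:
  letsencrypt: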
It works after changing ports: - 80:8080/tcp to ports: - 80:80/tcp.
As /etc/letsencrypt is a mounted volume that is persisted over restarts of your container, I would assume that some process added these files to the volume. According to a quick search using my favorite search engine, /etc/letsencrypt/live is populated with files after certificates are created.
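One way to verify this is to inspect the volume directly; a quick sketch (the exact volume name is an assumption: compose usually prefixes it with the project directory, so check docker volume ls first):
docker volume ls
# list the issued certificates inside the named volume (name assumed to be pfw_letsencrypt)
docker run --rm -v pfw_letsencrypt:/etc/letsencrypt alpine ls -la /etc/letsencrypt/live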

How to properly run database (rethinkdb) with docker-compose?

I need help running a database with Docker and Node.js. I don't understand where I'm going wrong, but I can't make a connection between my database container and my Node container. This is the database image on Docker Hub: https://hub.docker.com/_/rethinkdb/. Here's what follows:
My Dockerfile
FROM node:latest
ENV HOME=/src/jv-agricultor
RUN mkdir -p $HOME/
WORKDIR $HOME/
ADD package* $HOME/
RUN npm install
EXPOSE 80
ADD . $HOME/
CMD ["node", "node_modules/.bin/nodemon", "-L", "bin/www"]
My docker-compose.yml
version: "3"
volumes:
rethindb-data:
external: true
services:
db:
image: rethinkdb:latest
ports:
- "8080:8080"
- "29015:29015"
- "28015:28015"
api:
image: hello-nodemon
environment:
- NODE_ENV=development
- PORT=80
- DB_HOST=localhost
- DB_PORT=28015
deploy:
# replicas: 5
resources:
limits:
cpus: "0.1"
memory: 50M
restart_policy:
condition: on-failure
ports:
- "3000:80"
volumes:
- .:/src/jv-agricultor
- /src/jv-agricultor/node_modules
depends_on:
- db
networks:
- webnet
networks:
webnet:
I run: docker stack deploy -c docker-compose.yml webservice
My Docker services
ID NAME MODE REPLICAS IMAGE PORTS
yez42a7w8khs webservice_api replicated 1/1 hello-nodemon:latest *:3000->80/tcp
n8idu78cp18m webservice_db replicated 1/1 rethinkdb:latest *:8080->8080/tcp,*:28015->28015/tcp,*:29015->29015/tcp
My Docker service api (this is Node/Express)
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
p20qdagcspjc webservice_api.1 hello-nodemon:latest abner Running Running 28 minutes ago
My Docker service db
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
3046xuo4l8ix webservice_db.1 rethinkdb:latest abner Running Running 30 minutes ago
My internal db logs
webservice_db.1.3046xuo4l8ix@abner | Recursively removing directory /data/rethinkdb_data/tmp
webservice_db.1.3046xuo4l8ix@abner | Initializing directory /data/rethinkdb_data
webservice_db.1.3046xuo4l8ix@abner | Running rethinkdb 2.3.6~0jessie (GCC 4.9.2)...
webservice_db.1.3046xuo4l8ix@abner | Running on Linux 4.15.0-24-generic x86_64
webservice_db.1.3046xuo4l8ix@abner | Loading data from directory /data/rethinkdb_data
webservice_db.1.3046xuo4l8ix@abner | Listening for intracluster connections on port 29015
webservice_db.1.3046xuo4l8ix@abner | Listening for client driver connections on port 28015
webservice_db.1.3046xuo4l8ix@abner | Listening for administrative HTTP connections on port 8080
webservice_db.1.3046xuo4l8ix@abner | Listening on cluster addresses: 127.0.0.1, 172.18.0.3, 10.0.5.182, 10.0.5.183, 10.255.11.212, 10.255.11.213
webservice_db.1.3046xuo4l8ix@abner | Listening on driver addresses: 127.0.0.1, 172.18.0.3, 10.0.5.182, 10.0.5.183, 10.255.11.212, 10.255.11.213
webservice_db.1.3046xuo4l8ix@abner | Listening on http addresses: 127.0.0.1, 172.18.0.3, 10.0.5.182, 10.0.5.183, 10.255.11.212, 10.255.11.213
webservice_db.1.3046xuo4l8ix@abner | Server ready, "069fd360acfb_jot" c1cf5173-cf0d-457f-9c8f-4ba1756c28d8
My app.js
...
var connect = require('./lib/connect');
console.log('DB_HOST: ' + process.env.DB_HOST);
console.log('DB_PORT: ' + process.env.DB_PORT);
console.log('PORT: ' + process.env.PORT);
console.log('NODE_ENV: ' + process.env.NODE_ENV);
...
My connect middleware
'use strict'
// import r from 'rethinkdb';
var r = require('rethinkdb');
module.exports._connect = (function _connect(req, res, next) {
  r.connect({ host: process.env.DB_HOST, port: process.env.DB_PORT }, (err, conn) => {
    console.log(err);
  })
})();
My Docker api service's log response
webservice_api.1.p20qdagcspjc@abner | [nodemon] restarting due to changes...
webservice_api.1.p20qdagcspjc@abner | [nodemon] starting `node bin/www`
webservice_api.1.p20qdagcspjc@abner | DB_HOST: localhost
webservice_api.1.p20qdagcspjc@abner | DB_PORT: 28015
webservice_api.1.p20qdagcspjc@abner | PORT: 80
webservice_api.1.p20qdagcspjc@abner | NODE_ENV: development
webservice_api.1.p20qdagcspjc@abner | { ReqlDriverError: Could not connect to localhost:28015.
webservice_api.1.p20qdagcspjc@abner | connect ECONNREFUSED 127.0.0.1:28015
webservice_api.1.p20qdagcspjc@abner | at ReqlDriverError.ReqlError [as constructor] (/src/jv-agricultor/node_modules/rethinkdb/errors.js:23:13)
webservice_api.1.p20qdagcspjc@abner | at new ReqlDriverError (/src/jv-agricultor/node_modules/rethinkdb/errors.js:68:50)
webservice_api.1.p20qdagcspjc@abner | at TcpConnection.<anonymous> (/src/jv-agricultor/node_modules/rethinkdb/net.js:94:27)
webservice_api.1.p20qdagcspjc@abner | at Object.onceWrapper (events.js:273:13)
webservice_api.1.p20qdagcspjc@abner | at TcpConnection.emit (events.js:182:13)
webservice_api.1.p20qdagcspjc@abner | at Socket.<anonymous> (/src/jv-agricultor/node_modules/rethinkdb/net.js:705:22)
webservice_api.1.p20qdagcspjc@abner | at Socket.emit (events.js:187:15)
webservice_api.1.p20qdagcspjc@abner | at emitErrorNT (internal/streams/destroy.js:82:8)
webservice_api.1.p20qdagcspjc@abner | at emitErrorAndCloseNT (internal/streams/destroy.js:50:3)
webservice_api.1.p20qdagcspjc@abner | at process._tickCallback (internal/process/next_tick.js:63:19)
webservice_api.1.p20qdagcspjc@abner | From previous event:
webservice_api.1.p20qdagcspjc@abner | at Function.<anonymous> (/src/jv-agricultor/node_modules/rethinkdb/net.js:945:10)
webservice_api.1.p20qdagcspjc@abner | at Function.connect (/src/jv-agricultor/node_modules/rethinkdb/util.js:43:16)
webservice_api.1.p20qdagcspjc@abner | at _connect (/src/jv-agricultor/lib/connect.js:9:7)
webservice_api.1.p20qdagcspjc@abner | at Object.<anonymous> (/src/jv-agricultor/lib/connect.js:19:3)
webservice_api.1.p20qdagcspjc@abner | at Module._compile (internal/modules/cjs/loader.js:689:30)
webservice_api.1.p20qdagcspjc@abner | at Object.Module._extensions..js (internal/modules/cjs/loader.js:700:10)
webservice_api.1.p20qdagcspjc@abner | at Module.load (internal/modules/cjs/loader.js:599:32)
webservice_api.1.p20qdagcspjc@abner | at tryModuleLoad (internal/modules/cjs/loader.js:538:12)
webservice_api.1.p20qdagcspjc@abner | at Function.Module._load (internal/modules/cjs/loader.js:530:3)
webservice_api.1.p20qdagcspjc@abner | at Module.require (internal/modules/cjs/loader.js:637:17)
webservice_api.1.p20qdagcspjc@abner | at require (internal/modules/cjs/helpers.js:20:18)
webservice_api.1.p20qdagcspjc@abner | at Object.<anonymous> (/src/jv-agricultor/app.js:14:15)
webservice_api.1.p20qdagcspjc@abner | at Module._compile (internal/modules/cjs/loader.js:689:30)
webservice_api.1.p20qdagcspjc@abner | at Object.Module._extensions..js (internal/modules/cjs/loader.js:700:10)
webservice_api.1.p20qdagcspjc@abner | at Module.load (internal/modules/cjs/loader.js:599:32)
webservice_api.1.p20qdagcspjc@abner | at tryModuleLoad (internal/modules/cjs/loader.js:538:12)
webservice_api.1.p20qdagcspjc@abner | at Function.Module._load (internal/modules/cjs/loader.js:530:3)
webservice_api.1.p20qdagcspjc@abner | at Module.require (internal/modules/cjs/loader.js:637:17)
webservice_api.1.p20qdagcspjc@abner | at require (internal/modules/cjs/helpers.js:20:18)
webservice_api.1.p20qdagcspjc@abner | name: 'ReqlDriverError',
webservice_api.1.p20qdagcspjc@abner | msg:
webservice_api.1.p20qdagcspjc@abner | 'Could not connect to localhost:28015.\nconnect ECONNREFUSED 127.0.0.1:28015',
webservice_api.1.p20qdagcspjc@abner | frames: undefined,
webservice_api.1.p20qdagcspjc@abner | message:
webservice_api.1.p20qdagcspjc@abner | 'Could not connect to localhost:28015.\nconnect ECONNREFUSED 127.0.0.1:28015' }
docker-compose does inter-service communication by service name, so the value of DB_HOST should be db.
On a side note, unless you need to expose the database outside of the stack, you do not need the port mapping.
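A minimal sketch of that change in the compose file (everything else stays as in the question):
services:
  api:
    image: hello-nodemon
    environment:
      - NODE_ENV=development
      - PORT=80
      - DB_HOST=db # the compose service name; Docker's internal DNS resolves it
      - DB_PORT=28015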
@Alex Karshin It is unnecessary to fully specify the container name. The first example in the docker-compose networking docs shows how simple it really is.
spawnia has a point, but might not have the right answer. If you take a look at docker ps -a, the name of your database container is webservice_db. Therefore, you will not be successful if you try to connect to rethinkdb on localhost (because obviously it's not on localhost).
You must either hardcode the container name (webservice_db) in your config file, or set it in docker-compose.yml. If you do, I suggest you set the container names explicitly:
version: "3"
...
services:
db:
container_name: webservice_db
...
api:
container_name: webservice_api
environment:
- NODE_ENV=development
- PORT=80
- DB_HOST= webservice_db
- DB_PORT=28015
...
There, now it should work normally.

Docker Container Failed to Run

The Dockerfile for my application is as follows
# Tells Docker which base image to start from.
FROM node
# Adds files from the host file system into the Docker container.
ADD . /app
# Sets the current working directory for subsequent instructions
WORKDIR /app
RUN npm install
RUN npm install -g bower
RUN bower install --allow-root
RUN npm install -g nodemon
#expose a port to allow external access
EXPOSE 9000 9030 35729
# Start mean application
CMD ["nodemon", "server.js"]
The docker-compose.yml file is as follows
web:
build: .
links:
- db
ports:
- "9000:9000"
- "9030:9030"
- "35729:35729"
db:
image: mongo:latest
ports:
- "27017:27017"
And the error generated while running is as follows:
web_1 | [nodemon] 1.11.0
web_1 | [nodemon] to restart at any time, enter `rs`
web_1 | [nodemon] watching: *.*
web_1 | [nodemon] starting `node server.js`
web_1 | Server running at http://127.0.0.1:9000
web_1 | Server running at https://127.0.0.1:9030
web_1 |
web_1 | /app/node_modules/mongodb/lib/server.js:261
web_1 | process.nextTick(function() { throw err; })
web_1 | ^
web_1 | MongoError: failed to connect to server [localhost:27017] on first connect
web_1 | at Pool.<anonymous> (/app/node_modules/mongodb-core/lib/topologies/server.js:313:35)
web_1 | at emitOne (events.js:96:13)
web_1 | at Pool.emit (events.js:188:7)
web_1 | at Connection.<anonymous> (/app/node_modules/mongodb-core/lib/connection/pool.js:271:12)
web_1 | at Connection.g (events.js:291:16)
web_1 | at emitTwo (events.js:106:13)
web_1 | at Connection.emit (events.js:191:7)
web_1 | at Socket.<anonymous> (/app/node_modules/mongodb-core/lib/connection/connection.js:165:49)
web_1 | at Socket.g (events.js:291:16)
web_1 | at emitOne (events.js:96:13)
web_1 | at Socket.emit (events.js:188:7)
web_1 | at emitErrorNT (net.js:1281:8)
web_1 | at _combinedTickCallback (internal/process/next_tick.js:74:11)
web_1 | at process._tickCallback (internal/process/next_tick.js:98:9)
web_1 | [nodemon] app crashed - waiting for file changes before starting...
I have uploaded the image for my application at DockerHub as crissi/airlineInsurance.
In Docker you can't connect to another container via localhost, because each container is independent and has its own IP. You should use container_name:port instead; in your example it should be db:27017 to connect from your Node.js application in 'web' to MongoDB in 'db'.
So it's not a problem with your Dockerfile; it's the connection URL in your Node.js application that points to localhost instead of db.
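For example, a rough sketch of that change in server.js (the callback shape matches the 2.x mongodb driver implied by the stack trace; the database name airline is an assumption):
var MongoClient = require('mongodb').MongoClient;

// 'db' is the compose service name, not localhost
var url = 'mongodb://db:27017/airline';

MongoClient.connect(url, function (err, database) {
  if (err) return console.error('Failed to connect:', err);
  console.log('Connected to MongoDB at', url);
});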
