Docker-Compose with Commandbox cannot change web root - docker

I'm using docker-compose to launch a CommandBox Lucee container and a MySQL container.
I'd like to change the web root of the Lucee server to keep all my non-public files (server.json, the cfmigrations resources folder, etc.) hidden.
I've followed the docs and updated my server.json
https://commandbox.ortusbooks.com/embedded-server/server.json/packaging-your-server
{
    "web": {
        "webroot": "./public"
    }
}
If I launch the server from Windows (box start from the app folder), the server loads my index.cfm from ./public at http://localhost, perfect.
But using this .yaml file, the webroot doesn't change to ./public and the contents of my /app folder are shown, with the "public" folder visible in the directory listing.
services:
  db:
    image: mysql:8.0.26
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
      MYSQL_DATABASE: cf
      MYSQL_USER: $MYSQL_USER
      MYSQL_PASSWORD: $MYSQL_PASSWORD
      MYSQL_SOURCE: $MYSQL_SOURCE
      MYSQL_SOURCE_USER: $MYSQL_SOURCE_USER
      MYSQL_SOURCE_PASSWORD: $MYSQL_SOURCE_PASSWORD
    volumes:
      - ./mysql:/var/lib/mysql
      - ./assets/initdb:/docker-entrypoint-initdb.d
      - ./assets/sql:/assets/sql
  web:
    depends_on:
      - db
    # Post 3.1.0 fails to boot if APP_DIR is set to non /app
    # image: ortussolutions/commandbox:lucee5-3.1.0
    image: ortussolutions/commandbox:lucee5
    # build: .
    ports:
      - "80:80"
      - "443:443"
    environment:
      - PORT=80
      - SSL_PORT=443
      - BOX_SERVER_WEB_SSL_ENABLE=true
      - BOX_SERVER_WEB_DIRECTORYBROWSING=$CF_DIRECTORY_BROWSING
      - BOX_INSTALL=true
      - BOX_SERVER_WEB_BLOCKCFADMIN=$CF_BLOCK_ADMIN
      - BOX_SERVER_CFCONFIGFILE=/app/.cfconfig.json
      # - APP_DIR=/app/public
      # - BOX_SERVER_WEB_WEBROOT=/app/public
      - cfconfig_robustExceptionEnabled=$CF_ROBOUST_EXCEPTION_ENABLED
      - cfconfig_adminPassword=$CF_ADMIN_PASSWORD
      - MYSQL_USER=$MYSQL_USER
      - MYSQL_PASSWORD=$MYSQL_PASSWORD
      - MYSQL_HOST=$MYSQL_HOST
      - MYSQL_PORT=$MYSQL_PORT
    volumes:
      - ./app:/app
      - ./assets/mysql-connector-java-8.0.26.jar:/usr/local/lib/CommandBox/lib/mysql-connector-java-8.0.26.jar
Here's the directory listing and my project structure (screenshots not reproduced here).
It seems like the server.json file, or at least its web.webroot property, is being ignored. I've tried both of these settings, and neither solves the problem:
- APP_DIR=/app/public
- BOX_SERVER_WEB_WEBROOT=/app/public
The CommandBox docs suggest changing APP_DIR to set the web root: "APP_DIR - Application directory (web root)."
https://hub.docker.com/r/ortussolutions/commandbox/
But if I do that, I get an error about the startup script being in the wrong place, which, as far as I can tell, should already have been fixed:
https://github.com/Ortus-Solutions/docker-commandbox/issues/55
The BOX_SERVER_WEB_WEBROOT env var is ignored in the same way server.json is (or at least that property). I've tried setting the following env vars as well (both upper and lower case) and it makes no difference, but bear in mind server.json does change the webroot for me when I start the server locally on Windows:
BOX_SERVER_WEB_WEBROOT=./public
BOX_SERVER_WEB_WEBROOT=/app/public
BOX_SERVER_WEB_WEBROOT=public
The output as the web container starts up:
Set verboseErrors = true
INFO: CF Engine defined as lucee#5.3.8+189
INFO: Convention .cfconfig.json found at /app/.cfconfig.json
INFO: Server Home Directory set to: /usr/local/lib/serverHome
√ | Installing ALL dependencies
| √ | Installing package [forgebox:commandbox-cfconfig#1.6.3]
| √ | Installing package [forgebox:commandbox-migrations#3.2.3]
| | √ | Installing package [forgebox:cfmigrations#^2.0.0]
| | | √ | Installing package [forgebox:qb#^8.0.0]
| | | | √ | Installing package [forgebox:cbpaginator#^2.4.0]
+ [[ -n '' ]]
+ [[ -n '' ]]
INFO: Generating server startup script
√ | Starting Server
|------------------------------
| start server in - /app/
| server name - app
| server config file - /app//server.json
| WAR/zip archive already installed.
| Found CFConfig JSON in ".cfconfig.json" file in web root by convention.
| Importing luceeserver config from [/app/.cfconfig.json]
| Config transferred!
| Setting OS environment variable [cfconfig_adminPassword] into luceeserver
| [adminPassword] set.
| Setting OS environment variable [cfconfig_robustExceptionEnabled] into luceeserver
| [robustExceptionEnabled] set.
| Start script for shell [bash] generated at: /app/server-start.sh
| Server start command:
| /opt/java/openjdk/bin/java
| -jar /usr/local/lib/CommandBox/lib/runwar-4.5.1.jar
| --background=false
| --host 0.0.0.0
| --stop-port 42777
| --processname app [lucee 5.3.8+189]
| --log-dir /usr/local/lib/serverHome//logs
| --server-name app
| --tray-enable false
| --dock-enable true
| --directoryindex true
| --timeout 240
| --proxy-peeraddress true
| --cookie-secure false
| --cookie-httponly false
| --pid-file /usr/local/lib/serverHome//.pid.txt
| --gzip-enable true
| --cfengine-name lucee
| -war /app/
| --web-xml-path /usr/local/lib/serverHome/WEB-INF/web.xml
| --http-enable true
| --ssl-enable true
| --ajp-enable false
| --http2-enable true
| --open-browser false
| --open-url https://0.0.0.0:443
| --port 80
| --ssl-port 443
| --urlrewrite-enable false
| --predicate-file /usr/local/lib/serverHome//.predicateFile.txt
| Dry run specified, exiting without starting server.
|------------------------------
| √ | Setting Server Profile to [production]
| |-----------------------------------------------------
| | Profile set from secure by default
| | Block CF Admin disabled
| | Block Sensitive Paths enabled
| | Block Flash Remoting enabled
| | Directory Browsing enabled
| |-----------------------------------------------------
INFO: Starting server using generated script: /usr/local/bin/startup.sh
[INFO ] runwar.server: Starting RunWAR 4.5.1
[INFO ] runwar.server: HTTP2 Enabled:true
[INFO ] runwar.server: Enabling SSL protocol on port 443
[INFO ] runwar.server: HTTP ajpEnable:false
[INFO ] runwar.server: HTTP warFile exists:true
[INFO ] runwar.server: HTTP warFile isDirectory:true
[INFO ] runwar.server: HTTP background:false
[INFO ] runwar.server: Adding additional lib dir of: /usr/local/lib/serverHome/WEB-INF/lib
[INFO ] runwar.server: ******************************************************************************
[INFO ] runwar.server: Starting - port:80 stop-port:42777 warpath:file:/app/
[INFO ] runwar.server: context: / - version: 4.5.1
[INFO ] runwar.server: web-dirs: ["\/app"]
[INFO ] runwar.server: Log Directory: /usr/local/lib/serverHome/logs
[INFO ] runwar.server: ******************************************************************************
[INFO ] runwar.server: XNIO-Option CONNECTION_LOW_WATER:1000000
[INFO ] runwar.server: XNIO-Option CORK:true
[INFO ] runwar.server: XNIO-Option WORKER_TASK_MAX_THREADS:30
[INFO ] runwar.server: XNIO-Option WORKER_IO_THREADS:8
[INFO ] runwar.server: XNIO-Option TCP_NODELAY:true
[INFO ] runwar.server: XNIO-Option WORKER_TASK_CORE_THREADS:30
[INFO ] runwar.server: XNIO-Option CONNECTION_HIGH_WATER:1000000
[INFO ] runwar.config: Parsing '/usr/local/lib/serverHome/WEB-INF/web.xml'
[INFO ] runwar.server: Extensions allowed by the default servlet for static files: 3gp,3gpp,7z,ai,aif,aiff,asf,asx,atom,au,avi,bin,bmp,btm,cco,crt,css,csv,deb,der,dmg,doc,docx,eot,eps,flv,font,gif,hqx,htc,htm,html,ico,img,ini,iso,jad,jng,jnlp,jpeg,jpg,js,json,kar,kml,kmz,m3u8,m4a,m4v,map,mid,midi,mml,mng,mov,mp3,mp4,mpeg,mpeg4,mpg,msi,msm,msp,ogg,otf,pdb,pdf,pem,pl,pm,png,ppt,pptx,prc,ps,psd,ra,rar,rpm,rss,rtf,run,sea,shtml,sit,svg,svgz,swf,tar,tcl,tif,tiff,tk,ts,ttf,txt,wav,wbmp,webm,webp,wmf,wml,wmlc,wmv,woff,woff2,xhtml,xls,xlsx,xml,xpi,xspf,zip,aifc,aac,apk,bak,bk,bz2,cdr,cmx,dat,dtd,eml,fla,gz,gzip,ipa,ia,indd,hey,lz,maf,markdown,md,mkv,mp1,mp2,mpe,odt,ott,odg,odf,ots,pps,pot,pmd,pub,raw,sdd,tsv,xcf,yml,yaml
[INFO ] runwar.server: welcome pages in deployment manager: [index.cfm, index.lucee, index.html, index.htm]
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.felix.framework.ext.ClassPathExtenderFactory$DefaultClassLoaderExtender (file:/usr/local/lib/serverHome/WEB-INF/lib/lucee.jar) to method java.net.URLClassLoader.addURL(java.net.URL)
WARNING: Please consider reporting this to the maintainers of org.apache.felix.framework.ext.ClassPathExtenderFactory$DefaultClassLoaderExtender
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
[INFO ] runwar.server: Direct Buffers: true
[INFO ] runwar.server: ******************************************************************************
[INFO ] runwar.server: *** starting 'stop' listener thread - Host: 0.0.0.0 - Socket: 42777
[INFO ] runwar.server: ******************************************************************************
[INFO ] runwar.server: Server is up - http-port:80 https-port:443 stop-port:42777 PID:286 version 4.5.1
This is all fairly new to me, so I might have done something completely wrong. I'm wondering if it's a problem with the folder nesting, although I've tried rearranging it and can't come up with a working solution.

You're using a pre-warmed image:
image: ortussolutions/commandbox:lucee5
That means the server has already been started and has "locked in" all of its settings, including the web root. Use the vanilla CommandBox image that has never had a server started; that way, when you warm up the image, you'll be starting it with your settings for the first time.
To set a custom web root, you'll want to add this to your Dockerfile:
ENV APP_DIR=/app/public
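For illustration, a minimal sketch of such a Dockerfile, assuming the vanilla (never-warmed) ortussolutions/commandbox base image; the compose file above already mounts ./app into /app, and its commented-out build: . line would replace the image: line:
# Vanilla CommandBox image with no pre-warmed server (assumed base image)
FROM ortussolutions/commandbox
# Set the web root before the server is warmed up for the first time
ENV APP_DIR=/app/public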

Related

Cypress Docker container does not connect to running server

[ EDIT: I am not deleting the question even though it could be a duplicate of this one, because the original question might be harder to search for. If that is not advisable, please feel free to delete/close. ]
I have this docker-compose:
x-common-postgres-env:
  &common-postgres-env
  POSTGRES_DB: ${POSTGRES_DB}
  POSTGRES_USER: ${POSTGRES_USER}
  POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
  POSTGRES_PORT: 5432

x-common-postgres:
  &common-postgres
  image: postgres:13.4
  hostname: postgres
  environment:
    << : *common-postgres-env
  ports:
    - "5432:5432"
  healthcheck:
    test: ["CMD", "pg_isready", "-U", "${POSTGRES_USER}", "-d", "${POSTGRES_DB}"]

x-common-django:
  &common-django
  build: .
  environment:
    &common-django-env
    << : *common-postgres-env
    DJANGO_SECRET: ${DJANGO_SECRET}
    ALLOWED_HOSTS: ".localhost 127.0.0.1 [::1]"
    CORS_ALLOWED_ORIGINS: "http://localhost:8000"
    CSRF_TRUSTED_ORIGINS: "http://localhost:8000"
  healthcheck:
    test: ["CMD", "wget", "-qO", "/dev/null", "http://localhost:8000"]
  ports:
    - "8000:8000"

services:
  db:
    << : *common-postgres
    profiles:
      - prod
    volumes:
      - ./data/db:/var/lib/postgresql/data
  db-test:
    << : *common-postgres
    profiles:
      - test
  web:
    << : *common-django
    profiles:
      - prod
    command: pdm run python manage.py runserver 0.0.0.0:8000
    environment:
      << : *common-django-env
      POSTGRES_HOST: db
    volumes:
      - ./KJ_import:/code/KJ_import
      - ./docs:/code/docs
      - ./KJ-JS:/code/KJ-JS
      - ./static:/code/static
      - ./media:/code/media
      - ./templates:/code/templates
    depends_on:
      db:
        condition: service_healthy
  web-test:
    << : *common-django
    profiles:
      - test
    command: pdm run python manage.py runserver 0.0.0.0:8000
    environment:
      << : *common-django-env
      POSTGRES_HOST: db-test
    depends_on:
      db-test:
        condition: service_healthy
  cypress:
    # image: "cypress/included:9.2.0"
    profiles:
      - test
    build:
      context: .
      dockerfile: Dockerfile.cy
    # command: ["--browser", "chrome"]
    environment:
      CYPRESS_baseUrl: http://localhost:8000/
    working_dir: /code/KJ-JS
    volumes:
      - ./KJ-JS:/code/KJ-JS
      - ./media:/code/media
    depends_on:
      web-test:
        condition: service_healthy
This is my Dockerfile.cy:
FROM cypress/included:9.2.0
# WORKDIR /code/KJ-JS
COPY system.conf /etc/dbus-1/system.conf
RUN chmod 644 /etc/dbus-1/system.conf
COPY entrypoint.cy.sh /
ENTRYPOINT ["/bin/sh", "/entrypoint.cy.sh"]
and this entrypoint.cy.sh to activate the Cypress tests:
#!/bin/sh
echo "### Create DBus"
dbus-uuidgen > /var/lib/dbus/machine-id
mkdir -p /var/run/dbus
dbus-daemon --config-file=/usr/share/dbus-1/system.conf --print-address &

# Wait for the D-Bus system bus address to be available
while [ -f /var/run/dbus/system_bus_socket ]; do
    sleep 1
done

# Check if the dbus-daemon process is running
if ps -ef | grep -v grep | grep dbus-daemon > /dev/null; then
    echo "### D-Bus daemon is running"
else
    echo "### D-Bus daemon is not running"
fi

# Check if the D-Bus configuration files are correctly configured
if [ -f /etc/dbus-1/system.conf ]; then
    echo "### D-Bus system configuration file is present"
else
    echo "### D-Bus system configuration file is missing"
fi

# Make sure that the /var/run/dbus directory exists and is writable by the dbus-daemon process
if [ -d /var/run/dbus ]; then
    if [ -w /var/run/dbus ]; then
        echo "### /var/run/dbus is writable by the dbus-daemon process"
    else
        echo "### /var/run/dbus is not writable by the dbus-daemon process"
    fi
else
    echo "### /var/run/dbus does not exist"
fi

# Remove the /var/run/dbus/pid file if it exists
if [ -f /var/run/dbus/pid ]; then
    rm -f /var/run/dbus/pid
    echo "### /var/run/dbus/pid file removed"
else
    echo "### /var/run/dbus/pid file does not exist"
fi

echo "### Bus active"
cd /code/KJ-JS
cypress run --headed --browser chrome
echo "### after cypress run"
exec "$@"
When I run docker compose --profile test up, the db spins up fine and Django gets up and running, but Cypress cannot seem to connect.
It complained about D-Bus not running, so I added it in the entrypoint shown above and tested all of its components, yet the error message still comes up:
kj_import-web-test-1 | System check identified no issues (0 silenced).
kj_import-web-test-1 | December 28, 2022 - 02:32:40
kj_import-web-test-1 | Django version 2.2.28, using settings 'KJ_import.settings'
kj_import-web-test-1 | Starting development server at http://0.0.0.0:8000/
kj_import-web-test-1 | Quit the server with CONTROL-C.
kj_import-web-test-1 | [28/Dec/2022 02:32:42] "GET / HTTP/1.1" 200 5776
kj_import-web-test-1 | [28/Dec/2022 02:32:42] "GET /static/favicon.ico HTTP/1.1" 200 9662
kj_import-web-test-1 | [28/Dec/2022 02:32:46] "GET /docs/register/ HTTP/1.1" 200 6551
kj_import-web-test-1 | [28/Dec/2022 02:32:49] "GET / HTTP/1.1" 200 5776
kj_import-web-test-1 | [28/Dec/2022 02:33:02] "GET / HTTP/1.1" 200 5776
kj_import-cypress-1 | ### Create DBus
kj_import-cypress-1 | ### D-Bus daemon is running
kj_import-cypress-1 | ### D-Bus system configuration file is present
kj_import-cypress-1 | ### /var/run/dbus is writable by the dbus-daemon process
kj_import-cypress-1 | ### /var/run/dbus/pid file does not exist
kj_import-cypress-1 | ### Bus active
kj_import-cypress-1 | unix:path=/var/run/dbus/system_bus_socket,guid=1181acd37ea51796e63af6a863ab9ccf
kj_import-cypress-1 | [26:1228/013304.773071:ERROR:bus.cc(392)] Failed to connect to the bus: Address does not contain a colon
kj_import-cypress-1 | [26:1228/013304.773122:ERROR:bus.cc(392)] Failed to connect to the bus: Address does not contain a colon
kj_import-cypress-1 | [213:1228/013304.794142:ERROR:gpu_init.cc(453)] Passthrough is not supported, GL is swiftshader, ANGLE is
kj_import-cypress-1 | Cypress could not verify that this server is running:
kj_import-cypress-1 |
kj_import-cypress-1 | > http://localhost:8000/
kj_import-cypress-1 |
kj_import-cypress-1 | We are verifying this server because it has been configured as your `baseUrl`.
kj_import-cypress-1 |
kj_import-cypress-1 | Cypress automatically waits until your server is accessible before running tests.
kj_import-cypress-1 |
kj_import-cypress-1 | We will try connecting to it 3 more times...
kj_import-cypress-1 | We will try connecting to it 2 more times...
kj_import-cypress-1 | We will try connecting to it 1 more time...
kj_import-cypress-1 |
kj_import-cypress-1 | Cypress failed to verify that your server is running.
kj_import-cypress-1 |
kj_import-cypress-1 | Please start this server and then run Cypress again.
kj_import-cypress-1 | ### after cypress run
kj_import-cypress-1 exited with code 0
kj_import-web-test-1 | [28/Dec/2022 02:33:32] "GET / HTTP/1.1" 200 5776
kj_import-web-test-1 | [28/Dec/2022 02:34:02] "GET / HTTP/1.1" 200 5776
kj_import-web-test-1 | [28/Dec/2022 02:34:32] "GET / HTTP/1.1" 200 5776
Please note that the server is running fine. You can see it in the log above (GET requests answered with 200, even before the Cypress container starts trying to connect), and I can access it from my local browser.
What am I missing here?
Thanks in advance!
In the end it was probably pretty simple: localhost in a container refers only to the container itself, not to the host.
This answer pointed me in the right direction.
So, in order to properly instruct Cypress to watch/test the service, the URL that needs to be passed in (CYPRESS_baseUrl inside the docker-compose.yml) is in the format http://[service-name]:[port], which in my case was http://web-test:8000/.
Be aware that:
- the Cypress tests themselves also need to be directed there, and most likely
- the ALLOWED_HOSTS will need to include the service name (see the sketch below).
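For illustration, a sketch of the relevant pieces of the compose file with that change (the exact ALLOWED_HOSTS value here is an assumption):
services:
  web-test:
    environment:
      << : *common-django-env
      POSTGRES_HOST: db-test
      # assumption: Django must also accept the service name as a host
      ALLOWED_HOSTS: ".localhost 127.0.0.1 [::1] web-test"
  cypress:
    environment:
      # address the Django service by its compose service name, not localhost
      CYPRESS_baseUrl: http://web-test:8000/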
PS: There might also have been a second issue at play: in my search I found this reported bug, and several comments pointed to the cypress/included:9.2.0 image as potentially affected. I decided to move on to 9.7.0.

Dockerfile Docker-Compose VueJS App using HAProxy won't run

I'm building my VueJS app project, which uses a trusted third-party API. I'm in the middle of writing the Dockerfile and docker-compose.yml, and I'm using HAProxy to allow all request methods access to the API. But after running docker-compose up --build, my theApp service stops immediately, and it keeps stopping even after a restart. Here are my files:
Dockerfile
FROM node:18.2
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "serve"]
docker-compose.yml
version: "3.7"
services:
theApp:
container_name: theApp
build:
context: .
dockerfile: Dockerfile
volumes:
- ./src:/app/src
ports:
- "9990:9990"
haproxy:
image: haproxy:2.3
expose:
- "7000"
- "8080"
ports:
- "8080:8080"
volumes:
- ./haproxy:/usr/local/etc/haproxy
restart: "always"
depends_on:
- theApp
haproxy.cfg
defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    timeout tunnel 1h # timeout to use with WebSocket and CONNECT

# enable resolving through docker dns and avoid crashing if service is down while proxy is starting
resolvers docker_resolver
    nameserver dns 127.0.0.11:53

frontend stats
    bind *:7000
    stats enable
    stats hide-version
    stats uri /stats
    stats refresh 10s
    stats auth admin:admin

frontend project_frontend
    bind *:8080
    acl is_options method OPTIONS
    use_backend cors_backend if is_options
    default_backend project_backend

backend project_backend
    # START CORS
    http-response add-header Access-Control-Allow-Origin "*"
    http-response add-header Access-Control-Allow-Headers "*"
    http-response add-header Access-Control-Max-Age 3600
    http-response add-header Access-Control-Allow-Methods "GET, DELETE, OPTIONS, POST, PUT, PATCH"
    # END CORS
    server pbe1 theApp:8080 check inter 5s

backend cors_backend
    http-after-response set-header Access-Control-Allow-Origin "*"
    http-after-response set-header Access-Control-Allow-Headers "*"
    http-after-response set-header Access-Control-Max-Age "31536000"
    http-request return status 200
Here's the error output from the command:
[NOTICE] 150/164342 (1) : New worker #1 (8) forked
haproxy_1 | [WARNING] 150/164342 (8) : Server project_backend/pbe1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
haproxy_1 | [NOTICE] 150/164342 (8) : haproxy version is 2.3.20-2c8082e
haproxy_1 | [NOTICE] 150/164342 (8) : path to executable is /usr/local/sbin/haproxy
haproxy_1 | [ALERT] 150/164342 (8) : backend 'project_backend' has no server available!
trisaic |
trisaic | > trisaic@0.1.0 serve
trisaic | > vue-cli-service serve
trisaic |
trisaic | INFO Starting development server...
trisaic | ERROR Error: Rule can only have one resource source (provided resource and test + include + exclude) in {
trisaic | "type": "javascript/auto",
trisaic | "include": [
trisaic | {}
trisaic | ],
trisaic | "use": []
trisaic | }
trisaic | Error: Rule can only have one resource source (provided resource and test + include + exclude) in {
trisaic | "type": "javascript/auto",
trisaic | "include": [
trisaic | {}
trisaic | ],
trisaic | "use": []
trisaic | }
trisaic | at checkResourceSource (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/RuleSet.js:167:11)
trisaic | at Function.normalizeRule (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/RuleSet.js:198:4)
trisaic | at /app/node_modules/@vue/cli-service/node_modules/webpack/lib/RuleSet.js:110:20
trisaic | at Array.map (<anonymous>)
trisaic | at Function.normalizeRules (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/RuleSet.js:109:17)
trisaic | at new RuleSet (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/RuleSet.js:104:24)
trisaic | at new NormalModuleFactory (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/NormalModuleFactory.js:115:18)
trisaic | at Compiler.createNormalModuleFactory (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/Compiler.js:636:31)
trisaic | at Compiler.newCompilationParams (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/Compiler.js:653:30)
trisaic | at Compiler.compile (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/Compiler.js:661:23)
trisaic | at /app/node_modules/@vue/cli-service/node_modules/webpack/lib/Watching.js:77:18
trisaic | at AsyncSeriesHook.eval [as callAsync] (eval at create (/app/node_modules/@vue/cli-service/node_modules/tapable/lib/HookCodeFactory.js:33:10), <anonymous>:24:1)
trisaic | at AsyncSeriesHook.lazyCompileHook (/app/node_modules/@vue/cli-service/node_modules/tapable/lib/Hook.js:154:20)
trisaic | at Watching._go (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/Watching.js:41:32)
trisaic | at /app/node_modules/@vue/cli-service/node_modules/webpack/lib/Watching.js:33:9
trisaic | at Compiler.readRecords (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/Compiler.js:529:11)
trisaic exited with code 1
I've already tried searching and googling but got stuck. Am I missing something here?

Cannot find `/etc/letsencrypt/live/` in a container

I have a server working well with the following docker-compose.yml. In the container I can find /etc/letsencrypt/live/v2.10studio.tech/fullchain.pem and /etc/letsencrypt/live/v2.10studio.tech/privkey.pem.
version: "3"
services:
frontend:
restart: unless-stopped
image: staticfloat/nginx-certbot
ports:
- 80:8080/tcp
- 443:443/tcp
environment:
CERTBOT_EMAIL: owner#company.com
volumes:
- ./conf.d:/etc/nginx/user.conf.d:ro
- letsencrypt:/etc/letsencrypt
10studio:
image: bitnami/nginx:1.16
restart: always
volumes:
- ./build:/app
- ./default.conf:/opt/bitnami/nginx/conf/server_blocks/default.conf:ro
- ./configs/config.prod.js:/app/lib/config.js
depends_on:
- frontend
volumes:
letsencrypt:
networks:
default:
external:
name: 10studio
I tried to create another server with the same settings, but I could not find live under /etc/letsencrypt in the container.
Does anyone know what's wrong? Where do the files under /etc/letsencrypt/live come from?
Edit 1:
I have one file, conf.d/.conf. I tried to rebuild and got the following message:
root#iZj6cikgrkjzogdi7x6rdoZ:~/10Studio/pfw# docker-compose up --build --force-recreate --no-deps
Creating pfw_pfw_1 ... done
Creating pfw_10studio_1 ... done
Attaching to pfw_pfw_1, pfw_10studio_1
10studio_1 | 11:25:33.60
10studio_1 | 11:25:33.60 Welcome to the Bitnami nginx container
pfw_1 | templating scripts from /etc/nginx/user.conf.d to /etc/nginx/conf.d
pfw_1 | Substituting variables
pfw_1 | -> /etc/nginx/user.conf.d/*.conf
pfw_1 | /scripts/util.sh: line 116: /etc/nginx/user.conf.d/*.conf: No such file or directory
pfw_1 | Done with startup
pfw_1 | Run certbot
pfw_1 | ++ parse_domains
pfw_1 | ++ for conf_file in /etc/nginx/conf.d/*.conf*
pfw_1 | ++ xargs echo
pfw_1 | ++ sed -n -r -e 's&^\s*ssl_certificate_key\s*\/etc/letsencrypt/live/(.*)/privkey.pem;\s*(#.*)?$&\1&p' /etc/nginx/conf.d/certbot.conf
pfw_1 | + auto_enable_configs
pfw_1 | + for conf_file in /etc/nginx/conf.d/*.conf*
pfw_1 | + keyfiles_exist /etc/nginx/conf.d/certbot.conf
pfw_1 | ++ parse_keyfiles /etc/nginx/conf.d/certbot.conf
pfw_1 | ++ sed -n -e 's&^\s*ssl_certificate_key\s*\(.*\);&\1&p' /etc/nginx/conf.d/certbot.conf
pfw_1 | + return 0
pfw_1 | + '[' conf = nokey ']'
pfw_1 | + set +x
10studio_1 | 11:25:33.60 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-nginx
10studio_1 | 11:25:33.61 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-nginx/issues
10studio_1 | 11:25:33.61 Send us your feedback at containers@bitnami.com
10studio_1 | 11:25:33.61
10studio_1 | 11:25:33.62 INFO ==> ** Starting NGINX setup **
10studio_1 | 11:25:33.64 INFO ==> Validating settings in NGINX_* env vars...
10studio_1 | 11:25:33.64 INFO ==> Initializing NGINX...
10studio_1 | 11:25:33.65 INFO ==> ** NGINX setup finished! **
10studio_1 |
10studio_1 | 11:25:33.66 INFO ==> ** Starting NGINX **
If I do docker-compose up -d --build, I still cannot find /etc/letsencrypt/live in the container.
Please go through the original site of this image, staticfloat/nginx-certbot; it will create and automatically renew website SSL certificates.
It works with the configuration files placed under ./conf.d:
Create a config directory for your custom configs:
$ mkdir conf.d
And a .conf in that directory:
server {
    listen 443 ssl;
    server_name server.company.com;
    ssl_certificate /etc/letsencrypt/live/server.company.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/server.company.com/privkey.pem;
    location / {
        ...
    }
}
because /etc/letsencrypt is mounted from a persistent volume letsencrypt
services:
  frontend:
    restart: unless-stopped
    image: staticfloat/nginx-certbot
    ...
    volumes:
      ...
      - letsencrypt:/etc/letsencrypt
volumes:
  letsencrypt:
If you need to reference /etc/letsencrypt/live, you need to mount the same letsencrypt volume into your new application as well.
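For example, a sketch based on the compose file above; mounting the same named volume into the 10studio service makes the certificates visible there too (the in-container path for the second service is an assumption, adjust it to wherever that app expects the certificates):
services:
  frontend:
    image: staticfloat/nginx-certbot
    volumes:
      - ./conf.d:/etc/nginx/user.conf.d:ro
      - letsencrypt:/etc/letsencrypt
  10studio:
    image: bitnami/nginx:1.16
    volumes:
      # same named volume, so /etc/letsencrypt/live/... is visible here too
      - letsencrypt:/etc/letsencrypt
volumes:
  letsencrypt: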
It works after changing ports: - 80:8080/tcp to ports: - 80:80/tcp.
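In the compose file, that change looks like this (a sketch of the frontend service's ports section only):
services:
  frontend:
    image: staticfloat/nginx-certbot
    ports:
      - 80:80/tcp    # was 80:8080/tcp
      - 443:443/tcp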
As /etc/letsencrypt is a mounted volume that is persisted over restarts of your container, I would assume that some process added these files to the volume. According to a quick search using my favorite search engine, /etc/letsencrypt/live is filled with files after the certificates are created.

docker-compose problems mounting the teamspeak data directory as a volume

I'm trying to start a TeamSpeak container and mount the SQLite files to the host. I'm using a freshly installed Docker Engine and docker-compose. I haven't done the post-installation setup to run Docker as a non-root user (docs). That's why I think I have problems when I mount the TS data folder /opt/ts3server/sql/ (docs) to my host system. The ./teamspeak/ folder is owned by root, but I also gave it r-w-x for everyone.
docker-compose.yaml:
version: '3'
services:
  teamspeak:
    user: root
    image: teamspeak
    restart: always
    ports:
      - 9987:9987/udp
      - 10011:10011
      - 30033:30033
    volumes:
      - ./teamspeak/:/opt/ts3server/sql/
    environment:
      TS3SERVER_LICENSE: accept
error logs from teamspeak:
teamspeak_1 | 2019-10-25 20:18:33.827157|INFO |ServerLibPriv | |TeamSpeak 3 Server 3.9.1 (2019-07-02 13:17:23)
teamspeak_1 | 2019-10-25 20:18:33.827272|INFO |ServerLibPriv | |SystemInformation: Linux 4.19.0-6-amd64 #1 SMP Debian 4.19.67-2+deb10u1 (2019-09-20) x86_64 Binary: 64bit
teamspeak_1 | 2019-10-25 20:18:33.827300|INFO |ServerLibPriv | |Using hardware aes
teamspeak_1 | 2019-10-25 20:18:33.827484|INFO |DatabaseQuery | |dbPlugin name: SQLite3 plugin, Version 3, (c)TeamSpeak Systems GmbH
teamspeak_1 | 2019-10-25 20:18:33.827513|INFO |DatabaseQuery | |dbPlugin version: 3.11.1
teamspeak_1 | 2019-10-25 20:18:33.827614|INFO |DatabaseQuery | |checking database integrity (may take a while)
teamspeak_1 | 2019-10-25 20:18:33.844497|CRITICAL|DatabaseQuery | |setSQLfromFile( file:properties_list_by_string_id.sql) failed
When I mount anything other than /opt/ts3server/sql/, the TeamSpeak server starts.
How can I make the mounted volume readable and writable for TeamSpeak?
I assume you want to mount the data directory of the TS3 server. The volume you mounted (/opt/ts3server/sql/) is used to store the SQL scripts that create the database.
"This variable controls where the TeamSpeak server looks for sql files. Defaults to /opt/ts3server/sql/."
- TeamSpeak Docker docs
You instead want to mount the data directory (/var/ts3server/) to the host system:
version: '3'
services:
  teamspeak:
    user: root
    image: teamspeak
    restart: always
    ports:
      - 9987:9987/udp
      - 10011:10011
      - 30033:30033
    volumes:
      - ./teamspeak/:/var/ts3server/
    environment:
      TS3SERVER_LICENSE: accept

Docker Compose LAMP Database connection error

So after running docker-compose up I get the message Error establishing a database connection when visiting http://localhost:8000/
Output of docker ps -a:
➜ ~ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5a3c015efeec dockercompose_wordpress "docker-php-entryp..." 17 minutes ago Up 16 minutes 0.0.0.0:8000->80/tcp dockercompose_wordpress_1
4e46c85345d5 dockercompose_db "docker-entrypoint..." 17 minutes ago Up 16 minutes 0.0.0.0:3306->3306/tcp dockercompose_db_1
Is this right? Or should it only show one container since wordpress depends_on db?
So I am expecting to see my WordPress site at localhost:8000.
I had imported the database, making sure to use sed to change all URLs to point to http://localhost.
I had also mounted ./html, which contains my source files, to the container's /var/www/html.
Did I miss anything?
Folder Structure:
Folder
|
|-db
| |-Dockerfile
| |-db.sql
|
|-html
| |- (Wordpress files)
|
|-php
| |-Dockerfile
|
|-docker-compose.yml
docker-compose.yml:
version: '3'
services:
  db:
    build:
      context: ./db
      args:
        MYSQL_DATABASE: coown
        MYSQL_ROOT_PASSWORD: root
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: coown
      MYSQL_ROOT_PASSWORD: root
  wordpress:
    build:
      context: ./php
    depends_on:
      - db
    ports:
      - "8000:80"
    volumes:
      - ./html:/var/www/html
db/Dockerfile:
FROM mysql:5.7
RUN chown -R mysql:root /var/lib/mysql/
ARG MYSQL_DATABASE
ARG MYSQL_ROOT_PASSWORD
ENV MYSQL_DATABASE=$MYSQL_DATABASE
ENV MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD
ADD db.sql /etc/mysql/db.sql
RUN cp /etc/mysql/db.sql /docker-entrypoint-initdb.d
EXPOSE 3306
php/Dockerfile:
FROM php:7.0-apache
RUN docker-php-ext-install mysqli
Some output of docker-compose up:
db_1 | 2017-06-12T19:21:33.873957Z 0 [Warning] CA certificate ca.pem is self signed.
db_1 | 2017-06-12T19:21:33.875841Z 0 [Note] Server hostname (bind-address): '*'; port: 3306
db_1 | 2017-06-12T19:21:33.876030Z 0 [Note] IPv6 is available.
db_1 | 2017-06-12T19:21:33.876088Z 0 [Note] - '::' resolves to '::';
db_1 | 2017-06-12T19:21:33.876195Z 0 [Note] Server socket created on IP: '::'.
db_1 | 2017-06-12T19:21:33.885002Z 0 [Note] InnoDB: Buffer pool(s) load completed at 170612 19:21:33
db_1 | 2017-06-12T19:21:33.902676Z 0 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
db_1 | 2017-06-12T19:21:33.902862Z 0 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
db_1 | 2017-06-12T19:21:33.902964Z 0 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
db_1 | 2017-06-12T19:21:33.903006Z 0 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
db_1 | 2017-06-12T19:21:33.905557Z 0 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
db_1 | 2017-06-12T19:21:33.910940Z 0 [Note] Event Scheduler: Loaded 0 events
db_1 | 2017-06-12T19:21:33.911310Z 0 [Note] mysqld: ready for connections.
db_1 | Version: '5.7.18' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server (GPL)
db_1 | 2017-06-12T19:21:33.911365Z 0 [Note] Executing 'SELECT * FROM INFORMATION_SCHEMA.TABLES;' to get a list of tables using the deprecated partition engine. You may use the startup option '--disable-partition-engine-check' to skip this check.
db_1 | 2017-06-12T19:21:33.911387Z 0 [Note] Beginning of list of non-natively partitioned tables
db_1 | 2017-06-12T19:21:33.926384Z 0 [Note] End of list of non-natively partitioned tables
wordpress_1 | 172.18.0.1 - - [12/Jun/2017:19:28:39 +0000] "GET / HTTP/1.1" 500 557 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36"
are you using "db" host to connect PHP (Wordpress? wp-config.php?) to your database instead of the usual "localhost"?.
