[ EDIT: I am not deleting this question even though it may be a duplicate of this one, because the original question might be harder to find in a search. If this is not advisable, please feel free to delete/close it. ]
I have this docker-compose.yml:
x-common-postgres-env:
&common-postgres-env
POSTGRES_DB: ${POSTGRES_DB}
POSTGRES_USER: ${POSTGRES_USER}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
POSTGRES_PORT: 5432
x-common-postgres:
&common-postgres
image: postgres:13.4
hostname: postgres
environment:
<< : *common-postgres-env
ports:
- "5432:5432"
healthcheck:
test: ["CMD", "pg_isready", "-U", "${POSTGRES_USER}", "-d", "${POSTGRES_DB}"]
x-common-django:
&common-django
build: .
environment:
&common-django-env
<< : *common-postgres-env
DJANGO_SECRET: ${DJANGO_SECRET}
ALLOWED_HOSTS: ".localhost 127.0.0.1 [::1]"
CORS_ALLOWED_ORIGINS: "http://localhost:8000"
CSRF_TRUSTED_ORIGINS: "http://localhost:8000"
healthcheck:
test: ["CMD", "wget", "-qO", "/dev/null", "http://localhost:8000"]
ports:
- "8000:8000"
services:
db:
<< : *common-postgres
profiles:
- prod
volumes:
- ./data/db:/var/lib/postgresql/data
db-test:
<< : *common-postgres
profiles:
- test
web:
<< : *common-django
profiles:
- prod
command: pdm run python manage.py runserver 0.0.0.0:8000
environment:
<< : *common-django-env
POSTGRES_HOST: db
volumes:
- ./KJ_import:/code/KJ_import
- ./docs:/code/docs
- ./KJ-JS:/code/KJ-JS
- ./static:/code/static
- ./media:/code/media
- ./templates:/code/templates
depends_on:
db:
condition: service_healthy
web-test:
<< : *common-django
profiles:
- test
command: pdm run python manage.py runserver 0.0.0.0:8000
environment:
<< : *common-django-env
POSTGRES_HOST: db-test
depends_on:
db-test:
condition: service_healthy
cypress:
# image: "cypress/included:9.2.0"
profiles:
- test
build:
context: .
dockerfile: Dockerfile.cy
# command: ["--browser", "chrome"]
environment:
CYPRESS_baseUrl: http://localhost:8000/
working_dir: /code/KJ-JS
volumes:
- ./KJ-JS:/code/KJ-JS
- ./media:/code/media
depends_on:
web-test:
condition: service_healthy
this Dockerfile.cy:
FROM cypress/included:9.2.0
# WORKDIR /code/KJ-JS
COPY system.conf /etc/dbus-1/system.conf
RUN chmod 644 /etc/dbus-1/system.conf
COPY entrypoint.cy.sh /
ENTRYPOINT ["/bin/sh", "/entrypoint.cy.sh"]
and this entrypoint.cy.sh to activate the Cypress tests:
#!/bin/sh
echo "### Create DBus"
dbus-uuidgen > /var/lib/dbus/machine-id
mkdir -p /var/run/dbus
dbus-daemon --config-file=/usr/share/dbus-1/system.conf --print-address &
# Wait for the D-Bus system bus address to be available
while [ ! -f /var/run/dbus/system_bus_socket ]; do
sleep 1
done
# Check if the dbus-daemon process is running
if ps -ef | grep -v grep | grep dbus-daemon > /dev/null; then
echo "### D-Bus daemon is running"
else
echo "### D-Bus daemon is not running"
fi
# Check if the D-Bus configuration files are correctly configured
if [ -f /etc/dbus-1/system.conf ]; then
echo "### D-Bus system configuration file is present"
else
echo "### D-Bus system configuration file is missing"
fi
# Make sure that the /var/run/dbus directory exists and is writable by the dbus-daemon process
if [ -d /var/run/dbus ]; then
if [ -w /var/run/dbus ]; then
echo "### /var/run/dbus is writable by the dbus-daemon process"
else
echo "### /var/run/dbus is not writable by the dbus-daemon process"
fi
else
echo "### /var/run/dbus does not exist"
fi
# Remove the /var/run/dbus/pid file if it exists
if [ -f /var/run/dbus/pid ]; then
rm -f /var/run/dbus/pid
echo "### /var/run/dbus/pid file removed"
else
echo "### /var/run/dbus/pid file does not exist"
fi
echo "### Bus active"
cd /code/KJ-JS
cypress run --headed --browser chrome
echo "### after cypress run"
exec "$#"
When I run docker compose --profile test up, the db spins up fine, Django gets up and running, but Cypress cannot seem to connect.
It complained about D-Bus not running, so I added it in the entrypoint shown above and tested all of its components, yet the error message still comes up:
kj_import-web-test-1 | System check identified no issues (0 silenced).
kj_import-web-test-1 | December 28, 2022 - 02:32:40
kj_import-web-test-1 | Django version 2.2.28, using settings 'KJ_import.settings'
kj_import-web-test-1 | Starting development server at http://0.0.0.0:8000/
kj_import-web-test-1 | Quit the server with CONTROL-C.
kj_import-web-test-1 | [28/Dec/2022 02:32:42] "GET / HTTP/1.1" 200 5776
kj_import-web-test-1 | [28/Dec/2022 02:32:42] "GET /static/favicon.ico HTTP/1.1" 200 9662
kj_import-web-test-1 | [28/Dec/2022 02:32:46] "GET /docs/register/ HTTP/1.1" 200 6551
kj_import-web-test-1 | [28/Dec/2022 02:32:49] "GET / HTTP/1.1" 200 5776
kj_import-web-test-1 | [28/Dec/2022 02:33:02] "GET / HTTP/1.1" 200 5776
kj_import-cypress-1 | ### Create DBus
kj_import-cypress-1 | ### D-Bus daemon is running
kj_import-cypress-1 | ### D-Bus system configuration file is present
kj_import-cypress-1 | ### /var/run/dbus is writable by the dbus-daemon process
kj_import-cypress-1 | ### /var/run/dbus/pid file does not exist
kj_import-cypress-1 | ### Bus active
kj_import-cypress-1 | unix:path=/var/run/dbus/system_bus_socket,guid=1181acd37ea51796e63af6a863ab9ccf
kj_import-cypress-1 | [26:1228/013304.773071:ERROR:bus.cc(392)] Failed to connect to the bus: Address does not contain a colon
kj_import-cypress-1 | [26:1228/013304.773122:ERROR:bus.cc(392)] Failed to connect to the bus: Address does not contain a colon
kj_import-cypress-1 | [213:1228/013304.794142:ERROR:gpu_init.cc(453)] Passthrough is not supported, GL is swiftshader, ANGLE is
kj_import-cypress-1 | Cypress could not verify that this server is running:
kj_import-cypress-1 |
kj_import-cypress-1 | > http://localhost:8000/
kj_import-cypress-1 |
kj_import-cypress-1 | We are verifying this server because it has been configured as your `baseUrl`.
kj_import-cypress-1 |
kj_import-cypress-1 | Cypress automatically waits until your server is accessible before running tests.
kj_import-cypress-1 |
kj_import-cypress-1 | We will try connecting to it 3 more times...
kj_import-cypress-1 | We will try connecting to it 2 more times...
kj_import-cypress-1 | We will try connecting to it 1 more time...
kj_import-cypress-1 |
kj_import-cypress-1 | Cypress failed to verify that your server is running.
kj_import-cypress-1 |
kj_import-cypress-1 | Please start this server and then run Cypress again.
kj_import-cypress-1 | ### after cypress run
kj_import-cypress-1 exited with code 0
kj_import-web-test-1 | [28/Dec/2022 02:33:32] "GET / HTTP/1.1" 200 5776
kj_import-web-test-1 | [28/Dec/2022 02:34:02] "GET / HTTP/1.1" 200 5776
kj_import-web-test-1 | [28/Dec/2022 02:34:32] "GET / HTTP/1.1" 200 5776
Please note that the server is running fine: you can see from the log above that the GETs are answered with 200, even before the Cypress container starts trying to connect, and I can access it from my local browser.
What am I missing here?
Thanks in advance!
In the end it was probably pretty simple: localhost in a container refers only to the container itself, not to the host.
This answer pointed me in the right direction.
So, in order to properly instruct Cypress to watch/test the service, the URL passed in as CYPRESS_baseUrl inside docker-compose.yml needs to be in the format http://[service-name]:[port], which in my case was http://web-test:8000/.
Be aware that:
the Cypress tests themselves also need to be directed there, and, most likely,
the ALLOWED_HOSTS will need to include the service name, as in the sketch below.
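For illustration, a minimal sketch of the relevant compose pieces (it reuses the service names and YAML anchors from the file above; overriding ALLOWED_HOSTS this way assumes the Django settings read it as a space-separated string, as the original value suggests):

services:
  web-test:
    << : *common-django
    environment:
      << : *common-django-env
      POSTGRES_HOST: db-test
      # the Django dev server must also accept requests addressed by the Compose service name
      ALLOWED_HOSTS: ".localhost 127.0.0.1 [::1] web-test"
  cypress:
    environment:
      # point Cypress at the Django test service, not at the cypress container's own localhost
      CYPRESS_baseUrl: http://web-test:8000/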
PS: There might also have been a second issue at play: in my search I found this reported bug, and several comments pointed to the cypress/included:9.2.0 image as potentially affected. I decided to move to 9.7.0.
Related
I have forwarded the application's MySQL port to 3307 because I need my host MySQL to keep running on 3306, but I get the error below.
Also, I am able to get the welcome page after running sail up.
I am using the latest version of Laravel 9.
Error
Illuminate\Database\QueryException
PHP 8.1.9
9.26.1
SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo for mysql failed: Temporary failure in name resolution
SELECT count(*) AS aggregate FROM `users` WHERE `email` = test@test.com
.env
APP_URL=http://127.0.0.1
APP_PORT=81
DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
FORWARD_DB_PORT=3307
docker-compose.yml
# For more information: https://laravel.com/docs/sail
version: '3'
services:
laravel.test:
build:
context: ./vendor/laravel/sail/runtimes/8.1
dockerfile: Dockerfile
args:
WWWGROUP: '${WWWGROUP}'
image: sail-8.1/app
extra_hosts:
- 'host.docker.internal:host-gateway'
ports:
- '${APP_PORT:-81}:80'
- '${VITE_PORT:-5174}:${VITE_PORT:-5173}'
environment:
WWWUSER: '${WWWUSER}'
LARAVEL_SAIL: 1
XDEBUG_MODE: '${SAIL_XDEBUG_MODE:-off}'
XDEBUG_CONFIG: '${SAIL_XDEBUG_CONFIG:-client_host=host.docker.internal}'
volumes:
- '.:/var/www/html'
networks:
- sail
depends_on:
- mysql
mysql:
image: 'mysql/mysql-server:8.0'
ports:
- '${FORWARD_DB_PORT:-3307}:3306'
environment:
MYSQL_ROOT_PASSWORD: '{DB_PASSWORD}'
MYSQL_ROOT_HOST: '{DB_HOST}'
MYSQL_DATABASE: '{DB_DATABASE}'
MYSQL_USER: '{DB_USERNAME}'
MYSQL_PASSWORD: '{DB_PASSWORD}'
MYSQL_ALLOW_EMPTY_PASSWORD: 1
volumes:
- 'sail-mysql:/var/lib/mysql'
- './vendor/laravel/sail/database/mysql/create-testing-database.sh:/docker-entrypoint-initdb.d/10-create-testing-database.sh'
networks:
- sail
healthcheck:
test: ["CMD", "mysqladmin", "ping", "-p${DB_PASSWORD}"]
retries: 3
timeout: 5s
networks:
sail:
driver: bridge
volumes:
sail-mysql:
driver: local
Update 1
My terminal output is as follows:
sm_v2-laravel.test-1 "start-container" laravel.test exited (0)
Shutting down old Sail processes...
[+] Running 0/1
⠙ Network sm_v2_sail Creating 0.2s
[+] Running 3/3d orphan containers ([sm_v2-service-1]) for this project. If you removed or renamed this service in your co ⠿ Network sm_v2_sail Created 0.2s
⠿ Container sm_v2-mysql-1 Created 1.5s
⠿ Container sm_v2-laravel.test-1 Created 0.5s
Attaching to sm_v2-laravel.test-1, sm_v2-mysql-1
sm_v2-mysql-1 | [Entrypoint] MySQL Docker Image 8.0.30-1.2.9-server
sm_v2-mysql-1 | [Entrypoint] Starting MySQL 8.0.30-1.2.9-server
sm_v2-mysql-1 | 2022-08-30T15:19:04.087084Z 0 [Warning] [MY-011068] [Server] The syntax '--skip-host-cache' is deprecated and will be removed in a future release. Please use SET GLOBAL host_cache_size=0 instead.
sm_v2-mysql-1 | 2022-08-30T15:19:04.092964Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.30) starting as process 1
sm_v2-mysql-1 | 2022-08-30T15:19:04.148193Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
sm_v2-mysql-1 | 2022-08-30T15:19:04.303213Z 1 [ERROR] [MY-012960] [InnoDB] Cannot create redo log files because data files are corrupt or the database was not shut down cleanly after creating the data files.
sm_v2-mysql-1 | 2022-08-30T15:19:04.755173Z 1 [ERROR] [MY-010334] [Server] Failed to initialize DD Storage Engine
sm_v2-mysql-1 | 2022-08-30T15:19:04.755609Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed.
sm_v2-mysql-1 | 2022-08-30T15:19:04.755681Z 0 [ERROR] [MY-010119] [Server] Aborting
sm_v2-mysql-1 | 2022-08-30T15:19:04.757223Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.30) MySQL Community Server - GPL.
sm_v2-mysql-1 exited with code 1
sm_v2-laravel.test-1 | 2022-08-30 15:19:07,746 INFO Set uid to user 0 succeeded
sm_v2-laravel.test-1 | 2022-08-30 15:19:07,751 INFO supervisord started with pid 1
sm_v2-laravel.test-1 | 2022-08-30 15:19:08,756 INFO spawned: 'php' with pid 16
sm_v2-laravel.test-1 | 2022-08-30 15:19:09,759 INFO success: php entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
sm_v2-laravel.test-1 |
sm_v2-laravel.test-1 | INFO Server running on [http://0.0.0.0:80].
sm_v2-laravel.test-1 |
sm_v2-laravel.test-1 | Press Ctrl+C to stop the server
sm_v2-laravel.test-1 |
sm_v2-laravel.test-1 | 2022-08-30 15:19:21 ................................................... ~ 1s
sm_v2-laravel.test-1 | 2022-08-30 15:19:23 /favicon.ico ...................................... ~ 0s
sm_v2-laravel.test-1 | 2022-08-30 15:19:23 ................................................... ~ 1s
sm_v2-laravel.test-1 | 2022-08-30 15:19:24 /build/assets/app.ac81e540.css .................... ~ 0s
sm_v2-laravel.test-1 | 2022-08-30 15:19:24 /build/assets/app.ab93cf8a.js ..................... ~ 0s
sm_v2-laravel.test-1 | 2022-08-30 15:19:24 /favicon.ico ...................................... ~ 0s
sm_v2-laravel.test-1 | 2022-08-30 15:19:27 ................................................... ~ 0s
sm_v2-laravel.test-1 | 2022-08-30 15:19:29 /favicon.ico ...................................... ~ 0s
sm_v2-laravel.test-1 | 2022-08-30 16:07:14 ................................................... ~ 0s
Update 2
I get a different error now:
SQLSTATE[HY000] [1045] Access denied for user 'root'@'192.168.128.3' (using password: YES)
I finally solved it after more than a week of mental frustration. It is very strange that no one was able to provide an answer in any forum, and yes, I tried all the well-known forums.
I made sure that two users were added on my host (main computer) machine, not the Docker MySQL, and I granted them full privileges using the MySQL CLI. There were two entries like these, along with other entries:
root | %
root | localhost
I ran the following commands one after another. I don't know exactly which commands solved the problem, as I am a beginner with Docker and Sail, but here are the steps I tried, after which it started working.
I was getting "Docker is not running.", so I tried the following to get Docker running:
sudo systemctl enable docker.service
sudo systemctl enable docker.socket
After that I tried sail up, but it did not work, so I ran the following:
sudo systemctl stop docker
sudo systemctl start docker
sudo systemctl disable docker.service
sudo systemctl enable docker.service
sail up
After that I rebooted my computer (I am on Ubuntu 22.04)
reboot
I removed some unnecessary files. I also got a failed-state error from the Docker service, which I solved by running lines 2 and 3 of the code below:
sudo rm /etc/systemd/system/docker.service.d/override.conf
sudo systemctl reset-failed docker.service
sudo systemctl start docker.service
systemctl daemon-reload
sudo systemctl start docker.service
sail down
sail build --no-cache
sail up
php artisan config:clear
After that I migrated the database and it worked:
sail artisan migrate
After that
sudo systemctl enable docker
sail up
sail build
sail ps
sudo usermod -aG docker ${USER}
Removed daemon.json
sudo rm daemon.json
I removed the old volumes; I think this is what actually helped:
sail down --rmi all -v
sail up / (you can use sail up --no-cache)
Now MySQL works on the host computer on port 3306, as well as on the other ports used for Docker (3307, 3308), simultaneously.
I appreciate @Mihai's effort, because only @Mihai responded in the comments.
Update 2
I had to add platform: 'linux/x86_64' to the mysql service in the docker-compose.yml file:
mysql:
image: 'mysql/mysql-server:8.0'
platform: 'linux/x86_64'
ports:
- '${FORWARD_DB_PORT:-3307}:3306'
Here is my Dockerfile for React.js, along with the error I got in the terminal:
FROM node:8
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY ./package.json /usr/src/app
RUN npm install
RUN npm build
EXPOSE 3000
CMD ["npm", "run", "start"]
Error:-
react_1 |
react_1 | > ecom-panther#0.1.0 start /usr/src/app
react_1 | > react-scripts start
react_1 |
react_1 | ℹ 「wds」: Project is running at http://172.18.0.2/
react_1 | ℹ 「wds」: webpack output is served from
react_1 | ℹ 「wds」: Content not from webpack is served from /usr/src/app/public
react_1 | ℹ 「wds」: 404s will fallback to /
react_1 | Starting the development server...
react_1 |
ecom-panther_react_1 exited with code 0
For Node and Express, I got this:
express_1 | (node:30) DeprecationWarning: current Server Discovery and Monitoring engine is deprecated, and will be removed in a future version. To use the new Server Discover and Monitoring engine, pass option { useUnifiedTopology: true } to the MongoClient constructor.
express_1 | server is running on port: 5000
express_1 | (node:30) UnhandledPromiseRejectionWarning: MongoNetworkError: failed to connect to server [localhost:27017] on first connect [MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017]
express_1 | at Pool.<anonymous> (/usr/src/app/node_modules/mongodb/lib/core/topologies/server.js:438:11)
express_1 | at emitOne (events.js:116:13)
express_1 | at Pool.emit (events.js:211:7)
express_1 | at createConnection (/usr/src/app/node_modules/mongodb/lib/core/connection/pool.js:561:14)
express_1 | at connect (/usr/src/app/node_modules/mongodb/lib/core/connection/pool.js:994:11)
express_1 | at makeConnection (/usr/src/app/node_modules/mongodb/lib/core/connection/connect.js:31:7)
express_1 | at callback (/usr/src/app/node_modules/mongodb/lib/core/connection/connect.js:264:5)
express_1 | at Socket.err (/usr/src/app/node_modules/mongodb/lib/core/connection/connect.js:294:7)
express_1 | at Object.onceWrapper (events.js:315:30)
express_1 | at emitOne (events.js:116:13)
express_1 | at Socket.emit (events.js:211:7)
express_1 | at emitErrorNT (internal/streams/destroy.js:73:8)
express_1 | at _combinedTickCallback (internal/process/next_tick.js:139:11)
express_1 | at process._tickCallback (internal/process/next_tick.js:181:9)
express_1 | (node:30) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
express_1 | (node:30) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
Dockerfile for the backend:
FROM node:8
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
COPY . /usr/src/app
EXPOSE 5000
CMD ["npm","start"]
Here is my docker-compose.yml file
version: '3' # specify docker-compose version
# Define the service/container to be run
services:
react: #name of first service
build: client #specify the directory of docker file
ports:
- "3000:3000" #specify port mapping
express: #name of second service
build: server #specify the directory of docker file
ports:
- "5000:5000" #specify port mapping
links:
- database #link this service to the database service
database: #name of third service
image: mongo #specify image to build container from
ports:
- "27017:27017" #specify port mapping
How can I run the frontend in the browser, and is there an easier or better way to do this?
Error 1:
Add stdin_open: true to your react service, like:
...
services:
react: #name of first service
build: client #specify the directory of docker file
stdin_open: true
ports:
- "3000:3000" #specify port mapping
...
You might need to rebuild or clear the cache, so run "docker-compose up --build", or "docker-compose build --no-cache" and then "docker-compose up".
Error 2:
The database connection line in your index.js file (or whatever you named it) should have:
mongodb://database:27017/
where "database" is your named MongoDB service. You can use your container IP address too with docker inspect <container> and use the IP the see there too. Ideally you want to have a ENV in your Dockerfile or docker-compose.yml:
ENV MONGO_URL mongodb://database:27017/
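If you prefer setting it from docker-compose.yml instead of the Dockerfile, a sketch of the same idea (it assumes your Express code reads process.env.MONGO_URL when creating the Mongo client, which is an assumption about your app):

services:
  express:
    build: server
    ports:
      - "5000:5000"
    environment:
      # "database" resolves via Docker's internal DNS to the mongo service below
      MONGO_URL: mongodb://database:27017/
    depends_on:
      - database
  database:
    image: mongo
    ports:
      - "27017:27017"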
I have a server working well with the following docker-compose.yml. In the container I can find /etc/letsencrypt/live/v2.10studio.tech/fullchain.pem and /etc/letsencrypt/live/v2.10studio.tech/privkey.pem.
version: "3"
services:
frontend:
restart: unless-stopped
image: staticfloat/nginx-certbot
ports:
- 80:8080/tcp
- 443:443/tcp
environment:
CERTBOT_EMAIL: owner@company.com
volumes:
- ./conf.d:/etc/nginx/user.conf.d:ro
- letsencrypt:/etc/letsencrypt
10studio:
image: bitnami/nginx:1.16
restart: always
volumes:
- ./build:/app
- ./default.conf:/opt/bitnami/nginx/conf/server_blocks/default.conf:ro
- ./configs/config.prod.js:/app/lib/config.js
depends_on:
- frontend
volumes:
letsencrypt:
networks:
default:
external:
name: 10studio
I tried to create another server with the same settings, but I could not find live under /etc/letsencrypt in the container.
Does anyone know what's wrong? Where do the files under /etc/letsencrypt/live come from?
Edit 1:
I have one file, conf.d/.conf. I tried to rebuild and got the following message:
root#iZj6cikgrkjzogdi7x6rdoZ:~/10Studio/pfw# docker-compose up --build --force-recreate --no-deps
Creating pfw_pfw_1 ... done
Creating pfw_10studio_1 ... done
Attaching to pfw_pfw_1, pfw_10studio_1
10studio_1 | 11:25:33.60
10studio_1 | 11:25:33.60 Welcome to the Bitnami nginx container
pfw_1 | templating scripts from /etc/nginx/user.conf.d to /etc/nginx/conf.d
pfw_1 | Substituting variables
pfw_1 | -> /etc/nginx/user.conf.d/*.conf
pfw_1 | /scripts/util.sh: line 116: /etc/nginx/user.conf.d/*.conf: No such file or directory
pfw_1 | Done with startup
pfw_1 | Run certbot
pfw_1 | ++ parse_domains
pfw_1 | ++ for conf_file in /etc/nginx/conf.d/*.conf*
pfw_1 | ++ xargs echo
pfw_1 | ++ sed -n -r -e 's&^\s*ssl_certificate_key\s*\/etc/letsencrypt/live/(.*)/privkey.pem;\s*(#.*)?$&\1&p' /etc/nginx/conf.d/certbot.conf
pfw_1 | + auto_enable_configs
pfw_1 | + for conf_file in /etc/nginx/conf.d/*.conf*
pfw_1 | + keyfiles_exist /etc/nginx/conf.d/certbot.conf
pfw_1 | ++ parse_keyfiles /etc/nginx/conf.d/certbot.conf
pfw_1 | ++ sed -n -e 's&^\s*ssl_certificate_key\s*\(.*\);&\1&p' /etc/nginx/conf.d/certbot.conf
pfw_1 | + return 0
pfw_1 | + '[' conf = nokey ']'
pfw_1 | + set +x
10studio_1 | 11:25:33.60 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-nginx
10studio_1 | 11:25:33.61 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-nginx/issues
10studio_1 | 11:25:33.61 Send us your feedback at containers@bitnami.com
10studio_1 | 11:25:33.61
10studio_1 | 11:25:33.62 INFO ==> ** Starting NGINX setup **
10studio_1 | 11:25:33.64 INFO ==> Validating settings in NGINX_* env vars...
10studio_1 | 11:25:33.64 INFO ==> Initializing NGINX...
10studio_1 | 11:25:33.65 INFO ==> ** NGINX setup finished! **
10studio_1 |
10studio_1 | 11:25:33.66 INFO ==> ** Starting NGINX **
If I do docker-compose up -d --build, I still cannot find /etc/letsencrypt/live in the container.
Please go through the original site of this image, staticfloat/nginx-certbot; it will create and automatically renew website SSL certificates.
It works from the configuration files under ./conf.d.
Create a config directory for your custom configs:
$ mkdir conf.d
And a .conf in that directory:
server {
listen 443 ssl;
server_name server.company.com;
ssl_certificate /etc/letsencrypt/live/server.company.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/server.company.com/privkey.pem;
location / {
...
}
}
because /etc/letsencrypt is mounted from a persistent volume named letsencrypt:
services:
frontend:
restart: unless-stopped
image: staticfloat/nginx-certbot
...
volumes:
...
- letsencrypt:/etc/letsencrypt
volumes:
letsencrypt:
If you need to reference /etc/letsencrypt/live, you need to mount the same letsencrypt volume into your new application as well, for example as sketched below.
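A minimal sketch, assuming the second container is the 10studio service from the compose file above; the read-only mount at the same /etc/letsencrypt path is an assumption about where that application expects the certificates:

services:
  frontend:
    image: staticfloat/nginx-certbot
    volumes:
      - letsencrypt:/etc/letsencrypt
  10studio:
    image: bitnami/nginx:1.16
    volumes:
      # share the certificates written by the frontend/certbot container
      - letsencrypt:/etc/letsencrypt:ro
volumes:
  letsencrypt: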
It works after changing ports: - 80:8080/tcp to ports: - 80:80/tcp.
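For reference, the relevant part of the frontend service after that change would look like this (a sketch; the reasoning, assuming the nginx/certbot process inside the staticfloat/nginx-certbot image listens on port 80 in the container, is that the Let's Encrypt HTTP challenge arriving on host port 80 must be mapped to that same container port rather than to 8080):

  frontend:
    image: staticfloat/nginx-certbot
    ports:
      # host:container - the ACME HTTP challenge must reach the port nginx actually listens on
      - 80:80/tcp
      - 443:443/tcp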
As /etc/letsencrypt is a mounted volume that is persisted across restarts of your container, I would assume that some process added these files to the volume. According to a quick search using my favorite search engine, /etc/letsencrypt/live is filled with files after certificates are created.
Suppose I have a Kafka Connect worker that I built using Docker from the confluentinc/cp-kafka-connect image, deployed to a server, and spun up. Now, most of the time the connector will already exist, since I have created it with a REST API POST call on port 8083. But how would I create my connector (if it doesn't already exist) via a script at worker start time? Can I somehow give my worker steps to run after it spins up?
It requires overriding the container's command.
Original issue: https://github.com/confluentinc/cp-docker-images/issues/467
Solution
volumes:
- $PWD/scripts:/scripts # TODO: Create this folder ahead of time, on your host
command:
- bash
- -c
- |
/etc/confluent/docker/run &
echo "Waiting for Kafka Connect to start listening on kafka-connect ⏳"
while [ $$(curl -s -o /dev/null -w %{http_code} http://kafka-connect:8083/connectors) -eq 000 ] ; do
echo -e $$(date) " Kafka Connect listener HTTP state: " $$(curl -s -o /dev/null -w %{http_code} http://kafka-connect:8083/connectors) " (waiting for 200)"
sleep 5
done
nc -vz kafka-connect 8083
echo -e "\n--\n+> Creating Kafka Connector(s)"
/scripts/create-connectors.sh # Note: This script is stored externally from container
sleep infinity
As cricket_007 says, you can embed it in the command with a call out to a mounted script, or you can just put it all inline, as in this example. If you do this, note that in the command section each $ is replaced with $$ to avoid the error Invalid interpolation format for "command" option.
kafka-connect-01:
image: confluentinc/cp-kafka-connect:5.4.0
[…]
command:
- bash
- -c
- |
[…]
echo "Launching Kafka Connect worker"
/etc/confluent/docker/run &
#
echo "Waiting for Kafka Connect to start listening on localhost ⏳"
while : ; do
curl_status=$$(curl -s -o /dev/null -w %{http_code} http://localhost:8083/connectors)
echo -e $$(date) " Kafka Connect listener HTTP state: " $$curl_status " (waiting for 200)"
if [ $$curl_status -eq 200 ] ; then
break
fi
sleep 5
done
echo -e "\n--\n+> Creating Data Generator source"
curl -s -X PUT -H "Content-Type:application/json" http://localhost:8083/connectors/source-datagen-01/config \
-d '{
"connector.class": "io.confluent.kafka.connect.datagen.DatagenConnector",
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"kafka.topic": "ratings",
"max.interval":750,
"quickstart": "ratings",
"tasks.max": 1
}'
sleep infinity
If you use a tool like Ansible for automation, this config may be useful:
- hosts: kafka-connect-docker
name: deploy kafka connect cluster
become: yes
gather_facts: yes
serial: '{{ serial|default(1) }}'
tasks:
# it's not a fully working example
...
- name: run container
notify: wait ports
docker_container:
name: kafka-connect
image: "{{ docker_registry }}/kafka-connect:2.4.0-1.3.0"
entrypoint: ["sh", "-c", "'exec /opt/kafka/bin/connect-distributed.sh /etc/kafka-connect/connect-distributed.properties >> /var/log/kafka-connect/stderrout.log 2>&1'"]
restart_policy: always
network_mode: host
state: started
- name: call wait ports
command: /bin/true
notify: wait ports
handlers:
- name: restart container
shell: docker restart kafka-connect
notify: wait ports
- name: wait ports
wait_for: port=10900 timeout=300 host=127.0.0.1
changed_when: True
notify: check cluster status
- name: check cluster status
uri:
url: "http://127.0.0.1:10900/connectors"
status_code: 200
register: cluster_status_json_response
until: cluster_status_json_response.status == 200
retries: 60
delay: 5
- hosts: kafka-connect-docker[0]
name: deploy connectors configs
become: yes
tasks:
- name: restore connectors configs
uri:
url: "http://127.0.0.1:10900/connectors/{{ item }}/config"
method: PUT
return_content: yes
body_format: json
headers:
Accept: "application/json"
Content-Type: "application/json"
body: "{{ lookup('template', 'roles/kafka-connect/templates/etc/kafka-connect/tasks/' + item + '.json') }}"
status_code: 200, 201
timeout: 60
with_items: "{{ connector_configs }}"
I'm trying to follow this tutorial How to build docker cluster with celery and RabbitMQ in 10 minutes.
I followed the tutorial, although I changed the following files.
My docker-compose.yml file looks as follows:
version: '2'
services:
rabbit:
hostname: rabbit
image: rabbitmq
environment:
- RABBITMQ_DEFAULT_USER=user
- RABBITMQ_DEFAULT_PASS=pass
- HOSTNAME=rabbitmq
- RABBITMQ_NODENAME=rabbitmq
ports:
- "5672:5672" # we forward this port because it's useful for debugging
- "15672:15672" # here, we can access rabbitmq management plugin
worker:
build:
context: .
dockerfile: Dockerfile
volumes:
- .:/app
links:
- rabbit
depends_on:
- rabbit
test_celery/celery.py:
from __future__ import absolute_import, unicode_literals
from celery import Celery
app = Celery('test_celery', broker='amqp://user:pass@rabbit:5672//', backend='rpc://', include=['test_celery.tasks'])
and Dockerfile:
FROM python:3.6
ADD requirements.txt /app/requirements.txt
ADD ./test_celery /app/
WORKDIR /app/
RUN pip install -r requirements.txt
ENTRYPOINT celery -A test_celery worker --loglevel=info
I run the code with the following commands (my OS is Ubuntu 16.04):
sudo docker-compose build
sudo docker-compose scale worker=5
sudo docker-compose up
The output on screen looks something like this:
rabbit_1 | closing AMQP connection <0.501.0> (172.19.0.6:60470 -> 172.19.0.7:5672, vhost: '/', user: 'admin'):
rabbit_1 | client unexpectedly closed TCP connection
rabbit_1 |
rabbit_1 | =WARNING REPORT==== 8-Jun-2017::03:34:15 ===
rabbit_1 | closing AMQP connection <0.479.0> (172.19.0.6:60468 -> 172.19.0.7:5672, vhost: '/', user: 'admin'):
rabbit_1 | client unexpectedly closed TCP connection
rabbit_1 |
rabbit_1 | =WARNING REPORT==== 8-Jun-2017::03:34:15 ===
rabbit_1 | closing AMQP connection <0.366.0> (172.19.0.4:44754 -> 172.19.0.7:5672, vhost: '/', user: 'admin'):
rabbit_1 | client unexpectedly closed TCP connection
rabbit_1 |
rabbit_1 | =WARNING REPORT==== 8-Jun-2017::03:34:15 ===
rabbit_1 | closing AMQP connection <0.359.0> (172.19.0.4:44752 -> 172.19.0.7:5672, vhost: '/', user: 'admin'):
rabbit_1 | client unexpectedly closed TCP connection
worker_1 | [2017-06-08 03:34:19,138: INFO/MainProcess] missed heartbeat from celery@f77048a9d801
worker_1 | [2017-06-08 03:34:24,140: INFO/MainProcess] missed heartbeat from celery@79aa2323a472
worker_1 | [2017-06-08 03:34:24,141: INFO/MainProcess] missed heartbeat from celery@93af751ed1b5
Then in the same directory I run
python -m test_celery.run_tasks
and the output from this gives me:
a kombu.exceptions.OperationalError: timed out error, which I am not sure how to fix in order to get the same output as in the tutorial.
As the output and the errors ("client unexpectedly closed TCP connection", "kombu.exceptions.OperationalError: timed out") suggest, it seems that RabbitMQ didn't start as expected. Could you run "docker ps -a" to check the status of the Rabbit container? If it has exited, "docker logs container-id" will print the logs of the Rabbit container.
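If the Rabbit container turns out to be healthy and the workers are simply connecting before the broker is ready, one option is to gate the worker on a healthcheck. A minimal sketch, assuming Compose file format 2.1+ (the question uses version '2') and that rabbitmqctl is available inside the rabbitmq image; both are assumptions to verify for your setup:

version: '2.1'
services:
  rabbit:
    hostname: rabbit
    image: rabbitmq
    healthcheck:
      # reports healthy only once the broker answers
      test: ["CMD", "rabbitmqctl", "status"]
      interval: 10s
      timeout: 10s
      retries: 5
  worker:
    build: .
    depends_on:
      rabbit:
        # wait for the healthcheck instead of just container start
        condition: service_healthy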