I have a server working well with the following docker-compose.yml. In the container I can find /etc/letsencrypt/live/v2.10studio.tech/fullchain.pem and /etc/letsencrypt/live/v2.10studio.tech/privkey.pem.
version: "3"
services:
frontend:
restart: unless-stopped
image: staticfloat/nginx-certbot
ports:
- 80:8080/tcp
- 443:443/tcp
environment:
CERTBOT_EMAIL: owner#company.com
volumes:
- ./conf.d:/etc/nginx/user.conf.d:ro
- letsencrypt:/etc/letsencrypt
10studio:
image: bitnami/nginx:1.16
restart: always
volumes:
- ./build:/app
- ./default.conf:/opt/bitnami/nginx/conf/server_blocks/default.conf:ro
- ./configs/config.prod.js:/app/lib/config.js
depends_on:
- frontend
volumes:
letsencrypt:
networks:
default:
external:
name: 10studio
I tried to create another server with the same settings, but I could not find live under /etc/letsencrypt in the container.
Does anyone know what's wrong? Where do the files under /etc/letsencrypt/live come from?
Edit 1:
I have one file, conf.d/.conf. I tried to rebuild and got the following message:
root@iZj6cikgrkjzogdi7x6rdoZ:~/10Studio/pfw# docker-compose up --build --force-recreate --no-deps
Creating pfw_pfw_1 ... done
Creating pfw_10studio_1 ... done
Attaching to pfw_pfw_1, pfw_10studio_1
10studio_1 | 11:25:33.60
10studio_1 | 11:25:33.60 Welcome to the Bitnami nginx container
pfw_1 | templating scripts from /etc/nginx/user.conf.d to /etc/nginx/conf.d
pfw_1 | Substituting variables
pfw_1 | -> /etc/nginx/user.conf.d/*.conf
pfw_1 | /scripts/util.sh: line 116: /etc/nginx/user.conf.d/*.conf: No such file or directory
pfw_1 | Done with startup
pfw_1 | Run certbot
pfw_1 | ++ parse_domains
pfw_1 | ++ for conf_file in /etc/nginx/conf.d/*.conf*
pfw_1 | ++ xargs echo
pfw_1 | ++ sed -n -r -e 's&^\s*ssl_certificate_key\s*\/etc/letsencrypt/live/(.*)/privkey.pem;\s*(#.*)?$&\1&p' /etc/nginx/conf.d/certbot.conf
pfw_1 | + auto_enable_configs
pfw_1 | + for conf_file in /etc/nginx/conf.d/*.conf*
pfw_1 | + keyfiles_exist /etc/nginx/conf.d/certbot.conf
pfw_1 | ++ parse_keyfiles /etc/nginx/conf.d/certbot.conf
pfw_1 | ++ sed -n -e 's&^\s*ssl_certificate_key\s*\(.*\);&\1&p' /etc/nginx/conf.d/certbot.conf
pfw_1 | + return 0
pfw_1 | + '[' conf = nokey ']'
pfw_1 | + set +x
10studio_1 | 11:25:33.60 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-nginx
10studio_1 | 11:25:33.61 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-nginx/issues
10studio_1 | 11:25:33.61 Send us your feedback at containers@bitnami.com
10studio_1 | 11:25:33.61
10studio_1 | 11:25:33.62 INFO ==> ** Starting NGINX setup **
10studio_1 | 11:25:33.64 INFO ==> Validating settings in NGINX_* env vars...
10studio_1 | 11:25:33.64 INFO ==> Initializing NGINX...
10studio_1 | 11:25:33.65 INFO ==> ** NGINX setup finished! **
10studio_1 |
10studio_1 | 11:25:33.66 INFO ==> ** Starting NGINX **
If I do docker-compose up -d --build, I still cannot find /etc/letsencrypt/live in the container.
Please go through the original site of the image staticfloat/nginx-certbot: it creates and automatically renews website SSL certificates, driven by the configuration files under ./conf.d.
Create a config directory for your custom configs:
$ mkdir conf.d
And add a .conf file in that directory:
server {
    listen 443 ssl;
    server_name server.company.com;
    ssl_certificate /etc/letsencrypt/live/server.company.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/server.company.com/privkey.pem;
    location / {
        ...
    }
}
Because /etc/letsencrypt is mounted from a persistent volume letsencrypt:
services:
  frontend:
    restart: unless-stopped
    image: staticfloat/nginx-certbot
    ...
    volumes:
      ...
      - letsencrypt:/etc/letsencrypt
volumes:
  letsencrypt:
If you need to reference /etc/letsencrypt/live, you need to mount the same volume letsencrypt into your new application as well.
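For example, a minimal sketch of how the 10studio service from the question could mount that same volume (the :ro flag is my assumption; adjust paths to your setup):

  10studio:
    image: bitnami/nginx:1.16
    volumes:
      - ./build:/app
      - ./default.conf:/opt/bitnami/nginx/conf/server_blocks/default.conf:ro
      - letsencrypt:/etc/letsencrypt:ro   # same named volume the frontend service writes to
    depends_on:
      - frontend

volumes:
  letsencrypt: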
It works after changing ports: - 80:8080/tcp to ports: - 80:80/tcp, presumably because the Let's Encrypt HTTP-01 challenge has to reach nginx on port 80 inside the container; with host port 80 mapped to container port 8080, the challenge never succeeds, so no certificates are issued.
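For reference, only the ports mapping of the frontend service changes:

    ports:
      - 80:80/tcp
      - 443:443/tcp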
As /etc/letsencrypt is a mounted volume that is persisted over restarts of your container, I would assume that some process added these files to the volume. According to a quick search using my favorite search engine, /etc/letsencrypt/live is filled with files after the certificates are created.
Related
[ EDIT: I am not deleting the question even though it could be a duplicate of this one, because the original question might be harder to search for. If this is not advisable, please feel free to delete/close. ]
I have this docker-compose:
x-common-postgres-env:
  &common-postgres-env
  POSTGRES_DB: ${POSTGRES_DB}
  POSTGRES_USER: ${POSTGRES_USER}
  POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
  POSTGRES_PORT: 5432

x-common-postgres:
  &common-postgres
  image: postgres:13.4
  hostname: postgres
  environment:
    << : *common-postgres-env
  ports:
    - "5432:5432"
  healthcheck:
    test: ["CMD", "pg_isready", "-U", "${POSTGRES_USER}", "-d", "${POSTGRES_DB}"]

x-common-django:
  &common-django
  build: .
  environment:
    &common-django-env
    << : *common-postgres-env
    DJANGO_SECRET: ${DJANGO_SECRET}
    ALLOWED_HOSTS: ".localhost 127.0.0.1 [::1]"
    CORS_ALLOWED_ORIGINS: "http://localhost:8000"
    CSRF_TRUSTED_ORIGINS: "http://localhost:8000"
  healthcheck:
    test: ["CMD", "wget", "-qO", "/dev/null", "http://localhost:8000"]
  ports:
    - "8000:8000"

services:
  db:
    << : *common-postgres
    profiles:
      - prod
    volumes:
      - ./data/db:/var/lib/postgresql/data
  db-test:
    << : *common-postgres
    profiles:
      - test
  web:
    << : *common-django
    profiles:
      - prod
    command: pdm run python manage.py runserver 0.0.0.0:8000
    environment:
      << : *common-django-env
      POSTGRES_HOST: db
    volumes:
      - ./KJ_import:/code/KJ_import
      - ./docs:/code/docs
      - ./KJ-JS:/code/KJ-JS
      - ./static:/code/static
      - ./media:/code/media
      - ./templates:/code/templates
    depends_on:
      db:
        condition: service_healthy
  web-test:
    << : *common-django
    profiles:
      - test
    command: pdm run python manage.py runserver 0.0.0.0:8000
    environment:
      << : *common-django-env
      POSTGRES_HOST: db-test
    depends_on:
      db-test:
        condition: service_healthy
  cypress:
    # image: "cypress/included:9.2.0"
    profiles:
      - test
    build:
      context: .
      dockerfile: Dockerfile.cy
    # command: ["--browser", "chrome"]
    environment:
      CYPRESS_baseUrl: http://localhost:8000/
    working_dir: /code/KJ-JS
    volumes:
      - ./KJ-JS:/code/KJ-JS
      - ./media:/code/media
    depends_on:
      web-test:
        condition: service_healthy
This Dockerfile.cy
FROM cypress/included:9.2.0
# WORKDIR /code/KJ-JS
COPY system.conf /etc/dbus-1/system.conf
RUN chmod 644 /etc/dbus-1/system.conf
COPY entrypoint.cy.sh /
ENTRYPOINT ["/bin/sh", "/entrypoint.cy.sh"]
and this entrypoint.cy.sh to activate the Cypress tests:
#!/bin/sh
echo "### Create DBus"
dbus-uuidgen > /var/lib/dbus/machine-id
mkdir -p /var/run/dbus
dbus-daemon --config-file=/usr/share/dbus-1/system.conf --print-address &
# Wait for the D-Bus system bus address to be available
while [ ! -S /var/run/dbus/system_bus_socket ]; do
    sleep 1
done
# Check if the dbus-daemon process is running
if ps -ef | grep -v grep | grep dbus-daemon > /dev/null; then
    echo "### D-Bus daemon is running"
else
    echo "### D-Bus daemon is not running"
fi
# Check if the D-Bus configuration files are correctly configured
if [ -f /etc/dbus-1/system.conf ]; then
    echo "### D-Bus system configuration file is present"
else
    echo "### D-Bus system configuration file is missing"
fi
# Make sure that the /var/run/dbus directory exists and is writable by the dbus-daemon process
if [ -d /var/run/dbus ]; then
    if [ -w /var/run/dbus ]; then
        echo "### /var/run/dbus is writable by the dbus-daemon process"
    else
        echo "### /var/run/dbus is not writable by the dbus-daemon process"
    fi
else
    echo "### /var/run/dbus does not exist"
fi
# Remove the /var/run/dbus/pid file if it exists
if [ -f /var/run/dbus/pid ]; then
    rm -f /var/run/dbus/pid
    echo "### /var/run/dbus/pid file removed"
else
    echo "### /var/run/dbus/pid file does not exist"
fi
echo "### Bus active"
cd /code/KJ-JS
cypress run --headed --browser chrome
echo "### after cypress run"
exec "$#"
When I run docker compose --profile test up, the db spins up fine and Django gets up and running, but Cypress cannot seem to connect.
It complained about D-Bus not running, so I added it in the entrypoint shown above and tested all of its components, yet the error message still comes up:
kj_import-web-test-1 | System check identified no issues (0 silenced).
kj_import-web-test-1 | December 28, 2022 - 02:32:40
kj_import-web-test-1 | Django version 2.2.28, using settings 'KJ_import.settings'
kj_import-web-test-1 | Starting development server at http://0.0.0.0:8000/
kj_import-web-test-1 | Quit the server with CONTROL-C.
kj_import-web-test-1 | [28/Dec/2022 02:32:42] "GET / HTTP/1.1" 200 5776
kj_import-web-test-1 | [28/Dec/2022 02:32:42] "GET /static/favicon.ico HTTP/1.1" 200 9662
kj_import-web-test-1 | [28/Dec/2022 02:32:46] "GET /docs/register/ HTTP/1.1" 200 6551
kj_import-web-test-1 | [28/Dec/2022 02:32:49] "GET / HTTP/1.1" 200 5776
kj_import-web-test-1 | [28/Dec/2022 02:33:02] "GET / HTTP/1.1" 200 5776
kj_import-cypress-1 | ### Create DBus
kj_import-cypress-1 | ### D-Bus daemon is running
kj_import-cypress-1 | ### D-Bus system configuration file is present
kj_import-cypress-1 | ### /var/run/dbus is writable by the dbus-daemon process
kj_import-cypress-1 | ### /var/run/dbus/pid file does not exist
kj_import-cypress-1 | ### Bus active
kj_import-cypress-1 | unix:path=/var/run/dbus/system_bus_socket,guid=1181acd37ea51796e63af6a863ab9ccf
kj_import-cypress-1 | [26:1228/013304.773071:ERROR:bus.cc(392)] Failed to connect to the bus: Address does not contain a colon
kj_import-cypress-1 | [26:1228/013304.773122:ERROR:bus.cc(392)] Failed to connect to the bus: Address does not contain a colon
kj_import-cypress-1 | [213:1228/013304.794142:ERROR:gpu_init.cc(453)] Passthrough is not supported, GL is swiftshader, ANGLE is
kj_import-cypress-1 | Cypress could not verify that this server is running:
kj_import-cypress-1 |
kj_import-cypress-1 | > http://localhost:8000/
kj_import-cypress-1 |
kj_import-cypress-1 | We are verifying this server because it has been configured as your `baseUrl`.
kj_import-cypress-1 |
kj_import-cypress-1 | Cypress automatically waits until your server is accessible before running tests.
kj_import-cypress-1 |
kj_import-cypress-1 | We will try connecting to it 3 more times...
kj_import-cypress-1 | We will try connecting to it 2 more times...
kj_import-cypress-1 | We will try connecting to it 1 more time...
kj_import-cypress-1 |
kj_import-cypress-1 | Cypress failed to verify that your server is running.
kj_import-cypress-1 |
kj_import-cypress-1 | Please start this server and then run Cypress again.
kj_import-cypress-1 | ### after cypress run
kj_import-cypress-1 exited with code 0
kj_import-web-test-1 | [28/Dec/2022 02:33:32] "GET / HTTP/1.1" 200 5776
kj_import-web-test-1 | [28/Dec/2022 02:34:02] "GET / HTTP/1.1" 200 5776
kj_import-web-test-1 | [28/Dec/2022 02:34:32] "GET / HTTP/1.1" 200 5776
Please note that the server is running fine. You can see it in the log above (the GETs are answered with 200, even before the Cypress container starts trying to connect), and I can access it from my local browser.
What am I missing here?
Thanks in advance!
In the end it was probably pretty simple: localhost in a container refers only to the container itself, not to the host.
This answer pointed me in the right direction.
So, in order to properly instruct Cypress to watch/test the service, the URL (CYPRESS_baseUrl inside the docker-compose.yml) needs to be passed in the format http://[service-name]:[port], which in my case was http://web-test:8000/.
Be aware that:
- the Cypress tests themselves also need to be directed there, and most likely
- the ALLOWED_HOSTS will need to include the service name (see the sketch below).
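A minimal sketch of the relevant pieces (only these values change relative to the compose file above; exactly how ALLOWED_HOSTS is split is up to your settings.py, so treat that value as an assumption):

  web-test:
    << : *common-django
    environment:
      << : *common-django-env
      POSTGRES_HOST: db-test
      ALLOWED_HOSTS: ".localhost 127.0.0.1 [::1] web-test"

  cypress:
    environment:
      CYPRESS_baseUrl: http://web-test:8000/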
PS: There might also have been a second issue at play: in my search I found this reported bug, and several comments pointed to the cypress/included:9.2.0 image as potentially affected. I decided to move on to 9.7.0.
I have forwarded the application's MySQL port to 3307 because I need my host MySQL to keep running on 3306, but I get the error below.
I am able to see the welcome page after running sail up.
I am using the latest version of Laravel 9.
Error
Illuminate\Database\QueryException
PHP 8.1.9
9.26.1
SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo for mysql failed: Temporary failure in name resolution
SELECT count(*) AS aggregate FROM `users` WHERE `email` = test@test.com
.env
APP_URL=http://127.0.0.1
APP_PORT=81
DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
FORWARD_DB_PORT=3307
docker-compose.yml
# For more information: https://laravel.com/docs/sail
version: '3'
services:
  laravel.test:
    build:
      context: ./vendor/laravel/sail/runtimes/8.1
      dockerfile: Dockerfile
      args:
        WWWGROUP: '${WWWGROUP}'
    image: sail-8.1/app
    extra_hosts:
      - 'host.docker.internal:host-gateway'
    ports:
      - '${APP_PORT:-81}:80'
      - '${VITE_PORT:-5174}:${VITE_PORT:-5173}'
    environment:
      WWWUSER: '${WWWUSER}'
      LARAVEL_SAIL: 1
      XDEBUG_MODE: '${SAIL_XDEBUG_MODE:-off}'
      XDEBUG_CONFIG: '${SAIL_XDEBUG_CONFIG:-client_host=host.docker.internal}'
    volumes:
      - '.:/var/www/html'
    networks:
      - sail
    depends_on:
      - mysql
  mysql:
    image: 'mysql/mysql-server:8.0'
    ports:
      - '${FORWARD_DB_PORT:-3307}:3306'
    environment:
      MYSQL_ROOT_PASSWORD: '{DB_PASSWORD}'
      MYSQL_ROOT_HOST: '{DB_HOST}'
      MYSQL_DATABASE: '{DB_DATABASE}'
      MYSQL_USER: '{DB_USERNAME}'
      MYSQL_PASSWORD: '{DB_PASSWORD}'
      MYSQL_ALLOW_EMPTY_PASSWORD: 1
    volumes:
      - 'sail-mysql:/var/lib/mysql'
      - './vendor/laravel/sail/database/mysql/create-testing-database.sh:/docker-entrypoint-initdb.d/10-create-testing-database.sh'
    networks:
      - sail
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-p${DB_PASSWORD}"]
      retries: 3
      timeout: 5s
networks:
  sail:
    driver: bridge
volumes:
  sail-mysql:
    driver: local
Update 1
My terminal output is as follows:
sm_v2-laravel.test-1 "start-container" laravel.test exited (0)
Shutting down old Sail processes...
[+] Running 0/1
⠙ Network sm_v2_sail Creating 0.2s
[+] Running 3/3d orphan containers ([sm_v2-service-1]) for this project. If you removed or renamed this service in your co ⠿ Network sm_v2_sail Created 0.2s
⠿ Container sm_v2-mysql-1 Created 1.5s
⠿ Container sm_v2-laravel.test-1 Created 0.5s
Attaching to sm_v2-laravel.test-1, sm_v2-mysql-1
sm_v2-mysql-1 | [Entrypoint] MySQL Docker Image 8.0.30-1.2.9-server
sm_v2-mysql-1 | [Entrypoint] Starting MySQL 8.0.30-1.2.9-server
sm_v2-mysql-1 | 2022-08-30T15:19:04.087084Z 0 [Warning] [MY-011068] [Server] The syntax '--skip-host-cache' is deprecated and will be removed in a future release. Please use SET GLOBAL host_cache_size=0 instead.
sm_v2-mysql-1 | 2022-08-30T15:19:04.092964Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.30) starting as process 1
sm_v2-mysql-1 | 2022-08-30T15:19:04.148193Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
sm_v2-mysql-1 | 2022-08-30T15:19:04.303213Z 1 [ERROR] [MY-012960] [InnoDB] Cannot create redo log files because data files are corrupt or the database was not shut down cleanly after creating the data files.
sm_v2-mysql-1 | 2022-08-30T15:19:04.755173Z 1 [ERROR] [MY-010334] [Server] Failed to initialize DD Storage Engine
sm_v2-mysql-1 | 2022-08-30T15:19:04.755609Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed.
sm_v2-mysql-1 | 2022-08-30T15:19:04.755681Z 0 [ERROR] [MY-010119] [Server] Aborting
sm_v2-mysql-1 | 2022-08-30T15:19:04.757223Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.30) MySQL Community Server - GPL.
sm_v2-mysql-1 exited with code 1
sm_v2-laravel.test-1 | 2022-08-30 15:19:07,746 INFO Set uid to user 0 succeeded
sm_v2-laravel.test-1 | 2022-08-30 15:19:07,751 INFO supervisord started with pid 1
sm_v2-laravel.test-1 | 2022-08-30 15:19:08,756 INFO spawned: 'php' with pid 16
sm_v2-laravel.test-1 | 2022-08-30 15:19:09,759 INFO success: php entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
sm_v2-laravel.test-1 |
sm_v2-laravel.test-1 | INFO Server running on [http://0.0.0.0:80].
sm_v2-laravel.test-1 |
sm_v2-laravel.test-1 | Press Ctrl+C to stop the server
sm_v2-laravel.test-1 |
sm_v2-laravel.test-1 | 2022-08-30 15:19:21 ................................................... ~ 1s
sm_v2-laravel.test-1 | 2022-08-30 15:19:23 /favicon.ico ...................................... ~ 0s
sm_v2-laravel.test-1 | 2022-08-30 15:19:23 ................................................... ~ 1s
sm_v2-laravel.test-1 | 2022-08-30 15:19:24 /build/assets/app.ac81e540.css .................... ~ 0s
sm_v2-laravel.test-1 | 2022-08-30 15:19:24 /build/assets/app.ab93cf8a.js ..................... ~ 0s
sm_v2-laravel.test-1 | 2022-08-30 15:19:24 /favicon.ico ...................................... ~ 0s
sm_v2-laravel.test-1 | 2022-08-30 15:19:27 ................................................... ~ 0s
sm_v2-laravel.test-1 | 2022-08-30 15:19:29 /favicon.ico ...................................... ~ 0s
sm_v2-laravel.test-1 | 2022-08-30 16:07:14 ................................................... ~ 0s
Update 2
I get a different error now:
SQLSTATE[HY000] [1045] Access denied for user 'root'@'192.168.128.3' (using password: YES)
I finally solved it after more than a week of mental frustration. It is very strange that no one was able to provide an answer in any forum, and yes, I tried all the well-known forums.
I made sure that two users were added on my host (main computer) machine, not the Docker MySQL, and I granted them full privileges using the MySQL CLI. There were 2 entries like these, along with other entries:
root | %
root | localhost
I ran the following commands one after another. I don't know exactly which commands solved the problem, as I am a beginner with Docker and Sail, but here are the steps I tried, after which it started working.
I was getting Docker is not running., so I tried the following to get Docker running:
sudo systemctl enable docker.service
sudo systemctl enable docker.socket
After that I tried sail up but it did not work, so I ran the following:
sudo systemctl stop docker
sudo systemctl start docker
sudo systemctl disable docker.service
sudo systemctl enable docker.service
sail up
After that I rebooted my computer (I am on Ubuntu 22.04):
reboot
I removed some unnecessary files. I also got a "failed" error on the Docker service, which I solved by running lines 2 and 3 of the code below:
sudo rm /etc/systemd/system/docker.service.d/override.conf
sudo systemctl reset-failed docker.service
sudo systemctl start docker.service
systemctl daemon-reload
sudo systemctl start docker.service
sail down
sail build --no-cache
sail up
php artisan config:clear
After that I migrated the database and it worked:
sail artisan migrate
After that:
sudo systemctl enable docker
sail up
sail build
sail ps
sudo usermod -aG docker ${USER}
I removed daemon.json:
sudo rm daemon.json
I removed the old volumes; I think this is what helped:
sail down --rmi all -v
sail up / (you can use sail up --no-cache)
Now MySQL works on the host computer's port 3306 as well as on the other ports used for Docker (3307, 3308) simultaneously.
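In other words, the mapping that ends up working looks like this (containers keep talking to the mysql service on 3306 over the compose network, while the host uses the forwarded 3307):

  mysql:
    image: 'mysql/mysql-server:8.0'
    ports:
      - '${FORWARD_DB_PORT:-3307}:3306'   # host 3307 -> container 3306
    # inside the compose network the app still uses DB_HOST=mysql and DB_PORT=3306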
I appreciate @Mihai's effort, because only @Mihai responded in the comments.
Update 2
I had to add platform: 'linux/x86_64' to the docker-compose.yml file:
  mysql:
    image: 'mysql/mysql-server:8.0'
    platform: 'linux/x86_64'
    ports:
      - '${FORWARD_DB_PORT:-3307}:3306'
I want to use Celery in my Django app for a task that should run in the background. I've followed these tutorials: Asynchronous Tasks With Django and Celery, and First Steps with Django: Using Celery with Django.
My project structure:
project
├──project
| ├── settings
| ├──__init__.py (empty)
| ├──base.py
| ├──development.py
| └──production.py
| ├──__init__.py
| └──celery.py
├──app_number_1
| └──tasks.py
project/project/__init__.py:
from __future__ import absolute_import, unicode_literals
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as celery_app
__all__ = ['celery_app']
project/project/celery.py :
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings.production')
app = Celery('project')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
project/project/settings/production.py :
INSTALLED_APPS = [
    'background_task',
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'django.contrib.gis',
]
.
.
.
CELERY_BROKER_URL = 'mongodb://mongo:27017'
CELERY_RESULT_BACKEND = 'mongodb://mongo:27017'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
docker-compose.yml:
version: '3'
services:
  web:
    env_file:
      - env_vars.env
    build: .
    restart: always
    command:
      - /code/runserver.sh
    ports:
      - "80:8000"
    depends_on:
      - db
      - mongo
  db:
    ...
  mongo:
    image: mongo:latest
    restart: always
runserver.sh:
#!/bin/bash
sleep 1
python3 /code/project/manage.py migrate --settings=project.settings.production
python3 /code/project/manage.py runserver 0.0.0.0:8000 --settings=project.settings.production & celery -A project worker -Q celery
After docker-compose up --build I get the following error:
web_1 | Running migrations:
web_1 | No migrations to apply.
mongo_1 | 2019-08-28T10:24:26.478+0000 I NETWORK [conn2] end connection 172.18.0.4:42252 (1 connection now open)
mongo_1 | 2019-08-28T10:24:26.479+0000 I NETWORK [conn1] end connection 172.18.0.4:42250 (0 connections now open)
web_1 | Error:
web_1 | Unable to load celery application.
web_1 | Module 'project' has no attribute 'celery'
Any hint will be great!
Thanks.
The celery module isn't contained in the first (outer) project folder.
You can either move it there, or make the outer project directory into a package by adding an __init__.py module and setting the app module in the celery command to:
celery -A project.project worker -Q celery
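If it helps, here is a rough sketch (my own suggestion, not part of the original answer) of running the worker as its own compose service with the corrected module path, instead of backgrounding it inside runserver.sh; it assumes you took the package route above, and reuses the env_file and services from the question's compose file:

  worker:
    build: .
    env_file:
      - env_vars.env
    command: celery -A project.project worker -Q celery
    depends_on:
      - db
      - mongo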
I am trying to run the Concourse worker using a Docker image on a Gentoo host. When running the worker's Docker image in privileged mode I get:
iptables: create-instance-chains: iptables: No chain/target/match by that name.
My docker-compose file is:
version: '3'
services:
  worker:
    image: private-concourse-worker-with-keys
    command: worker
    ports:
      - "7777:7777"
      - "7788:7788"
      - "7799:7799"
    #restart: on-failure
    privileged: true
    environment:
      - CONCOURSE_TSA_HOST=concourse-web-1.dev
      - CONCOURSE_GARDEN_NETWORK
My Dockerfile:
FROM concourse/concourse
COPY keys/tsa_host_key.pub /concourse-keys/tsa_host_key.pub
COPY keys/worker_key /concourse-keys/worker_key
Some more errors:
worker_1 | {"timestamp":"1526507528.298546791","source":"guardian","message":"guardian.create.containerizer-create.finished","log_level":1,"data":{"handle":"426762cc-b9a8-47b0-711a-8f5ce18ff46c","session":"23.2"}}
worker_1 | {"timestamp":"1526507528.298666477","source":"guardian","message":"guardian.create.containerizer-create.watch.watching","log_level":1,"data":{"handle":"426762cc-b9a8-47b0-711a-8f5ce18ff46c","session":"23.2.4"}}
worker_1 | {"timestamp":"1526507528.303164721","source":"guardian","message":"guardian.create.network.started","log_level":1,"data":{"handle":"426762cc-b9a8-47b0-711a-8f5ce18ff46c","session":"23.5","spec":""}}
worker_1 | {"timestamp":"1526507528.303202152","source":"guardian","message":"guardian.create.network.config-create","log_level":1,"data":{"config":{"ContainerHandle":"426762cc-b9a8-47b0-711a-8f5ce18ff46c","HostIntf":"wbpuf2nmpege-0","ContainerIntf":"wbpuf2nmpege-1","IPTablePrefix":"w--","IPTableInstance":"bpuf2nmpege","BridgeName":"wbrdg-0afe0000","BridgeIP":"x.x.0.1","ContainerIP":"x.x.0.2","ExternalIP":"x.x.0.2","Subnet":{"IP":"x.x.0.0","Mask":"/////A=="},"Mtu":1500,"PluginNameservers":null,"OperatorNameservers":[],"AdditionalNameservers":["x.x.0.2"]},"handle":"426762cc-b9a8-47b0-711a-8f5ce18ff46c","session":"23.5","spec":""}}
worker_1 | {"timestamp":"1526507528.324085236","source":"guardian","message":"guardian.iptables-runner.command.failed","log_level":2,"data":{"argv":["/worker-state/3.6.0/assets/iptables/sbin/iptables","--wait","-A","w--instance-bpuf2nmpege-log","-m","conntrack","--ctstate","NEW,UNTRACKED,INVALID","--protocol","all","--jump","LOG","--log-prefix","426762cc-b9a8-47b0-711a-8f5c ","-m","comment","--comment","426762cc-b9a8-47b0-711a-8f5ce18ff46c"],"error":"exit status 1","exit-status":1,"session":"1.26","stderr":"iptables: No chain/target/match by that name.\n","stdout":"","took":"1.281243ms"}}
It turned out to be because the kernel module for the iptables LOG target was not compiled into our distro's kernel.
Description
I'm trying to create a Redis cluster in a Docker swarm. I'm using the bitnami-redis-docker image for my containers. Going through the Bitnami documentation, they always suggest using 1 master node, whereas the Redis documentation states that there should be at least 3 master nodes, which is why I'm confused as to which one is right. Given that all Bitnami slaves are read-only by default, if I set up only a single master on one of the swarm leader nodes and it fails, I believe Sentinel will try to promote a different slave Redis instance as master, but since it is read-only, all write operations will fail. If I instead make the master Redis service global, meaning it will be created on all of the nodes available in the swarm, do I require Sentinel at all? Also, if the setup below is a good one, is there a reason to introduce a load balancer?
Setup
+------------------+ +------------------+ +------------------+ +------------------+
| Node-1 | | Node-2 | | Node-3 | | Node-4 |
| Leader | | Worker | | Leader | | Worker |
+------------------+ +------------------+ +------------------+ +------------------+
| M1 | | M2 | | M3 | | M4 |
| R1 | | R2 | | R3 | | R4 |
| S1 | | S2 | | S3 | | S4 |
| | | | | | | |
+------------------+ +------------------+ +------------------+ +------------------+
Legend:
Masters are called M1, M2, M3, ..., Mn
Slaves are called R1, R2, R3, ..., Rn (R stands for replica).
Sentinels are called S1, S2, S3, ..., Sn
Docker
version: '3'
services:
  redis-master:
    image: 'bitnami/redis:latest'
    ports:
      - '6379:6379'
    environment:
      - REDIS_REPLICATION_MODE=master
      - REDIS_PASSWORD=laSQL2019
      - REDIS_EXTRA_FLAGS=--maxmemory 100mb
    volumes:
      - 'redis-master-volume:/bitnami'
    deploy:
      mode: global
  redis-slave:
    image: 'bitnami/redis:latest'
    ports:
      - '6379'
    depends_on:
      - redis-master
    volumes:
      - 'redis-slave-volume:/bitnami'
    environment:
      - REDIS_REPLICATION_MODE=slave
      - REDIS_MASTER_HOST=redis-master
      - REDIS_MASTER_PORT_NUMBER=6379
      - REDIS_MASTER_PASSWORD=laSQL2019
      - REDIS_PASSWORD=laSQL2019
      - REDIS_EXTRA_FLAGS=--maxmemory 100mb
    deploy:
      mode: replicated
      replicas: 4
  redis-sentinel:
    image: 'bitnami/redis:latest'
    ports:
      - '16379'
    depends_on:
      - redis-master
      - redis-slave
    volumes:
      - 'redis-sentinel-volume:/bitnami'
    entrypoint: |
      bash -c 'bash -s <<EOF
      "/bin/bash" -c "cat <<EOF > /opt/bitnami/redis/etc/sentinel.conf
      port 16379
      dir /tmp
      sentinel monitor master-node redis-master 6379 2
      sentinel down-after-milliseconds master-node 5000
      sentinel parallel-syncs master-node 1
      sentinel failover-timeout master-node 5000
      sentinel auth-pass master-node laSQL2019
      sentinel announce-ip redis-sentinel
      sentinel announce-port 16379
      EOF"
      "/bin/bash" -c "redis-sentinel /opt/bitnami/redis/etc/sentinel.conf"
      EOF'
    deploy:
      mode: global
volumes:
  redis-master-volume:
    driver: local
  redis-slave-volume:
    driver: local
  redis-sentinel-volume:
    driver: local
The Bitnami solution is a failover solution, hence it has one master node.
Sentinel is an HA solution, i.e. automatic failover. But it does not provide scalability in terms of distributing data across multiple nodes. You would need to set up Redis Cluster if you want sharding in addition to HA; a rough sketch follows.
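For the sharding case, a sketch using the bitnami/redis-cluster image might look like the following. The environment variable names (REDIS_NODES, REDIS_CLUSTER_REPLICAS, REDIS_CLUSTER_CREATOR) are based on my recollection of the Bitnami docs, so verify them against the image documentation before relying on this:

version: '3'
services:
  redis-node-0:
    image: 'bitnami/redis-cluster:latest'
    environment:
      - REDIS_PASSWORD=laSQL2019
      - REDIS_NODES=redis-node-0 redis-node-1 redis-node-2 redis-node-3 redis-node-4 redis-node-5
  # ... redis-node-1 through redis-node-4 are defined the same way as redis-node-0 ...
  redis-node-5:
    image: 'bitnami/redis-cluster:latest'
    environment:
      - REDIS_PASSWORD=laSQL2019
      - REDIS_NODES=redis-node-0 redis-node-1 redis-node-2 redis-node-3 redis-node-4 redis-node-5
      - REDIS_CLUSTER_REPLICAS=1      # one replica per master, so 3 masters + 3 replicas
      - REDIS_CLUSTER_CREATOR=yes     # this node bootstraps the cluster once all nodes are up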