I'm running the latest version of Docker Desktop on Windows 10 and trying to get PhotoPrism set up, after successfully setting up PiGallery2 and wanting something more.
This is probably extremely basic and I'm likely a bit in over my head (not a programmer by any means, aside from basic logic), but when I run docker-compose up -d, the images pull successfully and then I see this:
photoprism Pulled
Container -mariadb-1 Creating
Error response from daemon: Invalid container name (-mariadb-1), only [a-zA-Z0-9][a-zA-Z0-9_.-] are allowed
From the way I understand it, it's unable to create the container because the name starts with a -. I just don't really get why it's trying to use "-mariadb-1", and as far as I can tell, there's nowhere in the file that tells it to name the container that.
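(For context: Compose builds container names as <project>-<service>-<index>, and the project name defaults to the name of the directory containing the compose file, so a name like -mariadb-1 suggests the project name came out empty. If the goal is just to pick the name, each service accepts a container_name: override, for example:

services:
  mariadb:
    container_name: mariadb

Alternatively, the project name can be set explicitly with docker compose -p photoprism up -d or a COMPOSE_PROJECT_NAME=photoprism line in a .env file next to the compose file; photoprism is just a placeholder name here.)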
Here's my docker-compose.yml contents:
version: '3.5'
services:
  photoprism:
    image: photoprism/photoprism:latest
    depends_on:
      - mariadb
    security_opt:
      - seccomp:unconfined
      - apparmor:unconfined
    ports:
      - "2342:2342" # HTTP port (host:container)
    environment:
      PHOTOPRISM_ADMIN_PASSWORD: "[redacted]"
      PHOTOPRISM_AUTH_MODE: "password"
      PHOTOPRISM_SITE_URL: "http://localhost:2342/"
      PHOTOPRISM_ORIGINALS_LIMIT: 50000
      PHOTOPRISM_HTTP_COMPRESSION: "gzip"
      PHOTOPRISM_DEBUG: "false"
      PHOTOPRISM_READONLY: "false"
      PHOTOPRISM_EXPERIMENTAL: "false"
      PHOTOPRISM_DISABLE_CHOWN: "false"
      PHOTOPRISM_DISABLE_WEBDAV: "false"
      PHOTOPRISM_DISABLE_SETTINGS: "false"
      PHOTOPRISM_DISABLE_TENSORFLOW: "false"
      PHOTOPRISM_DISABLE_FACES: "false"
      PHOTOPRISM_DISABLE_CLASSIFICATION: "false"
      PHOTOPRISM_DISABLE_RAW: "false"
      PHOTOPRISM_RAW_PRESETS: "false"
      PHOTOPRISM_JPEG_QUALITY: 85
      PHOTOPRISM_DETECT_NSFW: "false"
      PHOTOPRISM_UPLOAD_NSFW: "true"
      PHOTOPRISM_DATABASE_DRIVER: "mysql"
      PHOTOPRISM_DATABASE_SERVER: "mariadb:3306"
      PHOTOPRISM_DATABASE_NAME: "photoprism"
      PHOTOPRISM_DATABASE_USER: "photoprism"
      PHOTOPRISM_DATABASE_PASSWORD: "[redacted]"
      PHOTOPRISM_SITE_CAPTION: ""
      PHOTOPRISM_SITE_DESCRIPTION: ""
      PHOTOPRISM_SITE_AUTHOR: ""
    working_dir: "/photoprism"
    volumes:
      - "./Pictures:/photoprism/originals"
      - "./storage:/photoprism/storage"
  mariadb:
    restart: always
    image: mariadb:10.8
    security_opt:
      - seccomp:unconfined
      - apparmor:unconfined
    command: mysqld --innodb-buffer-pool-size=512M --lower-case-table-names=1 --transaction-isolation=READ-COMMITTED --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci --max-connections=512 --innodb-rollback-on-timeout=OFF --innodb-lock-wait-timeout=120
    volumes:
      - "./database:/var/lib/mysql"
    environment:
      MARIADB_AUTO_UPGRADE: "1"
      MARIADB_INITDB_SKIP_TZINFO: "1"
      MARIADB_DATABASE: "photoprism"
      MARIADB_USER: "photoprism"
      MARIADB_PASSWORD: "[redacted]"
      MARIADB_ROOT_PASSWORD: "[redacted]"
volumes:
  database:
    driver: local
Related
Here is my docker-compose.yml
version: '3'
services:
  minio:
    image: minio/minio:latest
    container_name: minio
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      MINIO_ROOT_USER: admin
      MINIO_ROOT_PASSWORD: admin123
      MINIO_SERVER_URL: "https://xxxx.ga:9000"
      MINIO_BROWSER_REDIRECT_URL: "https://xxxx.ga:9001"
      MINIO_COMPRESS: "off"
      MINIO_COMPRESS_EXTENSIONS: ""
      MINIO_COMPRESS_MIME_TYPES: ""
    volumes:
      - ./data:/data
      - ./config:/root/.minio # where my certs are located
    command: server --address ':9000' --console-address ':9001' /data
    privileged: true
    restart: always
The service runs fine: I can see the login page, and docker logs prints the right addresses:
Status: 1 Online, 0 Offline.
API: https://xxxx.ga:9000
Console: https://xxxx.ga:9001
Documentation: https://min.io/docs/minio/linux/index.html
Warning: The standard parity is set to 0. This can lead to data loss.
MinIO docker version: RELEASE.2023-01-12T02-06-16Z (go1.19.4 linux/amd64)
Below is the working docker-compose file in v2 spec:
version: '2'
volumes:
  webroot:
    driver: local
services:
  app: # Launch uwsgi application server
    build:
      context: ../../
      dockerfile: docker/release/Dockerfile
    links:
      - dbc
    volumes:
      - webroot:/var/www/someapp
    environment:
      DJANGO_SETTINGS_MODULE: someapp.settings.release
      MYSQL_HOST: dbc
      MYSQL_USER: todo
      MYSQL_PASSWORD: passwd
    command:
      - uwsgi
      - "--socket /var/www/someapp/someapp.sock"
      - "--chmod-socket=666"
      - "--module someapp.wsgi"
      - "--master"
      - "--die-on-term"
  test: # Run acceptance test cases
    image: shamdockerhub/someapp-specs
    links:
      - nginx
    environment:
      URL: http://nginx:8000/todos
      JUNIT_REPORT_PATH: /reports/acceptance.xml
      JUNIT_REPORT_STACK: 1
    command: --reporter mocha-jenkins-reporter
  nginx: # Start nginx web server that forwards https packets to uwsgi server
    build:
      context: .
      dockerfile: Dockerfile.nginx
    ports:
      - "8000:8000"
    links:
      - app
    volumes:
      - webroot:/var/www/someapp
  dbc: # Launch MySQL server
    image: mysql:5.6
    hostname: dbr
    expose:
      - "3306"
    environment:
      MYSQL_DATABASE: someapp
      MYSQL_USER: todo
      MYSQL_PASSWORD: passwd
      MYSQL_ROOT_PASSWORD: passwd
  agent: # Ensure DB server is running
    image: shamdockerhub/ansible
    links:
      - dbc
    environment:
      PROBE_HOST: "dbc"
      PROBE_PORT: "3306"
    command: ["probe.yml"]
where the entries
MYSQL_HOST: dbc
PROBE_HOST: "dbc"
do not look intuitive, because the hostname is set to dbr in the dbc service.
1) The app service fails with the error below when using MYSQL_HOST: dbr:
django.db.utils.OperationalError: (2005, "Unknown MySQL server host 'dbr' (0)")
2) The agent service also fails in the Ansible code below when PROBE_HOST: "dbr" is set:
set_fact:
  probe_host: "{{ lookup('env', 'PROBE_HOST') }}"
local_action: >
  wait_for host={{ probe_host }}
1) Why do these two services fail with the value dbr?
2) How can these two services be made to work with MYSQL_HOST: dbr and PROBE_HOST: "dbr"?
That is how Docker works: because hostnames are not unique, giving two containers the same hostname would lead to problems, so Compose always uses the service name for DNS resolution.
Setting hostname: is equivalent to the hostname(8) command on plain Linux: it changes what the container thinks its own hostname is, but doesn't affect anything outside the container that might try to reach it. On plain Linux running hostname dbr won't change an external DNS server or other machines' /etc/hosts files, for example. Setting the hostname might affect a shell prompt, in the unusual case of getting an interactive shell inside a container; it has no effect on networking.
Within a single Docker Compose file, if you have no special configuration for networks:, any container can reach any other container using the name of its block in the YAML file. In your file, app, nginx, test, dbc, and agent are valid hostnames. If you manually specify a container_name: I believe that will also be reachable; network aliases as suggested in @asolanki's answer give yet another name; and the deprecated links: option would give still another. All of these are in addition to the standard name Compose gives you.
Networking in Compose has some reasonable explanations of all of this.
In your example, dbr is not a valid hostname. dbc is the Compose service name of the container, but nothing from the previous listing causes a hostname dbr to exist. It happens to be the name you'll see in the prompt if you docker-compose exec dbc sh, but nobody else thinks that container has that name.
As a specific corollary to "links: is deprecated", the form of links: you have does absolutely nothing. links: [dbc] makes the container that would otherwise be visible under the name dbc visible to that specific container as that same name. You could use it to give an alternate name to a container from the point of view of a client, but I wouldn't.
Your docker-compose.yml file doesn't have any networks: blocks, and so Compose will create a default network and attach all of the containers to it. This is totally fine and I would not recommend changing it. If you do declare multiple networks, the other requirement here is that the client and server need to be on the same network to reach each other. (Containers without a networks: block implicitly have networks: [default].)
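To make the name resolution concrete, here is a minimal sketch under the same defaults (the busybox images and service names are illustrative, not from the question):

version: '2'
services:
  server:
    image: busybox
    hostname: something-else   # only changes what the container calls itself
    command: sleep 300
  client:
    image: busybox
    command: sh -c 'sleep 2; ping -c 1 server; ping -c 1 something-else'

The first ping succeeds because server is the Compose service name; the second fails with a name-resolution error, because hostname: is not registered anywhere on the network.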
If you want to reference the service by another name, you can use a network alias.
Modified compose file using a network alias:
version: '2'
volumes:
  webroot:
    driver: local
services:
  app: # Launch uwsgi application server
    build:
      context: ../../
      dockerfile: docker/release/Dockerfile
    links:
      - dbc
    volumes:
      - webroot:/var/www/someapp
    environment:
      DJANGO_SETTINGS_MODULE: someapp.settings.release
      MYSQL_HOST: dbc
      MYSQL_USER: todo
      MYSQL_PASSWORD: passwd
    command:
      - uwsgi
      - "--socket /var/www/someapp/someapp.sock"
      - "--chmod-socket=666"
      - "--module someapp.wsgi"
      - "--master"
      - "--die-on-term"
    networks:
      new:
        aliases:
          - myapp
  test: # Run acceptance test cases
    image: shamdockerhub/someapp-specs
    links:
      - nginx
    environment:
      URL: http://nginx:8000/todos
      JUNIT_REPORT_PATH: /reports/acceptance.xml
      JUNIT_REPORT_STACK: 1
    command: --reporter mocha-jenkins-reporter
    networks:
      - new
  nginx: # Start nginx web server that forwards https packets to uwsgi server
    build:
      context: .
      dockerfile: Dockerfile.nginx
    ports:
      - "8000:8000"
    links:
      - app
    volumes:
      - webroot:/var/www/someapp
    networks:
      - new
  dbc: # Launch MySQL server
    image: mysql:5.6
    hostname: dbr
    expose:
      - "3306"
    environment:
      MYSQL_DATABASE: someapp
      MYSQL_USER: todo
      MYSQL_PASSWORD: passwd
      MYSQL_ROOT_PASSWORD: passwd
    networks:
      new:
        aliases:
          - dbr
  agent: # Ensure DB server is running
    image: shamdockerhub/ansible
    links:
      - dbc
    environment:
      PROBE_HOST: "dbc"
      PROBE_PORT: "3306"
    command: ["probe.yml"]
    networks:
      - new
networks:
  new:
When I run docker-compose up with this docker-compose.yml:
version: '3'
services:
  rundeck:
    build:
      context: ./
      args:
        RUNDECK_IMAGE: ${RUNDECK_IMAGE:-rundeck/rundeck:SNAPSHOT}
    links:
      - mysql
    tty: true
    environment:
      RUNDECK_GRAILS_URL: http://localhost
      RUNDECK_SERVER_FORWARDED: 'true'
      RUNDECK_DATABASE_DRIVER: com.mysql.jdbc.Driver
      RUNDECK_DATABASE_USERNAME: rundeck
      RUNDECK_DATABASE_PASSWORD: rundeck
      RUNDECK_DATABASE_URL: jdbc:mysql://mysql/rundeck?autoReconnect=true&useSSL=false
      RUNDECK_PLUGIN_EXECUTIONFILESTORAGE_NAME: com.rundeck.rundeckpro.amazon-s3
      RUNDECK_PLUGIN_EXECUTIONFILESTORAGE_S3_BUCKET: ${RUNDECK_PLUGIN_EXECUTIONFILESTORAGE_S3_BUCKET}
      RUNDECK_PLUGIN_EXECUTIONFILESTORAGE_S3_REGION: ${RUNDECK_PLUGIN_EXECUTIONFILESTORAGE_S3_REGION}
      RUNDECK_STORAGE_CONVERTER_1_CONFIG_PASSWORD: ${RUNDECK_STORAGE_PASSWORD}
      RUNDECK_CONFIG_STORAGE_CONVERTER_1_CONFIG_PASSWORD: ${RUNDECK_STORAGE_PASSWORD}
    volumes:
      - data:/home/rundeck/server/data
      - ${AWS_CREDENTIALS}:/home/rundeck/.aws/credentials
      - ${RUNDECK_LICENSE_FILE:-/dev/null}:/home/rundeck/etc/rundeckpro-license.key
  nginx:
    image: nginx
    links:
      - rundeck
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    ports:
      - 80:80
  mysql:
    image: mysql:5.7
    expose:
      - 3306
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=rundeck
      - MYSQL_USER=rundeck
      - MYSQL_PASSWORD=rundeck
    volumes:
      - dbdata:/var/lib/mysql
volumes:
  data:
  dbdata:
I get the following error:
xxxx_mysql_1 is up-to-date
Creating xxxx_rundeck_1 ... error
ERROR: for xxxx_rundeck_1 Cannot create container for service rundeck: create .: volume name is too short, names should be at least two alphanumeric characters
ERROR: for rundeck Cannot create container for service rundeck: create .: volume name is too short, names should be at least two alphanumeric characters
ERROR: Encountered errors while bringing up the project.
I don't see a create . statement anywhere in the docker-compose.yml or the Dockerfile.
What am I missing?
You need to set the variable in the .env file. It is commented out and hence adds a null value to the config.
https://github.com/rundeck/docker-zoo/blob/master/cloud/.env.dist#L3
Used in
https://github.com/rundeck/docker-zoo/blob/master/cloud/docker-compose.yml#L27
I got the same issue, and it was resolved once I updated the env variables with values.
I once got this issue when the .env file contained
AWS_CREDENTIALS:foo
instead of
AWS_CREDENTIALS=foo
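In other words, .env files use shell-style KEY=value assignments, not YAML-style colons. A minimal .env for this compose file might look like this (all paths and values are placeholders):

# .env (values are illustrative)
AWS_CREDENTIALS=./aws/credentials
RUNDECK_LICENSE_FILE=./rundeckpro-license.key
RUNDECK_STORAGE_PASSWORD=changeme
RUNDECK_PLUGIN_EXECUTIONFILESTORAGE_S3_BUCKET=my-bucket
RUNDECK_PLUGIN_EXECUTIONFILESTORAGE_S3_REGION=us-east-1

With the variable malformed or commented out, ${AWS_CREDENTIALS} interpolates to an empty string, leaving the volume entry ${AWS_CREDENTIALS}:/home/rundeck/.aws/credentials without a host path, which appears to be what surfaces as the cryptic "create .: volume name is too short" error.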
How do I set up login credentials for the Kibana GUI with dockerized ELK stack containers?
What arguments and environment variables must be passed in the docker-compose.yaml file to get this working?
To set Kibana user credentials for the dockerized ELK stack, we have to set xpack.security.enabled: true either in elasticsearch.yml or pass it as an environment variable in the docker-compose.yml file.
Then pass the username & password as environment variables in docker-compose.yml like below:
version: '3.3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.1
    ports:
      - "9200:9200"
      - "9300:9300"
    configs:
      - source: elastic_config
        target: /usr/share/elasticsearch/config/elasticsearch.yml
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_USERNAME: "elastic"
      ELASTIC_PASSWORD: "MyPw123"
      http.cors.enabled: "true"
      http.cors.allow-origin: "*"
      xpack.security.enabled: "true"
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
  logstash:
    image: docker.elastic.co/logstash/logstash:6.6.1
    ports:
      - "5044:5044"
      - "9600:9600"
    configs:
      - source: logstash_config
        target: /usr/share/logstash/config/logstash.yml:rw
      - source: logstash_pipeline
        target: /usr/share/logstash/pipeline/logstash.conf
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
      xpack.monitoring.elasticsearch.url: "elasticsearch:9200"
      xpack.monitoring.elasticsearch.username: "elastic"
      xpack.monitoring.elasticsearch.password: "MyPw123"
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
  kibana:
    image: docker.elastic.co/kibana/kibana:6.6.1
    ports:
      - "5601:5601"
    configs:
      - source: kibana_config
        target: /usr/share/kibana/config/kibana.yml
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
configs:
  elastic_config:
    file: ./elasticsearch/config/elasticsearch.yml
  logstash_config:
    file: ./logstash/config/logstash.yml
  logstash_pipeline:
    file: ./logstash/pipeline/logstash.conf
  kibana_config:
    file: ./kibana/config/kibana.yml
networks:
  elk:
    driver: overlay
Then add the following lines to kibana.yml:
elasticsearch.username: "elastic"
elasticsearch.password: "MyPw123"
I didn't manage to get it working without adding the XPACK_MONITORING & SECURITY flags to Kibana's container, and there was no need for a config file.
However, I was not able to use the kibana user, even after logging in with the elastic user and changing kibana's password through the UI.
NOTE: it looks like you can't set up default built-in users other than the elastic superuser in docker-compose through its environment. I've tried several times with kibana and kibana_system with no success.
version: "3.7"
services:
elasticsearch:
image: elasticsearch:7.4.0
restart: always
ports:
- 9200:9200
environment:
- discovery.type=single-node
- xpack.security.enabled=true
- ELASTIC_PASSWORD=123456
kibana:
image: kibana:7.4.0
restart: always
ports:
- 5601:5601
environment:
- ELASTICSEARCH_HOSTS=http://elasticsearch:9200
- XPACK_MONITORING_ENABLED=true
- XPACK_MONITORING_COLLECTION_ENABLED=true
- XPACK_SECURITY_ENABLED=true
- ELASTICSEARCH_USERNAME=elastic
- ELASTICSEARCH_PASSWORD="123456"
depends_on:
- elasticsearch
SOURCE
NOTE: it looks like this won't work with 8.5.3; Kibana won't accept the superuser elastic.
Update
I was able to set up 8.5.3, but with a couple of twists. I would build the whole environment, then run the password setup inside the elasticsearch container:
bin/elasticsearch-setup-passwords auto
Grab the auto-generated password for the kibana_system user, replace it in docker-compose, then restart only Kibana's container.
Kibana 8.5.3 with environment variables:
kibana:
  image: kibana:8.5.3
  restart: always
  ports:
    - 5601:5601
  environment:
    - ELASTICSEARCH_USERNAME="kibana_system"
    - ELASTICSEARCH_PASSWORD="sVUurmsWYEwnliUxp3pX"
Restart kibana's container:
docker-compose up -d --build --force-recreate --no-deps kibana
NOTE: make sure to use the --no-deps flag; otherwise it will also restart the elasticsearch container, since it is a dependency of kibana's.
I have a very simple docker-compose config:
version: '3.5'
services:
  consul:
    image: consul:latest
    hostname: "consul"
    command: "consul agent -server -bootstrap-expect 1 -client=0.0.0.0 -ui -data-dir=/tmp"
    environment:
      SERVICE_53_IGNORE: 'true'
      SERVICE_8301_IGNORE: 'true'
      SERVICE_8302_IGNORE: 'true'
      SERVICE_8600_IGNORE: 'true'
      SERVICE_8300_IGNORE: 'true'
      SERVICE_8400_IGNORE: 'true'
      SERVICE_8500_IGNORE: 'true'
    ports:
      - 8300:8300
      - 8400:8400
      - 8500:8500
      - 8600:8600/udp
    networks:
      - backend
  registrator:
    command: -internal consul://consul:8500
    image: gliderlabs/registrator:master
    depends_on:
      - consul
    links:
      - consul
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    networks:
      - backend
  image_tagger:
    build: image_tagger
    image: image_tagger:latest
    ports:
      - 8000
    networks:
      - backend
  mongo:
    image: mongo
    command: [--auth]
    ports:
      - "27017:27017"
    restart: always
    networks:
      - backend
    volumes:
      - /mnt/data/mongo-data:/data/db
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: qwerty
  postgres:
    image: postgres:11.1
    # ports:
    #   - "5432:5432"
    networks:
      - backend
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
      - ./scripts:/docker-entrypoint-initdb.d
    restart: always
    environment:
      POSTGRES_PASSWORD: qwerty
      POSTGRES_DB: ttt
      SERVICE_5432_NAME: postgres
      SERVICE_5432_ID: postgres
networks:
  backend:
    name: backend
(and some other services)
I also configured dnsmasq on the host to access containers by internal name.
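(Presumably the dnsmasq side forwards the .consul domain to Consul's DNS interface on the published port 8600, something like the line below; this is an assumption about the host setup, not from the question:

# dnsmasq.conf: forward *.consul lookups to the local Consul agent
server=/consul/127.0.0.1#8600
)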
I've spent a couple of days on it, but I'm still not able to make it stable:
1. Very often some services just don't get registered by registrator (sometimes I get 5 out of 15).
2. Very often containers are registered with the wrong IP address: in the container info I have one address (correct), in Consul another (incorrect). And when I try to reach a service by an address like myservice.service.consul, I end up at the wrong container.
3. Sometimes resolution fails entirely, even when containers are registered with the correct IP.
Do I have some mistakes in my config?
So, at least for now, I was able to fix this by passing the -resync 15 param to registrator. Not sure if it's the correct solution, but it works.
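In the compose file above, that means changing only the registrator command line, e.g.:

registrator:
  image: gliderlabs/registrator:master
  command: -internal -resync 15 consul://consul:8500

-resync takes a number of seconds and makes registrator periodically re-register all running containers, so missed or stale registrations get repaired on the next pass.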