Host and Path with Traefik V2 - docker

In reference to: Define host and path frontend rule for Traefik (I wanted to comment on the answer but I can't)
I implemented the suggestion in the answer using
Host(`domain.com`) && Path(`/path`)
but it does not work (I get a 404 when trying to access it).
Traefik logs show:
time="2020-07-07T10:31:30Z" level=error msg="field not found, node: rule " providerName=docker
My docker compose looks like this:
deploy:
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.typo3-${NAMEOFSERVICE}.rule = Host(`${HOSTNAME}`) && Path(`${DIRECTORY}`)"
When using just the Host rule it works perfectly fine, but I want to be able to serve e.g. subdomain.domain.com/subdirectory for service 1 and subdomain.domain.com/subdirectory2 for service 2.
I also tried - "traefik.http.routers.typo3-${NAMEOFSERVICE}.rule = Host(`${HOSTNAME}`) && PathPrefix(`${DIRECTORY}`)" but I get the same error in the log and a 404.

I found the problem: remove the spaces around the "=".
This works:
- "traefik.http.routers.typo3-${NAMEOFSERVICE}.rule=(Host(`${HOSTNAME}`) && Path(`${DIRECTORY}`))"
I now have another problem: my service in this subdirectory redirects outside of it. (Example, Typo3 first install: I access subdomain.domain.com/foo and it redirects me to subdomain.domain.com/typo3/install.php.)
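For path-based routing like this, the usual Traefik v2 pattern is PathPrefix combined with a stripprefix middleware, so the container receives requests relative to its own root. A minimal sketch (the router/middleware names typo3-foo and typo3-foo-strip and the /foo prefix are illustrative placeholders, not taken from the setup above):

deploy:
  labels:
    - "traefik.enable=true"
    # match on host and path prefix
    - "traefik.http.routers.typo3-foo.rule=Host(`subdomain.domain.com`) && PathPrefix(`/foo`)"
    # strip /foo before the request reaches the container
    - "traefik.http.middlewares.typo3-foo-strip.stripprefix.prefixes=/foo"
    - "traefik.http.routers.typo3-foo.middlewares=typo3-foo-strip"

Note that stripping the prefix does not help if the application itself generates absolute redirects (as in the /typo3/install.php example); the application usually also needs to be told its base path or public URL.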

Related

Keycloak 17.0.1 Import Realm on Docker / Docker-Compose Startup

I am trying to find a way to import a realm in Keycloak version 17.0.1 when starting up a Docker container (with docker-compose). I want to be able to do this in "start" mode and not "start-dev" mode, as in my experience so far "start-dev" in 17 forces an H2/in-memory database and does not allow me to point to an external DB, which I would like to do to more closely resemble dev/prod environments when running locally.
Things I've tried:
1) It appears, according to recent conversations on GitHub (issue 10216 and issue 10754, to name a couple), that the environment variable that used to allow this (KEYCLOAK_IMPORT, or KC_IMPORT_REALM in some versions) no longer triggers an import. In my attempts it also did not work for version 17.0.1.
2) I've also tried appending the following command in my docker-compose setup for Keycloak and had no luck (I also tried with just "start"); it appears to simply ignore the option (no error or anything):
command: ["start-dev", "-Dkeycloak.import=/tmp/my-realm.json"]
3) I tried running the kc.sh command "import" in the Dockerfile (both before and after ENTRYPOINT/start) but got the error: Unmatched arguments from index 1: '/opt/keycloak/bin/kc.sh', 'import', '--file', '/tmp/my-realm.json'
4) I've shifted gears and have tried to see if it is possible to just do it after the container starts (even with manual intervention), just to restore some sanity. I attempted to use the admin-cli, but after quite a few attempts at different points/endpoints etc. I just get that localhost refuses to connect.
bin/kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin --password adminpassword
It responds as follows depending on the port:
8080: Failed to send request - Connect to localhost:8080 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
8443: Failed to send request - localhost:8443 failed to respond
I am sure there are other ways that I've tried and am forgetting - I've kind of spun my wheels at this point.
My code (largely the same as the latest docs on the Keycloak website):
Dockerfile:
FROM quay.io/keycloak/keycloak:17.0.1 as builder
ENV KC_METRICS_ENABLED=true
ENV KC_FEATURES=token-exchange
ENV KC_DB=postgres
RUN /opt/keycloak/bin/kc.sh build
FROM quay.io/keycloak/keycloak:17.0.1
COPY --from=builder /opt/keycloak/lib/quarkus/ /opt/keycloak/lib/quarkus/
WORKDIR /opt/keycloak
# for demonstration purposes only, please make sure to use proper certificates in production instead
ENV KC_HOSTNAME=localhost
RUN keytool -genkeypair -storepass password -storetype PKCS12 -keyalg RSA -keysize 2048 -dname "CN=server" -alias server -ext "SAN:c=DNS:localhost,IP:127.0.0.1" -keystore conf/server.keystore
ENTRYPOINT ["/opt/keycloak/bin/kc.sh", "start" ]
Docker-compose.yml:
version: "3"
services:
keycloak:
build:
context: .
volumes:
- ./my-realm.json:/tmp/my-realm.json:ro
env_file:
- .env
environment:
KC_DB_URL: ${POSTGRESQL_URL}
KC_DB_USERNAME: ${POSTGRESQL_USER}
KC_DB_PASSWORD: ${POSTGRESQL_PASS}
KEYCLOAK_ADMIN: admin
KEYCLOAK_ADMIN_PASSWORD: adminpassword
ports:
- 8080:8080
- 8443:8443 # <-- I've tried with only 8080 and with only 8443 as well. 8443 appears to be the only that I can get the admin console ui to even work on though.
networks:
- my_net
networks:
my_net:
name: my_net
Any suggestion on how to do this in a programmatic, "dev-opsy" way would be greatly appreciated. I'd really like to get this to work but am not sure how to get past this.
Importing a realm through configuration at Docker initialization is not supported yet. See https://github.com/keycloak/keycloak/issues/10216. They might release this feature in the next release, v18.
The workaround people have shared in the GitHub thread is to create your own Docker image and import the realm from a JSON file when building it:
FROM quay.io/keycloak/keycloak:17.0.1
# Make the realm configuration available for import
COPY realm-and-users.json /opt/keycloak_import/
# Import the realm and user
RUN /opt/keycloak/bin/kc.sh import --file /opt/keycloak_import/realm-and-users.json
# The Keycloak server is configured to listen on port 8080
EXPOSE 8080
EXPOSE 8443
# Start in dev mode (the realm was imported at build time by the RUN step above)
CMD ["start-dev"]
As @tboom said, it was not yet supported by Keycloak 17.x, but it is now supported by Keycloak 18.x using the --import-realm option:
bin/kc.[sh|bat] [start|start-dev] --import-realm
This feature does not work the way it used to: the JSON file path must no longer be specified. Instead, the JSON file(s) only have to be copied into the <KEYCLOAK_DIR>/data/import directory (multiple JSON files are supported). Note that the import operation is skipped if the realm already exists, so incremental updates are not possible anymore (at least for the time being).
This feature is documented on https://www.keycloak.org/server/importExport#_importing_a_realm_during_startup.
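As a rough sketch of what that can look like in docker-compose with the official image (the image tag 18.0.0 and the file name my-realm.json are illustrative; adjust to your setup):

services:
  keycloak:
    image: quay.io/keycloak/keycloak:18.0.0
    command: ["start-dev", "--import-realm"]
    environment:
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: adminpassword
    volumes:
      # files in this directory are imported at startup (skipped if the realm already exists)
      - ./my-realm.json:/opt/keycloak/data/import/my-realm.json:ro
    ports:
      - 8080:8080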

Graylog in Docker persistent

I'm trying to make a Graylog Docker container persistent.
Meaning that after restarting (docker-compose down; docker-compose up) the logs will still be there alongside the configuration.
I used the documentation at https://docs.graylog.org/en/3.1/pages/installation/docker.html and created a yml file with the content under the topic "Persisting data".
I only edited the line "GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/" to use the machine's external IP instead of localhost.
Docker works, I can create an input and collect log files. What does not work is the data being persistent. Also, every time I restart, the node ID changes, so I have to reconfigure the input. Running docker volume ls lists five volumes, 3 of which are the ones created in the yml file.
I don't understand why data is not persistent. Can anybody help?
I had the same problem and I'd been struggling for a while before I found a solution. I'm on 3.2 and also had issues with node persistence. The documentation doesn't seem to directly state that there is one more configuration folder you need to persist, which is:
/usr/share/graylog/data/config
They actually mention it in the Custom configuration files section, and when I took a look at that directory via the CLI, it turned out that it's where graylog.conf and node-id (the file Graylog uses to store information about its nodes) are stored as well!
Here's my docker-compose.override.yml section with the necessary changes (marked with '# ADDED' comments)
services:
  graylog:
    environment:
      # CHANGE ME (must be at least 16 characters)!
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
      # Password: admin
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      - GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/
      - GRAYLOG_IS_MASTER=true
      #- GRAYLOG_NODE_ID_FILE=/usr/share/graylog/data/config/node-id
    ports:
      # Graylog web interface and REST API
      - 9000:9000
      # Syslog TCP
      - 1514:1514
      # Syslog UDP
      - 1514:1514/udp
      # GELF TCP
      - 12201:12201
      # GELF UDP
      - 12201:12201/udp
    volumes:
      - "graylogjournal:/usr/share/graylog/data/journal"
      - "graylogconfig:/usr/share/graylog/data/config" # ADDED
volumes:
  graylogjournal:
    driver: local
  graylogconfig:   # ADDED
    driver: local  # ADDED
Hope this helps
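A quick way to verify that the node identity now survives a restart (a sketch; the container name graylog_graylog_1 is an assumption, check docker ps for yours):

docker exec graylog_graylog_1 cat /usr/share/graylog/data/config/node-id
docker-compose down && docker-compose up -d
docker exec graylog_graylog_1 cat /usr/share/graylog/data/config/node-id   # should print the same ID as before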
You can add these lines to the daemon.json file:
{
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "udp://1.2.3.4:12201"
  }
}
https://docs.docker.com/config/containers/logging/gelf/
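Note that a daemon.json change applies to all containers and requires a Docker daemon restart. If you only want specific services to ship their logs to Graylog via GELF, the same options can be set per service in a compose file; a sketch (the image and the GELF address 1.2.3.4:12201 are placeholders for your service and your Graylog input):

services:
  some-service:
    image: nginx
    logging:
      driver: gelf
      options:
        gelf-address: "udp://1.2.3.4:12201"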

Traefik v2 [how to route to specific port]

I'm trying to start migrating my backends to be compatible with Traefik v2.0.
The old configuration was:
labels:
  - traefik.port=8500
  - traefik.docker.network=proxy
  - traefik.frontend.rule=Host:consul.{DOMAIN}
I assumed the network label is not necessary anymore, so in the new Traefik it would change to:
- traefik.http.routers.consul-server-bootstrap.rule=Host('consul.scoob.thrust.com.br')
But how do I set that this should forward to my backend on port 8500, and not port 80, where the Traefik entrypoint was reached?
My goal would be to accomplish something like this:
https://docs.traefik.io/user-guide/cluster-docker-consul/#migrate-configuration-to-consul
Is that still possible? I saw there is no --consul or storeconfig command in v2.0.
You need traefik.http.services.{SERVICE}.loadbalancer.server.port
labels:
  - "traefik.http.services.{SERVICE}.loadbalancer.server.port=8500"
  - "traefik.docker.network=proxy"
  - "traefik.http.routers.{SERVICE}.rule=Host(`{DOMAIN}`)"
Replace {SERVICE} with the name of your service.
Replace {DOMAIN} with your domain name.
If you want to remove the proxy network you'll need to look at https://docs.traefik.io/v2.0/providers/docker/#usebindportip
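Put together for the consul example from the question, the labels might look roughly like this (a sketch; the router/service name consul-server-bootstrap and the domain are taken from the question, while the image and network setup are assumptions):

services:
  consul:
    image: consul
    networks:
      - proxy
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=proxy"
      # route requests for this host to the container...
      - "traefik.http.routers.consul-server-bootstrap.rule=Host(`consul.scoob.thrust.com.br`)"
      # ...and forward them to port 8500 inside the container instead of port 80
      - "traefik.http.services.consul-server-bootstrap.loadbalancer.server.port=8500"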

Why Neo4J docker authentication doesn't work

I want to run a Neo4j instance through Docker using docker-compose.
docker-compose.yml
version: '3'
services:
  neo4j:
    container_name: neo4j-lab
    image: neo4j:latest
    environment:
      - NEO4J_dbms_memory_pagecache_size=2G
      - NEO4J_dbms_memory_heap_maxSize=4G
      - NEO4J_dbms_memory_heap_initialSize=512M
      - NEO4J_AUTH=neo4j/changeme
    ports:
      - 7474:7474
      - 7687:7687
    volumes:
      - neo4j_data:/data
      - neo4j_conf:/conf
      - ./import:/import
volumes:
  neo4j_data:
  neo4j_conf:
Running this with docker-compose up works perfectly fine, and I can reach the login screen.
But when I enter the credentials, I get the following error in my container logs: Neo.ClientError.Security.Unauthorized The client is unauthorized due to authentication failure., even though I am sure I am filling in the right credentials (the ones used in my docker-compose file).
Furthermore:
when I set NEO4J_AUTH to none, no credentials are asked for;
when I set it to neo4j/neo4j, it says that I can't use the default password.
According to the documentation, this should be perfectly fine:
By default Neo4j requires authentication and requires you to login with neo4j/neo4j at the first connection and set a new password. You can set the password for the Docker container directly by specifying --env NEO4J_AUTH=neo4j/password in your run directive. Alternatively, you can disable authentication by specifying --env NEO4J_AUTH=none instead.
Do you have any idea what's going on?
I hope you can help me solve this!
EDIT
Docker logs output :
neo4j-lab | 2019-03-13 23:02:32.378+0000 INFO Starting...
neo4j-lab | 2019-03-13 23:02:37.796+0000 INFO Bolt enabled on 0.0.0.0:7687.
neo4j-lab | 2019-03-13 23:02:41.102+0000 INFO Started.
neo4j-lab | 2019-03-13 23:02:43.935+0000 INFO Remote interface available at http://localhost:7474/
neo4j-lab | 2019-03-13 23:02:56.105+0000 WARN The client is unauthorized due to authentication failure.
EDIT 2 :
It seems that first deleting the associated volume works: the password is then changed.
However, if I docker-compose down and then docker-compose up after changing the password in my docker-compose file, the issue reappears.
So I think that when we change the password through docker-compose more than once while a volume exists, we need to remove the auth file present in the volume.
To do that:
docker volume inspect <volume_name>
You should get something like this:
[
    {
        "CreatedAt": "2019-03-14T11:17:08+01:00",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "neo4j",
            "com.docker.compose.volume": "neo4j_data"
        },
        "Mountpoint": "/data/docker/volumes/neo4j_neo4j_data/_data",
        "Name": "neo4j_neo4j_data",
        "Options": null,
        "Scope": "local"
    }
]
This will obviously be different if you did not name your container and volumes the way I did (neo4j, neo4j_data).
The important part is the Mountpoint, which locates the volume on the host.
In this volume, you can delete the auth file, which is in the dbms directory.
Then restart your container and everything should be fine.
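Condensed into commands, that procedure looks roughly like this (a sketch; the volume name neo4j_neo4j_data matches the example above, and deleting dbms/auth typically requires root since the mountpoint lives under Docker's data directory):

# find where the data volume lives on the host
docker volume inspect neo4j_neo4j_data --format '{{ .Mountpoint }}'
# remove the stored credentials so NEO4J_AUTH takes effect again on the next start
sudo rm /data/docker/volumes/neo4j_neo4j_data/_data/dbms/auth
docker-compose up -d --force-recreate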
Neo4j docker developer here.
The reason this is happening is that the NEO4J_AUTH environment variable doesn't set the database password, it sets the INITIAL password only.
If you're mounting a data volume with an existing database inside, then NEO4J_AUTH has no effect because that database already has a password. It sounds like that's what you're experiencing here.
The documentation around this feature was not great and I've updated it! See: Neo4j docker authentication documentation
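If the data volume already contains a database and you just want to change its password (rather than wiping the auth file), one option is to do it through Cypher. A sketch assuming a Neo4j 4.x image and the container name neo4j-lab from the compose file above:

docker exec -it neo4j-lab cypher-shell -u neo4j -p oldpassword -d system "ALTER CURRENT USER SET PASSWORD FROM 'oldpassword' TO 'newpassword';"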
Define the Neo4j password with docker-compose:
neo4j:
  image: 'neo4j:4.1'
  environment:
    NEO4J_AUTH: 'neo4j/your_password'
  ports:
    - "7474:7474"
  volumes:
    ...

How to manage Docker private registry

I've set up Docker and am running a private registry on example.com:5000. I followed the instructions listed here: https://docs.docker.com/registry/deploying/
It uses this docker-compose.yml:
registry:
  restart: always
  image: registry:2
  ports:
    - 5000:5000
  environment:
    REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crt
    REGISTRY_HTTP_TLS_KEY: /certs/domain.key
    REGISTRY_AUTH: htpasswd
    REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
    REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
  volumes:
    - /path/data:/var/lib/registry
    - /path/certs:/certs
    - /path/auth:/auth
I can push and pull images to the registry, but I can't get docker search example.com:5000/library to work. I get: Error response from daemon: Unexpected status code 404.
When I point curl to the endpoint I get the following result:
$ curl -v -X GET http://example.com:5000/v2/images
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying 37.139.20.160...
* Connected to example.com (192.167.201.2) port 5000 (#0)
> GET /v2/images HTTP/1.1
> Host: domain.com:5000
> User-Agent: curl/7.47.0
> Accept: */*
>
* Connection #0 to host example.com left intact
How can I make the search command work so that I can manage the registry? Where can I find the API documentation for this endpoint? Or are there better ways to manage a private Docker registry?
It seems you have to activate the search option, according to the Search-engine options documentation:
The Docker Registry can optionally index repository information in a database for the GET /v1/search endpoint.
(I don't see a search in the V2 API. You can list tags)
The search_backend setting selects the search backend to use.
If search_backend is empty, no index is built, and the search endpoint always returns empty results.
For instance, using the SQLAlchemy backend:
common:
  search_backend: sqlalchemy
  sqlalchemy_index_database: sqlite:////tmp/docker-registry.db
On initialization, the SQLAlchemyIndex class checks the database version. If the database doesn't exist yet (or does exist, but lacks a version table), the SQLAlchemyIndex creates the database and required tables.
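Since the V2 API has no search endpoint but does let you list repositories and tags, you can query it directly; a sketch (user:password and the repository name myimage are placeholders, and https plus -u match the TLS/htpasswd setup from the compose file above):

# list repositories in the registry
curl -u user:password https://example.com:5000/v2/_catalog
# list tags of a specific repository
curl -u user:password https://example.com:5000/v2/myimage/tags/list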
