Traefik Docker Swarm Basic Authentication

I recently set up Traefik v1.7.14 in a Docker container, on a Docker Swarm-enabled cluster. As a test, I created a simple service:
docker service create --name demo-nginx \
--network traefik-net \
--label traefik.app.port=80 \
--label traefik.app.frontend.auth.basic="test:$$apr1$$LG8ly.Y1$$1J9m2sDXimLGaCSlO8.T20" \
--label traefik.app.frontend.rule="Host:t.myurl.com" \
nginx
As the code above shows, I am simply serving nginx at the subdomain t of my URL.
When this command runs, the service is created successfully. Traefik also shows the service in the Traefik API, as well as in the Traefik dashboard.
In the Traefik API, the frontend is reported as follows:
"frontend-Host-t-myurl-com-0": {
"entryPoints": [
"http",
"https"
],
"backend": "backend-demo-nginx",
"routes": {
"route-frontend-Host-t-myurl-com-0": {
"rule": "Host:t.myurl.com"
}
},
"passHostHeader": true,
"priority": 0,
"basicAuth": null,
"auth": {
"basic": {}
}
When I go to visit t.myurl.com, I get the authentication prompt, as expected.
However, when I type in my username/password (test:test, in this case), the login prompt just prompts me again and doesn't authenticate me.
I have checked that I am escaping the characters in the Docker label correctly by using the following to generate the password:
echo $(htpasswd -nb test test) | sed -e s/\\$/\\$\\$/g
As part of my testing, I also tried turning off the https entryPoint (--label traefik.app.frontend.entryPoints=http), to see whether this cycle was somehow being triggered by SSL. That had no impact: Traefik responded properly over http, but the password authentication still fell into the same prompting loop as before.
When I remove the traefik.app.frontend.auth.basic label, I can access my site at my URL (t.myurl.com), so this issue appears to be isolated to the basic authentication functionality.
My DNS provider is Cloudflare.
If anyone has any ideas, I'd appreciate it.

Maybe you can try this:
echo $(htpasswd -nb your-user your-password)
You don't need the doubled $$ on the command line; that escaping is only needed in docker-compose files, where Compose does its own $-interpolation. In a double-quoted shell argument, the shell expands $$ to its own process ID, which silently corrupts the hash.
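For example, here is a minimal sketch using the same test:test credentials as the question (the hash shown is the question's value with the doubling removed):
# Generate the htpasswd entry with plain, single $ signs
# (the hash is salted, so your own output will differ per run):
htpasswd -nb test test
# Single-quote the label value so the shell leaves the $ signs alone:
docker service create --name demo-nginx \
--network traefik-net \
--label traefik.app.port=80 \
--label 'traefik.app.frontend.auth.basic=test:$apr1$LG8ly.Y1$1J9m2sDXimLGaCSlO8.T20' \
--label traefik.app.frontend.rule="Host:t.myurl.com" \
nginx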

Related

Why must no_proxy be specified for curl to work in this scenario?

Inside my virtual machine, I have the following docker-compose.yml file:
services:
  nginx:
    image: "nginx:1.23.1-alpine"
    container_name: parse-nginx
    ports:
      - "80:80"
  mongo-0:
    image: "mongo:5.0.6"
    container_name: parse-mongo-0
    volumes:
      - ./mongo-0/data:/data/db
      - ./mongo-0/config:/data/config
  server-0:
    image: "parseplatform/parse-server:5.2.4"
    container_name: parse-server-0
    ports:
      - "1337:1337"
    volumes:
      - ./server-0/config-vol/configuration.json:/parse-server/config/configuration.json
    command: "/parse-server/config/configuration.json"
The configuration.json file specified for server-0 is as follows:
{
  "appId": "APPLICATION_ID_00",
  "masterKey": "MASTER_KEY_00",
  "readOnlyMasterKey": "only",
  "databaseURI": "mongodb://mongo-0/test"
}
After using docker compose up, I execute the following command from the VM:
curl -X POST -H "X-Parse-Application-Id: APPLICATION_ID_00" -H "Content-Type: application/json" -d '{"score":1000,"playerName":"Sean Plott","cheatMode":false}' http://localhost:1337/parse/classes/GameScore
The output is:
{"objectId":"yeHHiu01IV","createdAt":"2022-08-25T02:36:06.054Z"}
I use the following command to get inside the nginx container:
docker exec -it parse-nginx sh
Pinging parse-server-0 shows that it resolves to a proper IP address. I then run a modified version of the curl command above, replacing localhost with that hostname:
curl -X POST -H "X-Parse-Application-Id: APPLICATION_ID_00" -H "Content-Type: application/json" -d '{"score":1000,"playerName":"Sean Plott","cheatMode":false}' http://parse-server-0:1337/parse/classes/GameScore
It gives me a 504 error like this:
...
<title>504 DNS look up failed</title>
</head>
<body><div class="message-container">
<div class="logo"></div>
<h1>504 DNS look up failed</h1>
<p>The webserver reported that an error occurred while trying to access the website. Please return to the previous page.</p>
...
However if I use no_proxy as follows, it works:
no_proxy="parse-server-0" curl -X POST -H "X-Parse-Application-Id: APPLICATION_ID_00" -H "X-Parse-Master-Key: MASTER_KEY_00" -H "Content-Type: application/json" -d '{"score":1000,"playerName":"Sean Plott","cheatMode":false}' http://parse-server-0:1337/parse/classes/GameScore
The output is again something like this:
{"objectId":"ICTZrQQ305","createdAt":"2022-08-25T02:18:11.565Z"}
I am very perplexed by this. Clearly, parse-server-0 is reachable with ping. How can curl then get a 504 error when no_proxy isn't used? The parse-nginx container is using default settings and configuration; I did not set up any proxy. I am using it to test the curl command from another container to parse-mongo-0. Any help would be greatly appreciated.
The contents of /etc/resolv.conf is:
nameserver 127.0.0.11
options edns0 trust-ad ndots:0
Running echo $HTTP_PROXY inside parse-nginx returns:
http://10.10.10.10:8080
This variable is unset inside the VM itself.
Your proxy server doesn't appear to be running in this docker network. So when the request goes to that proxy server, it will not query the docker DNS on this network to resolve the other container names.
If your application isn't making requests outside of the docker network, you can remove the proxy settings. Otherwise, you'll want to set no_proxy for the other docker containers you will be accessing.
Please check the value of echo $http_proxy. Note the lowercase here: if this value is set, curl is configured to use the proxy. You're getting a 504 during DNS resolution most probably because your parse-nginx container can't reach the IP 10.10.10.10. Specifying no_proxy tells curl to ignore the http_proxy env var (overriding it) and make the request without any proxy.
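A quick way to check this from the host (a sketch; the container name comes from the compose file above):
# Show every proxy-related variable, upper- and lowercase, inside the container:
docker exec parse-nginx sh -c 'env | grep -i proxy'
If http_proxy shows up there, curl inside that container will route the request to 10.10.10.10:8080 instead of asking Docker's DNS for parse-server-0, which matches the 504 behavior above.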
Inside my VM, this is the contents of the ~/.docker/config.json file:
{
  "proxies":
  {
    "default":
    {
      "httpProxy": "http://10.10.10.10:8080",
      "httpsProxy": "http://10.10.10.10:8080"
    }
  }
}
This was implemented a while back as an ad hoc fix for some network issues. A security certificate was later implemented, and I completely forgot about the fix. Clearing the ~/.docker/config.json file and redoing docker compose up fixes the issue. I no longer need no_proxy to make curl work. Everything is as it should be now. Thank you so much for all the help.
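For reference, if the proxy had still been needed for traffic leaving the VM, one alternative (a sketch reusing the addresses above) would have been to keep the proxies block but exclude the in-network container names via noProxy:
cat > ~/.docker/config.json <<'EOF'
{
  "proxies": {
    "default": {
      "httpProxy": "http://10.10.10.10:8080",
      "httpsProxy": "http://10.10.10.10:8080",
      "noProxy": "localhost,127.0.0.1,parse-nginx,parse-server-0,parse-mongo-0"
    }
  }
}
EOF
# The variables are injected at container creation, so recreate the containers:
docker compose up -d --force-recreate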

Configure HTTP Proxy for Containers in Kubernetes

I have a Kubernetes v1.18.3 cluster, and the workers have a Docker v19.03.6 daemon.
I'm looking for a way to automatically inject HTTP_PROXY and HTTPS_PROXY into every container that Kubernetes creates.
I tried creating a ~/.docker/config.json file, but it didn't work.
What would be the proper way to accomplish it?
I was interested in your case and even reproduced it with the same Docker and k8s versions...
I used the official Configure Docker to use a proxy server documentation to set a proxy for Docker in ~/.docker/config.json:
Configure the Docker client: on the Docker client, create or edit the file ~/.docker/config.json in the home directory of the user which starts containers. Add JSON such as the following, substituting the type of proxy with httpsProxy or ftpProxy if necessary, and substituting the address and port of the proxy server. You can configure multiple proxy servers at the same time.
{
  "proxies":
  {
    "default":
    {
      "httpProxy": "http://127.0.0.1:3001",
      "httpsProxy": "http://127.0.0.1:3001",
      "noProxy": "*.test.example.com,.example2.com"
    }
  }
}
Save the file.
When you create or start new containers, the environment variables are set automatically within the container.
My config was:
{
  "proxies": {
    "default": {
      "httpProxy": "http://user:pass@my.proxy.domain.com",
      "httpsProxy": "http://user:pass@my.proxy.domain.com"
    }
  }
}
So basically, after setting the above in ~/.docker/config.json, the proxy server will automatically be used when starting brand-new containers.
In my case that worked; I can verify it by using the CLI and creating, e.g., a busybox container:
$ docker container run --rm busybox env
HTTP_PROXY=http://user:pass@my.proxy.domain.com
http_proxy=http://user:pass@my.proxy.domain.com
HTTPS_PROXY=http://user:pass@my.proxy.domain.com
https_proxy=http://user:pass@my.proxy.domain.com
HOME=/root
Please keep in mind that there may be issues with the next part:
On the Docker client, create or edit the file ~/.docker/config.json in
the home directory of the user which starts containers.
Be careful which user you use, and make sure your HOME env variable is set to the correct one.
Links to an almost identical GitHub issue and ways to resolve it:
1) https://github.com/kubernetes/kubernetes/issues/45487#issuecomment-312042754
I dug into this a bit and the issue for me was that the HOME environment variable was empty when kubelet was launched through a systemd unit. While it's not documented this way, loading the configuration from /root/.docker/config.json or /root/.dockercfg requires that HOME=/root.
Setting User=root in the [Service] declaration fixed it up for me.
2) https://github.com/kubernetes/kubernetes/issues/45487#issuecomment-378116386
3) https://github.com/kubernetes/kubernetes/issues/45487#issuecomment-464516064 (partial info)
(3) vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Add User=root
The file looks kind of like this:
[Service]
User=root
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This file populates the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# KUBELET_EXTRA_ARGS should be sourced from this file
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
(4) Reload and restart kubelet
systemctl daemon-reload
systemctl restart kubelet
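To verify the result, something like the following should work (illustrative commands; the proxy-test pod name is arbitrary):
# kubelet should now be running with HOME=/root:
sudo cat /proc/$(pidof kubelet)/environ | tr '\0' '\n' | grep -E '^(HOME|USER)='
# A freshly created pod should inherit the proxy variables from /root/.docker/config.json:
kubectl run proxy-test --rm -it --restart=Never --image=busybox -- env | grep -i proxy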
In my case everything worked from scratch, so read carefully and check the points I highlighted. Most probably you have a very tiny problem or typo, because the mechanism itself works as expected.
I hope my investigation helps you.
This sounds right and logical, but it does not work for me.
I am running Kubernetes v1.18.6; data below. This fails, but setting the same HTTP proxy as an environment variable for dockerd works.
admin@str-s6000-acs-13:/etc/systemd/system/kubelet.service.d$ sudo cat /proc/$(pidof kubelet)/environ | tr '\0' '\n'
LANG=en_US.UTF-8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOME=/root
LOGNAME=root
USER=root
SHELL=/bin/sh
INVOCATION_ID=fd58e75d7be64758b01e2d8d63fdf7f6
JOURNAL_STREAM=9:11737012
KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf
KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml
KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2
admin@str-s6000-acs-13:/etc/systemd/system/kubelet.service.d$ sudo cat /root/.docker/config.json
{
  "proxies":
  {
    "default":
    {
      "httpProxy": "http://20.72.201.152:3128",
      "httpsProxy": "http://20.72.201.152:3128"
    }
  }
}
admin@str-s6000-acs-13:/etc/systemd/system/kubelet.service.d$
Apr 12 01:14:49 str-s6000-acs-13 dockerd[27360]: time="2021-04-12T01:14:49.630282308Z" level=error msg="Handler for POST /images/create returned error: Get https://sonicanalytics.azurecr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
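For completeness, the dockerd workaround mentioned above would look roughly like this (a sketch of the standard systemd drop-in, reusing the proxy address from the config shown earlier). Note that this sets the proxy for the daemon itself, which is what the failing POST /images/create pull in the log needs, rather than injecting variables into containers:
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://20.72.201.152:3128"
Environment="HTTPS_PROXY=http://20.72.201.152:3128"
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker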

Vault Docker Image - Can't Get REST Response

I am deploying the Vault Docker image on Ubuntu 16.04. I can successfully initialize it from inside the container itself, but I can't get any REST responses, and even curl does not work.
I am doing the following:
Create the config file local.json:
{
  "listener": [{
    "tcp": {
      "address": "127.0.0.1:8200",
      "tls_disable": 1
    }
  }],
  "storage": {
    "file": {
      "path": "/vault/data"
    }
  },
  "max_lease_ttl": "10h",
  "default_lease_ttl": "10h"
}
under the /vault/config directory.
Run the command to start the image:
docker run -d -p 8200:8200 -v /home/vault:/vault --cap-add=IPC_LOCK vault server
Enter a shell in the container:
docker exec -it containerId /bin/sh
Inside it, run:
export VAULT_ADDR='http://127.0.0.1:8200' and then vault init
It works fine, but when I try to send a REST request to check whether Vault is initialized:
GET request to the following URL: http://Ip-of-the-docker-host:8200/v1/sys/init
I get no response.
Even the curl command fails:
curl http://127.0.0.1:8200/v1/sys/init
curl: (56) Recv failure: Connection reset by peer
I didn't find a proper explanation anywhere online of what the problem is, or whether I am doing something wrong.
Any ideas?
If a server running in a Docker container binds to 127.0.0.1, it's unreachable from anything outside that specific container (and since containers usually only run a single process, that means it's unreachable by anyone). Change the listener address to 0.0.0.0:8200; if you need to restrict access to the Vault server, bind it to a specific host address in the docker run -p option.
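A minimal sketch of the fix, assuming the paths from the question (the host directory /home/vault is mounted at /vault, so the config lands in /vault/config inside the container):
cat > /home/vault/config/local.json <<'EOF'
{
  "listener": [{
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": 1
    }
  }],
  "storage": {
    "file": {
      "path": "/vault/data"
    }
  },
  "max_lease_ttl": "10h",
  "default_lease_ttl": "10h"
}
EOF
docker run -d -p 8200:8200 -v /home/vault:/vault --cap-add=IPC_LOCK vault server
# The API should now answer both inside the container and from the host:
curl http://127.0.0.1:8200/v1/sys/init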

Error following tutorial on REST persistent data store on Hyperledger Composer

Screenshot of the error: https://i.imgur.com/nGh5orv.png
I am setting this up in an AWS EC2 environment. Everything works fine until I try multi-user mode.
I am facing this issue where I had set up the MongoDB persistent data store following the tutorials.
Here is my setup on the envvars.txt
COMPOSER_CARD=admin@property-network
COMPOSER_NAMESPACES=never
COMPOSER_AUTHENTICATION=true
COMPOSER_MULTIUSER=true
COMPOSER_PROVIDERS='{
  "github": {
    "provider": "github",
    "module": "passport-github",
    "clientID": "xxxx",
    "clientSecret": "xxxx",
    "authPath": "/auth/github",
    "callbackURL": "/auth/github/callback",
    "successRedirect": "/",
    "failureRedirect": "/"
  }
}'
COMPOSER_DATASOURCES='{
  "db": {
    "name": "db",
    "connector": "mongodb",
    "host": "mongo"
  }
}'
And I changed the connection profiles of both h1lfv1 and admin@xxx-network to 0.0.0.0, as seen here:
https://github.com/hyperledger/composer/issues/1784
I tried his solution here and it doesn't work.
Thank you!
Currently there's an issue with admin re-enrolling (strictly an issue with the REST server), even though the admin card has a certificate (it ignores it; this is fixed in 0.18.x).
Further, there's a hostname resolution issue which you'll need to address, because Docker needs to be able to resolve the container names from within the persistent REST server container. We need to change the hostnames, currently set to localhost values, to Docker-resolvable hostnames. The example below shows a newly issued 'restadmin' card, created for the purpose of starting the REST server, using the standard 'Developer setup' Composer environment.
Create a REST administrator identity restadmin and an associated business network card (used to launch the REST server later):
composer participant add -c admin@property-network -d '{"$class":"org.hyperledger.composer.system.NetworkAdmin", "participantId":"restadmin"}'
Issue a 'restadmin' identity, mapped to the above participant:
composer identity issue -c admin@property-network -f restadmin.card -u restadmin -a "resource:org.hyperledger.composer.system.NetworkAdmin#restadmin"
Import and test the card:
composer card import -f restadmin.card
composer network ping -c restadmin@property-network
Run this one-liner to carry out the resolution changes easily:
sed -e 's/localhost:/orderer.example.com:/' -e 's/localhost:/peer0.org1.example.com:/' -e 's/localhost:/peer0.org1.example.com:/' -e 's/localhost:/ca.org1.example.com:/' < $HOME/.composer/cards/restadmin@property-network/connection.json > /tmp/connection.json && cp -p /tmp/connection.json $HOME/.composer/cards/restadmin@property-network
Try running the REST server with the card -c restadmin@property-network. If you're following https://hyperledger.github.io/composer/latest/integrating/deploying-the-rest-server, you will need to put this card name at the top of your envvars.txt, then run source envvars.txt to set it in your current shell environment.
If you wish to issue further identities - say kcoe below - from the REST client (given you're currently 'restadmin'), you simply do the following (the first two steps can be done in Playground too, FYI):
composer participant add -c admin@trade-network -d '{"$class":"org.acme.trading.Trader","tradeId":"trader2", "firstName":"Ken","lastName":"Coe"}'
composer identity issue -c admin@trade-network -f kcoe.card -u kcoe -a "resource:org.acme.trading.Trader#trader2"
composer card import -f kcoe.card # imported to the card store
Next - one-liner to get docker hostname resolution right, from inside the persistent dockerized REST server:
sed -e 's/localhost:/orderer.example.com:/' -e 's/localhost:/peer0.org1.example.com:/' -e 's/localhost:/peer0.org1.example.com:/' -e 's/localhost:/ca.org1.example.com:/' < $HOME/.composer/cards/kcoe@trade-network/connection.json > /tmp/connection.json && cp -p /tmp/connection.json $HOME/.composer/cards/kcoe@trade-network
Start your REST server as per the Deploy REST server doc:
docker run \
-d \
-e COMPOSER_CARD=${COMPOSER_CARD} \
-e COMPOSER_NAMESPACES=${COMPOSER_NAMESPACES} \
-e COMPOSER_AUTHENTICATION=${COMPOSER_AUTHENTICATION} \
-e COMPOSER_MULTIUSER=${COMPOSER_MULTIUSER} \
-e COMPOSER_PROVIDERS="${COMPOSER_PROVIDERS}" \
-e COMPOSER_DATASOURCES="${COMPOSER_DATASOURCES}" \
-v ~/.composer:/home/composer/.composer \
--name rest \
--network composer_default \
-p 3000:3000 \
myorg/my-composer-rest-server
From the System REST API in http://localhost:3000/explorer, go to the POST /wallet/import operation and import the card file kcoe.card with (in this case) the card name set to kcoe@trade-network, then click 'Try it Out' to import it. It should return a successful (204) response.
This sets it as the default ID in the wallet via the System REST API endpoint.
(If you need to set any further imported cards as the default card name in your REST client wallet, go to the POST /wallet/name/setDefault/ method, choose the card name, and click Try it Out. That card would now be the default.)
Test it out - try getting a list of Traders (trade-network example):
Return to the Trader methods in the REST API client, expand the GET /Trader endpoint, and click 'Try it Out'. It should confirm that we are now using a card in the business network, and we should be able to interact with the REST server and get a list of Traders (that were added to your business network).
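The same check can be done from the command line (a sketch; the token value is an assumption - use whatever access token your authenticated REST session holds):
# With COMPOSER_NAMESPACES=never the resource route is /api/Trader;
# LoopBack accepts the token as a query parameter:
curl 'http://localhost:3000/api/Trader?access_token=<YOUR_ACCESS_TOKEN>'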

Unable to set localhost. This prevents creation of a GUID

I'm struggling with the following issue. We have a Java application that runs properly on Docker. Now, when we try to migrate the application to Docker Swarm, running it as a service, it always throws the following exception:
Cache - Unable to set localhost. This prevents creation of a GUID. Cause was: 39bc5cdfb3d9: 39bc5cdfb3d9: Name or service not known
java.net.UnknownHostException: 39bc5cdfb3d9: 39bc5cdfb3d9: Name or service not known
Note that 39bc5cdfb3d9 is the container ID.
I've tried the following:
curl against the DNS that we are using
updating the nginx config so that the other server is back up
Setup:
3 managers
containers run only on the two servers app1.dev and app2.dev (they have the constraint label=dev)
using the default ingress network
DNS: dev-ecc.toroserver.com
I run the service using this:
docker service create \
${HTTP} \
${HTTPS} \
${VOLUMES} \
${ENV_VARS} \
${LICENSE} \
${LOGS} \
--limit-memory 768mb \
--mode=global \
--constraint 'engine.labels.serverType == dev' \
--env appName="${SUB_DNS}" \
--name="${SUB_DNS}" \
--restart-condition on-failure --restart-max-attempts 5 \
--with-registry-auth \
${DOCKER_REGISTRY}/${DOCKER_USER}/${APPNAME}:${VERSION}
Also, I get this error every time I try to log in; it automatically logs out my session. I'm not sure whether it is related to the "Unable to set localhost" error:
2017-11-08 03:25:56,771 [ INFO] AjaxTimeoutRedirectFilter - User session expired or not logged in yet
2017-11-08 03:25:56,771 [ INFO] AjaxTimeoutRedirectFilter - User session expired or not logged in yet
2017-11-08 03:25:56,778 [ INFO] AjaxTimeoutRedirectFilter - Redirect to login page
2017-11-08 03:25:56,778 [ INFO] AjaxTimeoutRedirectFilter - Redirect to login page
2017-11-08 03:30:36,822 [ INFO] AjaxTimeoutRedirectFilter - User session expired or not logged in yet
2017-11-08 03:30:36,822 [ INFO] AjaxTimeoutRedirectFilter - User session expired or not logged in yet
Any insights will be much appreciated. Thanks.
The "Cache - unable to set localhost" looks to be a common error message from the EHCache project. Finding that in the code shows that it is the result of calling the Java net library's java.net.InetAddress.getLocalHost() method, which looks up the local hostname and then tries to DNS resolve it to an IP address.
A quick local test shows that this works for both docker run and as a service on my single-node Swarm. Given you mention testing DNS, maybe at this point more information is required about your specific Swarm setup (specifically networking) to see why you are getting different behavior. Obviously if you have your own DNS, then as per the above, the default name of the container must be resolvable by a DNS lookup or else you will continue to get the Java UnknownHostException.
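One way to narrow this down (illustrative; <task-container-id> is one of the failing service's task containers, and getent is assumed to be available in the image) is to reproduce the lookup that InetAddress.getLocalHost() performs:
# getLocalHost() is essentially: take the hostname, then resolve it.
docker exec <task-container-id> sh -c 'hostname && getent hosts "$(hostname)"'
# If the second command prints nothing, the container cannot resolve its own
# name, and EHCache fails exactly as shown above. A possible workaround (an
# assumption, not part of the answer above) is to give the task an explicitly
# resolvable hostname, e.g.:
# docker service update --hostname myapp <service-name>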