Unable to set localhost. This prevents creation of a GUID - docker

I'm struggling with the following issue. We have a Java application that runs properly on Docker. Now, when we try to migrate the application to Docker Swarm, running it as a service, it always throws the following exception:
Cache - Unable to set localhost. This prevents creation of a GUID. Cause was: 39bc5cdfb3d9: 39bc5cdfb3d9: Name or service not known
java.net.UnknownHostException: 39bc5cdfb3d9: 39bc5cdfb3d9: Name or service not known
Note that 39bc5cdfb3d9 is the container ID.
I've tried the following:
curl against the DNS that we are using
updating the nginx config once the other server was back up
Setup:
3 managers
containers run only on the two servers app1.dev and app2.dev (constrained via the label serverType == dev)
using the default ingress network
DNS: dev-ecc.toroserver.com
I run the service using this:
docker service create \
${HTTP} \
${HTTPS} \
${VOLUMES} \
${ENV_VARS} \
${LICENSE} \
${LOGS} \
--limit-memory 768mb \
--mode=global \
--constraint 'engine.labels.serverType == dev' \
--env appName="${SUB_DNS}" \
--name="${SUB_DNS}" \
--restart-condition on-failure --restart-max-attempts 5 \
--with-registry-auth \
${DOCKER_REGISTRY}/${DOCKER_USER}/${APPNAME}:${VERSION}
Also, every time I try to log in, my session is automatically logged out. I'm not sure if it is related to the Unable to set localhost error:
2017-11-08 03:25:56,771 [ INFO] AjaxTimeoutRedirectFilter - User session expired or not logged in yet
2017-11-08 03:25:56,771 [ INFO] AjaxTimeoutRedirectFilter - User session expired or not logged in yet
2017-11-08 03:25:56,778 [ INFO] AjaxTimeoutRedirectFilter - Redirect to login page
2017-11-08 03:25:56,778 [ INFO] AjaxTimeoutRedirectFilter - Redirect to login page
2017-11-08 03:30:36,822 [ INFO] AjaxTimeoutRedirectFilter - User session expired or not logged in yet
2017-11-08 03:30:36,822 [ INFO] AjaxTimeoutRedirectFilter - User session expired or not logged in yet
Any insights will be much appreciated. Thanks.

The "Cache - unable to set localhost" message looks to be a common error from the EHCache project. Finding it in the code shows that it is the result of calling java.net.InetAddress.getLocalHost(), which looks up the local hostname and then tries to resolve it to an IP address via DNS.
A quick local test shows that this works both with docker run and as a service on my single-node Swarm. Given that you mention testing DNS, at this point more information about your specific Swarm setup (specifically its networking) is probably needed to see why you are getting different behavior. If you run your own DNS then, as per the above, the default name of the container must be resolvable by a DNS lookup, or you will continue to get the Java UnknownHostException.
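One quick way to reproduce the check getLocalHost() performs is a small shell probe run inside the container (this assumes getent is available in the image; the helper name is mine):

```shell
# Return success if the given name resolves to an address, which is
# exactly what InetAddress.getLocalHost() needs for the container's
# own hostname.
resolves() {
  getent hosts "$1" > /dev/null
}

# Inside the failing container you would run:
#   resolves "$(hostname)" || echo "expect the EHCache GUID error"
resolves localhost && echo "localhost resolves"
```

If `resolves "$(hostname)"` fails inside the service's container but succeeds under plain docker run, that points at the Swarm networking/DNS setup rather than the application.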

Related

Is there a way to decide what CURL should consider a success?

My goal is to have a HEALTHCHECK command in my Dockerfile that checks whether the webserver is working by simply making a request to the website and checking that it receives a proper response.
The problem I'm having is that the application has an authentication middleware, which causes the application to return an error (401 Unauthorized), causing CURL to fail and return curl: (7) Failed to connect to host.docker.internal port 8000: Connection refused.
If I remove the authentication middleware it doesn't return anything, which is what I'm aiming for.
The command I'm using is the following (I'm currently just using it inside a container, trying to find a solution):
curl --fail http://host.docker.internal:8000
I know I can tell CURL the username and password, but that's something I would rather not do.
Having a way to tell CURL that Unauthorized (error 401) is fine, or to treat only a connection-refused error (curl: (7)) as a failure, would be fine; but it would be even better if I could decide what CURL should and should not consider a success. Is there any way to do something like this with one or more CURL options?
Health checks are good practice when a microservice or REST service architecture is used.
Default health endpoints and check platforms need a 200 HTTP code to flag your app as healthy; any other response is flagged as unhealthy.
Custom codes with curl
I tried, and I can say: with curl alone it is not possible:
https://superuser.com/questions/590099/can-i-make-curl-fail-with-an-exitcode-different-than-0-if-the-http-status-code-i
You need custom logic.
Custom health
As you are using an Ubuntu-based image, you can use a simple bash script to catch 401 codes and exit 0 in order to mark your container as healthy.
with curl
The cornerstone here is the option to retrieve just the response code from the curl invocation:
curl -o /dev/null -s -w "%{http_code}\n" http://localhost
So you can create a bash script that executes your curl invocation and returns exit 0 just for 200 and 401, and exit 1 in any other case.
#!/bin/bash
# healthcheck.sh: probe the app and accept 200 or 401 as healthy.
code=$(curl -o /dev/null -s -w "%{http_code}" http://localhost:12345)
echo "response code: $code"
if [ "$code" == "200" ] || [ "$code" == "401" ]; then
  echo "success"
  exit 0
else
  echo "error"
  exit 1
fi
Finally you can use this script in your HEALTHCHECK, whatever language your application is written in (PHP in your case). For example, adapting the Dockerfile so the check runs the script instead of a bare curl:
FROM node
COPY server.js /
COPY healthcheck.sh /
HEALTHCHECK --interval=5s --timeout=10s --retries=3 CMD bash /healthcheck.sh
CMD [ "node", "/server.js" ]
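The accept-200-or-401 rule at the heart of the script can also be isolated into a small helper for testing (the function name is mine, for illustration):

```shell
# Map an HTTP status code to a health verdict: 200 (OK) and 401
# (auth required, but the server is clearly up) both count as healthy.
is_healthy() {
  case "$1" in
    200|401) return 0 ;;
    *)       return 1 ;;   # 000 = connection refused, 5xx, etc.
  esac
}

is_healthy 401 && echo "healthy"
```

This keeps the policy ("which codes mean alive?") in one place, separate from the curl plumbing.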
Health feature should be public
Common health verification covers server status, internet connection, RAM, disk, database connectivity, and any other stat that tells you whether your app is running and OK.
Health check platforms do not allow us to register complex security flows (OAuth1, OAuth2, OpenID, etc.); they just allow us to register a simple HTTP endpoint (the AWS ELB check configuration is one example).
The health feature should not expose any other sensitive data; because of that, this endpoint can be public. Classic public webpages, web systems, and public APIs are examples.
Workaround
In some strict cases, privacy is required.
In my case I protected /health with a simple apiKey value passed as a query parameter; in the controller, I validate that it equals some expected value. The final health endpoint is /health?apiKey=secret, which is easy to register in check platforms.
With more complex configurations you could allow /health only from an internal private LAN, not for public access; in that case your /health is secure.
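As a sketch of that apiKey guard (the function name, environment variable, and secret value are illustrative; the real check lives in the application's /health controller):

```shell
# Minimal stand-in for the controller-side check: compare the supplied
# apiKey query parameter against the expected secret.
health_check() {
  local supplied_key=$1 expected_key=${HEALTH_API_KEY:-secret}
  if [ "$supplied_key" = "$expected_key" ]; then
    echo 200   # healthy and authorized
  else
    echo 403   # wrong or missing key
  fi
}

health_check secret   # -> 200
```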

Traefik Docker Swarm Basic Authentication

I recently set up Traefik v.1.7.14 in a Docker container, on a Docker Swarm enabled cluster. As a test, I created a simple service:
docker service create --name demo-nginx \
--network traefik-net \
--label traefik.app.port=80 \
--label traefik.app.frontend.auth.basic="test:$$apr1$$LG8ly.Y1$$1J9m2sDXimLGaCSlO8.T20" \
--label traefik.app.frontend.rule="Host:t.myurl.com" \
nginx
As the code above shows, I am simply installing nginx on my URL, at the specified subdomain t.
When this code runs, the service gets created successfully. Traefik also shows the service in the traefik api, as well as within the traefik administrator.
In the traefik api, the back-end service is reported as follows:
"frontend-Host-t-myurl-com-0": {
"entryPoints": [
"http",
"https"
],
"backend": "backend-demo-nginx",
"routes": {
"route-frontend-Host-t-myurl-com-0": {
"rule": "Host:t.myurl.com"
}
},
"passHostHeader": true,
"priority": 0,
"basicAuth": null,
"auth": {
"basic": {}
}
}
When I go to visit t.myurl.com, I get the authentication prompt, as expected.
However, when I type in my username/password (test:test, in this case), the login prompt just prompts me again and doesn't authenticate me.
I have checked to ensure that I am escaping the characters in the docker label by generating the password with:
echo $(htpasswd -nb test test) | sed -e s/\\$/\\$\\$/g
As part of my testing, I also tried turning off the https entryPoint, as I wanted to see if this cycle was somehow being triggered by ssl. That didn't seem to have any impact on resolving this (rule: --label traefik.app.frontend.entryPoints=http). Traefik did properly respond on http upon doing this, but the password authentication still fell into the same prompting loop as before.
When I remove the traefik.app.frontend.auth.basic label, I can access my site at my url (t.myurl.com). So this issue appears to be isolated within the basic authentication functionality.
My DNS provider is Cloudflare.
If anyone has any ideas, I'd appreciate it.
Maybe you can try this:
echo $(htpasswd -nb your-user your-password)
because you don't need the doubled $$ on the command line.
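The reason, as far as I can tell, is shell quoting: inside single quotes on the command line every $ in the apr1 hash is passed through literally, so the raw htpasswd output works as-is. The $$ doubling is only needed where $ is itself an escape character, such as in a docker-compose.yml. Shown with the hash from the question:

```shell
# Single quotes preserve every $ in the apr1 hash literally.
hash='test:$apr1$LG8ly.Y1$1J9m2sDXimLGaCSlO8.T20'
echo "$hash"
# In an unquoted or double-quoted context, an unescaped $$ would
# instead expand to the shell's PID and silently corrupt the hash.
```

So the doubled hash from the sed pipeline, when pasted inside single quotes, contains literal $$ sequences that bcrypt/apr1 verification will never match.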

Error faced following tutorial on REST persistent data store on Hyperledger Composer

https://i.imgur.com/nGh5orv.png
I am setting this up in an AWS EC2 environment. Everything worked fine until I tried multi-user mode.
I am facing this issue after setting up the MongoDB persistent data store following the tutorials.
Here is my setup on the envvars.txt
COMPOSER_CARD=admin#property-network
COMPOSER_NAMESPACES=never
COMPOSER_AUTHENTICATION=true
COMPOSER_MULTIUSER=true
COMPOSER_PROVIDERS='{
"github": {
"provider": "github",
"module": "passport-github",
"clientID": "xxxx",
"clientSecret": "xxxx",
"authPath": "/auth/github",
"callbackURL": "/auth/github/callback",
"successRedirect": "/",
"failureRedirect": "/"
}
}'
COMPOSER_DATASOURCES='{
"db": {
"name": "db",
"connector": "mongodb",
"host": "mongo"
}
}'
And I had changed the connection profiles of both hlfv1 and admin#xxx-network to 0.0.0.0, as seen here:
https://github.com/hyperledger/composer/issues/1784
I tried the solution there and it doesn't work.
Thank you!
Currently there's an issue with the admin re-enrolling (strictly an issue with the REST server), even though the admin card has a certificate; the certificate is ignored, but this is fixed in 0.18.x.
Further, there's a hostname resolution issue which you'll need to address, because Docker needs to be able to resolve the container names from within the persistent REST server container. We need to change the hostnames in the card's connection profile, which are currently set to localhost values, to the Docker-resolvable hostnames. The example below uses a newly issued 'restadmin' card, created for the purpose of starting the REST server, in the standard 'Developer setup' Composer environment:
Create a REST Administrator identity restadmin and an associated business network card (used to launch the REST server later):
composer participant add -c admin#property-network -d '{"$class":"org.hyperledger.composer.system.NetworkAdmin", "participantId":"restadmin"}'
Issue a 'restadmin' identity, mapped to the above participant:
composer identity issue -c admin#property-network -f restadmin.card -u restadmin -a "resource:org.hyperledger.composer.system.NetworkAdmin#restadmin"
Import and test the card:
composer card import -f restadmin.card
composer network ping -c restadmin#property-network
Run this one-liner to carry out the resolution changes easily:
sed -e 's/localhost:/orderer.example.com:/' -e 's/localhost:/peer0.org1.example.com:/' -e 's/localhost:/peer0.org1.example.com:/' -e 's/localhost:/ca.org1.example.com:/' < $HOME/.composer/cards/restadmin#property-network/connection.json > /tmp/connection.json && cp -p /tmp/connection.json $HOME/.composer/cards/restadmin#property-network
Try running the REST server with the card -c restadmin#property-network. If you're following this tutorial https://hyperledger.github.io/composer/latest/integrating/deploying-the-rest-server then you will need to put this card name at the top of your envvars.txt, and then ensure you run source envvars.txt to get it set in your current shell environment.
If you wish to issue further identities - say kcoe below - from the REST client (given you're currently 'restadmin'), you simply do the following (the first two steps can also be done in Playground, FYI):
composer participant add -c admin#trade-network -d '{"$class":"org.acme.trading.Trader","tradeId":"trader2", "firstName":"Ken","lastName":"Coe"}'
composer identity issue -c admin#trade-network -f kcoe.card -u kcoe -a "resource:org.acme.trading.Trader#trader2"
composer card import -f kcoe.card # imported to the card store
Next - one-liner to get docker hostname resolution right, from inside the persistent dockerized REST server:
sed -e 's/localhost:/orderer.example.com:/' -e 's/localhost:/peer0.org1.example.com:/' -e 's/localhost:/peer0.org1.example.com:/' -e 's/localhost:/ca.org1.example.com:/' < $HOME/.composer/cards/kcoe#trade-network/connection.json > /tmp/connection.json && cp -p /tmp/connection.json $HOME/.composer/cards/kcoe#trade-network
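The way the one-liner works may not be obvious: each -e expression replaces only the first remaining localhost: on the line, so the chain rewrites successive endpoints in order. A self-contained illustration (the sample JSON is made up for the demo; the hostnames are the tutorial's):

```shell
# Each sed expression consumes the next 'localhost:' occurrence,
# mapping endpoints positionally to the docker-resolvable names.
echo '{"orderer":"grpcs://localhost:7050","peer":"grpcs://localhost:7051"}' |
  sed -e 's/localhost:/orderer.example.com:/' \
      -e 's/localhost:/peer0.org1.example.com:/'
# Output:
# {"orderer":"grpcs://orderer.example.com:7050","peer":"grpcs://peer0.org1.example.com:7051"}
```

This is also why the order of the -e expressions matters: it must match the order in which the endpoints appear in connection.json.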
Start your REST server as per the Deploy REST server doc:
docker run \
-d \
-e COMPOSER_CARD=${COMPOSER_CARD} \
-e COMPOSER_NAMESPACES=${COMPOSER_NAMESPACES} \
-e COMPOSER_AUTHENTICATION=${COMPOSER_AUTHENTICATION} \
-e COMPOSER_MULTIUSER=${COMPOSER_MULTIUSER} \
-e COMPOSER_PROVIDERS="${COMPOSER_PROVIDERS}" \
-e COMPOSER_DATASOURCES="${COMPOSER_DATASOURCES}" \
-v ~/.composer:/home/composer/.composer \
--name rest \
--network composer_default \
-p 3000:3000 \
myorg/my-composer-rest-server
From the System REST API in http://localhost:3000/explorer, go to the POST /wallet/import operation, import the card file kcoe.card with (in this case) the card name set to kcoe#trade-network, and click 'Try it Out'. It should return a successful (204) response.
This sets it as the default ID in the Wallet via the System REST API endpoint.
(If you need to set any further imported card as the default card name in your REST client Wallet, go to the POST /wallet/name/setDefault/ method, choose the card name, and click Try it Out. That card will then be the default.)
Test it out - try getting a list of Traders (trade-network example):
Return to the Trader methods in the REST API client, expand the GET /Trader endpoint, and click 'Try it Out'. It should confirm that we are now using a card in the business network; we should be able to interact with the REST server and get a list of the Traders that were added to your business network.

How can I specify canonical server name in composer connection profile?

We need to run the composer command outside of the Docker containers' network.
When I specify the orderer and peer host names (e.g. peer0.org1.example.com) in the /etc/hosts file, the composer command works.
However, if I specify the server's IP address, it does not work. Here is a sample:
$ composer network list -p hlfv1 -n info-share-bc -i PeerAdmin -s secret
✖ List business network info-share-bc
Error trying to ping. Error: Error trying to query chaincode. Error: Connect Failed
Command succeeded
This is an example of the command when I specify the host names in /etc/hosts:
$ composer network list -p hlfv1 -n info-share-bc -i PeerAdmin -s secret
✔ List business network info-share-bc
name: info-share-bc
models:
- org.hyperledger.composer.system
- bc.share.info
<snip>
I believe that when the server name cannot be resolved, we should specify the option called "ssl-target-name-override" from the Hyperledger Fabric Node.js SDK, as described here:
https://jimthematrix.github.io/Remote.html
- ssl-target-name-override {string} Used in test environment only,
when the server certificate's hostname (in the 'CN' field) does not
match the actual host endpoint that the server process runs at,
the application can work around the client TLS verify failure by
setting this property to the value of the server certificate's hostname
Is there any option to specify host name in connection profile (connection.json) ?
Found a workaround: the hostnameOverride option in the connection profile resolved the connection issue.
"eventURL": "grpcs://<target-host>:17053",
"hostnameOverride": "peer0.org1.example.com",
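For context, those two lines sit inside a peer entry of the hlfv1-style connection profile. The full entry looks roughly like this; the requestURL port and exact field set are from my memory of that profile format, so treat it as a sketch rather than an exact schema:

```json
"peers": [
    {
        "requestURL": "grpcs://<target-host>:17051",
        "eventURL": "grpcs://<target-host>:17053",
        "hostnameOverride": "peer0.org1.example.com"
    }
]
```

The override tells the gRPC client to verify the TLS certificate against the canonical name even though it dialed the raw host/IP.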

How to know if my program is completely started inside my docker with compose

In my CI chain I execute end-to-end tests after a "docker-compose up". Unfortunately my tests often fail because, even if the containers are properly started, the programs contained in my containers are not.
Is there an elegant way to verify that my setup is completely started before running my tests ?
You could poll the required services to confirm they are responding before running the tests.
curl has inbuilt retry logic or it's fairly trivial to build retry logic around some other type of service test.
#!/bin/bash
# Poll a URL until it responds, or give up after the time budget.
await(){
  local url=${1}
  local seconds=${2:-30}
  curl --max-time 5 --retry 60 --retry-delay 1 \
    --retry-max-time "${seconds}" "${url}" \
    || exit 1
}
docker-compose up -d
await http://container_ms1:3000
await http://container_ms2:3000
run-ze-tests
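For services that don't speak HTTP, the same retry idea can be wrapped around any probe command (wait_for is a name I made up; the nc example assumes netcat is installed):

```shell
# Retry an arbitrary probe command until it succeeds or the attempt
# budget is exhausted; the exit status reflects the final outcome.
wait_for() {
  local tries=$1; shift
  local i
  for i in $(seq "$tries"); do
    "$@" && return 0
    sleep 1
  done
  return 1
}

# e.g. wait_for 30 nc -z container_ms1 3000
```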
The alternative to polling is an event-based system.
If all your services push notifications to an external service (scaeda gave the example of a log file, or you could use something like Amazon SNS), your services can emit a "started" event. Then you subscribe to those events and run whatever you need once everything has started.
Docker 1.12 did add the HEALTHCHECK build command. Maybe this is available via Docker Events?
If you have control over the Docker engine in your CI setup, you could execute docker logs [Container_Name] and read out the last line, which could be emitted by your application.
RESULT=$(docker logs [Container_Name] 2>&1 | grep [Search_String])
logs output example:
Agent pid 13
Enter passphrase (empty for no passphrase): Enter same passphrase again: Identity added: id_rsa (id_rsa)
#host SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.6
#host SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.6
parse specific line:
RESULT=$(docker logs ssh_jenkins_test 2>&1 | grep Enter)
result:
Enter passphrase (empty for no passphrase): Enter same passphrase again: Identity added: id_rsa (id_rsa)
