Not seeing certificate attributes when not running locally - hyperledger

The Problem
When I deploy 4 peer nodes with PBFT or NOOPS in the cloud, user certificate attributes are not visible. The values are blank.
Observations
Everything works locally. This suggests that I am calling the API correctly, and the chaincode is accessing attributes correctly.
When I attach to the membership container, I see the correct membersrvc.yaml with aca.enabled set to true. This is the same yaml that works locally. For good measure, I'm also passing the ENV variable MEMBERSRVC_CA_ACA_ENABLED=true.
I can see the attributes for the users in the membership service's ACA database. (suggesting that the users were created with attributes)
When I look at the actual certificate from the log (bytes to hex, then Base64 decode), I see the attributes. (Appending certificate [30 82 02 dd 30 8....)
All attributes are blank when deployed. No errors.
Membership Service Logs
I enabled debug logging and can see that the membership service thinks ACA is enabled:
19:57:46.421 [server] main -> DEBU 049 ACA was enabled [aca.enabled == true]
19:57:46.421 [aca] Start -> INFO 04a Staring ACA services...
19:57:46.421 [aca] startACAP -> INFO 04b ACA PUBLIC gRPC API server started
19:57:46.421 [aca] Start -> INFO 04c ACA services started
This looks good. What am I missing?
Guess
Could it be that the underlying docker container the chaincode deploys into doesn't have security enabled? Does it use the ENV passed to the parent peer? One difference is that locally I'm using "dev mode" without the base-image shenanigans.
Membership Service
membersrvc:
  container_name: membersrvc
  image: hyperledger/fabric-membersrvc
  volumes:
    - /home/ec2-user/membership:/user/membership
    - /var/hyperledger:/var/hyperledger
  command: sh -c "cp /user/membership/membersrvc.yaml /opt/gopath/src/github.com/hyperledger/fabric/membersrvc && membersrvc"
  restart: unless-stopped
  environment:
    - MEMBERSRVC_CA_ACA_ENABLED=true
  ports:
    - 7054:7054
Root Peer Service
rootpeer:
  container_name: root-peer
  image: hyperledger/fabric-peer
  restart: unless-stopped
  environment:
    - CORE_VM_ENDPOINT=unix:///var/run/docker.sock
    - CORE_LOGGING_LEVEL=DEBUG
    - CORE_PEER_ID=vp1
    - CORE_SECURITY_ENROLLID=vp1
    - CORE_SECURITY_ENROLLSECRET=xxxxxxxx
    - CORE_SECURITY_ENABLED=true
    - CORE_SECURITY_ATTRIBUTES_ENABLED=true
    - CORE_PEER_PKI_ECA_PADDR=members.x.net:7054
    - CORE_PEER_PKI_TCA_PADDR=members.x.net:7054
    - CORE_PEER_PKI_TLSCA_PADDR=members.x.net:7054
    - CORE_PEER_VALIDATOR_CONSENSUS_PLUGIN=NOOPS
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /var/hyperledger:/var/hyperledger
  command: sh -c "peer node start"
  ports:
    - 7051:7051
    - 7050:7050
Here's the request:
{
  "jsonrpc": "2.0",
  "method": "query",
  "params": {
    "chaincodeID": {
      "name": "659cb5dcc3063054e4c90908050eebf68eb2bd193cc1520f1f2d198f0ff42268"
    },
    "ctorMsg": {
      "args": ["get_results", "{\"Id\":\"abc123\"}"]
    },
    "secureContext": "user123",
    "attributes": ["account_id", "role"]
  },
  "id": 2
}
Edit: I previously thought this was just PBFT, but it's also happening with NOOPS in the cloud. I reduced the example to NOOPS.

My problem is that the fabric version inside the fabric-baseimage docker container is a good bit newer. This is my fault - because I populated that image with the fabric version manually.
Background
If you are running 0.6 outside Vagrant and not in dev mode, deploying chaincode fails with a "cannot find :latest tag" error. To work around this, I pulled a fabric-baseimage version and populated it with what I needed, including a git clone of fabric. I should have checked out the 0.6 branch, but instead it was pulling master.
So essentially, my fabric-peer, node-sdk deployer, and baseimage were using slightly different Hyperledger versions.
After about 48 hours of configuration hell, I think I have it straightened out by moving everything back to 0.6. I have Terraform spinning everything up successfully now.
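For reference, a sketch of the usual workaround for the ":latest tag" error is to pull a published fabric-baseimage and re-tag it locally as :latest, so the peer can build the chaincode container against it. The exact tag below is an assumption; match it to the baseimage release your 0.6 peer expects.
# Sketch only: pull a published baseimage and re-tag it as :latest so the
# peer can find it when building the chaincode container.
# The x86_64-0.2.2 tag is an assumption; use the baseimage version your
# peer release was built against.
docker pull hyperledger/fabric-baseimage:x86_64-0.2.2
docker tag hyperledger/fabric-baseimage:x86_64-0.2.2 hyperledger/fabric-baseimage:latest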
I do wish the documentation included something about deploying in a non-dev multi-node environment.

Related

Launching Keycloak 20.0.3 on RHEL 7.9 in a Docker container with compose never responds to HTTP requests

I have an application that uses Keycloak 15.0.0 on RHEL 7.9 and other OSes (RHEL 8.7, Ubuntu 22.04, Oracle Linux 8.7). I run this behind an NGINX proxy and have had it working 100% with Keycloak 15.0.0 for about a year and a half now. We now need to update to Keycloak 20.0 for OpenJDK issues and such. I updated the image in my docker compose YML configuration, updated the environment variables that changed in v20.0, and launched my application to let it update.
On 3 of the 4 OSes this worked 100% fine: it came up great, came up quick, and I love the v20.0 UI changes in Keycloak. I tried this on FIPS-enabled and FIPS-disabled setups, and all worked 100%. It works as expected with my application behind NGINX, and we have found no issues at all in the last two weeks.
However, on Red Hat 7.9, for whatever reason, I get no logs at all and nothing happens. I can get into the container with a docker exec -it xxxxxx /bin/sh type of command, but even a curl http://localhost:8080/auth/ just returns connection refused. It is almost as if it is not running.
This happens whether I am updating an existing Keycloak 15.0.0 setup or removing the docker volume and starting over from scratch. It just hangs there and does nothing.
And this only happens on RHEL 7.9. The other operating systems come up after a few minutes and respond correctly. I have even left it alone for up to 30 minutes to see if there was a process running, something hidden, a timeout, or some other "ghost in the machine", but still nothing works.
I have searched for a while, read the README files on updating, and started over fresh on the other OSes, and they all work. Just not this one. So I am looking for guidance on what to change or try; otherwise I cannot use Keycloak 20.0 on RHEL 7.9 until its EOL in June 2024.
The Keycloak configuration that works on the other 3 OSes, with the same docker versions installed and the same permissions set up via our Ansible setup, is below. I cannot figure out why RHEL 7.9 is the one holdout.
Any help, tips, or things to try are much appreciated. I am 8+ hours into this with nothing to show.
keycloak:
  image: keycloak-mybuild:20.0.3
  restart: on-failure:5
  container_name: keycloak
  command: start --optimized
  ports:
    - "8080"
  environment:
    - KC_DB=postgres
    - KC_DB_URL=jdbc:postgresql://postgres:5432/xxxxxxxx
    - KC_DB_USERNAME=xxxxxxxx
    - KC_DB_PASSWORD=xxxxxxxx
    - KEYCLOAK_ADMIN=admin
    - KEYCLOAK_ADMIN_PASSWORD=xxxxxxxx
    - KC_HOSTNAME_STRICT=false
    - KC_HOSTNAME_STRICT_HTTPS=false
    - PROXY_ADDRESS_FORWARDING=true
    - KC_HTTP_RELATIVE_PATH=/auth
    - KC_HTTP_ENABLED=true
    - KC_HTTP_PORT=8080
  depends_on:
    - postgres
  networks:
    - namednetwork
Voila!!
The line that @jayc mentioned - JAVA_OPTS_APPEND="-Dcom.redhat.fips=false" - was the answer. Thank you so much! This worked 100% of the time on my RHEL 7.9 box with FIPS enabled, with the Keycloak 20.0.3 container I built from the steps at https://www.keycloak.org/server/containers.
keycloak:
  image: keycloak-mybuild:20.0.3
  restart: on-failure:5
  container_name: keycloak
  command: start --optimized
  ports:
    - "8080"
  environment:
    - KC_DB=postgres
    - KC_DB_URL=jdbc:postgresql://postgres:5432/xxxxxxxx
    - KC_DB_USERNAME=xxxxxxxx
    - KC_DB_PASSWORD=xxxxxxxx
    - KEYCLOAK_ADMIN=admin
    - KEYCLOAK_ADMIN_PASSWORD=xxxxxxxx
    - KC_HOSTNAME_STRICT=false
    - KC_HOSTNAME_STRICT_HTTPS=false
    - PROXY_ADDRESS_FORWARDING=true
    - KC_HTTP_RELATIVE_PATH=/auth
    - KC_HTTP_ENABLED=true
    - KC_HTTP_PORT=8080
    - JAVA_OPTS_APPEND="-Dcom.redhat.fips=false"
  depends_on:
    - postgres
  networks:
    - namednetwork
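For anyone verifying a similar fix, a couple of quick checks along these lines can confirm the container actually came up (assuming curl is available inside the image, as it was in my build):
# Follow the startup logs until Keycloak reports it is listening
docker logs -f keycloak
# From inside the container, confirm the /auth relative path now answers
# (assumes curl exists in the image)
docker exec -it keycloak curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/auth/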

Does docker-compose support init container?

An init container is a great feature in Kubernetes, and I wonder whether docker-compose supports it. It allows me to run some commands before launching the main application.
I came across this issue https://github.com/docker/compose-cli/issues/1499 which mentions support for init containers, but I can't find any related documentation in their reference.
This was a discovery for me, but yes, it is now possible to use init containers with docker-compose since version 1.29, as can be seen in the issue you linked in your question.
Meanwhile, as I write these lines, it seems that this feature has not yet found its way into the documentation.
You can define a dependency on another container with a condition that is basically "when that other container has successfully finished its job". This leaves room to define containers that run any kind of script and exit when they are done, before another dependent container is launched.
To illustrate, I crafted an example with a pretty common scenario: spin up a db container, make sure the db is up and initialize its data prior to launching the application container.
Note: initializing the db (at least as far as the official mysql image is concerned) does not require an init container so this example is more an illustration than a rock solid typical workflow.
The complete example is available in a public github repo so I will only show the key points in this answer.
Let's start with the compose file
---
x-common-env: &cenv
  MYSQL_ROOT_PASSWORD: totopipobingo
services:
  db:
    image: mysql:8.0
    command: --default-authentication-plugin=mysql_native_password
    environment:
      <<: *cenv
  init-db:
    image: mysql:8.0
    command: /initproject.sh
    environment:
      <<: *cenv
    volumes:
      - ./initproject.sh:/initproject.sh
    depends_on:
      db:
        condition: service_started
  my_app:
    build:
      context: ./php
    environment:
      <<: *cenv
    volumes:
      - ./index.php:/var/www/html/index.php
    ports:
      - 9999:80
    depends_on:
      init-db:
        condition: service_completed_successfully
You can see I define 3 services:
The database which is the first to start
The init container which starts only once db is started. This one only runs a script (see below) that will exit once everything is initialized
The application container which will only start once the init container has successfully done its job.
The initproject.sh script run by the init-db container is very basic for this demo: it simply retries connecting to the db every 2 seconds until it succeeds or reaches a limit of 50 tries, then creates a database/table and inserts some data:
#! /usr/bin/env bash
# Test we can access the db container allowing for start
for i in {1..50}; do mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "show databases" && s=0 && break || s=$? && sleep 2; done
if [ ! $s -eq 0 ]; then exit $s; fi
# Init some stuff in db before leaving the floor to the application
mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "create database my_app"
mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "create table my_app.test (id int unsigned not null auto_increment primary key, myval varchar(255) not null)"
mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "insert into my_app.test (myval) values ('toto'), ('pipo'), ('bingo')"
The Dockerfile for the app container is trivial (adding the mysqli driver for PHP) and can be found in the example repo, as well as the PHP script that checks the init was successful when you call http://localhost:9999 in your browser.
The interesting part is to observe what's going on when launching the service with docker-compose up -d.
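For example, a sequence like this (using the service names from the compose file above) shows the init-db container run and exit with code 0 before my_app starts; the exact output format of docker-compose ps varies between versions:
# Start everything in the background
docker-compose up -d
# The init container should show "Exit 0" once initproject.sh finishes,
# while db and my_app keep running
docker-compose ps
# Inspect what the init container did
docker-compose logs init-db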
The only limit to what can be done with such a feature is probably your imagination ;) Thanks for making me discover this.

Local Vault using docker-compose

I'm having big trouble running Vault in docker-compose.
My requirements are :
running as a daemon (so it restarts when I restart my Mac)
secrets being persisted between container restarts
no human intervention between restarts (unsealing, etc.)
using a generic token
My current docker-compose
version: '2.3'
services:
  vault-dev:
    image: vault:1.2.1
    restart: always
    container_name: vault-dev
    environment:
      VAULT_DEV_ROOT_TOKEN_ID: "myroot"
      VAULT_LOCAL_CONFIG: '{"backend": {"file": {"path": "/vault/file"}}, "default_lease_ttl": "168h", "max_lease_ttl": "720h"}'
    ports:
      - "8200:8200"
    volumes:
      - ./storagedc/vault/file:/vault/file
However, when the container restarts, I get this log:
==> Vault server configuration:
Api Address: http://0.0.0.0:8200
Cgo: disabled
Cluster Address: https://0.0.0.0:8201
Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
Log Level: info
Mlock: supported: true, enabled: false
Storage: file
Version: Vault v1.2.1
Error initializing Dev mode: Vault is already initialized
Is there any recommendation on that matter?
I'm going to pseudo-code an answer to work around the problems specified, but please note that this is a massive hack and should NEVER be deployed in production, as a hard-coded master key and a single unseal key are COLOSSALLY INSECURE.
So, you want a test vault server, with persistence.
You can accomplish this, but it will need a little bit of work because of the default behavior of the vault container: if you just start it, it starts a dev-mode server, which won't allow for persistence. Just adding persistence via the environment variable won't solve the problem entirely, because it will conflict with the default start mode of the container.
So we need to replace the entrypoint script with something that does what we want it to do instead.
First we copy the script out of the container:
$ docker create --name vault vault:1.2.1
$ docker cp vault:/usr/local/bin/docker-entrypoint.sh .
$ docker rm vault
For simplicity, we're going to edit the file and mount it into the container using the docker-compose file. I'm not going to make it really functional - just enough to get it to do what's desired. The entire point here is a sample, not something that is usable in production.
My customizations all start at about line 98 - first we launch a dev-mode server in order to record the unseal key, then we terminate the dev mode server.
# Here's my customization:
if [ ! -f /vault/unseal/sealfile ]; then
    # start in dev mode, in the background, to record the unseal key
    su-exec vault vault server \
        -dev -config=/vault/config \
        -dev-root-token-id="$VAULT_DEV_ROOT_TOKEN_ID" \
        2>&1 | tee /vault/unseal/sealfile &
    while ! grep -q 'core: vault is unsealed' /vault/unseal/sealfile; do
        sleep 1
    done
    kill %1
fi
Next we check for supplemental config. This is where the extra config goes for disabling TLS, and for binding the appropriate interface.
if [ -n "$VAULT_SUPPLEMENTAL_CONFIG" ]; then
echo "$VAULT_SUPPLEMENTAL_CONFIG" > "$VAULT_CONFIG_DIR/supplemental.json"
fi
Then we launch vault in 'release' mode:
if [ "$(id -u)" = '0' ]; then
set -- su-exec vault "$#"
"$#"&
Then we get the unseal key from the sealfile:
unseal=$(sed -n 's/Unseal Key: //p' /vault/unseal/sealfile)
if [ -n "$unseal" ]; then
    while ! vault operator unseal "$unseal"; do
        sleep 1
    done
fi
We just wait for the process to terminate:
wait
exit $?
fi
There's a full gist for this on github.
Now the docker-compose.yml for doing this is slightly different from yours:
version: '2.3'
services:
  vault-dev:
    image: vault:1.2.1
    restart: always
    container_name: vault-dev
    command: [ 'vault', 'server', '-config=/vault/config' ]
    environment:
      VAULT_DEV_ROOT_TOKEN_ID: "myroot"
      VAULT_LOCAL_CONFIG: '{"backend": {"file": {"path": "/vault/file"}}, "default_lease_ttl": "168h", "max_lease_ttl": "720h"}'
      VAULT_SUPPLEMENTAL_CONFIG: '{"ui":true, "listener": {"tcp":{"address": "0.0.0.0:8200", "tls_disable": 1}}}'
      VAULT_ADDR: "http://127.0.0.1:8200"
    ports:
      - "8200:8200"
    volumes:
      - ./vault:/vault/file
      - ./unseal:/vault/unseal
      - ./docker-entrypoint.sh:/usr/local/bin/docker-entrypoint.sh
    cap_add:
      - IPC_LOCK
The command is the command to execute. This is what ends up in the "$@" & part of the script changes.
I've added VAULT_SUPPLEMENTAL_CONFIG for the non-dev run. It needs to specify the interfaces, and it needs to turn off TLS. I also added the UI, so I can access it using http://127.0.0.1:8200/ui. This is part of the changes I made to the script.
Because this is all local, for my test purposes, I'm mounting ./vault as the data directory, ./unseal as the place to record the unseal code, and ./docker-entrypoint.sh as the entrypoint script.
I can docker-compose up this and it launches a persistent vault - there are some errors in the log as it tries to unseal before the server has launched, but it works and persists across multiple docker-compose runs.
Again, I'll stress that this is completely unsuitable for any form of long-term use. You're better off using docker's own secrets engine if you're doing things like this.
I'd like to suggest a simpler solution for local development with docker-compose.
Vault is always unsealed
Vault UI is enabled and accessible at http://localhost:8200/ui/vault on your dev machine
Vault has a predefined root token which can be used by services to communicate with it
docker-compose.yml
vault:
  hostname: vault
  container_name: vault
  image: vault:1.12.0
  environment:
    VAULT_ADDR: "http://0.0.0.0:8200"
    VAULT_API_ADDR: "http://0.0.0.0:8200"
  ports:
    - "8200:8200"
  volumes:
    - ./volumes/vault/file:/vault/file:rw
  cap_add:
    - IPC_LOCK
  entrypoint: vault server -dev -dev-listen-address="0.0.0.0:8200" -dev-root-token-id="root"
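As a quick usage sketch (the secret path and values below are placeholders, not part of the setup above), any shell or service can then talk to this dev-mode Vault with the predefined root token:
# Point the CLI at the dev server and authenticate with the predefined root token
export VAULT_ADDR=http://localhost:8200
export VAULT_TOKEN=root
# Confirm the server is initialized and unsealed
vault status
# Write and read back a placeholder secret (dev mode mounts KV v2 at secret/)
vault kv put secret/myapp db_password=example
vault kv get secret/myapp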

Why Neo4J docker authentication doesn't work

I want to run a Neo4J instance through docker using docker-compose.
docker-compose.yml
version: '3'
services:
  neo4j:
    container_name: neo4j-lab
    image: neo4j:latest
    environment:
      - NEO4J_dbms_memory_pagecache_size=2G
      - NEO4J_dbms_memory_heap_maxSize=4G
      - NEO4J_dbms_memory_heap_initialSize=512M
      - NEO4J_AUTH=neo4j/changeme
    ports:
      - 7474:7474
      - 7687:7687
    volumes:
      - neo4j_data:/data
      - neo4j_conf:/conf
      - ./import:/import
volumes:
  neo4j_data:
  neo4j_conf:
Running this with docker-compose up works perfectly fine, and I can reach the login screen.
But when I enter the credentials, I get the following error in my container logs: Neo.ClientError.Security.Unauthorized The client is unauthorized due to authentication failure. - even though I am sure I am entering the right credentials (the ones used in my docker-compose file).
Furthermore,
when I set NEO4J_AUTH to none, no credentials are asked for.
when I set it to neo4j/neo4j, it says that I can't use the default password.
According to the documentation, this should be perfectly fine:
By default Neo4j requires authentication and requires you to login with neo4j/neo4j at the first connection and set a new password. You can set the password for the Docker container directly by specifying --env NEO4J_AUTH=neo4j/password in your run directive. Alternatively, you can disable authentication by specifying --env NEO4J_AUTH=none instead.
Do you have any idea what's going on?
I hope you can help me solve this!
EDIT
Docker logs output :
neo4j-lab | 2019-03-13 23:02:32.378+0000 INFO Starting...
neo4j-lab | 2019-03-13 23:02:37.796+0000 INFO Bolt enabled on 0.0.0.0:7687.
neo4j-lab | 2019-03-13 23:02:41.102+0000 INFO Started.
neo4j-lab | 2019-03-13 23:02:43.935+0000 INFO Remote interface available at http://localhost:7474/
neo4j-lab | 2019-03-13 23:02:56.105+0000 WARN The client is unauthorized due to authentication failure.
EDIT 2:
It seems that deleting the associated volume first works. The password is then changed.
However, if I docker-compose down and then docker-compose up after changing the password in my docker-compose file, the issue reappears.
So I think that when we change the password through docker-compose more than once while a volume exists, we need to remove the auth file present in the volume.
To do that:
docker volume inspect <volume_name>
You should get something like this:
[
    {
        "CreatedAt": "2019-03-14T11:17:08+01:00",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "neo4j",
            "com.docker.compose.volume": "neo4j_data"
        },
        "Mountpoint": "/data/docker/volumes/neo4j_neo4j_data/_data",
        "Name": "neo4j_neo4j_data",
        "Options": null,
        "Scope": "local"
    }
]
This will obviously be different if you did not name your container and volumes like mine (neo4j, neo4j_data).
The important part is the Mountpoint which locates the volume.
In this volume, you can delete the auth file, which is in the dbms directory.
Then restart your container and everything should be fine.
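Put together, the cleanup looks roughly like this - the volume name and mountpoint are the ones from my setup, and the auth file sits under dbms/ inside the data volume, so adjust the paths to match yours (note this deletes the stored credentials):
# Find where the data volume lives on the host (volume name from my setup)
docker volume inspect neo4j_neo4j_data
# Remove the stored credentials so NEO4J_AUTH is applied again on next start;
# the path is the Mountpoint reported above plus dbms/auth
sudo rm /data/docker/volumes/neo4j_neo4j_data/_data/dbms/auth
# Recreate the container
docker-compose down && docker-compose up -d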
Neo4j docker developer here.
The reason this is happening is that the NEO4J_AUTH environment variable doesn't set the database password, it sets the INITIAL password only.
If you're mounting a data volume with an existing database inside, then NEO4J_AUTH has no effect because that database already has a password. It sounds like that's what you're experiencing here.
The documentation around this feature was not great and I've updated it! See: Neo4j docker authentication documentation
Define the Neo4j password with docker-compose:
neo4j:
  image: 'neo4j:4.1'
  environment:
    NEO4J_AUTH: 'neo4j/your_password'
  ports:
    - "7474:7474"
  volumes:
    ...

setup drone continuous integration with github

I'm trying to set up a CI server inside a corporate network with drone (open source edition). Its author describes drone as a very simple solution, even for a programmer (as I am), though some points are not clear to me (maybe the official documentation misses them).
First, I've made a docker image for my Rails application: rails-qna.
Next, I compose the drone images:
docker-compose.yml:
version: '2'
services:
  drone-server:
    image: drone/drone:0.5
    ports:
      - 80:8000
    volumes:
      - ./drone:/var/lib/drone/
    restart: always
    environment:
      - DRONE_OPEN=true
      - DRONE_ADMIN=khataev
      - DRONE_GITHUB_CLIENT=github-client-string
      - DRONE_GITHUB_SECRET=github-secret-string
      - DRONE_SECRET=drone-secret-string
  drone-agent:
    image: drone/drone:0.5
    command: agent
    restart: always
    depends_on: [ drone-server ]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - DRONE_SERVER=ws://drone-server:8000/ws/broker
      - DRONE_SECRET=drone-secret-string
Application is registered on Github and secret/client strings are provided.
I placed .drone.yml file into my project repository:
pipeline:
  build:
    image: rails-qna
    commands:
      - bundle exec rake db:drop
      - bundle exec rake db:create
      - bundle exec rake db:migrate
      - bundle exec rspec
Unclear points:
1) While registering the OAuth application on GitHub, we should specify a Homepage URL and an authorization callback URL. Where should they point to? The drone server container? Guessing so, I specified
mycorporatedomain.com:3005
and
mycorporatedomain.com:3005/authorize
and set up port forwarding from port 3005 to port 80 of the host where the drone containers are running. Maybe I'm wrong?
2) What should I specify in the DRONE_GITHUB_URL key?
https://github.com or full path to my project repository, i.e.
https://github.com/khataev/qna?
3) What if I want to build some branch other than master? Where should I specify it? For now the drone-ready branch (the one with .drone.yml) is not the master branch - would that work?
4) Why are DRONE_GITHUB_GIT_USERNAME and DRONE_GITHUB_GIT_PASSWORD optional? How is it supposed to work if I don't specify the username and password for my GitHub account?
5) When I start the drone images with docker-compose up, I get these errors:
→ docker-compose up
Starting drone_drone-server_1
Starting drone_drone-agent_1
Attaching to drone_drone-server_1, drone_drone-agent_1
drone-server_1 | time="2017-03-04T17:00:33Z" level=fatal msg="version control system not configured"
drone-agent_1 | 1:M 04 Mar 17:00:35.208 * connecting to server ws://drone-server:8000/ws/broker
drone-agent_1 | 1:M 04 Mar 17:00:35.229 # connection failed, retry in 15s. websocket.Dial ws://drone-server:8000/ws/broker: dial tcp: lookup drone-server on 127.0.0.11:53: no such host
drone_drone-server_1 exited with code 1
drone-server_1 | time="2017-03-04T16:53:38Z" level=fatal msg="version control system not configured"
UPD
5) was solved - I forgot to specify
DRONE_GITHUB=true
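In case it helps anyone else, a rough way to confirm the variable is picked up after adding it is to recreate the server container and check its logs and environment:
# Recreate the server with the updated environment and follow its logs
docker-compose up -d --force-recreate drone-server
docker-compose logs -f drone-server
# Double-check the variable made it into the container
docker-compose exec drone-server env | grep DRONE_GITHUB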
The Homepage URL is the address of the server that drone is running on.
E.g. http://155.200.100.0
The authorization callback URL is the same address with /authorize appended.
E.g. http://155.200.100.0/authorize
You don't have to specify that. DRONE_GITHUB=true tells drone to use the GitHub URL.
You can limit a single section to a branch or the whole drone build.
Single Section:
pipeline:
  build:
    image: node:latest
    commands:
      - npm install
      - npm test
    when:
      branch: master
Whole build process:
pipeline:
  build:
    image: node:latest
    commands:
      - npm install
      - npm test
branches: master
You don't need username and password when using OAuth.
Source:
http://readme.drone.io/admin/setup-github/
http://readme.drone.io/usage/skipping-builds/
http://readme.drone.io/usage/skipping-build-steps/
UPDATE:
The documentation has moved to http://docs.drone.io/ as of Drone version 0.6.
