Send NGSIv2 data to Orion Context Broker

Let me explain the problem. I need to register a client with an Orion Context Broker. The client (OMA LWM2M) is connected to the IoT Agent, which acts as a bridge to NGSI. My problem is that when I query localhost:1026/v2/entities, the client I connected does not show up. Please take a look at my configurations of the IoT Agent and the Context Broker to see where I am going wrong. Thank you.
Orion context Broker:
docker-compose.yml
version: "3"
services:
orion:
image: fiware/orion
ports:
- "1026:1026"
depends_on:
- mongo
command: -dbhost mongo
mongo:
image: mongo:4.4
command: --nojournal
Fiware IoT Agent
config.js
/*
* Copyright 2014 Telefonica Investigación y Desarrollo, S.A.U
*
* This file is part of fiware-iotagent-lib
*
* fiware-iotagent-lib is free software: you can redistribute it and/or
* modify it under the terms of the GNU Affero General Public License as
* published by the Free Software Foundation, either version 3 of the License,
* or (at your option) any later version.
*
* fiware-iotagent-lib is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
* See the GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public
* License along with fiware-iotagent-lib.
* If not, see http://www.gnu.org/licenses/.
*
* For those usages not covered by the GNU Affero General Public License
* please contact with: [contacto@tid.es]
*/
var config = {};
config.lwm2m = {
    logLevel: 'DEBUG',
    port: 5683,
    defaultType: 'Device',
    ipProtocol: 'udp4',
    serverProtocol: 'udp4',
    /**
     * When a LWM2M client has active attributes, the IoT Agent sends an observe instruction for each one, just after
     * the client registers. This may cause an error when the client takes too long to start listening, as the
     * observe requests may not reach their destination. This timeout (ms) is used to give the client the opportunity
     * to create the listener before the server sends the requests.
     */
    delayedObservationTimeout: 50,
    formats: [
        {
            name: 'application-vnd-oma-lwm2m/text',
            value: 1541
        },
        {
            name: 'application-vnd-oma-lwm2m/tlv',
            value: 1542
        },
        {
            name: 'application-vnd-oma-lwm2m/json',
            value: 1543
        },
        {
            name: 'application-vnd-oma-lwm2m/opaque',
            value: 1544
        }
    ],
    writeFormat: 'application-vnd-oma-lwm2m/text',
    types: []
};
config.ngsi = {
    logLevel: 'DEBUG',
    timestamp: true,
    contextBroker: {
        host: 'localhost',
        port: '1026',
        ngsiVersion: 'v2'
    },
    server: {
        port: 59441
    },
    deviceRegistry: {
        //type: 'memory'
        type: 'mongodb'
    },
    mongodb: {
        host: 'localhost',
        port: '27017',
        db: 'iotagentlm2m'
        //replicaSet: ''
    },
    types: {},
    service: 'smartGondor',
    subservice: '/gardens',
    providerUrl: 'http://localhost:4041',
    deviceRegistrationDuration: 'P1Y',
    defaultType: 'Thing'
};
/**
* Configuration for secured access to instances of the Context Broker secured with a PEP Proxy.
* For the authentication mechanism to work, the authentication attribute in the configuration has to be fully
* configured, and the authentication.enabled subattribute should have the value `true`.
*
* The Username and password should be considered as sensitive data and should not be stored in plaintext.
* Either encrypt the config and decrypt when initializing the instance or use environment variables secured by
* docker secrets.
*/
// config.authentication = {
//     enabled: false,
/**
 * Type of the Identity Manager which is used when authenticating the IoT Agent.
 * Either 'oauth2' or 'keystone'
 */
//     type: 'keystone',
/**
 * Name of the additional header passed to retrieve the identity of the IoT Agent
 */
//     header: 'Authorization',
/**
 * Hostname of the Identity Manager.
 */
//     host: 'localhost',
/**
 * Port of the Identity Manager.
 */
//     port: '5000',
/**
 * URL of the Identity Manager - a combination of the above
 */
//     url: 'localhost:5000',
/**
 * KEYSTONE ONLY: Username for the IoT Agent
 * - Note this should not be stored in plaintext.
 */
//     user: 'IOTA_AUTH_USER',
/**
 * KEYSTONE ONLY: Password for the IoT Agent
 * - Note this should not be stored in plaintext.
 */
//     password: 'IOTA_AUTH_PASSWORD',
/**
 * OAUTH2 ONLY: URL path for retrieving the token
 */
//     tokenPath: '/oauth2/token',
/**
 * OAUTH2 ONLY: Flag to indicate whether or not the token needs to be periodically refreshed.
 */
//     permanentToken: true,
/**
 * OAUTH2 ONLY: ClientId for the IoT Agent
 * - Note this should not be stored in plaintext.
 */
//     clientId: 'IOTA_AUTH_CLIENT_ID',
/**
 * OAUTH2 ONLY: ClientSecret for the IoT Agent
 * - Note this should not be stored in plaintext.
 */
//     clientSecret: 'IOTA_AUTH_CLIENT_SECRET'
// };
/**
* flag indicating whether the node server will be executed in multi-core option (true) or it will be a
* single-thread one (false).
*/
// config.multiCore = true;
module.exports = config;
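Note: Orion scopes entities by tenant, and the agent above is configured with service 'smartGondor' and subservice '/gardens', so a bare GET on localhost:1026/v2/entities returns an empty list even when the registration worked. A minimal check with curl (a sketch, assuming the ports from the configuration above; /iot/devices is the agent's standard provisioning API):

# list entities inside the service/subservice the agent writes to
curl -s 'localhost:1026/v2/entities' -H 'fiware-service: smartGondor' -H 'fiware-servicepath: /gardens'
# ask the IoT Agent which devices it has registered (port from config.ngsi.server.port above)
curl -s 'localhost:59441/iot/devices' -H 'fiware-service: smartGondor' -H 'fiware-servicepath: /gardens'

If the device shows up under these headers, the problem is only the missing headers in the query, not the registration chain.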

Spring Boot Admin with Eureka and Docker Swarm

EDIT/SOLUTION:
I've got it, partly thanks to @anemyte's comment. Although the eureka.hostname property was not the issue at play (though it did warrant correction), looking more closely led me to the true cause of the problem: the network interface in use, port forwarding, and (bad) luck.
The services that I chose for this prototypical implementation were those that have port forwardings in a production setting (I must unfortunately have forgotten to add a port forwarding to the example service below, though I do not know whether that would have helped).
When a Docker Swarm service has a port forwarding, the container has an additional bridge interface in addition to the overlay interface which is used for internal container-to-container communication.
Unfortunately, the client services were choosing to register with Eureka with their bridge interface IP as the advertised IP instead of the internal swarm IP - possibly because that is what InetAddress.getLocalhost() (which is internally used by Spring Cloud) would return in that case.
This led me to erroneously believe that Spring Boot Admin could reach these services - as I externally could, when it in fact could not as the wrong IP was being advertised. Using cURL to verify this only compounded the confusion as I was using the overlay IP to check whether the services can communicate, which is not the one that was being registered with Eureka.
The (provisional) solution to the issue was setting spring.cloud.inetutils.preferred-networks to 10.0, which is the default IP address pool (more specifically: 10.0.0.0/8) for the internal swarm networks. There is also a blacklist approach using spring.cloud.inetutils.ignored-networks; however, I prefer not to use it.
In this case, client applications advertised their actual swarm overlay IP to Eureka, and SBA was able to reach them.
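For reference, a minimal sketch of that setting as it would appear in each client's application.yaml (the same value can be supplied as a command-line argument or environment variable instead):

spring:
  cloud:
    inetutils:
      # prefer the swarm overlay range (10.0.0.0/8) when choosing the IP to advertise
      preferred-networks:
        - 10.0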
I do find it a bit odd that I did not get any error messages from SBA, and will be opening an issue on their tracker. Perhaps I was simply doing something wrong.
(original question follows)
I have the following setup:
Service discovery using Eureka (with eureka.client.fetch-registry=true and eureka.instance.preferIpAddress=true)
Spring Boot Admin running in the same application as Eureka, with spring.boot.admin.context-path=/admin
Keycloak integration, such that:
SBA itself uses a service account to poll the various /actuator endpoints of my client applications.
The SBA UI itself is protected via a login page which expects an administrative login.
Locally, this setup works. When I start my eureka-server application together with client applications, I see the following correct behaviour:
Eureka running on e.g. localhost:8761
Client applications successfully registering with Eureka via IP registration (eureka.instance.preferIpAddress=true)
SBA running at e.g. localhost:8761/admin and discovering my services
localhost:8761/admin correctly redirects to my Keycloak login page, and login correctly provides a session for the SBA UI
SBA itself successfully polling the /actuator endpoints of any registered applications.
However, I have issues replicating this setup inside a Docker Swarm.
I have two Docker Services, let's say eureka-server and client-api - both are created using the same network and the containers can reach each other via this network (via e.g. curl). eureka-server correctly starts and client-api registers with Eureka right away.
Attempting to navigate to eureka_url/admin correctly shows the Keycloak login page and redirects back to the Spring Boot Admin UI after a successful login. However, no applications are registered and I cannot figure out why.
I've attempted to enable more debug/trace logging, but I see absolutely no logs; it's as if SBA is simply not fetching the Eureka registry.
Does anyone know of a way to troubleshoot this behaviour? Has anyone had this issue?
EDIT:
I'm not quite sure which settings may be pertinent to the issue, but here are some of my configuration files (as code snippets since they're not that small, I hope that's OK):
application.yaml
(Includes base eureka properties, SBA properties, and Keycloak properties for SBA)
---
eureka:
  hostname: localhost
  port: 8761
  client:
    register-with-eureka: false
    # Registry must be fetched so that Spring Boot Admin knows that there are registered applications
    fetch-registry: true
    serviceUrl:
      defaultZone: http://${eureka.hostname}:${eureka.port}/eureka/
  instance:
    lease-renewal-interval-in-seconds: 10
    lease-expiration-duration-in-seconds: 30
  environment: eureka-test-${user.name}
  server:
    enable-self-preservation: false # Intentionally disabled for non-production
spring:
  application:
    name: eureka-server
  boot:
    admin:
      client:
        prefer-ip: true
      # Since we are running in Eureka, "/" is already the root path for Eureka itself
      # Register SBA under the "/admin" path
      context-path: /admin
  cloud:
    config:
      enabled: false
  main:
    allow-bean-definition-overriding: true
keycloak:
  realm: ${realm}
  auth-server-url: ${auth_url}
  # Client ID
  resource: spring-boot-admin-automated
  # Client secret used for service account grant
  credentials:
    secret: ${client_secret}
  ssl-required: external
  autodetect-bearer-only: true
  use-resource-role-mappings: false
  token-minimum-time-to-live: 90
  principal-attribute: preferred_username
build.gradle
// Versioning / Spring parent POMs
apply from: new File(project(':buildscripts').projectDir, '/dm-versions.gradle')

configurations {
    all*.exclude module: 'spring-boot-starter-tomcat'
}

ext {
    springBootAdminVersion = '2.3.1'
    keycloakVersion = '11.0.2'
}

dependencies {
    compileOnly 'org.projectlombok:lombok'
    implementation 'org.springframework.cloud:spring-cloud-starter-netflix-eureka-server'
    implementation "de.codecentric:spring-boot-admin-starter-server:${springBootAdminVersion}"
    implementation 'org.keycloak:keycloak-spring-boot-starter'
    implementation 'org.springframework.boot:spring-boot-starter-security'
    compile "org.keycloak:keycloak-admin-client:${keycloakVersion}"
    testCompileOnly 'org.projectlombok:lombok'
}

dependencyManagement {
    imports {
        mavenBom "org.keycloak.bom:keycloak-adapter-bom:${keycloakVersion}"
    }
}
The actual application code:
package com.app.eureka;

import de.codecentric.boot.admin.server.config.EnableAdminServer;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@EnableAdminServer
@EnableEurekaServer
@SpringBootApplication
public class EurekaServer {
    public static void main(String[] args) {
        SpringApplication.run(EurekaServer.class, args);
    }
}
Keycloak configuration:
package com.app.eureka.keycloak.config;

import de.codecentric.boot.admin.server.web.client.HttpHeadersProvider;
import org.keycloak.KeycloakPrincipal;
import org.keycloak.KeycloakSecurityContext;
import org.keycloak.OAuth2Constants;
import org.keycloak.adapters.springboot.KeycloakSpringBootProperties;
import org.keycloak.adapters.springsecurity.KeycloakConfiguration;
import org.keycloak.adapters.springsecurity.authentication.KeycloakAuthenticationProvider;
import org.keycloak.adapters.springsecurity.config.KeycloakWebSecurityConfigurerAdapter;
import org.keycloak.adapters.springsecurity.token.KeycloakAuthenticationToken;
import org.keycloak.admin.client.Keycloak;
import org.keycloak.admin.client.KeycloakBuilder;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Scope;
import org.springframework.context.annotation.ScopedProxyMode;
import org.springframework.http.HttpHeaders;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.core.authority.mapping.SimpleAuthorityMapper;
import org.springframework.security.core.session.SessionRegistry;
import org.springframework.security.core.session.SessionRegistryImpl;
import org.springframework.security.web.authentication.session.RegisterSessionAuthenticationStrategy;
import org.springframework.security.web.authentication.session.SessionAuthenticationStrategy;
import org.springframework.web.context.WebApplicationContext;
import org.springframework.web.context.request.RequestContextHolder;
import org.springframework.web.context.request.ServletRequestAttributes;

import java.security.Principal;
import java.util.Objects;

@KeycloakConfiguration
@EnableConfigurationProperties(KeycloakSpringBootProperties.class)
class KeycloakConfig extends KeycloakWebSecurityConfigurerAdapter {

    private static final String X_API_KEY = System.getProperty("sba_api_key");

    @Value("${keycloak.token-minimum-time-to-live:60}")
    private int tokenMinimumTimeToLive;

    /**
     * {@link HttpHeadersProvider} used to populate the {@link HttpHeaders} for
     * accessing the state of the discovered clients.
     *
     * @param keycloak
     * @return
     */
    @Bean
    public HttpHeadersProvider keycloakBearerAuthHeaderProvider(final Keycloak keycloak) {
        return provider -> {
            String accessToken = keycloak.tokenManager().getAccessTokenString();
            HttpHeaders headers = new HttpHeaders();
            headers.add("X-Api-Key", X_API_KEY);
            headers.add("X-Authorization-Token", "keycloak-bearer " + accessToken);
            return headers;
        };
    }

    /**
     * The Keycloak Admin client that provides the service-account Access-Token
     *
     * @param props
     * @return keycloakClient the prepared admin client
     */
    @Bean
    public Keycloak keycloak(KeycloakSpringBootProperties props) {
        final String secretString = "secret";
        Keycloak keycloakAdminClient = KeycloakBuilder.builder()
                .serverUrl(props.getAuthServerUrl())
                .realm(props.getRealm())
                .grantType(OAuth2Constants.CLIENT_CREDENTIALS)
                .clientId(props.getResource())
                .clientSecret((String) props.getCredentials().get(secretString))
                .build();
        keycloakAdminClient.tokenManager().setMinTokenValidity(tokenMinimumTimeToLive);
        return keycloakAdminClient;
    }

    /**
     * Put the SBA UI behind a Keycloak-secured login page.
     *
     * @param http
     */
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        super.configure(http);
        http
            .csrf().disable()
            .authorizeRequests()
            .antMatchers("/**/*.css", "/admin/img/**", "/admin/third-party/**").permitAll()
            .antMatchers("/admin/**").hasRole("ADMIN")
            .anyRequest().permitAll();
    }

    @Autowired
    public void configureGlobal(final AuthenticationManagerBuilder auth) {
        SimpleAuthorityMapper grantedAuthorityMapper = new SimpleAuthorityMapper();
        grantedAuthorityMapper.setPrefix("ROLE_");
        grantedAuthorityMapper.setConvertToUpperCase(true);
        KeycloakAuthenticationProvider keycloakAuthenticationProvider = keycloakAuthenticationProvider();
        keycloakAuthenticationProvider.setGrantedAuthoritiesMapper(grantedAuthorityMapper);
        auth.authenticationProvider(keycloakAuthenticationProvider);
    }

    @Bean
    @Override
    protected SessionAuthenticationStrategy sessionAuthenticationStrategy() {
        return new RegisterSessionAuthenticationStrategy(buildSessionRegistry());
    }

    @Bean
    protected SessionRegistry buildSessionRegistry() {
        return new SessionRegistryImpl();
    }

    /**
     * Allows to inject a request-scoped wrapper for {@link KeycloakSecurityContext}.
     * <p>
     * Returns the {@link KeycloakSecurityContext} from the Spring
     * {@link ServletRequestAttributes}'s {@link Principal}.
     * <p>
     * The principal must support retrieval of the KeycloakSecurityContext, so at
     * this point, only {@link KeycloakPrincipal} values and
     * {@link KeycloakAuthenticationToken} are supported.
     *
     * @return the current <code>KeycloakSecurityContext</code>
     */
    @Bean
    @Scope(scopeName = WebApplicationContext.SCOPE_REQUEST, proxyMode = ScopedProxyMode.TARGET_CLASS)
    public KeycloakSecurityContext provideKeycloakSecurityContext() {
        ServletRequestAttributes attributes = (ServletRequestAttributes) RequestContextHolder.getRequestAttributes();
        Principal principal = Objects.requireNonNull(attributes).getRequest().getUserPrincipal();
        if (principal == null) {
            return null;
        }
        if (principal instanceof KeycloakAuthenticationToken) {
            principal = (Principal) ((KeycloakAuthenticationToken) principal).getPrincipal();
        }
        if (principal instanceof KeycloakPrincipal<?>) {
            return ((KeycloakPrincipal<?>) principal).getKeycloakSecurityContext();
        }
        return null;
    }
}
KeycloakConfigurationResolver
(separate class to prevent circular bean dependency that happens for some reason)
package com.app.eureka.keycloak.config;

import org.keycloak.adapters.springboot.KeycloakSpringBootConfigResolver;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class KeycloakConfigurationResolver {
    /**
     * Load Keycloak configuration from application.properties or application.yml
     *
     * @return
     */
    @Bean
    public KeycloakSpringBootConfigResolver keycloakConfigResolver() {
        return new KeycloakSpringBootConfigResolver();
    }
}
Logout controller
package com.app.eureka.keycloak.config;

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.PostMapping;

import javax.servlet.http.HttpServletRequest;

@Controller
class LogoutController {
    /**
     * Logs the current user out, preventing access to the SBA UI
     * @param request
     * @return
     * @throws Exception
     */
    @PostMapping("/admin/logout")
    public String logout(final HttpServletRequest request) throws Exception {
        request.logout();
        return "redirect:/admin";
    }
}
I unfortunately do not have a docker-compose.yaml as our deployment is done mostly through Ansible, and anonymizing those scripts is rather difficult.
The services are ultimately created as follows (using docker service create):
(some of these networks may not be relevant as this is a local swarm running on my personal node; of note are the swarm networks)
dev@ws:~$ docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
3ba4a65c319f   bridge            bridge    local
21065811cbff   docker_gwbridge   bridge    local
ti1ksbdxlouo   services          overlay   swarm
c59778b105b5   host              host      local
379lzdi0ljp4   ingress           overlay   swarm
dd92d2f75a31   none              null      local
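A quick way to confirm which subnet the services overlay actually uses, and hence what the preferred-networks prefix from the solution above should match, is docker network inspect:

dev@ws:~$ docker network inspect services -f '{{range .IPAM.Config}}{{.Subnet}} {{end}}'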
eureka-server Dockerfile:
FROM registry/image:latest
MAINTAINER "dev#com.app"
COPY eureka-server.jar /home/myuser/eureka-server.jar
USER myuser
WORKDIR /home/myuser
CMD /usr/bin/java -jar \
-Xmx523351K -Xss1M -XX:ReservedCodeCacheSize=240M \
-XX:MaxMetaspaceSize=115625K \
-Djava.security.egd=file:/dev/urandom eureka-server.jar \
--server.port=8761; sh
Eureka/SBA app Docker swarm service:
dev@ws:~$ docker service create --name eureka-server -p 8080:8761 --replicas 1 --network services --hostname eureka-server --limit-cpu 1 --limit-memory 768m eureka-server
Client applications are then started as follows:
Dockerfile
FROM registry/image:latest
MAINTAINER "dev#com.app"
COPY client-api.jar /home/myuser/client-api.jar
USER myuser
WORKDIR /home/myuser
CMD /usr/bin/java -jar \
-Xmx523351K -Xss1M -XX:ReservedCodeCacheSize=240M \
-XX:MaxMetaspaceSize=115625K \
-Djava.security.egd=file:/dev/urandom -Deureka.instance.hostname=client-api client-api.jar \
--eureka.zone=http://eureka-server:8761/eureka --server.port=0; sh
And then created as Swarm services as follows:
dev@ws:~$ docker service create --name client-api --replicas 1 --network services --hostname client-api --limit-cpu 1 --limit-memory 768m client-api
On the client side, of note are the following eureka.client settings:
eureka:
  name: ${spring.application.name}
  instance:
    leaseRenewalIntervalInSeconds: 10
    instanceId: ${spring.cloud.client.hostname}:${spring.application.name}:${spring.application.instanceId:${random.int}}
    preferIpAddress: true
  client:
    registryFetchIntervalSeconds: 5
That's all I can think of right now. The created docker services are running in the same network and can ping each other by IP as well as by hostname (cannot show output right now as I am not actively working on this at the moment, unfortunately).
In the Eureka UI I can, in fact, see my client applications registered and running - it's only SBA which does not seem to notice that there are any.
I found nothing wrong with the configuration you have presented. The only weak lead I see is eureka.hostname=localhost in application.yml. localhost and loopback IPs are two things that are better avoided with swarm. I think you should check whether it isn't something network-related.

Why does FreeRadius fail to process the accounting response from Fortigate?

I have configured a FreeRADIUS proxy (3.0.16) on Ubuntu (4.15.0-47-generic). It receives RADIUS accounting packets from another RADIUS server running on Ubuntu and forwards them to a RADIUS server running on a Fortigate.
Radius Server ---> Proxy Radius Server ---> Fortigate Radius Server
I have configured copy-acct-to-home-server to include the realm defined in proxy.conf.
proxy.conf (realm definition)
home_server myFortigate {
    type = acct
    ipaddr = <IP address of Fortigate interface running RADIUS>
    port = 1813
    secret = superSecret
}
home_server_pool myFortigatePool {
    type = fail-over
    home_server = myFortigate
}
realm myFortigateRealm {
    acct_pool = myFortigatePool
    nostrip
}
copy-acct-to-home-server entry
preacct {
    preprocess
    update control {
        Proxy-To-Realm := myFortigateRealm
    }
    suffix
}
After running freeradius -X, I also run tcpdump from a new session
tcpdump -ni eth01 port 1812 or port 1813
and get the following log
15:03:40.225570 IP RADIUS_PROXY_IP.56813 > FORTIGATE_INTERFACE_IP.1813: RADIUS, Accounting-Request (4), id: 0x31 length: 371
15:03:40.236155 IP FORTIGATE_INTERFACE_IP.1813 > RADIUS_PROXY_IP.56813: RADIUS, Accounting-Response (5), id: 0x31 length: 27
This shows that the proxy is sending the accounting request to the Fortigate RADIUS server and receiving the accounting response.
But strangely, the freeradius -X debug output shows a request timeout for the same RADIUS server on the Fortigate, and it ultimately marks the server as a zombie:
Starting proxy to home server FORTIGATE_INTERFACE_IP port 1813
(14) Proxying request to home server FORTIGATE_INTERFACE_IP port 1813 timeout 30.000000
Waking up in 0.3 seconds.
(14) Expecting proxy response no later than 29.667200 seconds from now
Waking up in 3.5 seconds.
and finally it gives up:
(25) accounting {
(25) [ok] = ok
(25) } # accounting = ok
(25) ERROR: Failed to find live home server: Cancelling proxy
(25) WARNING: No home server selected
(25) Clearing existing &reply: attributes
(25) Found Post-Proxy-Type Fail-Accounting
(25) Post-Proxy-Type sub-section not found. Ignoring.
So the situation is: the RADIUS proxy is sending accounting packets to the Fortigate RADIUS server (this can be seen in both the freeradius and Fortigate logs).
tcpdump shows that the RADIUS proxy is receiving the accounting response from the Fortigate, but for some reason the freeradius process does not recognize (or cannot read) the accounting response. It may be an interoperability issue, or I may have missed setting some flag. I would appreciate help from the experts in isolating and rectifying the issue.
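One way to narrow this down is to take the proxy out of the picture and exercise the Fortigate home server directly with radclient, which ships with FreeRADIUS. A sketch, assuming the shared secret from proxy.conf above (the attribute values are placeholders):

echo 'Acct-Status-Type = Start, Acct-Session-Id = "radclient-test-01", User-Name = "testuser"' | radclient -x FORTIGATE_INTERFACE_IP:1813 acct superSecret

If radclient prints the Accounting-Response, the reply itself parses cleanly; if it reports the response failing validation, suspect a shared-secret mismatch, since replies that fail the Response Authenticator check are silently discarded, producing exactly this timeout-despite-reply pattern.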

Hyperledger Composer - NetworkAdmin#admin has no READ access to network

I am following this tutorial: https://medium.freecodecamp.org/how-to-build-a-blockchain-network-using-hyperledger-fabric-and-composer-e06644ff801d
When I use the command:
composer network start --networkName my-network --networkVersion 0.0.1 --networkAdmin admin --networkAdminEnrollSecret adminpw --card PeerAdmin@hlfv1 --file my-network-admin.card
I successfully create the card and import it using the following command:
composer card import --file my-network-admin.card
However, the problem is using the following command:
composer network ping --card admin@my-network
I receive the following error:
transaction returned with failure: AccessException: Participant 'org.hyperledger.composer.system.NetworkAdmin#admin' does not have 'READ' access to resource 'org.hyperledger.composer.system.Network#my-network#0.0.1'
Command failed
I've looked at the documentation and tried restarting the entire process a couple of times to no avail. I even tried adding the following to my permissions.acl file:
/*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
rule Default {
    description: "Grant all access by default"
    participant: "org.hyperledger.composer.system.Participant"
    operation: ALL
    resource: "**"
    action: ALLOW
}
rule NetworkAdminUser {
    description: "Grant business network administrators full access to user resources"
    participant: "org.hyperledger.composer.system.NetworkAdmin"
    operation: ALL
    resource: "**"
    action: ALLOW
}
rule NetworkAdminSystem {
    description: "Grant business network administrators full access to system resources"
    participant: "org.hyperledger.composer.system.NetworkAdmin"
    operation: ALL
    resource: "org.hyperledger.composer.system.**"
    action: ALLOW
}
EDIT:
When I run composer card list -c admin@my-network, I get the following:
userName: admin
description:
businessNetworkName: my-network
identityId: fc63d3e4b3b3d73a2be2943a0c422e7af862207f9489fc1ce3707e8769efc99b
roles:
  - PeerAdmin
connectionProfile:
  name: hlfv1
  x-type: hlfv1
credentials: Credentials set
Command succeeded
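In case it helps others hitting the same error: this ACL failure is often caused by a stale identity cached in a previously imported card rather than by permissions.acl itself. A sketch of forcing a completely fresh card, using the card and file names from above:

composer card delete -c admin@my-network
composer card import --file my-network-admin.card
composer network ping -c admin@my-network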

HyperLedger fabric-starter-kit customisation

I have followed the basic guide for getting the HyperLedger fabric-starter-kit up and running, which works perfectly. However, I cannot figure out how to change the development directory of app.js without causing an "invalid ELF header" error:
root@104efc36f09e:/user/env# node app
module.js:355
    Module._extensions[extension](this, filename);
    ^
Error: /user/env/node_modules/grpc/src/node/extension_binary/grpc_node.node: invalid ELF header
    at Error (native)
    at Module.load (module.js:355:32)
    at Function.Module._load (module.js:310:12)
    at Module.require (module.js:365:17)
    at require (module.js:384:17)
    at Object.<anonymous> (/user/env/node_modules/grpc/src/node/src/grpc_extension.js:38:15)
    at Module._compile (module.js:460:26)
    at Object.Module._extensions..js (module.js:478:10)
    at Module.load (module.js:355:32)
    at Function.Module._load (module.js:310:12)
root@104efc36f09e:/user/env#
Dockerfile (unchanged):
FROM hyperledger/fabric-peer:latest
WORKDIR $GOPATH/src/github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02
RUN go build
WORKDIR $GOPATH/src/github.com/hyperledger/fabric/examples/sdk/node
RUN npm install hfc
docker-compose.yaml (changed volume to local workdir: ~/Documents/Work/Blockchain/env):
membersrvc:
  container_name: membersrvc
  image: hyperledger/fabric-membersrvc
  command: membersrvc
peer:
  container_name: peer
  image: hyperledger/fabric-peer
  environment:
    - CORE_PEER_ADDRESSAUTODETECT=true
    - CORE_VM_ENDPOINT=unix:///var/run/docker.sock
    - CORE_LOGGING_LEVEL=DEBUG
    - CORE_PEER_ID=vp0
    - CORE_SECURITY_ENABLED=true
    - CORE_PEER_PKI_ECA_PADDR=membersrvc:7054
    - CORE_PEER_PKI_TCA_PADDR=membersrvc:7054
    - CORE_PEER_PKI_TLSCA_PADDR=membersrvc:7054
    - CORE_PEER_VALIDATOR_CONSENSUS_PLUGIN=noops
  # this gives access to the docker host daemon to deploy chain code in network mode
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  # have the peer wait 10 sec for membersrvc to start
  # the following is to run the peer in Developer mode - also set sample DEPLOY_MODE=dev
  command: sh -c "sleep 10; peer node start --peer-chaincodedev"
  #command: sh -c "sleep 10; peer node start"
  links:
    - membersrvc
starter:
  container_name: starter
  image: hyperledger/fabric-starter-kit
  volumes:
    - ~/Documents/Work/Blockchain/env:/user/env
  environment:
    - MEMBERSRVC_ADDRESS=membersrvc:7054
    - PEER_ADDRESS=peer:7051
    - KEY_VALUE_STORE=/tmp/hl_sdk_node_key_value_store
    # set the following to 'dev' if peer running in Developer mode
    - DEPLOY_MODE=dev
    - CORE_CHAINCODE_ID_NAME=mycc
    - CORE_PEER_ADDRESS=peer:7051
  # the following command will start the chain code when this container starts and ready it for deployment by the app
  command: sh -c "sleep 20; /opt/gopath/src/github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02/chaincode_example02"
  stdin_open: true
  tty: true
  links:
    - membersrvc
    - peer
app.js (unchanged):
/*
 * A simple application utilizing the Node.js Client SDK to:
 * 1) Enroll a user
 * 2) User deploys chaincode
 * 3) User queries chaincode
 */
// "HFC" stands for "Hyperledger Fabric Client"
var hfc = require("hfc");

console.log(" **** STARTING APP.JS ****");

// get the addresses from the docker-compose environment
var PEER_ADDRESS = process.env.CORE_PEER_ADDRESS;
var MEMBERSRVC_ADDRESS = process.env.MEMBERSRVC_ADDRESS;

var chain, chaincodeID;

// Create a chain object used to interact with the chain.
// You can name it anything you want as it is only used by client.
chain = hfc.newChain("mychain");

// Initialize the place to store sensitive private key information
chain.setKeyValStore(hfc.newFileKeyValStore('/tmp/keyValStore'));

// Set the URL to membership services and to the peer
console.log("member services address = " + MEMBERSRVC_ADDRESS);
console.log("peer address = " + PEER_ADDRESS);
chain.setMemberServicesUrl("grpc://" + MEMBERSRVC_ADDRESS);
chain.addPeer("grpc://" + PEER_ADDRESS);

// The following is required when the peer is started in dev mode
// (i.e. with the '--peer-chaincodedev' option)
var mode = process.env['DEPLOY_MODE'];
console.log("DEPLOY_MODE=" + mode);
if (mode === 'dev') {
    chain.setDevMode(true);
    // Deploy will not take long as the chain should already be running
    chain.setDeployWaitTime(10);
} else {
    chain.setDevMode(false);
    // Deploy will take much longer in network mode
    chain.setDeployWaitTime(120);
}
chain.setInvokeWaitTime(10);

// Begin by enrolling the user
enroll();

// Enroll a user.
function enroll() {
    console.log("enrolling user admin ...");
    // Enroll "admin" which is preregistered in the membersrvc.yaml
    chain.enroll("admin", "Xurw3yU9zI0l", function(err, admin) {
        if (err) {
            console.log("ERROR: failed to register admin: %s", err);
            process.exit(1);
        }
        // Set this user as the chain's registrar which is authorized to register other users.
        chain.setRegistrar(admin);
        var userName = "JohnDoe";
        // registrationRequest
        var registrationRequest = {
            enrollmentID: userName,
            affiliation: "bank_a"
        };
        chain.registerAndEnroll(registrationRequest, function(error, user) {
            if (error) throw Error(" Failed to register and enroll " + userName + ": " + error);
            console.log("Enrolled %s successfully\n", userName);
            deploy(user);
        });
    });
}

// Deploy chaincode
function deploy(user) {
    console.log("deploying chaincode; please wait ...");
    // Construct the deploy request
    var deployRequest = {
        chaincodeName: process.env.CORE_CHAINCODE_ID_NAME,
        fcn: "init",
        args: ["a", "100", "b", "200"]
    };
    // where is the chain code, ignored in dev mode
    deployRequest.chaincodePath = "github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02";
    // Issue the deploy request and listen for events
    var tx = user.deploy(deployRequest);
    tx.on('complete', function(results) {
        // Deploy request completed successfully
        console.log("deploy complete; results: %j", results);
        // Set the testChaincodeID for subsequent tests
        chaincodeID = results.chaincodeID;
        invoke(user);
    });
    tx.on('error', function(error) {
        console.log("Failed to deploy chaincode: request=%j, error=%s", deployRequest, error);
        process.exit(1);
    });
}

// Query chaincode
function query(user) {
    console.log("querying chaincode ...");
    // Construct a query request
    var queryRequest = {
        chaincodeID: chaincodeID,
        fcn: "query",
        args: ["a"]
    };
    // Issue the query request and listen for events
    var tx = user.query(queryRequest);
    tx.on('complete', function(results) {
        console.log("query completed successfully; results=%j", results);
        process.exit(0);
    });
    tx.on('error', function(error) {
        console.log("Failed to query chaincode: request=%j, error=%s", queryRequest, error);
        process.exit(1);
    });
}

// Invoke chaincode
function invoke(user) {
    console.log("invoke chaincode ...");
    // Construct an invoke request
    var invokeRequest = {
        chaincodeID: chaincodeID,
        fcn: "invoke",
        args: ["a", "b", "1"]
    };
    // Issue the invoke request and listen for events
    var tx = user.invoke(invokeRequest);
    tx.on('submitted', function(results) {
        console.log("invoke submitted successfully; results=%j", results);
    });
    tx.on('complete', function(results) {
        console.log("invoke completed successfully; results=%j", results);
        query(user);
    });
    tx.on('error', function(error) {
        console.log("Failed to invoke chaincode: request=%j, error=%s", invokeRequest, error);
        process.exit(1);
    });
}
My goal is to create an authentication service using the HFC so that an Android app can invoke a transaction. Any help would be greatly appreciated.
You installed the node modules on your Mac and used them in your Linux Docker image. That is what is causing the problem.
Make sure that npm modules are built on the platform on which you execute them. Reinstall your node modules in your Linux environment by first deleting node_modules and then running npm install from inside the starter Docker image.
Please consult these questions as well,
NodeJs Google Compute Engine Invalid ELF Header when using 'gcloud' module
"invalid ELF header" when using the nodejs "ref" module on AWS Lambda
Credit to Sufiyan Ghori for pointing this out: the issue was that the node modules were installed on my host (Mac) and therefore were not compatible with the Linux Docker image I was trying to execute code within.
SOLUTION (a concrete sketch follows below):
Delete the node_modules folder from the work directory.
Run npm install hfc@0.6.x from inside the starter docker image.
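A minimal sketch of those steps, assuming the container name starter and the volume mapping from the docker-compose.yaml above:

# open a shell inside the running starter container
docker exec -it starter bash
# inside the container, rebuild the modules against Linux
cd /user/env
rm -rf node_modules
npm install hfc@0.6.x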

Can't connect Java client to MarkLogic database

I've just installed a MarkLogic NoSQL database out of the box on a Windows machine.
I wrote a simple Java client to put data into the database, but I get this error:
org.apache.http.conn.HttpHostConnectException: Connection to http://my.caci.local:8003 refused
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:158)
The MarkLogic database is started. This is the code:
DatabaseClient client = DatabaseClientFactory.newClient("localhost", 8003, "admin", "admin", Authentication.DIGEST);
XMLDocumentManager docMgr = client.newXMLDocumentManager(); BinaryDocumentManager binMgr = client.newBinaryDocumentManager();
DOMHandle handle = new DOMHandle(); for (int i = 0; i < AANT_PERSONEN; i++) {
Document document = createDocument(i);
String docId = "/zaak/" + 20;
handle.set(document);
docMgr.write(docId, handle); }
....
The MarkLogic console reports the following ports to be active on my.caci.local:
Default :: Admin : 8001 [HTTP]
Default :: App-Services : 8000 [HTTP]
Default :: HealthCheck : 7997 [HTTP]
Default :: Manage : 8002 [HTTP]
I'm new to MarkLogic, and this is my question:
- what port should I use to connect from my Java client?
In agreement with MystyxMac, I notice the console does not report a REST server on 8003.
Here's the documentation for setting up a REST server:
http://docs.marklogic.com/guide/rest-dev/intro#id_97899
You should also add users for the rest-reader, rest-writer, and rest-admin roles.
Hoping that helps,
Erik Hennum
For testing purposes you can simply switch the port you are using to 8000.
From the documentation:
When you install MarkLogic Server, a pre-configured REST API instance
is available on port 8000. This instance uses the Documents database
as the content database and the Modules database as the modules
database.
The instance on port 8000 is convenient for getting started, but you
will usually create a dedicated instance for production purposes.
http://docs.marklogic.com/guide/rest-dev/service#id_15309
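Concretely, the only change needed for a quick test is to point the client at the pre-configured instance on port 8000 (a sketch, assuming the same admin credentials as in the question):

// same client code as in the question, pointed at the pre-configured REST instance
DatabaseClient client = DatabaseClientFactory.newClient("localhost", 8000, "admin", "admin", Authentication.DIGEST);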
