I'm using an online (connected-mode) WLST script to configure the WebLogic server during the Docker image build. Basically, the Docker image build starts up WebLogic and executes the following script:
import os
import time
import getopt
import sys
import re
import jarray
from java.lang import String
from javax.management import ObjectName
# NOTE: username, password, server_url and the ds*/target* variables used below
# must be defined before this point (e.g. parsed from the command line via getopt).
# Deployment Information
domainname = os.environ.get('DOMAIN_NAME', 'base_domain')
domainhome = os.environ.get('DOMAIN_HOME', '/u01/oracle/user_projects/domains/' + domainname)
cluster_name = os.environ.get("CLUSTER_NAME", "DockerCluster")
admin_name = os.environ.get("ADMIN_NAME", "AdminServer")
connect(username,password,server_url)
edit()
print ""
print "================== DataSource ==================="
startEdit()
# Create Datasource
# ==================
cd('/')
cmo.createJDBCSystemResource(dsname)
cd('/JDBCSystemResources/' + dsname + '/JDBCResource/' + dsname)
cmo.setName(dsname)
cd('/JDBCSystemResources/' + dsname + '/JDBCResource/' + dsname)
cd('JDBCDataSourceParams/' + dsname)
set('JNDINames', jarray.array([String(dsjndiname)], String))
cd('/JDBCSystemResources/' + dsname + '/JDBCResource/' + dsname)
cd('JDBCDriverParams/' + dsname)
cmo.setDriverName(dsdriver)
cmo.setUrl(dsurl)
set('PasswordEncrypted', encrypt(dspassword))
print 'create JDBCDriverParams Properties'
cd('Properties/' + dsname)
cmo.createProperty('user')
cd('Properties/user')
cmo.setValue(dsusername)
print 'create JDBCConnectionPoolParams'
cd('/JDBCSystemResources/' + dsname + '/JDBCResource/' + dsname)
cd('JDBCConnectionPoolParams/' + dsname)
set('TestTableName','SQL SELECT 1 FROM DUAL')
# Assign
# ======
#assign('JDBCSystemResource', dsname, 'Target', admin_name)
#assign('JDBCSystemResource', dsname, 'Target', cluster_name)
cd('/SystemResources/' + dsname)
set('Targets',jarray.array([ObjectName('com.bea:Name=' + targetname + ',Type=' + targettype)], ObjectName))
# Update Domain, Close It, Exit
# ==========================
#save()
activate()
print ""
#disconnect()
exit()
The problem is that the database host doesn't exist at build time: it is the container name of another Docker container in the docker-compose environment. With this script, setting the target on the data source throws an exception because the host name cannot be resolved, so the activate() call fails, as do all the following WLST scripts that depend on the data source. Yet I don't want to set the target manually after the whole environment is up and running. How do I avoid the exception in this case?
Set the initial and the minimum capacity of the datasource to 0. This allows activation without connection testing and should avoid your error.
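For example, in the JDBCConnectionPoolParams part of the script above (a minimal sketch; InitialCapacity and MinCapacity are standard JDBCConnectionPoolParams attributes):
# create no connections at deploy time, and allow the pool to shrink to zero
cd('/JDBCSystemResources/' + dsname + '/JDBCResource/' + dsname)
cd('JDBCConnectionPoolParams/' + dsname)
set('InitialCapacity', 0)
set('MinCapacity', 0)
With an initial capacity of 0, WebLogic shouldn't open (or test) any physical connections during activate(); connections are then created on first use, by which time the database container should be resolvable.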
I am new to Fabric 2.0; I recently installed all the samples and was able to run test-network without an issue with 2 orgs. Then I followed the addOrg3 directory's instructions to add a 3rd organization and join the channel I created earlier.
Now the fun part came when I wanted to add a 4th organization. What I did was copy the addOrg3 folder and rename almost everything in each file to represent the 4th organization. I even assigned a new port for this organization. However, I am seeing the following error.
I've also added the following in Scripts/envVar.sh:
export PEER0_ORG4_CA=${PWD}/organizations/peerOrganizations/org4.example.com/peers/peer0.org4.example.com/tls/ca.crt
And added the following in envVarCLI.sh:
elif [ $ORG -eq 4 ]; then
CORE_PEER_LOCALMSPID="Org4MSP"
CORE_PEER_TLS_ROOTCERT_FILE=$PEER0_ORG4_CA
CORE_PEER_ADDRESS=peer0.org4.example.com:12051
CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/organizations/peerOrganizations/org4.example.com/users/Admin@.../msp
I have also added step1Org4.sh and step2Org4.sh, basically following addOrg3's structure.
What steps do you follow to add additional organizations? Please help.
"No such container: Org4cli"
Sorry for the formatting (I wasn't able to put it into code style), but here is the output from running the command "./addOrg4.sh up":
Add Org4 to channel 'mychannel' with '10' seconds and CLI delay of '3' seconds and using database 'leveldb'
Desktop/blockchain/BSI/fabric-samples/test-network/addOrg4/../../bin/cryptogen
##########################################################
##### Generate certificates using cryptogen tool #########
##########################################################
##########################################################
############ Create Org4 Identities ######################
##########################################################
+ cryptogen generate --config=org4-crypto.yaml --output=../organizations
org4.example.com
+ res=0
+ set +x
Generate CCP files for Org4
Desktop/blockchain/BSI/fabric-samples/test-network/addOrg4/../../bin/configtxgen
##########################################################
####### Generating Org4 organization definition #########
##########################################################
+ configtxgen -printOrg Org4MSP
2020-05-29 13:33:04.609 EDT [common.tools.configtxgen] main -> INFO 001 Loading configuration
2020-05-29 13:33:04.617 EDT [common.tools.configtxgen.localconfig] LoadTopLevel -> INFO 002 Loaded configuration: /Desktop/blockchain/BSI/fabric-samples/test-network/addOrg4/configtx.yaml
+ res=0
+ set +x
###############################################################
####### Generate and submit config tx to add Org4 #############
###############################################################
Error: No such container: Org4cli
ERROR !!!! Unable to create config tx
Your addOrg4.sh has a condition check like this:
CONTAINER_IDS=$(docker ps -a | awk '($2 ~ /fabric-tools/) {print $1}')
if [ -z "$CONTAINER_IDS" -o "$CONTAINER_IDS" == " " ]; then
echo "Bringing up network"
Org4Up
fi
If you have already run addOrg3.sh up, CONTAINER_IDS will always have a value (for example 51b4ad60d812, the container ID of Org3cli), so the Org4Up function will never be called. A simple fix is to comment out the check like this:
# CONTAINER_IDS=$(docker ps -a | awk '($2 ~ /fabric-tools/) {print $1}')
# if [ -z "$CONTAINER_IDS" -o "$CONTAINER_IDS" == " " ]; then
echo "Bringing up network"
Org4Up
# fi
This will bring up the Org4cli container you are missing.
First check whether the container is up; if it is, then the CLI in which the command is executed is probably not bootstrapped with the Org4 details.
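For example (a standard Docker command; the container name comes from the error message above):
docker ps -a --filter "name=Org4cli"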
I have added a 4th organization to a three-org Hyperledger Fabric network. First, you have to create the Org4 artifacts (the crypto YAML and the Org4 Docker file, including Org4cli) and then follow the manual, step-by-step process for adding a new organization from the official documentation:
https://hyperledger-fabric.readthedocs.io/en/release-2.0/channel_update_tutorial.html
Skip the process of editing the scripts (step1Org3.sh, ...) because the workflow for adding a 4th (or any new) org is slightly different, so you would spend a lot of time just modifying the scripts.
I will write an article on adding a new (4th) org on Medium and will paste the link here too.
I run cmd.exe to move a file with Administrator rights:
ThisParams := '/K move ' + '"' + ThisSourceFile + '"' + ' ' + '"' + ATargetFile + '"';
Winapi.ShellAPI.ShellExecute(0, 'runas', 'cmd.exe', PChar(ThisParams), '', Winapi.Windows.SW_HIDE);
However, after execution the cmd.exe process (although invisible) remains active in memory and stays visible in Task Manager.
How can cmd.exe, in this case, be automatically closed after execution?
As documented, /K makes the command interpreter continue running after executing the passed command. You should instead use:
/c Carries out the command specified by String and then stops.
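Applied to the code from the question, that is just replacing /K with /C:
// cmd.exe now terminates as soon as the move command finishes
ThisParams := '/C move ' + '"' + ThisSourceFile + '"' + ' ' + '"' + ATargetFile + '"';
Winapi.ShellAPI.ShellExecute(0, 'runas', 'cmd.exe', PChar(ThisParams), '', Winapi.Windows.SW_HIDE);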
I have the following Apache Beam pipeline:
package ch.mycompany.bb8;
import ch.mycompany.bb8.transforms.LogRecords;
import java.io.File;
import java.io.IOException;
import org.apache.avro.Schema;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.PipelineResult;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.ParDo;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class Bb8Pipeline {
private static final Logger LOG = LoggerFactory.getLogger(Bb8Pipeline.class);
/**
* Runs the pipeline with the supplied options.
*
* @param options The execution parameters to the pipeline.
* @return The result of the pipeline execution.
*/
public static PipelineResult run(CustomOptions options) {
Pipeline pipeline = Pipeline.create(options);
String schemaJson = "{"
+ "\"type\": \"record\","
+ "\"namespace\": \"com.google.cloud.pso\","
+ "\"name\": \"User\","
+ "\"fields\": ["
+ "{"
+ "\"name\": \"name\","
+ "\"type\": \"string\""
+ "},"
+ "{"
+ "\"name\": \"surname\","
+ "\"type\": \"string\""
+ "},"
+ "{"
+ "\"name\": \"age\","
+ "\"type\": \"int\""
+ "},"
+ "{"
+ "\"name\": \"retired\","
+ "\"type\": \"boolean\""
+ "}"
+ "]"
+ "}";
Schema avroSchema = new Schema.Parser().parse(schemaJson);
LOG.info(avroSchema.toString());
pipeline.apply("Read PubSub record strings",
PubsubIO.readAvroGenericRecords(avroSchema)
.fromSubscription(options.getInputSubscription()))
.apply("Simply log records", ParDo.of(new LogRecords()))
.apply("Write PubSub records", PubsubIO.writeStrings().to(options.getOutputTopic()));
return pipeline.run();
}
/**
* Main entry point for executing the pipeline.
*
* @param args The command-line arguments to the pipeline.
*/
public static void main(String[] args) {
CustomOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().as(CustomOptions.class);
options.setStreaming(true);
run(options);
}
}
I run the pipeline using Maven as follows:
mvn compile exec:java \
-Dexec.mainClass=ch.mycompany.bb8.Bb8Pipeline \
-Dexec.args="--project=t2-prod \
--stagingLocation=gs://bb-8-staging/staging/ \
--tempLocation=gs://bb-8-staging/staging/ \
--runner=DataflowRunner \
--region=europe-west1 \
--jobName=bb-8-avro-test \
--outputTopic=projects/t2-prod/topics/bb-8-output \
--inputSubscription=projects/t2-prod/subscriptions/bb-8-ingest \
--maxNumWorkers=1"
And I get the following null pointer exception:
INFO: {"type":"record","name":"User","namespace":"com.google.cloud.pso","fields":[{"name":"name","type":"string"},{"name":"surname","type":"string"},{"name":"age","type":"int"},{"name":"retired","type":"boolean"}]}
[WARNING]
java.lang.NullPointerException
at java.util.concurrent.ConcurrentHashMap.get (ConcurrentHashMap.java:936)
at java.util.concurrent.ConcurrentHashMap.containsKey (ConcurrentHashMap.java:964)
at org.apache.avro.LogicalTypes.fromSchemaImpl (LogicalTypes.java:73)
at org.apache.avro.LogicalTypes.fromSchema (LogicalTypes.java:47)
at org.apache.beam.sdk.schemas.utils.AvroUtils.toFieldType (AvroUtils.java:673)
at org.apache.beam.sdk.schemas.utils.AvroUtils.toBeamField (AvroUtils.java:290)
at org.apache.beam.sdk.schemas.utils.AvroUtils.toBeamSchema (AvroUtils.java:313)
at org.apache.beam.sdk.schemas.utils.AvroUtils.getSchema (AvroUtils.java:415)
at org.apache.beam.sdk.io.gcp.pubsub.PubsubIO.readAvroGenericRecords (PubsubIO.java:592)
at ch.mycompany.bb8.Bb8Pipeline.run (Bb8Pipeline.java:68)
at ch.mycompany.bb8.Bb8Pipeline.main (Bb8Pipeline.java:86)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:498)
at org.codehaus.mojo.exec.ExecJavaMojo$1.run (ExecJavaMojo.java:282)
at java.lang.Thread.run (Thread.java:748)
As seen in the stack trace above, the schema is logged as expected and so the schema isn't null.
Does anyone know how to fix this error, or how I can debug further?
mvn -version
Apache Maven 3.6.0 (97c98ec64a1fdfee7767ce5ffb20918da4f719f3; 2018-10-24T20:41:47+02:00)
Maven home: /opt/apache-maven
Java version: 1.8.0_191, vendor: Oracle Corporation, runtime: /usr/lib/jvm/java-8-oracle/jre
Default locale: en_ZA, platform encoding: UTF-8
OS name: "linux", version: "4.15.0-88-generic", arch: "amd64", family: "unix"
Beam version 2.19.0
org.apache.avro version 1.8.0
This appears to be a dependency-conflict issue:
Beam 2.19.0 depends on Avro 1.8.2 (link), which has the correct implementation (see this line) and thus will not cause the problem.
But you mentioned that you use Avro 1.8.0, which has the incorrect implementation (see this line) that may throw the NullPointerException.
So an easy fix for this problem is to bump the Avro version you use to 1.8.2.
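For example, assuming Avro is declared as a direct dependency in your pom.xml, pinning it to the version Beam 2.19.0 expects would look like this:
<dependency>
    <groupId>org.apache.avro</groupId>
    <artifactId>avro</artifactId>
    <version>1.8.2</version>
</dependency>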
I use the following command to merge a single changeset from the source to the target branch:
result = BatchCommand(@"tf merge /version:" + chgnumber + "~" + chgnumber + @" """ + Source + @""" """ + Target + @""" /recursive /login:" + UID + "," + PWD + "", SourceTar[2]);
BatchCommand is another method that executes the command in cmd within my workspace SourceTar[2].
In some cases I get an error saying that I need to overwrite files. How can I do this automatically (overwrite the files)?
Should I use /force for that? It definitely will resolve the overwrite conflict, but will it also resolve other conflicts? (I don't want that.)
I only want to overwrite files if that error occurs; other conflicts are resolved programmatically. Any suggestion would be helpful.
You need to work with the tf resolve command to resolve conflicts. Your commands can be similar to:
tf merge $/TeamProjectRoot/Branches/Source $/TeamProjectRoot/Branches/Target
tf resolve $/TeamProjectRoot/Branches/Target /r /i /auto:TakeTheirs
/auto:TakeTheirs option accepts the changes from the source of the merge and overwrites the changes in the target.
/auto:KeepYours option discards the changes from the source of the merge and leaves the target unchanged.
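Applied to the BatchCommand helper from the question, a follow-up resolve call could look something like this (a sketch; it assumes BatchCommand runs tf resolve the same way it runs tf merge):
// run after the merge when the overwrite error occurs; takes the source's changes for the remaining conflicts
result = BatchCommand(@"tf resolve """ + Target + @""" /recursive /auto:TakeTheirs /login:" + UID + "," + PWD, SourceTar[2]);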
Background:
I'm writing some scripts to output the differences between two files into a file. Currently I'm using the Linux command diff -u.
Is there a way in Ant to diff files?
That way I could use Groovy + Ant + diff without invoking the local command.
No, there is no diff task in Ant.
You could just grab something like java-diff-utils and write your own though (if you want to avoid the system diff command):
@Grab('com.googlecode.java-diff-utils:diffutils:1.2.1')
import difflib.*
def fileAContents = '''Line 1
|Line 2
|Line 3'''.stripMargin().split( '\n' ).toList()
def fileBContents = '''Line 1
|Line Two
|Line 3'''.stripMargin().split( '\n' ).toList()
DiffUtils.diff( fileAContents, fileBContents ).deltas.each {
println it
}
which prints:
[ChangeDelta, position: 1, lines: [Line 2] to [Line Two]]
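Since the goal is diff -u-style output written to a file, the same library can also generate a unified diff (a sketch; the file names a.txt, b.txt and diff.out are placeholders):
@Grab('com.googlecode.java-diff-utils:diffutils:1.2.1')
import difflib.*

def originalLines = new File('a.txt').readLines()
def revisedLines = new File('b.txt').readLines()

// build the patch, then render it with 3 lines of context, like `diff -u`
def patch = DiffUtils.diff(originalLines, revisedLines)
def unified = DiffUtils.generateUnifiedDiff('a.txt', 'b.txt', originalLines, patch, 3)

new File('diff.out').text = unified.join('\n')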