oramds unknown protocol error - business-process-management

I have the code below to get data from the BPM engine, but I came across the error shown below. Can someone share some working code or help me resolve the issue?
Exception in thread "main" java.lang.ExceptionInInitializerError
at oracle.integration.platform.blocks.FabricConfigManager.getMetadataManager(FabricConfigManager.java:205)
at oracle.integration.platform.blocks.FabricConfigManager.loadConfigObject(FabricConfigManager.java:620)
at oracle.tip.pc.services.identity.config.ISConfiguration.init(ISConfiguration.java:170)
at oracle.tip.pc.services.identity.config.ISConfiguration.<clinit>(ISConfiguration.java:130)
at bpm.BpmTester.main(BpmTester.java:52)
Caused by: oracle.fabric.common.FabricException: oracle.fabric.common.FabricException: java.net.MalformedURLException: unknown protocol: oramds: unknown protocol: oramds: java.net.MalformedURLException: unknown protocol: oramds: unknown protocol: oramds
at oracle.fabric.common.FabricMetadataManagerFactory.createMetadataManager(FabricMetadataManagerFactory.java:217)
at oracle.integration.platform.blocks.FabricConfigManager$MDMHolder.<clinit>(FabricConfigManager.java:200)
... 5 more
Caused by: oracle.fabric.common.FabricException: java.net.MalformedURLException: unknown protocol: oramds: unknown protocol: oramds
at oracle.integration.platform.common.MDSMetadataManagerImpl.<init>(MDSMetadataManagerImpl.java:171)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at java.lang.Class.newInstance0(Class.java:355)
at java.lang.Class.newInstance(Class.java:308)
at oracle.fabric.common.FabricMetadataManagerFactory.createMetadataManager(FabricMetadataManagerFactory.java:213)
... 6 more
Caused by: java.net.MalformedURLException: unknown protocol: oramds
at java.net.URL.<init>(URL.java:574)
at java.net.URL.<init>(URL.java:464)
at java.net.URL.<init>(URL.java:413)
at oracle.integration.platform.common.MDSMetadataManagerImpl.<init>(MDSMetadataManagerImpl.java:138)
... 13 more
Java code:
IWorkflowServiceClient wfSvcClient;
try{
Map<IWorkflowServiceClientConstants.CONNECTION_PROPERTY, String> properties = new HashMap<IWorkflowServiceClientConstants.CONNECTION_PROPERTY, String>();
properties.put(IWorkflowServiceClientConstants.CONNECTION_PROPERTY.SOAP_END_POINT_ROOT, "http://hostname:port");
properties.put(IWorkflowServiceClientConstants.CONNECTION_PROPERTY.SECURITY_POLICY_URI, "oracle/wss10_saml_token_client_policy");
properties.put(IWorkflowServiceClientConstants.CONNECTION_PROPERTY.MANAGEMENT_POLICY_URI, "oracle/log_policy");
wfSvcClient=WorkflowServiceClientFactory.getWorkflowServiceClient(WorkflowServiceClientFactory.SOAP_CLIENT,properties,null);
IWorkflowContext wfCtx = wfSvcClient.getTaskQueryService().authenticate(userId, password.toCharArray(), oracle.tip.pc.services.identity.config.ISConfiguration.getDefaultRealmName());
IWorkflowContext adminCtx = wfSvcClient.getTaskQueryService().authenticate(adminUserId, pwd,
oracle.tip.pc.services.identity.config.ISConfiguration.getDefaultRealmName(),
userId);
ITaskQueryService querySvc=wfSvcClient.getTaskQueryService();
Predicate.enableXMLSerialization(true);
// Build the predicate
Predicate statePredicate = new Predicate(TableConstants.WFTASK_STATE_COLUMN,
Predicate.OP_NEQ,
IWorkflowConstants.TASK_STATE_ASSIGNED);
statePredicate.addClause(Predicate.AND,
TableConstants.WFTASK_NUMBERATTRIBUTE1_COLUMN,
Predicate.OP_IS_NULL,
(Object) null); // OP_IS_NULL takes no comparison value; the original "nullParam" was never declared
Predicate datePredicate = new Predicate(TableConstants.WFTASK_ENDDATE_COLUMN,
Predicate.OP_ON,
new Date());
Predicate predicate = new Predicate(statePredicate, Predicate.AND, datePredicate);
// Create the ordering
Ordering ordering = new Ordering(TableConstants.WFTASK_TITLE_COLUMN, true, true);
ordering.addClause(TableConstants.WFTASK_PRIORITY_COLUMN, true, true);
// List of display columns
// For those columns that are not specified here, the queried Task object will not hold any value.
// For example: If TITLE is not specified, task.getTitle() will return null
// For the most commonly used columns, see the TableConstants class used above
// Note: TASKID is fetched by default. So there is no need to explicitly specify it.
List queryColumns = new ArrayList();
queryColumns.add("TASKNUMBER");
queryColumns.add("TITLE");
queryColumns.add("PRIORITY");
queryColumns.add("STATE");
queryColumns.add("ENDDATE");
queryColumns.add("NUMBERATTRIBUTE1");
queryColumns.add("TEXTATTRIBUTE1");
// List of optional info
// Any optionalInfo specified can be fetched from the Task object
// For example: if you have specified "CustomActions", you can retrieve
// it using task.getSystemAttributes().getCustomActions();
// "Actions" (All Actions) - task.getSystemAttributes().getSystemActions()
// "GroupActions" (Only group Actions: Actions that can be permoded by the user as a member of a group)
// - task.getSystemAttributes().getSystemActions()
// "ShortHistory" - task`enter code here`.getSystemAttributes().getShortHistory()
List optionalInfo = new ArrayList();
optionalInfo.add("Actions");
//optionalInfo.add("GroupActions");
//optionalInfo.add("CustomActions");
//optionalInfo.add("ShortHistory");
// The following is reserved for future use.
// If you need them, please use getTaskDetailsById (or) getTaskDetailsByNumber,
// which will fetch all information related to a task, which includes these
//optionalInfo.add("Attachments");
//optionalInfo.add("Comments");
//optionalInfo.add("Payload");
List tasksList = querySvc.queryTasks(wfCtx,
queryColumns,
optionalInfo,
ITaskQueryService.ASSIGNMENT_FILTER_MY_AND_GROUP,
keyword,
predicate,
ordering,
0,0); // No Paging
// How to use paging:
// 1. If you need to dynamically calculate paging size (or) to display/find
// out the number of pages, the user has to scroll (Like page X of Y)
// Call queryTasks to find out the number of tasks it returns. Using this
// calculate your paging size (The number of tasks you want in a page)
// Call queryTasks successively varying the startRow and endRow params.
// For example: If the total number of tasks is 30 and you want a paging size
// of 10, you can call with (startRow, endRow): (1, 10) (11, 20) (21, 30)
// 2. If you have fixed paging size, just keep calling queryTasks successively with
// the paging size (If your paging size is 10, you can call with (startRow, endRow):
// (1, 10) (11, 20) (21, 30) (31, 40)..... until the number of tasks returned is
// less than your paging size (or) there are no more tasks returned
if (tasksList != null) { // There are tasks
System.out.println("Total number of tasks: " + tasksList.size());
System.out.println("Tasks List: ");
Task task = null;
for (int i = 0; i < tasksList.size(); i++) {
task = (Task) tasksList.get(i);
System.out.println("Task Number: " + task.getSystemAttributes().getTaskNumber());
System.out.println("Task Id: " + task.getSystemAttributes().getTaskId());
System.out.println("Title: " + task.getTitle());
System.out.println("Priority: " + task.getPriority());
System.out.println("State: " + task.getSystemAttributes().getState());
System.out.println();
// Retrieve any Optional Info specified
// Use task service, to perform operations on the task
}
}
}
catch(WorkflowException e){
e.printStackTrace();
}
catch(BPMConfigException e){
System.out.print("hello");
}

oramds is an Oracle SOA Suite-specific protocol (it resolves artifacts stored in the MDS repository). Are you developing a client that runs outside of the SOA Suite? Then you'll have to get the referenced WSDL/XSD and make sure they look OK in some other tool, such as Eclipse or SoapUI.
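In the code above, the oramds: lookup is actually triggered by the static initializer of ISConfiguration, which is only called to obtain the default realm name and tries to load the identity-service configuration from MDS. Below is a minimal sketch of a standalone client that sidesteps that call by passing the realm name as a literal; note that jazn.com is just the usual WebLogic default and is an assumption here, so check your own identity configuration for the real value.

import java.util.HashMap;
import java.util.Map;
import oracle.bpel.services.workflow.client.IWorkflowServiceClient;
import oracle.bpel.services.workflow.client.IWorkflowServiceClientConstants;
import oracle.bpel.services.workflow.client.WorkflowServiceClientFactory;
import oracle.bpel.services.workflow.verification.IWorkflowContext;

public class StandaloneBpmClient {
    public static void main(String[] args) throws Exception {
        Map<IWorkflowServiceClientConstants.CONNECTION_PROPERTY, String> properties =
                new HashMap<IWorkflowServiceClientConstants.CONNECTION_PROPERTY, String>();
        // Replace with your own server host and port
        properties.put(IWorkflowServiceClientConstants.CONNECTION_PROPERTY.SOAP_END_POINT_ROOT,
                "http://hostname:port");
        IWorkflowServiceClient client = WorkflowServiceClientFactory.getWorkflowServiceClient(
                WorkflowServiceClientFactory.SOAP_CLIENT, properties, null);

        // Passing the realm name directly avoids ISConfiguration, whose static
        // initializer is what tries to read its configuration over oramds:
        IWorkflowContext ctx = client.getTaskQueryService()
                .authenticate("weblogic", "password".toCharArray(), "jazn.com"); // realm name is an assumption
        System.out.println("Authenticated, token: " + ctx.getToken());
    }
}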

Related

How to apply deduplication for an array returned in Zapier Code output

I have a Zapier Code block that fetches a JSON array and then preprocesses the data. I cannot use a Zapier Webhook with polling, because I need to process the data a bit.
Zapier Webhook offers a deduplication feature, keyed on an id parameter associated with the items returned in the array from the URL endpoint. How can I achieve the same for Zapier Code? Currently, my zap tries to process and trigger on the same data twice. This leads to an error where Zapier tries to send out the same tweet twice every time the Code step is triggered.
Here is mock data returned by my Code:
output = [{id: 1, name: "foo"}, {id: 2, name: "bar"}]
Currently, without deduplication, I am getting this email and having my zap disabled:
Your Zap named XXX was just stopped. This happened because our systems detected this Zap posted a duplicate tweet, which is against Twitter's Terms Of Service.
You can use Storage by Zapier to achieve this. The ideal flow will be:
Trigger
Storage by Zapier [Get Value (use storage key = lastItemId)]
Code by Zapier (filter the array and return only those records whose id is greater than lastItemId)
Storage by Zapier (Set Value): update lastItemId with the last item processed by the Code by Zapier step
You can also use StoreClient in place of Storage by Zapier, but always update the existing key lastItemId, compare each record's id with lastItemId, and at the end update the StoreClient key (lastItemId).
Based on the answer from Kishor Patidar, here is my code. I am adding the example code because it took some time to figure out. Specifically, in my case, the items cannot be processed in order of appearance (there are no running-counter primary keys), and there are also limitations on how far in the future Zapier can schedule actions (you can delay by only up to one month).
The store also has a limitation of 500 keys.
// We need store for deduplication
// https://zapier.com/help/create/code-webhooks/store-data-from-code-steps-with-storeclient
// https://www.uuidgenerator.net/
var store = StoreClient('xxx');
// Where do we get our JSON data
const url = "https://xxx";
// Call FB public backend to get scheduled battles
const resp = await fetch(url);
const data = await resp.json();
let processed = [];
for(let entry of data.entries) {
console.log("Processing entry", entry.id);
// Filter out events with a non-wanted prize type
if(entry.prizes[0].type != "mytype") {
continue;
}
// Zapier can only delay tweets for one month
// As the code fires every 30 minutes,
// we are only interested in scheduling tweets that happen
// very soon
const when = Date.parse(entry.startDate);
const now = Date.now();
if(!when) {
throw new Error("startDate missing");
}
if(when > now + 24 * 3600 * 1000) {
// Times are in milliseconds; this event is more than 24h away, so skip it for now
console.log("Too soon to schedule", entry.id, entry.startDate, now);
continue;
} else {
console.log("Starting to schedule", entry.id, entry.startDate, now);
}
const key = "myprefix_" + entry.id;
// Do manual deduplication
// https://stackoverflow.com/questions/64893057/how-to-apply-deduplication-for-an-array-returned-in-zapier-code-output
const existing = await store.get(key);
if(existing) {
// Already processed
console.log("Entry already processed", entry.id);
continue;
}
// Calculate some parameters on entry based on nested arrays
// and such
entry.startTimeFormat = "HH:mm";
// Generate URL for the tweet
entry.signUpURL = `https://xxx/${entry.id}`;
processed.push(entry);
// Do not tweet this entry twice,
// by setting a marker flag for it in store
await store.set(key, "deduplicated");
}
output = processed;

HTTP POST to Google Forms or Alternative

I have a Google Form set up that emails me upon a manual submission when somebody fills it out (new lead) and transfers the information to a Google spreadsheet. Easy enough to figure that out.
However, now I'm trying to figure out how to send the same information contained within a URL string and automatically POST that information to the form, or find a company that offers that ability via an API or other means. So far I've tested out JotForm and a few others. The information passed along fine, but it doesn't auto-populate the fields. I assume that's because it doesn't know that x = y, due to the fields being named differently. I've found a ton of documentation about pre-populating the forms, but not much about filling out a form every time a new POST URL is generated.
The URL looks like the following:
VARhttps://script.google.com/macros/s/XXXXXXXXXXXXXXXX/exec?/--A--first_name--B--/--A--last_name--B--/--A--address1--B--/--A--city--B--/--A--state--B--/--A--postal_code--B--/--A--phone_number--B--/--A--date_of_birth--B--/--A--email--B--
Information passed is as follows
https://websitehere.com/Pamela/Urne/123+Test+Street/Henderson/TX/75652/281XXXXXX/1974-01-01/test0101cw#test.com
The script I'm testing out
// original from: http://mashe.hawksey.info/2014/07/google-sheets-as-a-database-insert-with-apps-script-using-postget-methods-with-ajax-example/
// original gist: https://gist.github.com/willpatera/ee41ae374d3c9839c2d6
function doGet(e){
return handleResponse(e);
}
// Enter sheet name where data is to be written below
var SHEET_NAME = "Sheet1";
var SCRIPT_PROP = PropertiesService.getScriptProperties(); // new property service
function handleResponse(e) {
// shortly after my original solution Google announced the LockService[1]
// this prevents concurrent access overwriting data
// [1] http://googleappsdeveloper.blogspot.co.uk/2011/10/concurrency-and-google-apps-script.html
// we want a public lock, one that locks for all invocations
var lock = LockService.getPublicLock();
lock.waitLock(30000); // wait 30 seconds before conceding defeat.
try {
// next set where we write the data - you could write to multiple/alternate destinations
var doc = SpreadsheetApp.openById(SCRIPT_PROP.getProperty("key"));
var sheet = doc.getSheetByName(SHEET_NAME);
// we'll assume header is in row 1 but you can override with header_row in GET/POST data
var headRow = e.parameter.header_row || 1;
var headers = sheet.getRange(1, 1, 1, sheet.getLastColumn()).getValues()[0];
var nextRow = sheet.getLastRow()+1; // get next row
var row = [];
// loop through the header columns
for (i in headers){
if (headers[i] == "Timestamp"){ // special case if you include a 'Timestamp' column
row.push(new Date());
} else { // else use header name to get data
row.push(e.parameter[headers[i]]);
}
}
// more efficient to set values as [][] array than individually
sheet.getRange(nextRow, 1, 1, row.length).setValues([row]);
// return json success results
return ContentService
.createTextOutput(JSON.stringify({"result":"success", "row": nextRow}))
.setMimeType(ContentService.MimeType.JSON);
} catch(e){
// if error return this
return ContentService
.createTextOutput(JSON.stringify({"result":"error", "error": e}))
.setMimeType(ContentService.MimeType.JSON);
} finally { //release lock
lock.releaseLock();
}
}
function setup() {
var doc = SpreadsheetApp.getActiveSpreadsheet();
SCRIPT_PROP.setProperty("key", doc.getId());
}
I get a success message after accessing the URL, but all the information listed in the spreadsheet is "Undefined."
That's as far as I've gotten. If somebody knows an easier solution or can point me in the right direction, I'd appreciate it.
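For what it's worth, handleResponse() above fills each column by looking up e.parameter[headerName], so the web app only populates a row when the request carries query parameters whose names match the sheet's header row exactly. Here is a rough sketch of such a request from Java, where the /exec URL and the header names first_name, last_name, etc. are assumptions based on the template above:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class SheetWebAppClient {
    public static void main(String[] args) throws Exception {
        // Deployed Apps Script web-app URL (placeholder)
        String base = "https://script.google.com/macros/s/XXXXXXXXXXXXXXXX/exec";

        // Parameter names must match the header row of the sheet exactly,
        // because handleResponse() reads e.parameter[headerName]
        String query = "first_name=" + URLEncoder.encode("Pamela", "UTF-8")
                + "&last_name=" + URLEncoder.encode("Urne", "UTF-8")
                + "&address1=" + URLEncoder.encode("123 Test Street", "UTF-8")
                + "&city=" + URLEncoder.encode("Henderson", "UTF-8")
                + "&state=" + URLEncoder.encode("TX", "UTF-8")
                + "&postal_code=" + URLEncoder.encode("75652", "UTF-8");

        HttpURLConnection conn = (HttpURLConnection) new URL(base + "?" + query).openConnection();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // expect {"result":"success","row":N}
            }
        }
    }
}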

How do I get a continuation token for a bulk INSERT on Azure Cosmos DB?

I want to upload a CSV file that represents 10k documents to be added to my Cosmos DB collection in a manner that's fast and atomic. I have a stored procedure like the following pseudo-code:
function createDocsFromCSV(csv_text) {
function parse(txt) { /* ... parsing code here ... */ }
var collection = getContext().getCollection();
var response = getContext().getResponse();
var docs_to_create = parse(csv_text);
for(var ii=0; ii<docs_to_create.length; ii++) {
var accepted = collection.createDocument(collection.getSelfLink(),
docs_to_create[ii],
function(err, doc_created) {
if(err) throw new Error('Error' + err.message);
});
if(!accepted) {
throw new Error('Timed out creating document ' + ii);
}
}
}
When I run it, the stored procedure creates about 1200 documents before timing out (and therefore rolling back and not creating any documents).
Previously I had success updating (instead of creating) thousands of documents in a stored procedure using continuation tokens and this answer as guidance: https://stackoverflow.com/a/34761098/277504. But after searching documentation (e.g. https://azure.github.io/azure-documentdb-js-server/Collection.html) I don't see a way to get continuation tokens from creating documents like I do for querying documents.
Is there a way to take advantage of stored procedures for bulk document creation?
It’s important to note that stored procedures have bounded execution: all operations must complete within the server-specified request timeout. If an operation does not complete within that time limit, the transaction is automatically rolled back.
To make the time limit easier to handle, all CRUD (Create, Read, Update, and Delete) operations return a Boolean value indicating whether the operation will complete. This Boolean can be used as a signal to wrap up execution and to implement a continuation-based model that resumes execution later (this is illustrated in the code sample below). For more details, please refer to the docs.
The bulk-insert stored procedure below implements the continuation model by returning the number of documents successfully created.
pseudo-code:
function createDocsFromCSV(csv_text, count) {
    function parse(txt) { /* ... parsing code here ... */ }
    var collection = getContext().getCollection();
    var response = getContext().getResponse();
    var docs_to_create = parse(csv_text);
    for (var ii = count; ii < docs_to_create.length; ii++) {
        var accepted = collection.createDocument(collection.getSelfLink(),
            docs_to_create[ii],
            function(err, doc_created) {
                if (err) throw new Error('Error' + err.message);
            });
        if (!accepted) {
            // Out of time: report how far we got so the client can
            // resume from this index on the next invocation.
            response.setBody(ii);
            return;
        }
    }
    // Everything remaining was accepted.
    response.setBody(docs_to_create.length);
}
Then you could check the returned count on the client side and re-run the stored procedure with that count as the starting index, creating the remaining documents, until the returned count equals the total number of parsed documents.
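To make that loop concrete, here is a rough client-side sketch using the legacy azure-documentdb Java SDK; the endpoint, key, stored-procedure link, and document total are placeholders, and the exact method names may differ in the newer azure-cosmos SDKs:

import com.microsoft.azure.documentdb.ConnectionPolicy;
import com.microsoft.azure.documentdb.ConsistencyLevel;
import com.microsoft.azure.documentdb.DocumentClient;
import com.microsoft.azure.documentdb.DocumentClientException;
import com.microsoft.azure.documentdb.StoredProcedureResponse;

public class BulkInsertDriver {
    public static void main(String[] args) throws DocumentClientException {
        DocumentClient client = new DocumentClient(
                "https://myaccount.documents.azure.com:443/", // placeholder endpoint
                "<master-key>",                               // placeholder key
                ConnectionPolicy.GetDefault(),
                ConsistencyLevel.Session);

        String sprocLink = "dbs/mydb/colls/mycoll/sprocs/createDocsFromCSV"; // placeholder link
        String csvText = "...";   // the CSV payload to upload
        int totalDocs = 10000;    // how many documents the CSV parses into (assumed known)
        int count = 0;            // resume index returned by the stored procedure

        // Keep calling the stored procedure until every document has been created.
        while (count < totalDocs) {
            StoredProcedureResponse response =
                    client.executeStoredProcedure(sprocLink, new Object[] { csvText, count });
            count = Integer.parseInt(response.getResponseAsString());
            System.out.println("Created so far: " + count);
        }
    }
}

Note that each invocation is its own transaction, so the overall upload is no longer atomic across calls.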
Hope it helps you.

BatchInserters.batchDatabase fails - sometimes - silently to persist node properties

I use BatchInserters.batchDatabase to create an embedded Neo4j 2.1.5 database. When I only put a small amount of data in it, everything works fine.
But if I increase the size of the data put in, Neo4j fails to persist the latest properties set with setProperty. I can read those properties back with getProperty before I call shutdown. When I load the database again with new GraphDatabaseFactory().newEmbeddedDatabase, those properties are lost.
The strange thing about this is that Neo4j doesn't report any error or throw an exception, so I have no clue what's going wrong or where. Java should have enough memory to handle both the small database (2.66 MiB, 3,000 nodes, 3,000 relationships) and the big one (26.32 MiB, 197,267 nodes, 390,659 relationships).
It's hard for me to extract a running example to show you the problem, but I can if that helps. Here are the main steps I take, though:
def createDataBase(rules: AllRules) {
// empty the data base folder
deleteFileOrDirectory(new File(mainProjectPathNeo4j))
// Create an index on some properties
db = new GraphDatabaseFactory().newEmbeddedDatabase(mainProjectPathNeo4j)
engine = new ExecutionEngine(db)
createIndex()
db.shutdown()
// Fill the data base
db = BatchInserters.batchDatabase(mainProjectPathNeo4j)
//createBatchIndex
try {
// Every function loads some data
loadAllModulesBatch(rules)
loadAllLinkModulesBatch(rules)
loadFormalModulesBatch(rules)
loadInLinksBatch()
loadHILBatch()
createStandardLinkModules(rules)
createStandardLinkSets(rules)
// validateModel shows the problem
validateModel(rules)
} catch {
// I want to see if my environment (BIRT) is catching any exceptions
case _ => val a = 7
} finally {
db.shutdown()
}
}
validateModel updates some properties of already created nodes:
def validateModule(srcM: GenericModule) {
srcM.node.setProperty("isValidated", true)
assert(srcM.node == Neo4jScalaDataSource.testNode)
assert(srcM.node eq Neo4jScalaDataSource.testNode)
assert(srcM.node.getProperty("isValidated").asInstanceOf[Boolean])
}
When I finally use Cypher to get some data back, the properties set by validateModel are missing:
class Neo4jScalaDataSet extends ScriptedDataSetEventAdapter {
override def beforeOpen(...) {
result = Neo4jScalaDataSource.engine.profile(
"""
MATCH (fm:FormalModule {isValidated: true}) RETURN fm.fullName as fullName, fm.uid as uid
""");
iter = result.iterator()
}
override def fetch(...) = {
if (iter.hasNext()) {
for (e <- iter.next().entrySet()) {
row.setColumnValue(e.getKey(), e.getValue())
}
count += 1;
row.setColumnValue("count", count)
return true
} else {
logger.log(Level.INFO, result.executionPlanDescription().toString())
return super.fetch(dataSet, row)
}
}
batchDatabase indeed causes this problem.
I have switched to BatchInserters.inserter and now everything works just fine.
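For reference, here is a minimal sketch of the BatchInserter API that route uses (Neo4j 2.1.x); the store path, label, relationship type, and property names are made up for illustration:

import java.util.HashMap;
import java.util.Map;

import org.neo4j.graphdb.DynamicLabel;
import org.neo4j.graphdb.DynamicRelationshipType;
import org.neo4j.unsafe.batchinsert.BatchInserter;
import org.neo4j.unsafe.batchinsert.BatchInserters;

public class BatchInserterExample {
    public static void main(String[] args) {
        // BatchInserters.inserter gives the low-level, non-transactional API
        // instead of the GraphDatabaseService wrapper returned by batchDatabase.
        BatchInserter inserter = BatchInserters.inserter("target/batch-db"); // example store path
        try {
            Map<String, Object> props = new HashMap<String, Object>();
            props.put("fullName", "Example module");
            long a = inserter.createNode(props, DynamicLabel.label("FormalModule"));
            long b = inserter.createNode(props, DynamicLabel.label("FormalModule"));
            inserter.createRelationship(a, b, DynamicRelationshipType.withName("LINKS_TO"), null);

            // Later property updates go through the inserter, keyed by node id.
            inserter.setNodeProperty(a, "isValidated", true);
        } finally {
            // shutdown() is what actually flushes everything to disk.
            inserter.shutdown();
        }
    }
}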

HttpResponseMessage ReasonPhrase max length?

I have this code:
public void Put(int id, DistributionRuleModelListItem model)
{
CommonResultModel pre = new BLL.DistributionRules().Save(id, model, true);
if(!pre.success){
DAL.DBManager.DestroyContext();
var resp = new HttpResponseMessage(HttpStatusCode.InternalServerError)
{
Content = new StringContent(string.Format("Internal server error for distruleId: {0}", id)),
ReasonPhrase = pre.message.Replace(Environment.NewLine, " ")//.Substring(0,400)
};
throw new HttpResponseException(resp);
}
}
There is logic that can set the value of pre.message to an exception.ToString(), and if it is too long I receive the following application exception:
Specified argument was out of the range of valid values. Parameter name: value
But if I uncomment .Substring(0,400), everything works fine; on the client side I receive the correct response, and it is possible to show it to the user.
What is the max length of ReasonPhrase? I can't find any documentation that specifies this value.
I couldn't find the max value documented anywhere; however, through trial and error, I found it to have a maximum length of 512 bytes.
