Zendesk - CreateUser JobStatus Results is Null

So I am currently working with the Zendesk API. I am creating many users using the batch CreateUser method that takes an array of up to 100 user objects. Now, for some reason, some users fail to be created. Therefore, I wanted to obtain the result of the JobStatus after creating the users so that I can identify the problem easily. The issue is that the result variable is null after performing the createUsers() method.
Some sample code:
public static void createEndUsers(Zendesk zd) {
    for (Organization org : zd.getOrganizations()) {
        List<User> usersList = getUsersPerOrg(org, 0);
        JobStatus js = zd.createUsers(usersList);
        List<User> resultElements = js.getResults(); // null at this point
    }
}
Why is getResults() returning null in this instance? Is it because the operation has not yet been performed when it reaches that part of the code? How can I make sure that I "wait" until the result is ready before I try to access it?

If you have a look at the possible values from org.zendesk.client.v2.model.JobStatus.JobStatusEnum, you will notice that the job may be queued for execution or it could still be running at the time the job status is returned by the operation org.zendesk.client.v2.Zendesk#createUsers(org.zendesk.client.v2.model.User...).
As can be seen from the Zendesk documentation for the createUsers operation:
This endpoint returns a job_status JSON object and queues a background job to do the work. Use the Show Job Status endpoint to check for the job's completion.
Only when the job has completed will there be a corresponding result for the operation.
A possible solution in your case would be to check (possibly in a separate thread) every 500 ms whether the job status is still queued or running (in other words, whether it has completed yet), and, once it has completed successfully, to retrieve the results.
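For illustration, a rough polling sketch along those lines. It assumes the client exposes a getJobStatus(...) call to re-fetch the job status and lowercase constants in JobStatusEnum; check both details against the client version you actually use:

// Poll every 500 ms until the background job has left the queued/working states.
public static List<User> waitForResults(Zendesk zd, JobStatus js) throws InterruptedException {
    while (js.getStatus() == JobStatus.JobStatusEnum.queued
            || js.getStatus() == JobStatus.JobStatusEnum.working) {
        Thread.sleep(500);          // wait before asking again
        js = zd.getJobStatus(js);   // re-fetch the current status from Zendesk
    }
    // getResults() is only meaningful once the job reports completed
    return js.getStatus() == JobStatus.JobStatusEnum.completed ? js.getResults() : null;
}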

Related

AnyLogic agent communication and message sending

In my model, I have some agents:
"Demand" agent,
"EnergyProducer1" agent,
"EnergyProducer2" agent.
When my hourly energy demands are created in the Main agent with a function, the priority for satisfying this demand belongs to the "EnergyProducer1" agent. In this agent, I have a function that calculates energy production based on some situations. Part of this function is the following:
**" if (statechartA.isStateActive(Operating.busy)) && ( main.heatLoadDemandPerHour >= heatPowerNominal) {
producedHeatPower = heatPowerNominal;
naturalGasConsumptionA = naturalGasConsumptionNominal;
send("boilerWorking",boiler);
} else ..... "**
Here my question is related to the 4th line of the code. If agent1 fails to satisfy the hourly demand, I have to tell agent2 to satisfy the rest of the demand. If I send this message to agent2, its statechart will become active and agent2's function will run. My question is: will all of this happen within the same hour? If not, is accessing the variables and parameters of agent2 directly a more appropriate approach?
I hope I could explain my problem.
Thanks for your help in advance...
Edited question...
As a general comment on your question, within the AnyLogic environment sending messages is always preferable to directly accessing the variables and parameters of another agent.
Specifically, in the example presented, the send() function will schedule delivery of the message for the next instant, immediately after the current function completes.
Update: A message in AnyLogic can be any Java class. Sending strings such as the "boilerWorking" used in the example is good for general control; however, if more information needs to be shared (such as a double value), it is good practice to create a new Java class (let's call it ModelMessage and follow these instructions) with at least two properties, msgStr and msgVal. With this new class, sending a message changes from this:
...
send("boilerWorking", boiler);
...
to this:
...
send(new ModelMessage("boilerWorking",42.0), boiler);
...
and firing transitions in the statechart has to be changed to use the "if expression is true" option, with the expression being msg.msgStr.equals("boilerWorking").
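For illustration, a minimal version of such a message class could look like this (the class name and the msgStr/msgVal fields simply follow the convention described above; extend it with whatever payload you need):

public class ModelMessage {
    public String msgStr;  // control string, e.g. "boilerWorking"
    public double msgVal;  // numeric payload, e.g. the remaining demand to cover

    public ModelMessage(String msgStr, double msgVal) {
        this.msgStr = msgStr;
        this.msgVal = msgVal;
    }
}

The receiving agent can then read msg.msgVal in the transition's action to know, for example, how much of the demand is still left to satisfy.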
More information about Agent communication is available here.

How to parallelize HTTP requests within an Apache Beam step?

I have an Apache Beam pipeline running on Google Dataflow whose job is rather simple:
It reads individual JSON objects from Pub/Sub
Parses them
And sends them via HTTP to some API
This API requires me to send the items in batches of 75. So I built a DoFn that accumulates events in a list and publishes them via this API once I have 75. This turned out to be too slow, so I thought of executing those HTTP requests in different threads using a thread pool instead.
The implementation of what I have right now looks like this:
private class WriteFn : DoFn<TheEvent, Void>() {
    @Transient lateinit var api: TheApi
    @Transient lateinit var currentBatch: MutableList<TheEvent>
    @Transient lateinit var executor: ExecutorService

    @Setup
    fun setup() {
        api = buildApi()
        executor = Executors.newCachedThreadPool()
    }

    @StartBundle
    fun startBundle() {
        currentBatch = mutableListOf()
    }

    @ProcessElement
    fun processElement(processContext: ProcessContext) {
        val record = processContext.element()
        currentBatch.add(record)
        if (currentBatch.size >= 75) {
            flush()
        }
    }

    private fun flush() {
        val payloadTrack = currentBatch.toList()
        executor.submit {
            api.sendToApi(payloadTrack)
        }
        currentBatch.clear()
    }

    @FinishBundle
    fun finishBundle() {
        if (currentBatch.isNotEmpty()) {
            flush()
        }
    }

    @Teardown
    fun teardown() {
        executor.shutdown()
        executor.awaitTermination(30, TimeUnit.SECONDS)
    }
}
This seems to work "fine" in the sense that data is making it to the API. But I don't know if this is the right approach and I have the sense that this is very slow.
The reason I think it's slow is that when load testing (by sending a few million events to Pub/Sub), it takes the pipeline up to 8 times longer to forward those messages to the API (which has response times of under 8 ms) than it took my laptop to feed them into Pub/Sub.
Is there any problem with my implementation? Is this the way I should be doing this?
Also... am I required to wait for all the requests to finish in my @FinishBundle method (i.e. by getting the futures returned by the executor and waiting on them)?
You have two interrelated questions here:
Are you doing this right / do you need to change anything?
Do you need to wait in @FinishBundle?
The second answer: yes. But actually you need to flush more thoroughly, as will become clear.
Once your @FinishBundle method succeeds, a Beam runner will assume the bundle has completed successfully. But your @FinishBundle only sends the requests - it does not ensure they have succeeded. So you could lose data that way if the requests subsequently fail. Your @FinishBundle method should actually be blocking and waiting for confirmation of success from TheApi. Incidentally, all of the above should be idempotent, since after finishing the bundle, an earthquake could strike and cause a retry ;-)
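As a rough sketch of that change (written in Java here, though the pipeline above is Kotlin; TheEvent, TheApi, api, executor and currentBatch are the names from the question), keep the futures returned by the executor and block on them before the bundle finishes, so a failed request fails the bundle instead of being dropped silently:

// Fields and methods inside a DoFn like the one above
private final List<Future<?>> pendingRequests = new ArrayList<>();

private void flush() {
    List<TheEvent> payload = new ArrayList<>(currentBatch);
    pendingRequests.add(executor.submit(() -> api.sendToApi(payload)));
    currentBatch.clear();
}

@FinishBundle
public void finishBundle() throws Exception {
    if (!currentBatch.isEmpty()) {
        flush();
    }
    // Block until every outstanding request has completed; an exception thrown
    // here makes the runner retry the bundle instead of committing it.
    for (Future<?> f : pendingRequests) {
        f.get();
    }
    pendingRequests.clear();
}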
So to answer the first question: should you change anything? Just the above. The practice of batching requests this way can work as long as you are sure the results are committed before the bundle is committed.
You may find that doing so will cause your pipeline to slow down, because @FinishBundle happens more frequently than @Setup. To batch up requests across bundles you need to use the lower-level features of state and timers. I wrote up a contrived version of your use case at https://beam.apache.org/blog/2017/08/28/timely-processing.html. I would be quite interested in how this works for you.
It may simply be that the extremely low latency you are expecting, in the low millisecond range, is not available when there is a durable shuffle in your pipeline.

Transaction Block has wrong Return [duplicate]

I have a problem with transactions. The data in the transaction is always null and the update handler is called only a single time. The documentation says:
To accomplish this, you pass transaction() an update function which is
used to transform the current value into a new value. If another
client writes to the location before your new value is successfully
written, your update function will be called again with the new
current value, and the write will be retried. This will happen
repeatedly until your write succeeds without conflict or you abort the
transaction by not returning a value from your update function
Now I know that there is no other client accessing the location right now. Secondly, if I read the documentation correctly, the updateCounters function should be called multiple times if it fails to retrieve and update the data.
The other thing: if I take out the condition if (counters === null), the execution will fail as counters is null, but on a subsequent attempt the transaction finishes fine - it retrieves the data and does the update.
A simple once/set on this location works just fine, but it is not safe.
What am I missing?
Here is the code:
self.myRef.child('counters')
  .transaction(function updateCounters(counters) {
    if (counters === null) {
      return;
    } else {
      console.log('in transaction counters:', counters);
      counters.comments = counters.comments + 1;
      return counters;
    }
  }, function(error, committed, ss) {
    if (error) {
      console.log('transaction aborted');
      // TODO error handling
    } else if (!committed) {
      console.log('counters are null - why?');
    } else {
      console.log('counter increased', ss.val());
    }
  }, true);
Here is the data at the location:
counters: {
    comments: 1,
    alerts: 3,
    ...
}
By returning undefined in your if( ... === null ) block, you are aborting the transaction. Thus it never sends an attempt to the server, never realizes the locally cached value is not the same as remote, and never retries with the updated value (the actual value from the server).
This is confirmed by the fact that committed is false and the error is null in your success function, which occurs if the transaction is aborted.
Transactions work as follows:
pass the locally cached value into the processing function; if you have never fetched this data from the server, the locally cached value is null (the most likely remote value for that path)
get the return value from the processing function; if that value is undefined, abort the transaction; otherwise, create a hash of the current value (null) and pass that and the new value (returned by the processing function) to the server
if the local hash matches the server's current hash, the change is applied and the server returns a success result
if the server transaction is not applied, server returns the new value, client then calls the processing function again with the updated value from the server until successful
when it is ultimately successful, an unrecoverable error occurs, or the transaction is aborted (by returning undefined from the processing function), the success method is called with the results.
So to make this work, obviously you can't abort the transaction on the first returned value; instead, handle the null case by returning a sensible initial value (for example, a counters object with its fields set to their starting counts).
One workaround to accomplish the same result, although it is coupled and not as performant or appropriate as just using transactions as designed, would be to wrap the transaction in a once('value', ...) callback, which ensures the data is cached locally before the transaction runs.

Query timeout in Neo4j 3.0.6

It looks like the previously working approach is deprecated now:
unsupported.dbms.executiontime_limit.enabled=true
unsupported.dbms.executiontime_limit.time=1s
According to the documentation, the new settings responsible for timeout handling are:
dbms.transaction.timeout
dbms.transaction_timeout
At the same time, the new settings look related to transactions.
The new timeout settings do not seem to work. They were set in neo4j.conf as follows:
dbms.transaction_timeout=5s
dbms.transaction.timeout=5s
The slow Cypher query isn't terminated.
Then a Neo4j plugin was added to model a slow query with a transaction:
#Procedure("test.slowQuery")
public Stream<Res> slowQuery(#Name("delay") Number Delay )
{
ArrayList<Res> res = new ArrayList<>();
try ( Transaction tx = db.beginTx() ){
Thread.sleep(Delay.intValue(), 0);
tx.success();
} catch (Exception e) {
System.out.println(e);
}
return res.stream();
}
The function served by the plugin is executed with the neoism Golang package, and the timeout isn't triggered either.
The timeout is only honored if your procedure code either invokes operations on the graph (like reading nodes and relationships) or explicitly checks whether the current transaction is marked as terminated.
For the latter, see https://github.com/neo4j-contrib/neo4j-apoc-procedures/blob/master/src/main/java/apoc/util/Utils.java#L41-L51 as an example.
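For example, the sleep in the procedure above could be split into short slices with an explicit termination check between them. The sketch below assumes a Neo4j version that injects org.neo4j.procedure.TerminationGuard via @Context (similar in spirit to the linked apoc utility); Res is the result class from the question:

import java.util.stream.Stream;
import org.neo4j.procedure.Context;
import org.neo4j.procedure.Name;
import org.neo4j.procedure.Procedure;
import org.neo4j.procedure.TerminationGuard;

public class SlowQuery {

    @Context
    public TerminationGuard terminationGuard;

    @Procedure("test.slowQuery")
    public Stream<Res> slowQuery(@Name("delay") Number delay) {
        long deadline = System.currentTimeMillis() + delay.longValue();
        while (System.currentTimeMillis() < deadline) {
            try {
                Thread.sleep(50); // sleep in short slices instead of one long block
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
            // Throws if the surrounding transaction/query has been terminated,
            // which is what lets the configured timeout actually take effect.
            terminationGuard.check();
        }
        return Stream.empty();
    }
}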
According to the documentation the transaction guard is interested in orphaned transactions only.
The server guards against orphaned transactions by using a timeout. If there are no requests for a given transaction within the timeout period, the server will roll it back. You can configure the timeout in the server configuration, by setting dbms.transaction_timeout to the number of seconds before timeout. The default timeout is 60 seconds.
I haven't found a native way to trigger the timeout for a query which isn't orphaned.
@StefanArmbruster pointed in a good direction. The timeout-triggering functionality can be obtained by creating a wrapper function in a Neo4j plugin, as is done in apoc.

Why doesn't my GORM object save to the database?

Consider the following code:
if (!serpKeyword) {
    serpKeyword = new SerpKeyword(
        keyword: searchKeyword,
        geoKeyword: geoKeyword,
        concatenation: concatenation,
        locale: locale
    )
    serpKeyword.save(flush: true, failOnError: true)
}
serpService.submitKeyword(serpKeyword, false)
Here's the submitKeyword method:
@Transactional(propagation = Propagation.REQUIRES_NEW)
boolean submitKeyword(keywordToSubmit, boolean reset) {
    def keyword = SerpKeyword.get(keywordToSubmit.id)
No error is raised when I call serpKeyword.save, but when I get into the submitKeyword method, SerpKeyword.get(keywordToSubmit.id) returns null. What could be preventing this from saving?
Edit
Changing REQUIRES_NEW to REQUIRED seems to do the trick. Here's what I think is happening.
The code that calls serpService.submitKeyword is located within a service method. From what I understand, service methods have a default propagation strategy of REQUIRED. Since all these database writes are happening within the context of a transaction, the writes are queued up in the database but not actually executed against the database until the transaction is completed, according to the docs:
Note that flushing is not the same as committing a transaction. If
your actions are performed in the context of a transaction, flushing
will execute SQL updates but the database will save the changes in its
transaction queue and only finalize the updates when the transaction
commits.
We call serpService.submitKeyword before our transaction is actually finished. That method starts a completely new transaction where our serpKeyword is not available. Changing it to REQUIRED works because we are now operating within the context of our parent transaction.
I think some part of the stack is being lazy. I've run into this behavior with Hibernate, if that's what you're using. I'm not sure it's an approved maneuver, but you could clear the session before calling submitKeyword like so:
long keywordId = serpKeyword.id
SerpKeyword.withSession{it.clear()}
serpService.submitKeyword(keywordId, false)
And then change the method to:
@Transactional(propagation = Propagation.REQUIRES_NEW)
boolean submitKeyword(keywordId, boolean reset) {
    def keyword = SerpKeyword.get(keywordId)
Then I bet the .get() will work. If you have other objects in the session that you need, you will need to lift those out by storing their ids and .get()ing them as well.
