Trigger or JavaScript in ProcessMaker?

The subject is:
I have connected one of my processes to an external SQL database through a Database Connection (very hard, of course, due to the lack of SQL connection support!). In my dynaform I have two textboxes named RefNumber and Value, plus a button. I want to enter a number in RefNumber, press the button, and have the value related to that number appear in Value (the number and its corresponding value are in the database I connected earlier).
So the first question is: which one should I use, a trigger or JavaScript?
And the second question: what would the code be?
Any comments are appreciated.

It depends on whether you want to update the value on the database side or the backend side.
I would also add that triggers sometimes fail due to bad connections or unexpected user errors, and it can happen that your data is updated but the value is not incremented (updated). So I recommend not using a trigger; yes, a trigger saves time, but sometimes it doesn't work.
The benefit of doing it on the backend side is that if the data update fails, the value will not be updated either.

My solution is to create an API on the ProcessMaker server (use a plugin to create the API). The backend should have CORS enabled so that the client side can call the API.
In the dynaform, use Ajax to make requests to the API.
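For illustration, a minimal sketch of such a lookup, assuming a hypothetical /api/lookup endpoint exposed by the plugin; the endpoint URL, query parameter, response shape, and the "lookupButton" control id are all assumptions to adapt to your setup:

```javascript
// Hypothetical lookup endpoint; replace with whatever your plugin exposes.
var LOOKUP_URL = "/api/lookup";

// ProcessMaker 3 dynaforms ship with jQuery; "lookupButton" stands in for
// the button's real id, and getValue()/setValue() are the dynaform helpers.
$("#lookupButton").find("button").on("click", function () {
  var refNumber = $("#RefNumber").getValue();

  $.ajax({
    url: LOOKUP_URL,
    method: "GET",
    data: { ref: refNumber }, // assumed query parameter name
    dataType: "json"
  })
    .done(function (response) {
      // Assumes the API answers with a JSON body like { "value": ... }.
      $("#Value").setValue(response.value);
    })
    .fail(function () {
      $("#Value").setValue("not found");
    });
});
```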

Related

Load-testing in Orbeon - request generation

I've been trying to use Gatling to load-test our Orbeon servers. More specifically we want to know how many concurrent users the server can handle submitting forms.
I've already captured the requests using Gatling (one request per form field that is filled in). However, the requests do not work when I replay them. My first thought upon inspecting them is that each request must contain a valid UUID. But where can I generate this ID, or how do I parse it from the initial request? Is it even possible to manually generate these requests?
Any other suggestion for a load-testing tool for Orbeon would also be helpful.
We often do something similar here, using JMeter, but the idea is the same whatever tool you're using. Indeed, Ajax requests:
Need to be "in" the same session used to generate the page to which they relate, i.e. typically carry the correct JSESSIONID cookie.
Need to refer to the proper UUID. You can find the UUID in the HTML of the page, in the <input type="hidden" name="$uuid" value="…">.
Need to carry the correct <xxf:sequence>1</xxf:sequence> number, i.e. 1 for the first request made after the page is loaded, then 2, and so on; a replay sketch follows below.
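To make those three points concrete, here is a rough replay sketch (plain Node.js fetch rather than Gatling or JMeter; the base URL and form path are made up, and the Ajax body is only a skeleton — take the real event payloads from your own captured requests):

```javascript
// Rough replay sketch (Node 18+ built-in fetch). The base URL is made up,
// and the Ajax body below is a skeleton: use your captured payloads.
const BASE = "http://localhost:8080/orbeon";

async function replay() {
  // 1. Load the page once: this creates the session and yields the UUID.
  const pageRes = await fetch(`${BASE}/fr/my-app/my-form/new`);
  const cookie = pageRes.headers.get("set-cookie"); // carries JSESSIONID
  const html = await pageRes.text();
  const match = html.match(/name="\$uuid"\s+value="([^"]+)"/);
  if (!match) throw new Error("no $uuid hidden input found");
  const uuid = match[1];

  // 2. Send the Ajax requests with an incrementing sequence number.
  for (let seq = 1; seq <= 3; seq++) {
    const body =
      `<xxf:event-request xmlns:xxf="http://orbeon.org/oxf/xml/xforms">` +
      `<xxf:uuid>${uuid}</xxf:uuid>` +
      `<xxf:sequence>${seq}</xxf:sequence>` +
      `<xxf:action/>` + // replace with a captured event payload
      `</xxf:event-request>`;
    await fetch(`${BASE}/xforms-server`, {
      method: "POST",
      headers: { "Content-Type": "application/xml", Cookie: cookie },
      body,
    });
  }
}

replay().catch(console.error);
```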

Sync-ing a basic IMAP client w/ server

I am writing a simple IMAP client that will be able to sync w/ any Google email account. I don't want to have to read the ENTIRE set of message headers on the server every time I sync in order to be sure that I do not miss anything. I would prefer never to have to do that, and to rely on some field that ensures a total order. For example, I would prefer to rely on Google's extended Message ID field, or even just on Received-Date, and have my logic be: "keep reading backwards until you hit something you have previously read". But alas, it does not seem to be that simple.
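Roughly, the loop I would like to be able to write (fetchHeadersBefore() below is a made-up stand-in for an IMAP fetch of a newest-first page of headers):

```javascript
// The naive "read backwards until a known message" loop described above.
// fetchHeadersBefore() is hypothetical: it returns up to `size` headers
// older than `cursor`, newest first. seenIds is the client's local store.
async function syncBackwards(fetchHeadersBefore, seenIds) {
  let cursor = Infinity;        // start from the newest message
  const fresh = [];
  for (;;) {
    const batch = await fetchHeadersBefore(cursor, 50);
    if (batch.length === 0) return fresh;     // reached mailbox start
    for (const msg of batch) {
      if (seenIds.has(msg.id)) return fresh;  // hit known territory
      fresh.push(msg);
    }
    cursor = batch[batch.length - 1].sequence; // continue further back
  }
}
```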
What is the preferred way to sync such that it is both efficient (in terms of time and bandwidth) and guaranteed (i.e., no missed messages)?
Thanks!

How to get location field from linkedin api while fetching network updates

I want to fetch the location of a person and of their connections, so how should I specify the fields for this purpose?
http://api.linkedin.com/v1/people/~/network/updates:(update-content:(person:(id,headline,location)))?type=CONN
If I make separate calls just to get the location, it will be very costly for me, since it requires an extra call for each new connection and will greatly increase the total number of calls. So I want a solution that lets me get the location in the network-updates API call itself.
EDIT: Another thing I need is to check the privacy settings of connections. As far as I know, LinkedIn doesn't provide any API that returns which connections allow their updates to be seen and which do not. So when I try to get network updates for a particular connection, it returns an error saying that this user doesn't allow the public to see updates. If I want to check this before calling the network-updates API, how can I do it in Ruby?
OR
Let me know some way to pass multiple dynamic IDs while calling the LinkedIn API.
When retrieving person data associated with a Network Update, it appears that only the basic fields are available. The solution would be to get the id for the person and make a second call to the Profile API:
http://api.linkedin.com/v1/people/id=12345:(first-name,last-name,connections,location)
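A rough sketch of that two-step pattern (the Bearer header, x-li-format header, and the response shapes are simplifications here; real v1 calls needed proper OAuth signing):

```javascript
// Two-step pattern sketch: pull ids from network updates, then fetch each
// profile's location. ACCESS_TOKEN and the plain Bearer header are
// assumptions for brevity.
const BASE = "https://api.linkedin.com/v1";
const HEADERS = {
  Authorization: "Bearer ACCESS_TOKEN",
  "x-li-format": "json", // ask for JSON instead of XML
};

async function locationsFromUpdates() {
  const updates = await fetch(
    `${BASE}/people/~/network/updates?type=CONN`,
    { headers: HEADERS }
  ).then((r) => r.json());

  // Collect person ids out of the update payload (shape simplified here).
  const ids = updates.values.map((u) => u.updateContent.person.id);

  // Second call per id against the Profile API for the location field.
  const locations = {};
  for (const id of ids) {
    const profile = await fetch(
      `${BASE}/people/id=${id}:(first-name,last-name,location)`,
      { headers: HEADERS }
    ).then((r) => r.json());
    locations[id] = profile.location;
  }
  return locations;
}
```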
Currently, LinkedIn doesn't provide an API for this purpose; you have to make multiple calls. But you should make those calls in chunks to avoid timeout issues, for example with a helper like the one sketched below.
Reference
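For illustration, a tiny chunk helper along those lines (the batch size is arbitrary, and fetchProfiles is a hypothetical function making the per-id calls):

```javascript
// Process ids in fixed-size batches so no single burst of API calls runs
// long enough to hit a timeout. The batch size of 10 is just an example.
async function inChunks(ids, size, handler) {
  for (let i = 0; i < ids.length; i += size) {
    await handler(ids.slice(i, i + size)); // one batch at a time
  }
}

// Usage sketch: inChunks(allIds, 10, (batch) => fetchProfiles(batch));
```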
Try this API:
`String url = "https://api.linkedin.com/v1/people/~/connections:(id,first-name,last-name,location,picture-url,positions:(title,company:(name)))";`

Delphi XE2 Datasnap Session Management - get session information after page reload

I am trying to determine how to retrieve session information using a Delphi REST DataSnap server.
I know that, when on the same client page, you have access to the current thread session using TDSSessionManager.GetThreadSession.
What I want to do, however, is store data in the session (PutData) and still be able to retrieve it when the user moves from page1 to page2. At present, if the user moves to a different page, the session is lost and a new one is created, thus losing the data in the session that I had previously set.
I have tried playing with TDSSessionManager.SetThreadSession(sessionid), but I can't seem to get it working.
I've reviewed the much-acclaimed Marco Cantù white paper; however, it doesn't provide a solution to this issue.
Any help I can get on this would be great, even if it's just "hey, this topic is covered in book X".
Thanks!
TDSSessionManager.SetThreadSession(sessionid) works with Session.SessionName.
Also make sure your LifeCycle is set to Session (as stated by tondrej).
If you reconnect your client, a new session is started, so you want to keep your DataSnap connection open.
Alternatively, you can set the life cycle to Server and manage the client sessions yourself.
Edit: REST servers are stateless, so you need to store the page you are on on the client and query the needed page from the server.
You have to tweak the client side JavaScript to use a cookie to store session info.
See the last part of JavaScript Client Sessions
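Along the lines of that cookie approach, a minimal sketch (the cookie name and the way the session id is obtained from and restored into the DataSnap JS proxy are assumptions):

```javascript
// Sketch: keep the DataSnap session id in a cookie so it survives page
// navigation. The id itself comes from the DataSnap JS proxy after the
// first call; how you read and restore it depends on the proxy version.
function saveSession(sessionId) {
  // Persists for the browser session only; add an expiry if you need more.
  document.cookie = "dssession=" + encodeURIComponent(sessionId) + "; path=/";
}

function loadSession() {
  var match = document.cookie.match(/(?:^|;\s*)dssession=([^;]*)/);
  return match ? decodeURIComponent(match[1]) : null;
}

// On page2, restore the id into the connection before the first call, e.g.
// connection.sessionId = loadSession(); // property name depends on the proxy
```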
If you want to keep server-side objects active for the session, use the Session life cycle.
I believe what you need to do is set the LifeCycle property of your TDSServerClass instance to Session (stateful). From your question, it seems you are currently using Invocation (stateless).
Well, in DataSnap REST (GET, POST, DELETE, PUT), if you set your TDSServerClass to Session, then because REST is stateless, Session behaves the same as Invocation (http://docwiki.embarcadero.com/RADStudio/Tokyo/en/Server_Class_LifeCycle#REST_Clients). This is deliberate: it gives every kind of client the opportunity to use your DataSnap server with JSON, for example.
You need to build your own session-control model for your REST server, or look for a framework that does this. In my case I use custom objects with the Server life cycle (in some cases backed by a database too), plus tokens in the request headers and other information. That way I know whether it is the same client, I control when the token expires and a new login is needed, and I can add much more security as well, e.g. on PUT requests, accepting changes only to records previously handed to that client (that is only one case, but there are many others). You need to solve it another way, not the classic way with TDSSession.
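A client-side sketch of that token idea (the header name, URL, and login() helper are illustrative, not part of DataSnap):

```javascript
// Attach a session token to every request; when the server reports that it
// has expired, log in again and retry. All names here are illustrative.
let token = null;

async function apiCall(path, options = {}) {
  const headers = Object.assign({}, options.headers, {
    "X-Session-Token": token, // custom header checked by the server
  });
  const res = await fetch("/datasnap/rest/TServerMethods1" + path,
    Object.assign({}, options, { headers }));
  if (res.status === 401) { // token missing or expired
    token = await login();  // hypothetical re-login returning a new token
    return apiCall(path, options); // retry with the fresh token
  }
  return res.json();
}
```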

Design for long running ASP.net MVC web request

I'm aware of the model that involves a scheduled task running in the background to run jobs registered by a web request, but how about this for an idea that keeps everything within ASP.NET...
The user uploads a CSV file with, perhaps, several thousand rows. The rows are persisted to the database. I think this would take maybe a minute or so, which would be an acceptable wait.
The request returns to the browser, and then an automatic Ajax request goes back to the server to request, say, ten rows at a time and process them. (Each row requires a number of web service requests.)
The Ajax call returns, the display is updated, and then another automatic Ajax request goes back for more rows. This repeats until all rows are completed.
If the user leaves the web page, they could return and restart the job.
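Roughly, the client-side loop I have in mind (with /import/processBatch as a made-up MVC action that processes the next batch for a job and reports how many rows remain):

```javascript
// Self-perpetuating Ajax loop for the idea above. "/import/processBatch"
// is a hypothetical MVC action that processes the next N pending rows for
// a job and answers with { processed, remaining }.
function processNextBatch(jobId) {
  $.post("/import/processBatch", { jobId: jobId, batchSize: 10 })
    .done(function (result) {
      $("#progress").text(result.processed + " rows done, " +
                          result.remaining + " remaining");
      if (result.remaining > 0) {
        processNextBatch(jobId); // keep going until the job is finished
      }
    })
    .fail(function () {
      $("#progress").text("Batch failed - reload the page to resume.");
    });
}
```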
Any thoughts?
Cheers, Ian.
If I get you right, you don't actually need any "interaction" between background jobs and the long-running request; you just want to "launch" background jobs from incoming requests? Not such a good idea. Take a look at the Quartz.NET project: it is a scheduler embeddable in an ASP.NET application, and it will handle this stuff for you without the need for requests. Of course, if there is an app-pool shutdown, your scheduler goes down too, but you can't guarantee that won't happen even with your long-running-requests solution, which depends on a browser waiting on the other side.
Also take a look at this interesting article from Phil Haack on the topic, with his own little scheduler library specific to ASP.NET:
http://haacked.com/archive/2011/10/16/the-dangers-of-implementing-recurring-background-tasks-in-asp-net.aspx
A server-side program (or, ideally, a service) could still be quick and dirty and would be more reliable. You could still do step 1 as you have proposed: upload the file and insert the data (don't forget to increase the maxRequestLength and the request timeout values in web.config). Then have a program running on the server that checks for new records and processes them.
If the user needs status, you could store an entry in the database for each file and update that record when the import is complete.
Maybe I'm reading the question and interpreting it in a weird way, but why couldn't you read the file into a database and store in a table the current line of the file that you've completed through? You could then track your progress via the db and just send small JSON objects telling the user how far along you are. That way, if their connection drops, you can keep processing their request, and if they return later you can notify them of how far along the job is. Also, if multiple clients are connecting, you can use the db to queue and throttle (by serializing) the workload. And if the user connects mid-job with another file, their new request will be queued up after their current job.
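In that design, the page only has to poll for progress; a small sketch (with /import/progress as a hypothetical endpoint returning the last completed line and the total):

```javascript
// Sketch of the "small JSON objects" progress idea: the server keeps the
// last completed line per job in a table, and the page just polls it.
// "/import/progress" is a hypothetical endpoint returning { line, total }.
function pollProgress(jobId) {
  $.getJSON("/import/progress", { jobId: jobId }, function (p) {
    var pct = Math.round((p.line / p.total) * 100);
    $("#progress").text(pct + "% (" + p.line + "/" + p.total + " lines)");
    if (p.line < p.total) {
      setTimeout(function () { pollProgress(jobId); }, 2000); // poll again
    }
  });
}
```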
