Shutdown activity worker and workflow - amazon-swf

Is there any way to shut down the activity and workflow workers after all the activities in my workflow implementation have completed execution, or if any of them throws an exception?

You can have a special "shutdown worker" activity type. The implementation of this activity initiates the shutdown of its own worker. At the end of the workflow execution you always invoke this activity from the doFinally method of a TryCatchFinally that wraps all the other workflow logic.
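The pattern itself is framework-agnostic: the last activity scheduled does nothing but stop the worker that polls for activities. A minimal sketch in Python (all names here are hypothetical stand-ins, not the SWF Flow Framework API):

```python
import threading

class ActivityWorker:
    """Stand-in for an activity worker: loops polling for tasks until told to stop."""
    def __init__(self):
        self._stop = threading.Event()

    def poll_loop(self):
        # wait() returns True once the stop flag is set, ending the loop
        while not self._stop.wait(timeout=0.01):
            pass  # here a real worker would poll for an activity task and run it

    def shutdown(self):
        self._stop.set()

def shutdown_worker_activity(worker):
    """The special "shutdown worker" activity: its only job is to stop its own worker."""
    worker.shutdown()

worker = ActivityWorker()
t = threading.Thread(target=worker.poll_loop)
t.start()
# ... the workflow runs; from doFinally it schedules this activity last:
shutdown_worker_activity(worker)
t.join(timeout=1)
print(t.is_alive())  # prints False
```

The key point is that the shutdown request originates from inside the workflow (as an ordinary activity), so it runs exactly once, after everything else, whether the workflow succeeded or failed.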

Related

How can I stop another application in gen_server:terminate/2?

I'm building my own server with Erlang/OTP and I'm stuck on a problem using Mnesia.
I start Mnesia in gen_server:init/1 of my worker and stop it in gen_server:terminate/2 of the same worker.
Unfortunately, when mnesia:stop/0 is triggered by calling application:stop(myApplication) or init:stop(), the application gets stuck and ends up with this:
=SUPERVISOR REPORT==== 23-Jun-2021::16:54:12.048000 ===
    supervisor: {local,temp_sup}
    errorContext: shutdown_error
    reason: killed
    offender: [{pid,<0.159.0>},
               {id,myMnesiaTest_sup},
               {mfargs,{myMnesiaTest_sup,start_link,[]}},
               {restart_type,permanent},
               {shutdown,10000},
               {child_type,supervisor}]
Of course, this doesn't happen when gen_server:terminate/2 isn't called (by setting the trap_exit flag to false), but then Mnesia doesn't stop either.
I don't know why one application cannot be stopped from within another, and I'd like to know whether it's OK not to call mnesia:stop() at the end of my application.
The reason you cannot stop Mnesia while your application is stopping is that at that time the application_controller process is busy stopping your application. This is a classic deadlock: one gen_server (in this case quite indirectly) performs a synchronous call to another gen_server, which in turn wants to make a synchronous call to the first one.
You can break the deadlock by shutting down Mnesia asynchronously, after your application has stopped. For example, call timer:apply_after(0, mnesia, stop, []) from your terminate/2. (Just spawning a process to make the call is not ideal: the spawned process would still belong to your application and would get killed when the application terminates.)
But most of the time you don't really have to bother with stopping Mnesia. By convention, Erlang applications leave their dependencies running when they are stopped. And if your application is terminated by init:stop(), that call stops all other applications anyway, including Mnesia.
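The trick is to defer the blocking stop until after terminate/2 has returned, from a context that does not belong to the stopping application. The same idea expressed as a Python analogy of timer:apply_after(0, mnesia, stop, []) (the stop function is a hypothetical stand-in for mnesia:stop()):

```python
import threading

stopped = threading.Event()

def stop_mnesia():
    # stand-in for mnesia:stop(); in the real case this is the blocking call
    stopped.set()

def terminate():
    # Calling stop_mnesia() directly here is the deadlock-prone shape:
    # the caller is itself in the middle of being stopped. Scheduling it
    # instead mirrors timer:apply_after(0, mnesia, stop, []):
    threading.Timer(0, stop_mnesia).start()

terminate()              # returns immediately; the stop runs on the timer thread
stopped.wait(timeout=1)
print(stopped.is_set())  # prints True
```

In the Erlang case the timer server, not a process of your application, makes the call, so it survives the application shutdown and runs once the application_controller is free again.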

How to stop a Flink job using REST API

I am trying to deploy a job to Flink from Jenkins. Thus far I have figured out how to submit the jar file that is created in the build job. Now I want to find any Flink jobs running with the old jar, stop them gracefully, and start a new job utilizing my new jar.
The API has methods to list jobs, cancel jobs, and submit jobs. However, there does not seem to be a stop-job endpoint. Any ideas on how to gracefully stop a job using the API?
Even though the stop endpoint is not documented, it does exist and behaves similarly to the cancel one.
Basically, this is the bit missing in the Flink REST API documentation:
Stop Job
A DELETE request to /jobs/:jobid/stop.
Stops a job; on success the result is {}.
For those who are not aware of the difference between cancelling and stopping (copied from here):
The difference between cancelling and stopping a (streaming) job is the following:
On a cancel call, the operators in a job immediately receive a cancel() method call to cancel them as soon as possible.
If operators are not stopping after the cancel call, Flink will start interrupting the thread periodically until it stops.
A “stop” call is a more graceful way of stopping a running streaming job. Stop is only available for jobs
which use sources that implement the StoppableFunction interface. When the user requests to stop a job,
all sources will receive a stop() method call. The job will keep running until all sources properly shut down.
This allows the job to finish processing all inflight data.
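Assuming the undocumented endpoint described above, the call can be issued with any HTTP client. A sketch using Python's standard urllib (host and job id are placeholder values):

```python
from urllib.request import Request, urlopen

def stop_request(base_url: str, job_id: str) -> Request:
    """Build the DELETE /jobs/:jobid/stop request described above."""
    return Request(f"{base_url}/jobs/{job_id}/stop", method="DELETE")

req = stop_request("http://flink-jobmanager:8081",
                   "4c88f503005f79fde0f2d92b4ad3ade4")
print(req.get_method(), req.full_url)
# prints: DELETE http://flink-jobmanager:8081/jobs/4c88f503005f79fde0f2d92b4ad3ade4/stop
# urlopen(req) would actually send it; on success the body is {}
```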
As I'm using Flink 1.7, below is how to cancel/stop a Flink job on that version. I have tested this myself.
Request path:
/jobs/{jobid}
jobid - 32-character hexadecimal string value that identifies a job.
Request method: PATCH
Query parameters:
mode (optional): String value that specifies the termination mode. Supported values are "cancel" and "stop".
Example
10.xx.xx.xx:50865/jobs/4c88f503005f79fde0f2d92b4ad3ade4?mode=cancel
The host and port are shown when you start a yarn-session; the jobid is returned when you submit a job.
Ref:
https://ci.apache.org/projects/flink/flink-docs-release-1.7/monitoring/rest_api.html
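The 1.7-style call above (PATCH with a mode query parameter) can be built the same way; the host/port here is the masked example value from above and the job id is a placeholder:

```python
from urllib.request import Request

def terminate_request(base_url: str, job_id: str, mode: str = "cancel") -> Request:
    """Build the Flink 1.7 PATCH /jobs/:jobid?mode=cancel|stop request."""
    assert mode in ("cancel", "stop")
    return Request(f"{base_url}/jobs/{job_id}?mode={mode}", method="PATCH")

req = terminate_request("http://10.xx.xx.xx:50865",
                        "4c88f503005f79fde0f2d92b4ad3ade4")
print(req.get_method())  # prints PATCH
```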

How to spawn an Erlang process dynamically for one activity and kill it once the activity is done

Many parallel requests come in to one Erlang/OTP (gen_server) process, and one process is not sufficient to handle them.
I could use a fixed-size pool of identical processes via Poolboy or worker_pool, but I don't want a fixed pool.
I want to dynamically create a process to handle each activity and have it die once its work is done, so that I have N active processes for N parallel requests, each terminating when its processing completes.
How can I achieve this?
Use the Erlang supervisor module and set the restart type to transient in the child spec.
When your event comes, start a new child to handle it, and when the work is done, exit the process with reason normal.
Supervisor behaviour info: Design - API
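The process-per-request shape, independent of OTP, looks like this: one short-lived worker per request, each ending on its own when its work is done, the way a transient child exiting with reason normal is not restarted. A Python analogue using threads:

```python
import threading

results = []
lock = threading.Lock()

def handle(request):
    # do the work for this one request, then simply return;
    # a thread that returns "dies" on its own, like a transient
    # child exiting with reason 'normal'
    with lock:
        results.append(request * 2)

# spawn one worker per incoming request: N workers for N requests
threads = [threading.Thread(target=handle, args=(r,)) for r in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # prints [0, 2, 4, 6, 8]
```

In OTP you would get the same lifecycle from supervisor:start_child/2 on a simple_one_for_one supervisor, with the supervisor cleaning up crashed children for you.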

How to not offer a task to specific worker on Twilio

I am new to Twilio and I have been facing an issue while designing an outbound dialer (currently preview dialing). If a worker rejects a task, the same task should not be offered to that worker again.
How do I handle this case?
Typically, if a worker rejects a task, the worker should be moved to an unavailable activity. Otherwise, if the worker is the only available and qualified worker, TaskRouter will continue to create new reservations.
You can specify a new activitySid upon rejection, so that the worker is moved to an unavailable activity at the same time:
https://www.twilio.com/docs/api/taskrouter/worker-js#reservation-reject
Here, making the worker's activity unavailable simply means the worker will not be offered any task.
But let's look at a more complicated use case where a Worker can accept, reject or cancel tasks. They need to be available to make this choice.
If you have only that agent, and they are available, then there is no way to prevent that agent from receiving the Task unless you manipulate the Task attributes or Worker attributes so that TaskRouter doesn't assign the Task. For example, you could update the Task attributes to hold a list of rejected worker SIDs, and then in the workflow require worker.sid NOT IN task.rejectedWorkers.
And the ability to use this in a Target Workers expression just shipped as a bug fix today! It should look like:
worker.sid NOT IN task.rejectedWorkers
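The effect of that expression can be simulated locally. A sketch of how TaskRouter would filter workers given a rejectedWorkers list kept in the task attributes (the attribute name follows the expression above; worker SIDs are made up):

```python
def eligible_workers(workers, task_attributes):
    """Filter out workers whose SID appears in task.rejectedWorkers,
    mimicking the target expression: worker.sid NOT IN task.rejectedWorkers."""
    rejected = task_attributes.get("rejectedWorkers", [])
    return [w for w in workers if w["sid"] not in rejected]

def reject(task_attributes, worker_sid):
    """On rejection, record the worker's SID in the task's attributes."""
    task_attributes.setdefault("rejectedWorkers", []).append(worker_sid)
    return task_attributes

workers = [{"sid": "WKaaa"}, {"sid": "WKbbb"}]
task = {}
reject(task, "WKaaa")
print([w["sid"] for w in eligible_workers(workers, task)])  # prints ['WKbbb']
```

In the real system the reject step would be an update of the Task's attributes through the TaskRouter REST API, and the filter would be the workflow's Target Workers expression rather than client-side code.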

Erlang init_per_group terminates gen_server

Common Test's init_per_group/2 terminates a gen_server when it is started with gen_server:start_link,
but it is fine to start the server with gen_server:start.
The gen_server can be started with either method (start or start_link) in init_per_suite/1 and init_per_testcase/2.
Why is it not possible to start a gen_server in init_per_group/2 with gen_server:start_link?
This happens because init_per_group is run in a separate process, just like each test case, and that process exits with an exit reason that communicates information about success/failure of group initialisation. From test_server:run_test_case_eval:
exit({Ref,Time,Value,Loc,Opts}).
Since the gen_server is linked to the process that runs init_per_group, and since the exit reason is not normal and the gen_server is not trapping exits, the gen_server process exits with that same exit reason.
On the other hand, init_per_testcase is run in the same process as the test case itself, so this problem does not appear.
