Can you reuse code in different flows in ProcessMaker V4?

Is it possible with version 4 of ProcessMaker to reuse code across different flows?
Can you have common variables, triggers, etc. for each process you perform?

Yes, the scripts you define can be used in any process; there is no constraint tying a script to the processes you can use it in.
You can also define environment variables, which should be accessible from everywhere (I have not used them myself yet).
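For illustration, here is a minimal sketch of what such a reusable script might look like, assuming a Python script task that receives the request data and its configuration as dictionaries and hands back a dictionary of results (the exact input/output contract and the UPLOAD_BUCKET environment variable below are assumptions, not taken from a specific ProcessMaker setup):

```python
import os


def run(data, config):
    """Reusable script body: normalize a name from the request data and
    attach a deployment-wide setting read from an environment variable.

    `data` and `config` are assumed to be plain dictionaries supplied by
    the engine; UPLOAD_BUCKET is a hypothetical environment variable
    defined once and shared by every process that calls this script."""
    full_name = " ".join(
        part
        for part in (data.get("first_name", "").strip(),
                     data.get("last_name", "").strip())
        if part
    )
    return {
        "full_name": full_name,
        "upload_bucket": os.environ.get("UPLOAD_BUCKET", ""),
    }
```

Because the script itself knows nothing about any particular process, any process that passes compatible data can call it.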

Related

Helm subchart conditionally include sibling?

I am migrating a micro-service system to Helm. The system has roughly 30 distinct deployments depending on the installation context. We are using Helm 3. Our current layout is a three-tier chart/subchart hierarchy organised by functionality that may or may not be required in a given context. The subcharts grouped under a 2nd-level subchart usually need to be enabled/disabled together, which is easy to do by disabling their parent in the top-level values file. However, there are some scenarios where grandchild charts depend on an "uncle" chart, and I'm having difficulty finding an elegant solution to these situations.
What are strategies that have been used successfully in other charts?
Two scenarios that currently fall into this category for me are:
1. I would like to have a global "feature flag" that lets the installer decide whether a PVC should be created and mounted on the applicable pods so that they can log to a central place for later retrieval (ELK, I know, I know...). If the flag is set, the PVC should be created and the deployments will mount it. If not, no PVC should be created and an emptyDir is used instead.
2. Some of the deployments use a technical "account" to communicate with each other. So when these services are enabled, I'd like to create a secret with the username/password and run a Job to create the user in our identity provider. That same secret would then be added to the applicable deployments' environment variables. There are a handful of these technical accounts that are reused by multiple deployments, so I'd like to create each secret and run the user-creation Job only once.
Many thanks for any hints or helpful experience that you can send my way.

Durable Tasks sub-orchestration with microservices

I'm attempting to use Azure Durable Tasks to orchestrate some microservices, but I'm running into a small gap in understanding how task hubs work, as well as how to coordinate the projects correctly.
I'm trying to create a main orchestrator that is in charge of kicking off sub-orchestrations to do the actual work. Below is a diagram of what I'm trying to achieve.
The idea is that each .NET project will be able to scale independently of the others, so if .NET project 2 were under quite a bit of load I'd be able to scale that project only and not have to worry about the other two projects. The problem I'm running into is that, from what I understand, the task hub queues are shared by all the services, so there is no way to have each process focus only on its own work; each project can see everything in the queue, and one project may dequeue a message intended for project 2. Is this correct?
From reading the documentation it isn't clear that I can send project 2 its sub-orchestration messages and send project 3 its specific orchestrations.
Am I thinking about this problem incorrectly? Is there a different way I should approach it?
What you want cannot be achieved.
As of now, Azure Functions only allows orchestrator functions to call activity and sub-orchestrator functions that exist in the same function app. The main reason is a technical one: queues within a task hub are shared across all functions, so there's no way to guarantee that a message intended for FunctionAppA does not get picked up by FunctionAppB.
If cross-project communication is required, the correct approach is to use HTTP or queues.
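For reference, this is roughly what the supported pattern looks like inside a single function app, sketched with the Python durable-functions programming model (the function names "SubOrchestratorA" and "SaveResult" are placeholders, not part of any real app):

```python
# Minimal sketch: an orchestrator and the sub-orchestrations it starts
# must live in the same function app and share the same task hub.
import azure.durable_functions as df


def main_orchestrator(context: df.DurableOrchestrationContext):
    work_items = context.get_input() or []

    # Fan out to sub-orchestrations defined in this same function app.
    tasks = [
        context.call_sub_orchestrator("SubOrchestratorA", item)
        for item in work_items
    ]
    results = yield context.task_all(tasks)

    # Hand the combined results to an activity function.
    yield context.call_activity("SaveResult", results)
    return results


main = df.Orchestrator.create(main_orchestrator)
```

Splitting the orchestrator and its sub-orchestrators across separately scaled function apps is exactly what this model does not support, which is why the answer above suggests HTTP or queues for cross-app hand-offs.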

Complex/Orchestrated CD with AWS CodePipeline or others

Building an AWS serverless solution (Lambda, S3, CloudFormation, etc.) I need an automated build solution. The application should be stored in a Git repository (preferably Bitbucket or CodeCommit). I looked at Bitbucket Pipelines, AWS CodePipeline, CodeDeploy, and hosted CI/CD solutions, but it seems that all of these do something static: they receive a dumb signal that something changed and rebuild the whole environment, as if it were one app rather than a distributed application.
I want to define ordered steps of what to do per change, depending on the file type.
E.g.
1. Every updated .js file containing Lambda code should first be used to update the existing Lambda function.
2. After that, every new or changed CloudFormation file/stack should be used to update or create stacks; there may be a required order (they import values from each other).
3. After that, the .js code for new Lambda functions should be used to update the code of the Lambdas created in the previous step.
Resources that were not changed should NOT be updated or recreated!
It seems that my pipelines should be ordered AND have the ability to filter input (e.g. only .js files from a certain path) and also receive as input the name(s) of the changed resource(s).
I don't seem to find this functionality within AWS or hosted Git solutions like Bitbucket, or in CI/CD pipelines like CircleCI or Codeship, AWS CodePipeline, CodeDeploy, etc.
How come? Doesn't anyone need this? It seems like a basic requirement in my eyes...
I looked again at the available AWS tooling and came to the following conclusion:
When coupling CodePipeline to a CodeCommit repository, every commit puts a whole package of the repository on S3 as input for the pipeline. So not only the changes, but everything.
In CodePipeline there is the orchestration functionality I was looking for. You can have actions for every component, such as create-change-set and execute-change-set for a SAM component, and you have control over the order of them all.
But:
- Since all the code is given as input, I assume all actions in the pipeline will be triggered even for a small change that does not affect 99% of the resources. Under the hood, SAM or CloudFormation will determine what did or did not change, but it is not very efficient. See my post here.
- I cannot see in the pipeline overview which one ran last and what its status was...
- I cannot temporarily disable a pipeline or trigger it with custom input.
In the end I think I will make a main pipeline with custom Lambda code that determines what actually changed using the CodeCommit API, and split all the actions into sub-pipelines. From the main pipeline I will push the input they need to S3 and execute them.
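A rough sketch of what that change-detection Lambda could look like, using boto3 (the repository name, sub-pipeline names, and path prefixes below are hypothetical placeholders, and pushing the per-pipeline input to S3 is left out):

```python
# Hypothetical sketch: ask CodeCommit which files changed between two
# commits, then start only the sub-pipelines whose paths were touched.
import boto3

codecommit = boto3.client("codecommit")
codepipeline = boto3.client("codepipeline")

# Map a path prefix in the repository to the sub-pipeline responsible for it.
PIPELINES_BY_PREFIX = {
    "lambdas/": "deploy-lambdas-subpipeline",
    "cloudformation/": "deploy-stacks-subpipeline",
}


def handler(event, context):
    # Assumed to be provided by whatever triggers this Lambda.
    before = event["beforeCommitId"]
    after = event["afterCommitId"]

    changed_paths = set()
    paginator = codecommit.get_paginator("get_differences")
    for page in paginator.paginate(
        repositoryName="my-repo",
        beforeCommitSpecifier=before,
        afterCommitSpecifier=after,
    ):
        for diff in page["differences"]:
            blob = diff.get("afterBlob") or diff.get("beforeBlob")
            changed_paths.add(blob["path"])

    started = set()
    for path in changed_paths:
        for prefix, pipeline in PIPELINES_BY_PREFIX.items():
            if path.startswith(prefix) and pipeline not in started:
                codepipeline.start_pipeline_execution(name=pipeline)
                started.add(pipeline)

    return {"changed": sorted(changed_paths), "started": sorted(started)}
```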
(I'm not allowed to comment, so I'll try and provide an answer instead - probably not the one you were hoping for :) )
There is definitely a need, and at Codeship we're looking into how best to support FaaS/serverless workflows. It's been a bit of a moving target over the last few years, but common practices are starting to emerge and mature to a point where it makes more sense to start codifying them.
For now, it seems most people working in this space have resorted to scripting (either with the Serverless framework, or directly against the FaaS providers), but everyone is struggling with the issue of deploying just what has changed vs. deploying everything, as you point out. Adding further complexity with sequencing obviously just makes things harder.
Most services (Codeship included) will allow you some form of sequenced/stepped approach to deploying, but you'll have to do all the heavy lifting of working out what has changed, etc.
As to your question of "How come?", I think it's purely down to how fast the tooling has been changing lately, combined with how few teams are really doing this. There's a huge push for larger companies to move to K8s, and I think they've basically just drowned out the FaaS adopters. Not that it should be like that, or that we at Codeship don't want to change that; it's just how I personally see things.

Debugging Amazon SQS consumers

I'm working with a PHP frontend which connects to a distributed back end, using Amazon SQS and a variety of message types and message consumers. I'm trying to come up with a way to safely debug those consumers, as we don't want message handlers with new, untested code consuming end-user messages, risking the messages being lost or incorrectly processed.
The actual message queue names are hardcoded as PHP constants in a class, so my first tactic was to create two different sets of queues, one for production and another for debugging, and to externalise the queue name constants into two different files. Depending on whether our debug condition is true or not, I wanted to include one or the other of those constant definitions and assign the constants in the included file to the class constants which currently have the names hardcoded.
This doesn't seem to work though, because constants act like class variables in PHP whereas I am trying to assign the values like instance variables. The next tactic was to see if there was anything on Amazon's side that would allow us to debug our message consumers transparently without adding lots of hacks to our code, but I couldn't see anything there that facilitated this. I'd love to know if anyone else has experienced (and ideally, solved) this problem.
SQS doesn't provide a way to inspect the contents of messages in the queue, or for the sender to see if any consumers are failing to process messages.
A common approach to this problem would be to set up two sets of queues as you suggest and have the producer post the same message onto both queues. That way you can debug your code against a stream of production messages without affecting the actual production queue.
I'd recommend moving the decision of which queue to use out of your code and into config, and then deploying different config files to your development boxes vs your production boxes. The risk is always that a development box ends up talking to production systems, so having a single consistent approach to configuring those endpoints across all your code is much less risky than doing it on an ad-hoc basis each time you call out to a service.
I'd also recommend putting your production and development queues in different AWS accounts with different access credentials. That way you can give your production account permission to publish to the development account's queue, but you can guarantee that your development systems can't read from the production queue.
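As an illustration of that config-driven approach, here is a minimal sketch in Python with boto3 (the config file name and queue names are placeholders; the same idea carries over to the PHP SDK):

```python
# Hypothetical sketch: queue endpoints come from a per-environment config
# file instead of constants in code, so production and debug deployments
# simply ship different "queues.json" files.
import json

import boto3

with open("queues.json") as fh:     # deployed per environment
    QUEUES = json.load(fh)          # e.g. {"orders": "https://sqs.../orders-debug"}

sqs = boto3.client("sqs")


def consume(queue_name):
    """Read one batch of messages from the configured queue."""
    response = sqs.receive_message(
        QueueUrl=QUEUES[queue_name],
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,
    )
    return response.get("Messages", [])


def publish(queue_name, body, mirror_queue_urls=()):
    """Producer-side helper: post the message to the configured queue and,
    optionally, to mirror (debug) queues as suggested above."""
    for url in (QUEUES[queue_name], *mirror_queue_urls):
        sqs.send_message(QueueUrl=url, MessageBody=body)
```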

Why aren't global (dollar-sign $) variables used?

I've been hacking around Rails for a year and a half now, and I quite enjoy it! :)
In Rails, we make a lot of use of local variables, instance variables (like @user_name) and constants defined in initializers (like FILES_UPLOAD_PATH). But why doesn't anyone use global "dollarized" variables ($) like $dynamic_cluster_name?
Is it because of a design flaw? Is it performance related? A security weakness?
Is it because of a design flaw?
Design... flaw? That's a design blessing, a design boon, a design merit, everything but a flaw! Global variables are bad, and they are especially bad in web applications.
The point of using global variables is keeping, and changing, global state. That works in simple single-threaded scripts (no, not well, it works awfully, but still, it works), but in web apps it simply does not. Most web applications run concurrent backends: several server instances that respond to requests behind a common proxy and load balancer. If you change a global variable, it gets modified in only one of the server instances. Essentially, a dollar-sign variable is no longer global once you're writing a web app with Rails.
Global constants, however, still work, because they are constants: they do not change, and having several copies of them on different servers is fine because they will always be equal everywhere.
To store mutable global state you have to employ more sophisticated tools, such as databases (SQL and NoSQL; ActiveRecord is a very nice way to access the DB, use it!), cache backends (memcached), or even plain files (useful in rare cases). But global variables simply don't work.
Global variables are often a sign of bad design, and can be a source of bugs due to concurrency issues. Global constants don't really have these issues.
Instead of using a global variable, consider using a singleton or a class variable. That way, you can limit access to the shared state to a small part of your code, making it easier to avoid these problems.
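The shape of that idea, sketched in Python to stay consistent with the other sketches on this page (the names are made up; in Ruby the same role is usually played by a module or a class with a class-level accessor):

```python
# Hypothetical sketch: confine shared state behind one access point instead
# of reading/writing a global from all over the codebase. Note this only
# helps within a single process; across several server instances you still
# need a shared store (database/cache) as described above.
import threading


class ClusterSettings:
    """The single place that owns the "dynamic cluster name" state."""

    _lock = threading.Lock()
    _cluster_name = None

    @classmethod
    def set_cluster_name(cls, name):
        with cls._lock:
            cls._cluster_name = name

    @classmethod
    def cluster_name(cls):
        with cls._lock:
            return cls._cluster_name


# Callers go through the accessor; nothing else touches the state directly.
ClusterSettings.set_cluster_name("cluster-a")
print(ClusterSettings.cluster_name())
```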
I once used them to keep FTP connections alive across AJAX calls for a web-based FTP client. This allowed the user to interact with their FTP site repeatedly without having to reconnect for every action performed.
So one nice benefit of globals in Ruby is that you can safely store resource type objects in them.
The apparent lack of global usage is an indicator of a flaw in the global-variable concept, not in Ruby's implementation of it. In fact, I didn't even know Ruby had a $global syntax. They aren't needed, so I have never looked for them. Good Ruby code never needs them.
