Can Temporal workflows handle/manage serverless workloads? - google-cloud-run

I just started looking at Temporal and it looks like a great way to orchestrate microservices. I have Knative- and Cloud Run-based microservices in my project, and I would like to adopt Temporal to orchestrate the workflow between my services.
From a quick look through the docs I couldn't figure out whether Temporal can manage serverless microservices (Knative/Cloud Run). Have you used Temporal with serverless workloads in your project? If so, can you share your experience?
Thanks

It looks like all Temporal code runs inside a (persistent) Temporal server. That probably makes it a poor fit for an environment like Cloud Run or Knative (or AWS Lambda containers).
Looking further through the docs, it also appears that multiple Temporal servers address each other directly through their own clustering protocol.
From the video at the start, it does seem like you could use an Activity to encapsulate a call to a service running on Knative or Cloud Run.
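As a rough illustration of that pattern with Temporal's Go SDK (a minimal sketch, not a vetted implementation; the service URL, names, and timeout are made up), the Worker hosting the Activity runs somewhere persistent while the thing it calls stays serverless:

```go
package app

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"

	"go.temporal.io/sdk/workflow"
)

// CallServerlessService is an Activity wrapping a plain HTTP call to a
// serverless endpoint. The Worker that hosts this Activity needs to run
// somewhere persistent; the Knative/Cloud Run service stays serverless.
func CallServerlessService(ctx context.Context, url string) (string, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return "", err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err // Temporal retries per the Activity's retry policy
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("service returned %d: %s", resp.StatusCode, body)
	}
	return string(body), nil
}

// OrchestrationWorkflow durably strings serverless calls together.
func OrchestrationWorkflow(ctx workflow.Context) (string, error) {
	ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
		StartToCloseTimeout: 30 * time.Second,
	})
	var result string
	// Hypothetical Cloud Run URL; substitute your own service.
	err := workflow.ExecuteActivity(ctx, CallServerlessService,
		"https://my-service-abc123-uc.a.run.app").Get(ctx, &result)
	return result, err
}
```

Temporal takes care of retries, timeouts, and durable state for the orchestration, while the actual work happens in whatever serverless service the Activity calls.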

Related

Azure Container Instances: container per customer

I have a monolithic web application based on .NET.
I am looking at multiple articles and trying to figure out whether Azure Container Instances (ACI) or a similar service would be the right one to use.
The application will run 24/7, and I guess this is where the confusion comes in: would it be normal to have an always-on application running on ACI?
What I am trying to achieve is a container per customer, where each customer gets one or more instances that they own. The other question is cost and scalability: I would expect to have thousands of containers, so perhaps I should be looking at Kubernetes?
Thanks.
Here is my understanding. I'm pretty new to both ACI and Kubernetes, so treat this as a suggestion and not a definitive answer 🙂.
Azure Container Instances is a quick, easy, and cheap way to run a single instance of a container in Azure. However, it doesn't scale very well on its own (it can scale up, but not out, and not automatically), and it lacks the many container-orchestration features that Kubernetes offers.
Kubernetes offers a lot more, such as zero-downtime deployments, scaling out with multiple replicas, and many other features. It is also a lot more complex, costs more, and takes much longer to set up.
I think ACI is a bit too simple to meet your use case.
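To make the Kubernetes option concrete, here is a rough sketch of what "container per customer" could look like with client-go: one Deployment per customer, created programmatically. The namespace, image, and naming scheme are all hypothetical:

```go
package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

// createCustomerDeployment provisions one Deployment per customer,
// labelled so it can be scaled and tracked independently.
func createCustomerDeployment(ctx context.Context, cs *kubernetes.Clientset, customerID string) error {
	labels := map[string]string{"app": "tenant", "customer": customerID}
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "tenant-" + customerID},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1), // one instance per customer to start
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "myregistry.azurecr.io/monolith:latest", // hypothetical image
					}},
				},
			},
		},
	}
	_, err := cs.AppsV1().Deployments("tenants").Create(ctx, dep, metav1.CreateOptions{})
	return err
}

func main() {
	// Load the local kubeconfig (~/.kube/config) for an out-of-cluster client.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := createCustomerDeployment(context.Background(), cs, "acme"); err != nil {
		panic(err)
	}
	fmt.Println("deployment created")
}
```

At thousands of customers you would likely layer an operator or templating (Helm, etc.) over this rather than issue raw API calls, but a per-tenant Deployment (or namespace) is the basic unit either way.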

Is Serverless Cloud Native too?

Is serverless a subset or attribute of cloud native? Or is it the other way around: is cloud native a subset or attribute of serverless?
Nathan Aw (Singapore)
Cloud native is a more general approach to building and running applications that take advantage of cloud computing. Serverless is more of an execution model in the cloud.
A Cloud native stack will usually aim to make use of containers and microservices:
Each part of the stack is packaged in its own container. This promotes reproducibility, transparency, and resource isolation. Dynamically orchestrated containers are then actively scheduled and managed to optimize resource utilization.
Applications are also segmented or broken-down into microservices, which are more easily testable and maintainable, are loosely-coupled, and independently deployable.
Serverless describes a model of providing backend services on an as-used basis. The cloud provider (AWS Lambda, Google Cloud Functions, Azure Functions) is responsible for executing a piece of code by dynamically allocating the resources.
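To make that execution model concrete, the entire deployable unit can be a single handler; this minimal sketch follows the exported-handler shape Google Cloud Functions expects for Go (the function name and response are placeholders):

```go
package function

import (
	"fmt"
	"net/http"
)

// HelloServerless is the whole deployable unit: no server setup, no
// provisioning. The provider invokes the handler on each request and
// scales instances (including down to zero) based on traffic.
func HelloServerless(w http.ResponseWriter, r *http.Request) {
	name := r.URL.Query().Get("name")
	if name == "" {
		name = "world"
	}
	fmt.Fprintf(w, "Hello, %s!\n", name)
}
```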
Many of today's apps apply elements of both.

Stateful containers with Kubernetes/Docker: is it possible?

I apologize if this is an ignorant question, but I am building out a Kubernetes cluster, and I really like the idea of abstracting my computing infrastructure from a single cloud provider and steering away from a VM platform. But what about statefulness? I want to be able to set up a MySQL server, for example, and keep that data for life; I want Kubernetes to load balance a MySQL container with a data drive. When we speak about containers we think of life and death within seconds, but what about when we want to keep data around and build a kick-ass data center without VMs? Is there a concept of being stateful in this paradigm?
Kubernetes provides StatefulSets for deploying stateful containers like databases. Follow the link below to understand how to deploy a MySQL database in highly available mode:
https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/
Not ignorant at all; in fact, stateful apps (often DBs) require special consideration in Kubernetes.
StatefulSets are one of the primary Kubernetes objects that exist to help support the use of stateful apps.
This is a decent guide to solving some of the challenges. It's written by Google for Kubernetes Engine but the concepts apply to Kubernetes broadly. There is also a GKE-specific overview.
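As a sketch of why StatefulSets fit the MySQL case, here is roughly what the object looks like when built with the Go types from k8s.io/api (a minimal sketch; the image, size, and names are placeholders, and field names track the library version, matching v0.29+ here). The two key properties are stable pod identity and per-replica persistent storage:

```go
package example

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// mysqlStatefulSet builds a StatefulSet whose pods get stable names
// (mysql-0, mysql-1, ...) and whose volumeClaimTemplate gives each
// replica its own persistent disk that survives restarts and
// rescheduling — exactly the "keep that data for life" requirement.
func mysqlStatefulSet() *appsv1.StatefulSet {
	labels := map[string]string{"app": "mysql"}
	replicas := int32(3)
	return &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "mysql"},
		Spec: appsv1.StatefulSetSpec{
			ServiceName: "mysql", // headless Service providing stable DNS names
			Replicas:    &replicas,
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "mysql",
						Image: "mysql:8.0",
						VolumeMounts: []corev1.VolumeMount{{
							Name:      "data",
							MountPath: "/var/lib/mysql",
						}},
					}},
				},
			},
			VolumeClaimTemplates: []corev1.PersistentVolumeClaim{{
				ObjectMeta: metav1.ObjectMeta{Name: "data"},
				Spec: corev1.PersistentVolumeClaimSpec{
					AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
					Resources: corev1.VolumeResourceRequirements{
						Requests: corev1.ResourceList{
							corev1.ResourceStorage: resource.MustParse("10Gi"),
						},
					},
				},
			}},
		},
	}
}
```

You would submit this with clientset.AppsV1().StatefulSets(namespace).Create(...) like any other client-go call, and the headless Service it references has to exist separately. The linked guide covers the replication setup on top.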

Does developing using RoboMaker tie me tightly to AWS?

I am in the process of learning ROS and Gazebo for a project, and I saw that AWS RoboMaker seems to be a great environment to get something up and working quickly. My concern is that if I design and develop my project in RoboMaker, it will be tough to move over to my own servers should the time come.
Has anyone had any experience with this? Is it pretty simple to move over to another solution/infrastructure or are you married to AWS once you go down the rabbit hole?
Do they demand ownership of your IP?
Basically, if it is "normal ROS", there's a robot encoded in a URDF file and a bunch of standard ROS libraries (gazebo, controller, robot_localization, amcl, etc.), all strung together by a bunch of launch files (aka XML startup scripts).
They can't stop you from using this on your own system.
What they really offer, if you look at what they're pitching (https://aws.amazon.com/robomaker/), is a bunch of custom software, chiefly useful for running a fleet of robots in an industrial setting, involving human interaction/voice, data analytics, and monitoring.
For most people learning, if you were to touch this at all (you can get by entirely on your own on native Ubuntu or an Ubuntu VirtualBox VM), their simulation environment may be nicer out of the box (I haven't played with it). The rest of the benefits, to translate them, amount to robots with data analytics and fancy AI+web tools, plus fleet lifecycle management.
You are most certainly not locked in - just make sure you want what they offer before shelling out anything.
In general, it is also a good idea to put your application in containers using Docker or AWS ECR, so you can deploy your app in any other environment via a CI/CD pipeline, including edge computers.

Web-based complex data-center automation tool

After evaluating existing tools like Ansible Tower, Rundeck, and others, it seems that no tool can fulfill the requirements below.
We have complex data-center servers: clusters of DB and web servers, more than 100 client systems, and other tools like Solr, Redis, and Kafka deployed across the physical servers. The same data-center servers also carry different accounts and Linux users (QA, staging, production, etc.). For now, the metadata about these environments (their web apps, the source code to be used, the servers in each cluster) is all defined in XML, and a bash script that reads that XML is run manually to perform any operation/task (check out the source, build, deploy, start, stop, and other customized operations).
This system would be built by a developer and DevOps engineers together, but what I want to know is: is there a preferable framework (or frameworks) that could be used for it? Are workflow frameworks usable in this case, e.g. Activiti BPMN? Ant is an option, but we need more than just an automation tool: a scheduler, logging, and a lot of other services.
And if this is not the right place, can you please point out where I can ask such a question?
What's required is to create a web-based automation system with:
A UI to define the specific operations to be done: build, deploy a specific web app across the cluster on a specific environment, start/stop a specific web app on a specific machine, or any other customized operation, with multiple selections and a flexible, dynamic way of choosing options.
The front end should show the executing workflows and the operations within them.
A dynamic way to create a set of operations as a single workflow, with the ability to set the dependencies among them.
An interface between this system's back-end code and the already existing bash scripts that do the actual tasks across the DC servers.
A scheduler able to organize these operations with respect to complicated, user-defined dependencies between the workflows (a minimal sketch of this follows the list).
Authentication & authorization services for the users, since there will be a lot of customized roles over the operations, environments, products, etc.
A logging system to save the operations' outputs.
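To clarify the scheduler requirement: at its core it is "run operations in dependency order over a DAG." A minimal sketch in Go, where each operation wraps one of the existing bash scripts (operation names and commands are hypothetical; the real system would add the UI, auth, and persistent logging on top):

```go
package main

import (
	"fmt"
	"os/exec"
)

// Op is one operation in a workflow; Cmd wraps an existing bash script
// and DependsOn lists operations that must finish first.
type Op struct {
	Name      string
	Cmd       []string // e.g. {"bash", "deploy.sh", "webapp-a", "staging"}
	DependsOn []string
}

// runWorkflow executes operations in dependency order (a topological
// sort via depth-first traversal), failing fast on errors or cycles.
func runWorkflow(ops []Op) error {
	byName := map[string]Op{}
	for _, op := range ops {
		byName[op.Name] = op
	}
	done := map[string]bool{}
	visiting := map[string]bool{}
	var run func(name string) error
	run = func(name string) error {
		if done[name] {
			return nil
		}
		if visiting[name] {
			return fmt.Errorf("dependency cycle at %q", name)
		}
		visiting[name] = true
		op, ok := byName[name]
		if !ok {
			return fmt.Errorf("unknown operation %q", name)
		}
		for _, dep := range op.DependsOn {
			if err := run(dep); err != nil {
				return err
			}
		}
		out, err := exec.Command(op.Cmd[0], op.Cmd[1:]...).CombinedOutput()
		fmt.Printf("[%s] %s", name, out) // the real system would persist this
		if err != nil {
			return fmt.Errorf("%s failed: %w", name, err)
		}
		done[name] = true
		return nil
	}
	for _, op := range ops {
		if err := run(op.Name); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// Hypothetical operations mirroring the checkout/build/deploy tasks above.
	err := runWorkflow([]Op{
		{Name: "checkout", Cmd: []string{"echo", "checkout"}},
		{Name: "build", Cmd: []string{"echo", "build"}, DependsOn: []string{"checkout"}},
		{Name: "deploy", Cmd: []string{"echo", "deploy"}, DependsOn: []string{"build"}},
	})
	if err != nil {
		fmt.Println("workflow failed:", err)
	}
}
```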
Why not use a combination of Ansible/Docker and Jenkins? Jenkins can do most of the stuff you described using Pipeline projects/multi-projects, with Ansible for your UI and role-related details.
The task you're describing is basic network orchestration, and there're a bunch of orchestration engines/software out there. However, the fact that you are asking this question means you're just starting out and not ready to invest in a full fledged management product yet.
I strongly suggest that you take a look at Chef for your purposes.
Hope this helps.
I would recommend taking a look at jenkinsx.io if you are targeting Kubernetes and containers (Docker). As part of the Activiti BPM team, we are trying to align with these technologies to make it easy for people to integrate more complex workflows with the DevOps and operations side of their projects.
