How to undeploy selected SOA composite applications from the SOA server
I have 70 applications deployed on soa_server, and a few of them are important to keep for future reference. I need help determining whether we can undeploy selected SOA composite applications from soa_server.
I have found a WLST command for this on the Oracle site; can anyone help with undeploying the selected SOA composite applications from soa_server1 in one go?
Thanks in advance
The use case you describe is suited to the use of SOA-Infra partitions. Using partitions, you can undeploy, retire, shut down, and start up all composites deployed to a partition as a group. To create a partition in Enterprise Manager Fusion Middleware Control, go to soa-infra and select Manage Partitions from the drop-down menu. Deploy the desired composites to this partition, and then you can control them all at once.
This is how to manage a logical group of composites in "one go." Without the partition solution, you will need to script-undeploy each composite separately. These docs provide specific instructions.
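If you go the scripting route, here is a minimal WLST sketch of the idea (untested; the host, port, and the composite names and revisions below are placeholders for your environment):

    # undeploy_composites.py -- run with the SOA-enabled WLST shell, e.g.
    #   $MW_HOME/soa/common/bin/wlst.sh undeploy_composites.py
    # Host, port, and the composite names/revisions are placeholders.

    serverURL = 'http://soahost:8001'

    # Name/revision pairs of the composites to remove; anything you want
    # to keep for future reference simply stays off this list.
    composites = [
        ('OrderProcessing', '1.0'),
        ('LegacyInvoicing', '2.1'),
    ]

    for name, revision in composites:
        print('Undeploying %s revision %s' % (name, revision))
        sca_undeployComposite(serverURL, name, revision)

sca_listDeployedComposites can help you build the list in the first place; check the WLST Command Reference for SOA Suite for the exact signatures in your version.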
Related
We have around 30 Jenkins installs across our organization, both Windows and Linux. They are all used for different tasks and by different teams (e.g. managing Azure, manipulating data, testing applications, etc.).
I have been tasked with looking at whether we could bring these all into one 'Jenkins farm', but as far as I can see such a thing doesn't exist. Ultimately 'we' want some control and to minimize the footprint of Jenkins. The articles I have found don't recommend using a single master server (with multiple nodes) because of the following:
No role-based access for projects (affecting other teams' code)
Plugins can affect all projects
Single point of failure as there is only one master server
Is it best to leave these on separate servers? Are there any other options?
I believe role-based access for projects is possible using the Role Strategy Plugin:
https://wiki.jenkins.io/display/JENKINS/Role+Strategy+Plugin
However, as you pointed out, a single master isn't ideal because plugins can affect all projects. It is probably best to have separate Jenkins masters, but configure agents so that they can be shared across teams/projects.
I'm looking for a solution for running a large number of tasks and monitoring their status on a cluster.
In detail: each task consists of 3-4 processes, each of which runs in a Docker container (each process is a docker run command). All of the processes have to run on the same server.
The volume we're talking about is bursts of several hundred tasks at a time.
I've looked into several solutions, all of them based on Mesos:
Chronos - Seems like it would falter under high load, and in any case it is directed more towards recurring (cron) jobs, while I need one-time (heavy) jobs.
Custom Mesos FW - Seems too low-level for my needs; it would require me to write scheduling and retry mechanisms. I'd save this as a last resort.
Aurora - This seems promising, as each task runs on a single node and is comprised of several processes. A couple of things are missing for me here, though: Aurora seems unable to run several tasks as part of a single job. Since my tasks are all similar but take different input, I could use a single job with many (say 400) instances, and the first process of each task (whose role is to download the input from S3) could download a different set based on the instance ID. Which brings me to another problem: I can't find a working example of using {{mesos.instance}} in .aurora files. Can anyone give me an example? (A rough sketch of what I have in mind is below.)
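Here is roughly what I imagine it would look like (untested; the cluster, role, bucket, and image names are made up, and Process, SequentialTask, Resources, Job, and GB are the objects the Aurora client provides when it evaluates the file):

    # batch.aurora -- illustrative only; all names are placeholders.
    fetch = Process(
      name = 'fetch_input',
      # {{mesos.instance}} expands to this instance's ID (0..instances-1),
      # so each of the 400 instances downloads a different input set.
      cmdline = 'aws s3 cp s3://my-bucket/inputs/{{mesos.instance}}.tar.gz .'
    )

    work = Process(
      name = 'run_task',
      cmdline = 'docker run --rm my-image /work/{{mesos.instance}}.tar.gz'
    )

    task = SequentialTask(
      processes = [fetch, work],
      resources = Resources(cpu = 2.0, ram = 4*GB, disk = 8*GB)
    )

    jobs = [
      Job(
        cluster     = 'devcluster',
        environment = 'devel',
        role        = 'batch',
        name        = 'one_shot_batch',
        task        = task,
        instances   = 400
      )
    ]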
Thanks for all the fish, people
You could also have a look at Kubernetes (which can also be run as a framework on Mesos). Kubernetes has the concept of Pods, which are basically a set of co-located containers. So in your case a pod would consist of your 3-4 processes/containers, and then these pods can be scaled up/down.
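To make that concrete, here is a rough sketch using the official kubernetes Python client (the image names and namespace are placeholders; for one-shot batch work you would probably wrap this in a Kubernetes Job rather than creating bare pods):

    # pod_sketch.py -- illustrative; image names and namespace are placeholders.
    from kubernetes import client, config

    config.load_kube_config()  # picks up your local kubeconfig

    # One pod = the 3-4 co-located containers of a single task; they are
    # scheduled together on the same node and share a network namespace.
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name='task-0', labels={'app': 'batch-task'}),
        spec=client.V1PodSpec(
            restart_policy='Never',
            containers=[
                client.V1Container(name='fetch', image='example/fetch:latest'),
                client.V1Container(name='process', image='example/process:latest'),
                client.V1Container(name='upload', image='example/upload:latest'),
            ],
        ),
    )

    client.CoreV1Api().create_namespaced_pod(namespace='default', body=pod)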
Short comments regarding the other solutions you mentioned:
Chronos: Not really targeting your use case.
Custom FW: Actually not so difficult, but good call to save this as a last resort.
Aurora: A very powerful but also complex framework.
Marathon (which you didn't mention): Targeted at long-running applications that can be easily scaled up and down.
In addition to the excellent other answer, you could check out Two Sigma's Cook, which they have only recently open-sourced but have been using in production at scale for a while.
I have a Rails application running on a single VPS that uses Passenger, Apache, and MySQL. I am moving this to Amazon AWS with the following simple setup:
ELB > Web Server > MySQL
Let's say I am expecting a huge spike in daily users and want to start scaling this out on Amazon AWS using multiple instances. Where does a newbie start on this journey? Do I simply create an AMI from my production-configured web server and have the ASG launch instances when required?
I understand that AWS increases the number of instances via Auto Scaling groups as load demands, but do I need to architect anything differently in my Rails application for it to run at scale across multiple instances?
The problem with scaling horizontally is that it really depends on the application. There's no "just add water" way to do it.
But there are some generic recipes you can follow in the beginning:
Extract the MySQL server into a separate instance that is capable of handling a higher load. Then create as many worker (i.e. app) instances connecting to that MySQL database as you need. You can keep doing so until your MySQL server gets saturated with requests and can no longer keep up with the load.
When you're done with step 1, you can add MySQL replicas and set up master-slave replication. This will leave you with a MySQL cluster where one server accepts writes and all the others are read-only. Then change your application to send SELECTs to the read-only replicas and INSERT/DELETE/UPDATEs to the writable master server. This approach relies on the fact that most applications do reads far more often than writes. That may not be the case for you, but if it is, it will keep you afloat for quite a while, right up until you saturate the master server's write performance.
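The routing itself can live in a thin wrapper around your database driver. A toy sketch of the idea in Python with PyMySQL follows (hostnames and credentials are placeholders; in a Rails app you would reach for a gem such as Octopus instead):

    # rw_split_sketch.py -- illustrative read/write routing; hosts and
    # credentials are placeholders.
    import random
    import pymysql

    master = pymysql.connect(host='mysql-master', user='app', password='secret', db='app')
    replicas = [
        pymysql.connect(host='mysql-replica-1', user='app', password='secret', db='app'),
        pymysql.connect(host='mysql-replica-2', user='app', password='secret', db='app'),
    ]

    def execute(sql, params=None):
        # SELECTs go to a random replica; everything else hits the master.
        is_read = sql.lstrip().upper().startswith('SELECT')
        conn = random.choice(replicas) if is_read else master
        with conn.cursor() as cur:
            cur.execute(sql, params)
            if not is_read:
                conn.commit()
            return cur.fetchall()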
Once you've squeezed everything out of step 2, you can go ahead and shard the data. This is now becoming more and more dependent on your application, but I will give a blind example to convey the idea. Say you have a user-centric application (e.g. a private photo album with no sharing capabilities), and each user has a name. In that case you can create two completely independent clusters, where the first one serves users whose names start with A-M and the second serves those with N-Z. It essentially halves the load on each cluster, but complicates the whole architecture.
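The routing function for such a split can be tiny. A toy Python sketch for the hypothetical A-M / N-Z example above:

    # shard_sketch.py -- toy router for the name-based split described above.
    def shard_for(username):
        # First letter decides the cluster; each shard is a fully
        # independent MySQL cluster with its own master and replicas.
        return 'shard_a_to_m' if username[:1].lower() <= 'm' else 'shard_n_to_z'

    assert shard_for('alice') == 'shard_a_to_m'
    assert shard_for('zoe') == 'shard_n_to_z'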
Though generic, these recipes can help you build a pretty solid application capable of serving millions of users daily before you're forced to turn to more exotic ways of scaling.
Hope this helps!
We have a master Erlang node that runs an application with a supervisor and multiple dynamically added worker processes. For each worker process, another Erlang node is started dynamically. We would like to monitor all nodes on one screen and detect failures so that corrective action can be taken.
Is there a utility that lets us do this?
Thanks,
Yash
I think almost every distributed Erlang application has similar node-management requirements. As for the pman and appmon web tools, I think they are too basic and not enough.
I have read the RabbitMQ source code; it includes a web-based management UI that seems suitable for your requirement.
In addition, I have now started reading the Riak source code; its node-management code seems better than RabbitMQ's. It is also suitable for your requirement.
I think you could read both of them, then adapt them to create something new for your application.
I need something that runs in the background, goes into my database, and scans and updates certain rows based on certain logic. I need this to run about every hour, and my environment is Windows Server 2003 with SQL Server 2005.
Is WF (Windows Workflow Foundation) good for this purpose, or should I create a Windows service? What's the difference between WF and a Windows service? Put simply, what is the best way to do this?
Thanks,
Ray.
I would say use a Windows service, not a workflow. A workflow is for when there is a process involved; as you are just updating records in a table, a service is as good as anything.
Actually, now that I have read your question again, you might want to consider a SQL Server Agent job as well, since those can be scheduled to run at whatever interval you like.
A Windows service is a long-running process that runs in the background on Windows. A Windows Workflow Foundation workflow is used for laying out the steps of a business process. You need to host the workflow runtime within something (a console app, ASP.NET, a Windows service, etc.).
I would use a Windows service if I were you. I've done a lot of work in WF, and the main reason I would say not to do it in WF is that Microsoft is basically rewriting the next version of WF, according to what was said at PDC in October. There will be a way to run legacy 3.0/3.5 activities in 4.0, but my impression was that there are going to be major changes.
Also, it sounds like you don't need the modular activity capability that WF provides. WF would add another layer of abstraction that you are not going to need, and you would still need to write a Windows service to host the workflow you create. WF would be a good choice if you had a business person who needed to constantly change the logic and you wanted to make a big investment in managing this process.
I also agree that, based on what you are saying, you should consider creating an SSIS package in SQL Server, unless you don't have direct access to the database.
A Windows service has worked for me in the past. Scheduling is not a workflow's primary feature, and you would need to provide a host for it, whereas the Windows services infrastructure already contains all of this and is also well documented.