WSO2 Identity Server 5.7 issue with multiple instances running in Docker

I am using WSO2 Identity Server 5.7 in our product and consuming almost all of its SOAP admin APIs: to create tenants, to create user stores, to create service providers, and to manage claims.
Everything works with a single WSO2 instance; all of the CRUD operations above succeed.
However, we use Docker in our production environment, and whenever we scale up so that more than one WSO2 node is running, the operations above sometimes work and mostly fail. I suspect the data is not being synced properly across all of the running nodes.
Is there a solution for this?

This is a known issue [1] that has already been fixed on the public branch. If you build the latest product from source, the fix will be included. Alternatively, you can use WUM [2] to get the fix for IS 5.7.0.
[1] https://github.com/wso2/product-is/issues/5015
[2] https://wso2.com/updates/wum

Related

Can an Akka.net node hosted within a container participate in a cluster outside of the container host?

I'm fairly new to Akka.NET and a total noob when it comes to containers, so please forgive me if this is too simple (but I kind of hope it is).
I'm trying to build a web app cluster using Azure App Services. I want the Lighthouse to be hosted in an Azure container instance. I've been successful putting the cluster together on my local box (without Docker). I've tried standing up a local Docker container with port forwarding, but I haven't been able to get it to work.
Thanks in advance for your help.
You can definitely do this, but since you're using Azure App Services I'd recommend taking a look at Akka.Management and Akka.Discovery.Azure instead.
This eliminates the need to use Lighthouse at all: your nodes can form a cluster on Azure App Service by querying a shared Azure Table Storage table instead.
There's a complete Azure App Services demo that shows how to do this here: https://github.com/petabridge/azure-app-service-akkadotnet
And the relevant code is here: https://github.com/petabridge/azure-app-service-akkadotnet/blob/dev/src/Akka.ShoppingCart/Startup.cs
NOTE: this uses the Akka.Hosting methods, which eliminate 99% of HOCON configuration and tie into Microsoft.Extensions for configuration, hosting, and DI. Akka.Hosting is a relatively new package and just hit stable at the end of 2022. You should definitely use it - all of the documentation and examples will be reworked to incorporate it once Akka.NET v1.5 ships at the end of February 2023.
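To give a feel for the wiring, here is a rough sketch using the Akka.Hosting extension methods from Akka.Remote.Hosting, Akka.Cluster.Hosting, Akka.Management, and Akka.Discovery.Azure as I recall them; the system name, port, connection string, and service name are placeholders, and the linked ShoppingCart repo is the authoritative reference:

    // Program.cs - sketch only; see the linked ShoppingCart sample for the real wiring.
    using Akka.Cluster.Hosting;
    using Akka.Discovery.Azure;
    using Akka.Hosting;
    using Akka.Management;
    using Akka.Management.Cluster.Bootstrap;
    using Akka.Remote.Hosting;
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;

    var host = Host.CreateDefaultBuilder(args)
        .ConfigureServices(services =>
        {
            services.AddAkka("shopping-cart", builder =>
            {
                builder
                    .WithRemoting(hostname: "0.0.0.0", port: 8081)
                    .WithClustering()
                    // Akka.Management + ClusterBootstrap take over the job Lighthouse used to do.
                    .WithAkkaManagement()
                    .WithClusterBootstrap(serviceName: "shopping-cart")
                    // Nodes discover each other through a shared Azure Table Storage table.
                    .WithAzureDiscovery(
                        connectionString: "<azure-table-storage-connection-string>",
                        serviceName: "shopping-cart");
            });
        })
        .Build();

    await host.RunAsync();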

Jenkins configuration as code + SSO SAML: How to generate the first API token programmatically?

I'm deploying a new Jenkins from scratch on a single host using Docker, all on top of AWS.
Its authentication mode is set to SAML (using Okta), and we configure it using JCasC (Configuration as Code).
The deployment strategy we decided on is to deploy a new instance each time a configuration change is made.
However, in order to give our end users a good experience, we want to take a few steps before swapping between the old and the new release:
Put the old instance into quiet mode.
Query the running builds.
When the number of running builds reaches zero, swap the instances.
We have no problem doing this with the API (sketched below), but the problem is that we depend on one single thing: the API token!
How do we programmatically create and retrieve an API token each time a new Jenkins instance is released, when the authentication mode is set to SAML?
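For context, here is roughly what that sequence looks like once a token exists (a C# sketch against Jenkins' documented /quietDown and /computer/api/json endpoints; the URL, user, and token are placeholders - and the token is exactly the dependency we want to eliminate):

    // Sketch of the pre-swap sequence; assumes an already-provisioned API token.
    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Text.Json;
    using System.Threading.Tasks;

    class PreSwapCheck
    {
        static async Task Main()
        {
            using var http = new HttpClient { BaseAddress = new Uri("https://jenkins.example.com") };
            var basic = Convert.ToBase64String(Encoding.ASCII.GetBytes("deploy-bot:11aabbcc..."));
            http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", basic);

            // 1. Put the old instance into quiet mode.
            (await http.PostAsync("/quietDown", null)).EnsureSuccessStatusCode();

            // 2. Poll until no builds are running.
            while (true)
            {
                var json = await http.GetStringAsync("/computer/api/json?tree=busyExecutors");
                using var doc = JsonDocument.Parse(json);
                if (doc.RootElement.GetProperty("busyExecutors").GetInt32() == 0)
                    break;
                await Task.Delay(TimeSpan.FromSeconds(30));
            }

            // 3. Zero running builds: safe to swap the old and new instances.
        }
    }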

Spring Cloud Data Flow Basic Authentication

Spring Cloud Data Flow Server (Local) does not have any dynamic way to set up users and roles, either through the dashboard UI or the shell; i.e., there is no way to add or delete users and their roles while the server is running.
I have been able to get both single-user and file-based authentication and authorization working, but for both of them I had to set up the docker-compose.yml file like so:
spring.cloud.dataflow.security.authentication.file.enabled=true
spring.cloud.dataflow.security.authentication.file.users.bob=bobpass, ROLE_MANAGE
spring.cloud.dataflow.security.authentication.file.users.alice=alicepass, ROLE_VIEW, ROLE_CREATE
spring.cloud.dataflow.security.authentication.file.users.hare=harepass, ROLE_VIEW
However, if I have to add new users with roles, I have to run docker-compose down, edit docker-compose.yml, and then run docker-compose up for the new user's authentication and authorization to work.
Is there any workaround for this?
There isn't any other approach to dynamically add or update users and have the change reflected at runtime in SCDF.
However, in SCDF 2.0 we have redesigned/rewritten the security architecture. In this baseline, we rely on Cloud Foundry's UAA component, which is a standalone application that can run locally, on CF, or on K8s.
Here, you can interact with UAA directly, outside of SCDF: you can add, update, and delete users. You can also centrally manage OAuth token credentials, including remote renewals and revocations. Check out the end-to-end sample demonstration of the new design with SCDF + OAuth + LDAP, all in action.
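For example, creating a user directly against UAA's SCIM /Users endpoint looks roughly like this (a sketch; the UAA URL is a placeholder, and the bearer token is assumed to come from a client holding the scim.write scope):

    // Sketch: add a user to UAA at runtime; no SCDF restart is required.
    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;

    class CreateUaaUser
    {
        static async Task Main()
        {
            using var http = new HttpClient { BaseAddress = new Uri("http://localhost:8080/uaa/") };
            http.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", "<token-with-scim.write-scope>");

            // Minimal SCIM user payload; group/role membership is managed separately in UAA.
            var user = @"{
                ""userName"": ""bob"",
                ""password"": ""bobpass"",
                ""name"": { ""givenName"": ""Bob"", ""familyName"": ""Builder"" },
                ""emails"": [{ ""value"": ""bob@example.com"", ""primary"": true }]
            }";

            var response = await http.PostAsync(
                "Users", new StringContent(user, Encoding.UTF8, "application/json"));
            response.EnsureSuccessStatusCode();
        }
    }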
The recent 2.0 M1 release already includes this improvement - see the blog. Try it out and let us know if you have any questions or feedback.
UPDATE:
I recently also bumped into a UAA web UI from the community. Perhaps the UAA team could consider adding it to the official stack eventually.

AzureWorkerHost: get the URI after startup for Neo4jClient

I am trying to create an ASP.NET project with Neo4jClient, to be hosted on Azure, and I cannot grasp how to do the following:
Get hold of the Neo4j REST endpoint address once the worker role has started. I am seeing a different address each time the emulator spins up an instance of the worker role. I believe I'll need this to create a client, somewhat like this:
neo4jClient = new GraphClient(new Uri("http://localhost:7474/db/data"));
So, any thoughts on how to get hold of the URI after Neo4j is deployed by AzureWorkerHost?
Also, how is the graph database persisted on the blob store? In the example it always deploys a pristine instance of the DB from the zip and updates it, which is probably not correct. I am unable to work out where to configure this.
BTW, I am using Neo4j 2.0 M06, and when it runs in the emulator I get an endpoint somewhat like http://127.255.0.1:20000 in the emulator log, but I am unable to access it from my host machine.
Any clue what might be going on here?
Thanks,
Kiran
AzureWorkerHost was a proof of concept that hasn't been touched in a year.
The GitHub readme says:
Just past alpha. Some known deficiencies still. Not quite beta.
You likely don't want to use it.
The preferred way of hosting on Azure these days seems to be IaaS approach inside a VM. (There's a preconfigured one in VM Depot, but that's a little old now too.)
Or, you could use a hosted endpoint from somebody like GrapheneDB.
To answer your question generally, though: Azure manages all the endpoints. The worker role says "hey, I need an endpoint to bind to!" and Azure works that out for it.
Then, you query this from the web role by interrogating Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.Roles.
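For example (a sketch; "Neo4jWorkerRole" and "Neo4j" are hypothetical names - use whatever role and endpoint names your service definition declares):

    // Resolve the Neo4j worker role's endpoint at runtime instead of hard-coding it.
    using System;
    using System.Linq;
    using Microsoft.WindowsAzure.ServiceRuntime;
    using Neo4jClient;

    var endpoint = RoleEnvironment.Roles["Neo4jWorkerRole"]   // hypothetical role name
        .Instances
        .First()
        .InstanceEndpoints["Neo4j"]                           // hypothetical endpoint name
        .IPEndpoint;

    var client = new GraphClient(new Uri("http://" + endpoint + "/db/data"));
    client.Connect();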
You'll likely not want to use the AzureWorkerHost for a production scenario, as the instances in the deployed configuration will destroy your data when they are re-imaged.
Please review these slides that illustrate step-by-step deployment of a Windows Azure Virtual Machine image of Neo4j community edition.
http://de.slideshare.net/neo4j/neo4j-on-azure-step-by-step-22598695
A Neo4j 2.0 Community virtual machine image will be released with the official release build of Neo4j 2.0. If you plan to use more than 30 GB of data storage, please be aware that the currently supported VM image in Windows Azure's image depot must be configured from the console through remote SSH to the Linux VM.
Continue your development against http://localhost:7474/ and then set up the VM when you are ready to deploy a staging or production build.
Also, you can use Heroku's free Neo4j database deployment, but you must configure basic authentication for your GraphClient connection in Neo4jClient.
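Neo4jClient supports this directly via the GraphClient constructor that takes credentials (the host and credentials below are placeholders; substitute the values from your hosting provider):

    // Basic authentication with Neo4jClient.
    var client = new GraphClient(
        new Uri("http://hostname.example.com:7474/db/data"),
        "app1234",    // username - placeholder
        "secret");    // password - placeholder
    client.Connect();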

How to provide a SaaS customer with a server snapshot for business continuity concerns

I'm proposing a SaaS solution to a prospective client to avoid the need for local installation and upgrades. The client uploads their input data as needed and downloads the outputs, so data backup and maintenance are not an issue, but continuity of the online software service is a concern for them.
Code escrow would appear to be overkill here, and probably of little value. I was wondering: is there an option along the lines of providing a snapshot image of a cloud server that includes a working version of the app, and for that to be in the client's possession for use in an emergency where they can no longer access the software?
This would need to be as close to a point-and-click solution as possible - say, a one-page document with a few steps that a non-web-savvy IT person could follow - for starting up the backup server image and being able to use the app. If I were to create a private AWS EBS snapshot / AMI that includes a working version of the application, and they created an AWS account for themselves, would they be able to kick that off easily enough?
Update: the app is on Heroku at the moment, so hopefully it'd be pretty straightforward to get it running on Amazon EC2.
Host their app at any major PaaS provider, such as EngineYard or Heroku. Check the code into a private GitHub repository and assign them as the owner. That way they have access to the source code and can create a new instance quickly using the repository as the source.
I don't see the need to create an entire service mirror for a Rails app, unless there are specific configuration needs that can't be contained in the project or handled through Capistrano.
