OpenShift WSO2 API Manager redirect error - Docker

I am currently trying to set up WSO2 API Manager on OpenShift. The problem I am running into is that when I try to browse the URL created by the OpenShift route, the application redirects me to the internally created IP address of the publisher app. However, when I launch the container without OpenShift, the application directs me to its intended API login page, which is the Mgt console URL.
I suspect this has to do with how the embedded HAProxy load balancer is behaving. I was able to hack around the configuration by changing the default ports to 443; however, that created a new set of issues, because changing the ports also required hard-coding container hostnames in carbon.xml, and hard-coding settings in the configuration files prevents me from being able to scale up the containers.
Any assistance on this will be much appreciated.
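
For what it's worth, the hostname settings behind the hack described above live in carbon.xml (<HostName> and <MgtHostName>). One way to avoid hard-coding them, so the containers can still scale, is to substitute them at container startup from an environment variable. A minimal sketch, assuming an install under /opt/wso2am (that path and the APIM_HOSTNAME variable are hypothetical) and that both elements exist uncommented in carbon.xml:

```sh
#!/bin/sh
# docker-entrypoint.sh -- a sketch: inject the route hostname at startup
# instead of hard-coding it in carbon.xml. APIM_HOSTNAME is a hypothetical
# env var you would set per deployment; /opt/wso2am is an assumed install path.
CARBON_XML=/opt/wso2am/repository/conf/carbon.xml

sed -i "s|<HostName>.*</HostName>|<HostName>${APIM_HOSTNAME}</HostName>|" "$CARBON_XML"
sed -i "s|<MgtHostName>.*</MgtHostName>|<MgtHostName>${APIM_HOSTNAME}</MgtHostName>|" "$CARBON_XML"

# Hand off to the regular WSO2 startup script
exec /opt/wso2am/bin/wso2server.sh
```

With the hostname supplied per deployment (e.g. set to the OpenShift route's hostname), each replica advertises the route instead of its internal pod IP, which is what was causing the redirect.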

Related

Trouble connecting to Docker application via subdirectory instead of port

Preface: I'm new to the whole web hosting thing, so I apologize if any information I give doesn't make sense or is inaccurate. I will do my best to explain things.
I currently have a self-hosted server running Windows Server 2019 that hosts two sites via IIS. I recently created an application that runs in a Docker container and hosts a website on port 40444. I would like to access this site via a specific subdirectory on my website instead of the port (www.mywebsite.com/website3 instead of www.mywebsite.com:40444). For clarification, here is an example of what I'm looking to do:
www.mywebsite.com/website1 (hosted on IIS)
www.mywebsite.com/website2 (hosted on IIS)
www.mywebsite.com/website3 (hosted on docker via port 40444)
I was able to get a basic reverse proxy set up and successfully got the Docker application to show on localhost/, but I would prefer using a subdirectory if possible.
I attempted to change (.*) to (.*)website3$ and it did what I wanted, but the website cannot load any files (CSS, JS, etc.) and gives me the following error:
https://www.mywebsite.com/css/style.css net::ERR_ABORTED 404 (Not Found)
If IIS isn't the best option to accomplish what I need, I am more than happy to use a different solution. As I mentioned before, I'm new to web hosting, and IIS was just the simplest to set up.
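
For reference, the inbound rule usually looks like the sketch below; it assumes the URL Rewrite module plus Application Request Routing (ARR) with proxying enabled, and it matches the /website3 prefix so it can be stripped before forwarding (anchoring on a trailing website3, as in (.*)website3$, only matches the page itself):

```xml
<!-- web.config at the site root -- a sketch, not a drop-in config -->
<system.webServer>
  <rewrite>
    <rules>
      <rule name="ProxyWebsite3" stopProcessing="true">
        <!-- Match /website3 and everything under it -->
        <match url="^website3/?(.*)" />
        <!-- Forward to the Docker container, minus the prefix -->
        <action type="Rewrite" url="http://localhost:40444/{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```

The 404 on /css/style.css happens because the proxied pages emit root-relative links: the browser requests them from the site root, where no rewrite rule (and no such file) exists. The usual fixes are to configure the app to generate its links under a /website3 base path, or to add IIS outbound rewrite rules that rewrite the URLs in the returned HTML.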

How to connect via HTTP instead of default HTTPS on a NiFi Docker container

I am currently running the latest versions of NiFi and PostgreSQL via Docker Compose.
As of the 1.14 update of NiFi, when you access the UI in a web browser it connects via HTTPS, asking you for an ID and password every time you log in. It's too cumbersome to go to the nifi-app.log file and look for the credentials every time I access the UI. I know that you can change the setting that keeps HTTPS as the default method, but I am not sure how to do that in a Docker container. Can anyone help me with this?
You could use one of the environment variables, such as AUTH, described in the image documentation; the full explanation can be found there.
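
Concretely, with the apache/nifi image you can switch the UI back to plain HTTP by setting an HTTP port in the environment; the image then skips the HTTPS and generated-credentials setup. A minimal docker-compose sketch (the service layout is assumed, not from the question):

```yaml
# docker-compose.yml -- a sketch for serving the NiFi UI over plain HTTP
services:
  nifi:
    image: apache/nifi:latest
    ports:
      - "8080:8080"
    environment:
      # Configuring an HTTP port makes the image serve the UI over HTTP
      # instead of the default HTTPS with generated credentials
      NIFI_WEB_HTTP_PORT: "8080"
```

If you would rather keep HTTPS, the SINGLE_USER_CREDENTIALS_USERNAME and SINGLE_USER_CREDENTIALS_PASSWORD variables let you pin the login instead of fishing it out of nifi-app.log. Only use the plain-HTTP variant on a trusted network.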

Jenkins: pointing the server to a newly created domain

Good morning,
I have created a Jenkins server in AWS, and I am able to access the platform using the IP of the server; however, I want to access it more securely.
I have set up a subdomain on my hosting service and set the IP of the server as an A record. I have also defined this in the configuration section of Jenkins.
However, when I access the URL https://domainname I get nothing, but if I add :8080 at the end of it, it takes me to the Jenkins platform.
What am I missing here?
Thanks
I recommend using an AWS Application Load Balancer to access your Jenkins web server.
It will host the HTTPS certificate (if you are using AWS Certificate Manager), and you will be able to configure DNS to point to the ALB's name.
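
To add some context: an A record only maps the name to the server's IP; it does not change the port, so https://domainname lands on port 443 where nothing is listening, while Jenkins listens on 8080. An ALB is one fix; if you'd rather terminate TLS on the instance itself, a reverse proxy works too. A minimal nginx sketch (the server name and certificate paths are placeholders):

```nginx
# /etc/nginx/conf.d/jenkins.conf -- a sketch; assumes you already have a
# certificate for the subdomain (paths below are placeholders)
server {
    listen 443 ssl;
    server_name jenkins.example.com;

    ssl_certificate     /etc/ssl/certs/jenkins.crt;
    ssl_certificate_key /etc/ssl/private/jenkins.key;

    location / {
        # Forward to Jenkins listening locally on 8080
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

The ALB approach is the same idea one layer out: the 443 listener terminates TLS with the ACM certificate, and its target group forwards to port 8080 on the instance.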

Azure Cloud Service microservice to K8s Migration

I am in the process of evaluating moving a very large Azure Cloud Service (Web Role) microservice architecture to AKS and have been working through the necessary code and build changes to support it.
In order to replicate the production environment locally for the developers, we run nginx on the host with SSL offloading, and DNS A records (hosted in Azure) pointing to 127.0.0.1. When running in the Azure Emulator, the net effect is that a developer can both visit the various web front ends in their browser (i.e. https://myapp.mydomain.dev) and hit the various APIs in the solution (Web API 2) in Postman/cURL, etc.
Additionally, due to how the networking of the Azure Emulator works, the apps themselves can resolve each other through nginx on the host (i.e. the MVC app at https://myapp.mydomain.dev can obtain a token from the IdP web API at https://identity.mydomain.dev and then use that token at the API at https://api.mydomain.dev). This is the critical piece and the source of my question.
All attempts at getting the containers themselves to resolve each other the same way the host OS can (browser/Postman, SSL offloading via nginx) have failed. Many of the instructions out there are, understandably, for Linux containers, but adapting the various docker-compose networking settings to their Windows container equivalents has not yet yielded any success. To keep the development environments aligned with the real systems, which are tenantized and make use of the default mapping in nginx to catch all incoming traffic and route it to a specific user-facing app/container, it is not as simple as determining a "static" method of addressing these on startup; that is why the effort was put into producing the development environments we have today.
Right now, when one service (container) attempts to communicate with another, it ultimately results in a resolution error, as all requests resolve to https://127.0.0.1 due to the DNS A records hosted in Azure for the domain. Since this migration will be a longer-term project, the environments need to co-exist, so changing the way DNS is resolved (the real DNS A records pointing to 127.0.0.1, the host running nginx and handling SSL offloading to the various web roles normally running in the Azure Emulator) is not an option.
Is there a way (with Windows containers) to either:
Allow the container to utilize nginx on the host OS transparently (the app must still call the API at https://api.mydomain.dev), which would cause the traffic to be routed properly to the correct container/port defined in the docker-compose file?
OR
Run nginx in each container, allowing each container to resolve and route appropriately without knowing the IP of the other container, possibly through an alias added to the container's nginx.conf before the service starts?
The platform utilizes OAuth2/OIDC, and it is critical to maintain the full URL to the other services from the application's perspective. Beyond mirroring the production and sandbox environments, these URLs are used for redirect URL and post-logout redirect URL validation, among other things, so using "https://myContainerNameForOtherContainerAlias" is not a workable solution.
Will I have the same problem when setting up the AKS environment as well?
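
One approach to option 1 that preserves the full URLs is to leave nginx on the host untouched and map the dev hostnames to the host from inside each container, so a call to https://api.mydomain.dev leaves the container, reaches nginx on the host, and gets routed back to the right container/port. A minimal docker-compose sketch (service and domain names follow the question; host-gateway is Docker's special alias for the host, available since Docker 20.10 for Linux containers, and may need to be replaced with the NAT network's gateway IP for Windows containers):

```yaml
# docker-compose.override.yml -- a dev-only sketch, not production config
services:
  myapp:
    extra_hosts:
      # Resolve the dev domains to the host, where nginx performs SSL
      # offloading and routes traffic to the correct container/port
      - "identity.mydomain.dev:host-gateway"
      - "api.mydomain.dev:host-gateway"
```

As for AKS: the same need is usually met at the cluster layer instead, e.g. an ingress controller doing the SSL offloading plus a cluster DNS override for the dev domains, so per-container host entries shouldn't be necessary there.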

Jenkins Server - Issues with setting URL

I am trying to set up an internal Jenkins server for our QA team and am facing some issues with the server URL. This is inside a corporate network, and all sorts of firewall and proxy settings are in place; however, we need to access the server only within our internal network. The server runs on a Mac Mini. I was able to install and access the server without any issues using localhost:8080.
I tried to set a custom URL (something like testjenkins.local:8080) under the Manage Jenkins option and was never able to access the server. The only option that worked for me is the IP address (IP:8080); I was able to access the server from other machines in the network using this URL.
The real problem with the above setup is that the machine's IP changes (I am not able to make it static), and hence I won't have an always-working URL.
I would highly appreciate it if anyone could guide me in the right direction.
Given that you have a dynamic IP on your server, a good alternative would be using ngrok. ngrok can expose port 8080 of that server to the internet via secure tunnels, and you can access it via a URL, so changes in the IP won't affect it.
However, ngrok exposes the server to the whole Internet. To make it accessible only to your team, you can add authentication to both the ngrok tunnel and the Jenkins server (would that work for you?).
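
A minimal sketch of that setup, assuming ngrok v3 is installed and authenticated on the Mac Mini (the credentials below are placeholders):

```sh
# Expose the local Jenkins port through a secure tunnel; ngrok prints the
# public URL, which stays stable for the session even if the LAN IP changes
ngrok http 8080

# Same tunnel, but protected with HTTP basic auth at the ngrok edge
# (v3 syntax; user and password are placeholders)
ngrok http --basic-auth "qa-team:use-a-strong-password" 8080
```

Jenkins' own security realm then provides the second layer of authentication mentioned above.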
