Identity Server 4 in Docker using OIDC and Selenium Testing - docker

I am currently trying to run an integration test with Selenium on the following setup:
IdentityServer4 hosted in a .NET Core 3.1 REST service, running in its own Docker container (securityservice)
MVC test web user interface running under .NET Core 3.1, also in its own Docker container (testuserinterface)
The Selenium test runs on my local PC in a .NET Core 3.1 test project
The issue arises when I attempt to access a secure page on the web application: the browser is redirected to the identity server to show the user login page, but the redirect URL is that of the Docker container (e.g. http://securityservice/account/login). This URL is not accessible from my local browser, so my test fails locally.
Is there a way the login URL can be customised (only for test purposes) to be that of the local machine and the locally exposed Docker port (e.g. http://127.0.0.1:dockerport/account/login)?
I have tried many different examples and combinations, so far without success.
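For context, the OpenID Connect registration on the MVC side looks roughly like the sketch below (a minimal sketch only; the scheme names, client id and URL are placeholders rather than my exact configuration). The Authority is the address the browser gets redirected to for login, which is why the container-internal name ends up in the URL.
// Startup.ConfigureServices of the MVC test UI (illustrative sketch;
// needs the Microsoft.AspNetCore.Authentication.OpenIdConnect package)
services.AddAuthentication(options =>
{
    options.DefaultScheme = "Cookies";
    options.DefaultChallengeScheme = "oidc";
})
.AddCookie("Cookies")
.AddOpenIdConnect("oidc", options =>
{
    // the browser is sent to the Authority for login, so a container-internal
    // name here is not reachable from a browser running on the host
    options.Authority = "http://securityservice";
    options.ClientId = "testuserinterface";
    options.ResponseType = "code";
    options.RequireHttpsMetadata = false;
    options.SaveTokens = true;
});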
Any help in this area would be most appreciated.
Thanks in advance,
Stuart

What I have done for local development and testing in some cases is to create a local HTTPS certificate using mkcert, and then add an entry to my local hosts file mapping the certificate's domain to 127.0.0.1.
That way I can use URLs like https://identityservice:6001 to point to my IdentityServer.
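As a rough sketch of that approach (the host name and port are examples, not a prescription):
# create a locally trusted certificate for the identity server host name
mkcert -install
mkcert identityservice
# then map that host name to loopback in the hosts file
# (C:\Windows\System32\drivers\etc\hosts on Windows, /etc/hosts on Linux/macOS)
127.0.0.1   identityservice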

Related

Trouble connecting to Docker application via subdirectory instead of port

Preface: I'm new to the whole web hosting thing, so I apologize if any information I give doesn't make sense or is inaccurate. I will do my best to explain things.
I currently have a self-hosted server running Windows Server 2019 that is hosting two sites via IIS. I recently created an application that runs in a Docker container and hosts a website on port 40444. I would like to access this site via a specific subdirectory on my website instead of the port (www.mywebsite.com/website3 instead of www.mywebsite.com:40444). For clarification, here is an example of what I'm looking to do:
www.mywebsite.com/website1 (hosted on IIS)
www.mywebsite.com/website2 (hosted on IIS)
www.mywebsite.com/website3 (hosted on docker via port 40444)
I was able to get a basic reverse proxy set up and successfully got the Docker application to show on localhost/, but I would prefer using a subdirectory if possible.
I attempted to change (.*) to (.*)website3$ and it did what I wanted, but now the website cannot load any static files (e.g. CSS, JS) and gives me the following error:
https://www.mywebsite.com/css/style.css net::ERR_ABORTED 404 (Not Found)
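For reference, the kind of URL Rewrite rule involved looks roughly like this (an illustrative sketch using the path and port from above, not the exact rule in use; it assumes ARR and the URL Rewrite module are installed):
<!-- web.config of the IIS site: requests to /website3/... are forwarded to the
     container on port 40444. The app's absolute asset paths such as
     /css/style.css still resolve against the site root, which is why they 404
     unless they are rewritten as well. -->
<system.webServer>
  <rewrite>
    <rules>
      <rule name="ReverseProxyToWebsite3" stopProcessing="true">
        <match url="^website3/(.*)" />
        <action type="Rewrite" url="http://localhost:40444/{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>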
If IIS isn't the best option to accomplish what I need, I am more than happy to use a different solution. As I mentioned before, I'm new to web hosting and IIS was just the simplest to set up.

Locally installed webserver is not reachable from docker while containerised webserver is reachable by host IP

I recently hit an issue that took me some time to understand. I have a container running Tomcat, and my UI tests run in another container (selenium/standalone-chrome-debug, with a built-in Selenium server). A non-dockerised Java process drives Chrome inside the Selenium container via http://localhost:4444/wd/hub, and that Chrome opens the application running in the Tomcat container at 192.168.1.66:8080/app. This works perfectly; the only thing I have to do is use my local IP (192.168.1.66:8080/app) instead of localhost:8080/app as the URL of my app.
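To make the working setup concrete, it boils down to roughly this (image name as above; the IP and ports are the ones described):
# Selenium container with the built-in Selenium server
docker run -d -p 4444:4444 selenium/standalone-chrome-debug
# the non-dockerised Java test process talks to http://localhost:4444/wd/hub,
# and the browser inside that container opens the app via the host's LAN IP:
#   http://192.168.1.66:8080/app   (works while Tomcat itself runs in a container)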
Recently I had to do the same thing, but with a locally installed Tomcat instead of the Tomcat container. On the same port 8080, 192.168.1.66:8080/app is no longer reachable from inside the container, and neither is localhost:8080/app. The only working option is host.docker.internal:8080/app. But here is the issue: I also make API calls to that app, and host.docker.internal:8080/app does not work for those, because the API calls are made from outside Docker by the non-dockerised Java process. And I can't use different URLs for the UI and the API, for many reasons. For the API a simple localhost:8080/app would work, but the same URL needs to work for the UI at the same time.
What can I do in this situation?

Access Pivotal SSO tile in local development

Our ops team has configured an SSO tile that connects to ADFS. I am building a sample application that utilizes an SSO service instance. I can deploy my application to PCF and remotely debug my SSO configuration; these things work.
What I need is a way to access the SSO service instance while I am developing on my PC. Otherwise the only way to verify that my code really works is to deploy my application to PCF and either add log statements or configure remote debugging. Both of these are pretty time consuming.
I looked into configuring SSH access to Pivotal services. That works for database service instances, but not for an SSO service instance. Has anyone figured this out?
After repeated trial and error, I found the solution. Posting it here in case someone else has a similar issue.
In PCF, add a new application under your SSO plan. The auth redirect URL for this application should point to your localhost; in my case it is http://localhost:8080.
Run cf env for your application. Copy only the p-identity section and save it to vcap_services.json (a rough sketch of the file is shown after the start command below), then update the clientId and clientSecret with the values from the new application created in the previous step.
Use the following command to start your application
VCAP_APPLICATION=true VCAP_SERVICES=$(cat vcap_services.json) SPRING_PROFILES_ACTIVE=... ./gradlew bootRun
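For reference, the trimmed vcap_services.json ends up with roughly this shape (the field names here are an assumption from memory; copy the actual p-identity block that cf env prints):
{
  "p-identity": [
    {
      "label": "p-identity",
      "name": "my-sso-instance",
      "credentials": {
        "auth_domain": "https://my-plan.login.sys.example.com",
        "client_id": "<client id of the new localhost application>",
        "client_secret": "<client secret of the new localhost application>"
      }
    }
  ]
}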

OpenShift WSO2 API Manager redirect error

I am currently trying to set up WSO2 API Manager on OpenShift. The problem I am running into is that when I try to browse the URL created by the OpenShift route, the application redirects me to the internally created IP address of the publisher app. However, when I launch the container without OpenShift, the application directs me to its intended API login page, which is the management console URL.
I suspect this has to do with how the embedded HAProxy load balancer is behaving. I was able to hack around it by changing the default ports to 443, but that created a new set of issues, because changing the ports also required hard-coding container hostnames in carbon.xml. Hardcoding settings in the configuration files prevents me from scaling up the containers.
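For illustration, the hard-coded values were of this kind (the hostname is a placeholder and the element names are quoted from memory, so treat this as a sketch rather than a verified configuration):
<!-- repository/conf/carbon.xml (illustrative only) -->
<HostName>apim.apps.mycluster.example.com</HostName>
<MgtHostName>apim.apps.mycluster.example.com</MgtHostName>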
Any assistance on this will be much appreciated.

Ngrok + IIS Express and Windows Authentication

I'm trying to expose a web application I developed in ASP.NET MVC 5 through ngrok, and I'm having no luck with Windows Authentication. My plan was to test the app from other VMs with IE8 (insert rage here) and a few mobile devices connecting through ngrok.
My setup details are as follows.
VM with Windows Server 2008 (domain controller), Visual Studio 2013, SQL Server and other development tools
Domain XYZ set up in the VM with test users
The web app runs in IIS Express by F5'ing Visual Studio and uses Windows Authentication; IIS Express is configured to support Windows Authentication.
I have configured the ngrok bindings in the applicationhost.config file and also run the netsh command "netsh http add urlacl url=URLPLUSPORT user=everyone".
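A rough sketch of what that binding amounts to (the ngrok host name and port are placeholders for my real values):
<!-- IIS Express applicationhost.config (typically under Documents\IISExpress\config
     for VS2013), bindings of the site -->
<bindings>
  <binding protocol="http" bindingInformation="*:51234:localhost" />
  <binding protocol="http" bindingInformation="*:51234:myapp.ngrok.io" />
</bindings>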
I can access and use/debug the app fine on the VM using localhost; this has always worked. However, when I run ngrok and then access the app from outside the VM, I get the login credential prompt (I was expecting this). I enter the correct username/password and I still get 401 Unauthorised and cannot access the app.
Can anyone help? Do I need any extra configuration to allow the authentication to pass through? Is this even possible?
I am pretty much stumped right now, and the ngrok site is down, although I can't imagine there is much documentation on this scenario :(
Thanks for your help
