WSO2 API Manager in Docker

I am trying to deploy API Manager and Enterprise Integrator using Docker Compose on a cloud server.
Everything works locally when I use localhost as the host, but once deployed on the cloud server I cannot access the API Manager using the server's public IP. The Enterprise Integrator works, though. I've modified some configuration parameters as shown below, but the problem persists:
<APIStore>
    <!--GroupingExtractor>org.wso2.carbon.apimgt.impl.DefaultGroupIDExtractorImpl</GroupingExtractor-->
    <!--This property is used to indicate how we do user name comparision for token generation https://wso2.org/jira/browse/APIMANAGER-2225-->
    <CompareCaseInsensitively>true</CompareCaseInsensitively>
    <DisplayURL>false</DisplayURL>
    <URL>https://<PUBLIC IP HERE>:${mgt.transport.https.port}/store</URL>
    <!-- Server URL of the API Store. -->
    <ServerURL>https://<PUBLIC IP HERE>:${mgt.transport.https.port}${carbon.context}services/</ServerURL>
I've also whitelisted that public IP:
"whiteListedHostNames" : ["localhost","PUBLIC IP HERE"]


Related

VueJS Windows Authentication against ASP.NET Core 5 Docker API not appearing

Setup:
We have set up a Windows VM (on-premises) running Docker (Windows containers) with a gMSA / service account for our ASP.NET Core 5 API, which runs internally on Kestrel with .AddAuthentication(NegotiateDefaults.AuthenticationScheme).AddNegotiate(); (NOT IIS). It authenticates fine as the configured service account, e.g. against MSSQL or the file server.
If I open any protected endpoint directly, it uses my Windows credentials or prompts for them (if I'm not on a domain-joined computer). The user test endpoint returns the Windows user's claims.
This is just the API, which works fine!
Issue:
The "issue" is, that our VueJS application is running in a docker container (linux containers) on a linux host - inside hosted via nginx. Same network. After opening the UI the first time (without having opened the API) no authentication request is happening. The interesting part is: After opening the API the first time and entering windows credentials and then opening the UI works and shows the use/claims (which we return from the backend).
In the frontend we are using axios with withCredentials: true.
Question:
What must be done to enable the UI to negotiate the windows login?
The reverse proxy that's passing requests to your container must have NTLM support enabled for Windows authentication to work. IIS supports this by default, but for others, you need to activate it manually. This must be repeated down the proxy chain.
From the docs:
Credentials can be persisted across requests on a connection. Negotiate authentication must not be used with proxies unless the proxy maintains a 1:1 connection affinity (a persistent connection) with Kestrel.
See the docs for your reverse proxy:
nginx: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#ntlm
caddy: https://github.com/caddyserver/ntlm-transport
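For nginx, the relevant piece is the ntlm directive on the upstream block (available in the commercial NGINX Plus build), combined with keepalive upstream connections so the authenticated connection is preserved. A minimal sketch, with the upstream name and backend address as placeholders:

upstream kestrel_api {
    server 10.0.0.10:5000;              # placeholder: the Kestrel container's address
    ntlm;                               # bind the upstream connection to the client connection
}

server {
    listen 443 ssl;

    location /api/ {
        proxy_pass http://kestrel_api;
        proxy_http_version 1.1;         # keepalive to the upstream requires HTTP/1.1
        proxy_set_header Connection ""; # clear the Connection header so the upstream connection stays open
    }
}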

Jenkins pointing server to domain created

Good Morning
I have created a Jenkins server in AWS and I am able to access the platform using the server's IP;
however, I want to access it more securely.
I have set up a subdomain on my hosting service and set the server's IP as an A record.
I have also defined this in the configuration section of Jenkins.
However, when I access the URL https://domainname I get nothing,
but if I add :8080 at the end of it, it takes me to the Jenkins platform.
What am I missing here?
Thanks
I recommend using an AWS Application Load Balancer to access your Jenkins web server.
It will host the HTTPS certificate (if you are using AWS Certificate Manager), and you will be able to configure DNS to point your domain at the ALB's name.
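The reason https://domainname returns nothing is that the browser connects to port 443, while Jenkins only listens on 8080; something in front of Jenkins has to accept 443 and forward to 8080. A rough sketch of the ALB pieces with the AWS CLI (all names, IDs, and ARNs are placeholders):

aws elbv2 create-target-group --name jenkins-tg --protocol HTTP --port 8080 \
    --vpc-id vpc-xxxxxxxx --health-check-path /login

aws elbv2 create-listener --load-balancer-arn <alb-arn> --protocol HTTPS --port 443 \
    --certificates CertificateArn=<acm-certificate-arn> \
    --default-actions Type=forward,TargetGroupArn=<target-group-arn>

Then register the Jenkins instance in the target group and point the subdomain's DNS record at the ALB's DNS name (CNAME or Route 53 alias) instead of the instance IP.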

Azure API Management service with external virtual network to Docker

I want to use the Azure API Management service (AMS) to expose an API created with R/Plumber, hosted in a Docker container that runs on an Ubuntu machine.
Scenario
With R/Plumber I created some APIs that I want to protect. Then I created a virtual machine on Azure with Ubuntu and installed Docker. The APIs are in a container that I published to the virtual machine with Docker, and I can access them over the internet.
On Azure I created an API Management service and added the APIs from the Swagger OpenAPI documentation.
Problem
I want to secure the APIs and expose only the AMS to the internet. My idea was to remove the public IP from the virtual machine and, via a virtual network, have the API Management service connect to the API through its internal IP (http://10.0.1.5:8000).
So I tried to set up a virtual network: I clicked the menu, then External, and on the row I selected a network. This virtual network contains one network interface, which is the one the virtual machine is using.
When I save the changes, I have to wait a while and then I receive an error:
Failed to connect to management endpoint at azuks-chi-testapi-d1.management.azure-api.net:3443 for a service deployed in a virtual network. Make sure to follow guidance at https://aka.ms/apim-vnet-common-issues.
I read the following documentation, but I can't understand how to do what I want:
Azure API Management - External Type: gateway unable to access resources within the virtual network?
How to use Azure API Management with virtual networks
Is there a how-to I can follow? Any advice? What am I doing wrong?
Update
I tried adding more address space to the virtual network.
One of the ranges (10.0.0.2/24) is delegated to API Management.
Then, in the network security group, I opened port 3443.
From the API Management service I still can't reach the server at its internal IP (10.0.2.5). What did I miss?
See the common network configuration issues page; it lists all the dependencies that must be exposed for APIM to work. Make sure your VNet allows ingress on port 3443 to the subnet where the APIM service is located. This configuration must be done on the VNet side, not in APIM.
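For the external (and internal) VNet modes, the management endpoint traffic on TCP port 3443 (source service tag ApiManagement) has to be allowed into the APIM subnet's network security group. A sketch of such a rule with the Azure CLI, where the resource group and NSG names are placeholders:

az network nsg rule create --resource-group <resource-group> --nsg-name <apim-subnet-nsg> \
    --name AllowApimManagementEndpoint --priority 1010 \
    --direction Inbound --access Allow --protocol Tcp \
    --source-address-prefixes ApiManagement \
    --destination-address-prefixes VirtualNetwork \
    --destination-port-ranges 3443

Also double-check that the VM's API (10.0.1.5:8000 in the question) is reachable from the APIM subnet, e.g. that the VM's own NSG allows inbound traffic on port 8000 from that subnet.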

How to find the URL/Public IP to access the server run by a Docker container, from my website on any network

I have set up a node.js server and run it in a Docker container.
I am hosting a MySQL database on my computer and have the server connected to it.
I have deployed my website on netlify and it uses REST API to send and retrieve data to the server.
Currently, the API url uses 'http://localhost:4941/api/v1...' When I access the website on the same computer as the database is hosted, I am able to see the data retrieved from the server.
However, when I access the website on another computer (and from a different network), I am not able to see the data obtained from the server.
I have tried using 127.0.0.1 as well as the Docker container's IP address of 172.17.0.2, but that did not solve the problem either.
I expect to be able to use my website from any computer in the world that has internet access, and to send and retrieve data from the server.
So, does the problem lie in the API URL using localhost? If so, what address should I use instead?
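Yes: localhost, 127.0.0.1, and 172.17.0.2 only mean something on (or inside) the machine running the container, so a browser on another network can never reach them. Broadly, the deployed site needs to call the public address of the machine hosting the container, and the container's port has to be published on that machine. A purely illustrative sketch, where the image name, IP, and domain are placeholders:

# publish the container's port on the host so outside traffic can reach it
docker run -d -p 4941:4941 my-node-api

# the deployed site would then call the host's public address, e.g.
#   http://<public-ip-or-domain-of-the-docker-host>:4941/api/v1/...

The host's firewall/router must also allow inbound traffic on that port, and in practice you would put the API behind a proper domain and HTTPS rather than exposing a raw port.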

How to access rails server in a remote VM

I set up a Virtual Machine (VM) on OpenStack remotely. The VM is running Red Hat Enterprise Linux (RHEL) 7.
I SSH into the above VM using ssh vm-url, and then during that SSH session I set up a Rails server and start it with rails server -b vm-url
Now, I try to access the rails website above from my local Chrome browser by typing the URL vm-url:3000 into Chrome's address bar (the Omnibox), but I get:
This site can’t be reached
10.150.8.101 took too long to respond.
Why can't I access the Rails website? What have I done wrong?
Please correct me if any terminologies I used are incorrect.
Thank you.
Two things to check:
1. The IP attached to the VM is public and accessible.
2. The HTTP port (3000 in this case) is open to access from outside.
The port being accessed is controlled by security groups, which are generally configured while creating the instance. Either add a new security group with sufficient privileges or update the existing one with the newly added ports.
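Since the VM runs on OpenStack, opening the port typically means adding a security group rule. A sketch with the OpenStack CLI, assuming the instance uses the default security group:

openstack security group rule create --protocol tcp --dst-port 3000 --remote-ip 0.0.0.0/0 default

Binding with rails server -b 0.0.0.0 (rather than the VM's URL) is also a common way to make sure the server listens on all interfaces inside the VM.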
