Docker - use the same URL inside the container and outside

I have a docker container running an open source identity server UI. This consists of several web applications, all running from the same docker container.
One web application calls an API endpoint in another web application to get a config file, then redirects to a URL found in that config file. The config file is dynamically generated using the domain name in the request.
I make a call from my localhost to the exposed port. This page then calls another web API using the docker-compose service name in the URL, e.g. https://webapi2/.well-known/openid-configuration. This returns a config file with URLs that use webapi2 as the domain.
This causes a browser redirect to https://webapi2/signin. This fails because my browser does not know webapi2; it needs to use localhost:44310.
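For reference, a minimal docker-compose sketch of the setup as described (the build contexts, the internal port 443, and webapi1's published port are assumptions, not the actual file):

version: "3"
services:
  webapi1:
    build: ./webapi1        # hypothetical build context
    ports:
      - "5000:443"          # hypothetical; the page the browser loads first
  webapi2:
    build: ./webapi2
    ports:
      - "44310:443"         # the browser can only reach webapi2 as localhost:44310

Inside the compose network, webapi1 reaches the second app as https://webapi2, so the discovery document is generated with webapi2 as the host; the browser, however, only knows localhost:44310.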

Related

traefik subfolder: links are broken

I am trying to migrate a few intranet sites (WordPress, a wiki, others) in our company to docker. The services themselves work properly after the migration. I can reach them at http://hostname:8081, http://hostname:8082 and so on.
Now I want to use traefik to access the services via http://hostname/servicename. That works in principle with PathPrefixStrip.
But when I access a service via http://hostname/service, all links (css, javascript, ...) inside my service's pages break, because they assume they are served from the root rather than from a subfolder called service. How can I solve that problem?
The links are generated by WordPress, not Traefik. You need to configure WordPress to generate links with the new URL, including the new path.
I would advise using PathPrefix instead of PathPrefixStrip in this case.
https://tanyanam.com/2015/07/13/setting-up-wordpress-behind-reverse-proxy/
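As a hedged sketch of the PathPrefix approach, the Traefik v1 labels on the wordpress service in docker-compose could look like this (the service name and path are assumptions):

labels:
  - "traefik.frontend.rule=PathPrefix:/wordpress"   # keep the prefix instead of stripping it
  - "traefik.port=80"

WordPress then needs to generate links that include the prefix, e.g. by setting WP_HOME and WP_SITEURL to http://hostname/wordpress in wp-config.php.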

Connecting to scality/s3 server between docker containers

We are using a Python-based solution which needs to load and store files from S3. For development and local testing we use a Vagrant environment with docker and docker-compose. We have two docker-compose definitions - one for the assisting backend services (mongo, restheart, redis and s3), and one for the Python-based solution exposing the REST API and using those backend services.
When our "front-end" docker-compose group interacts with restheart, this works fine (using the name of the restheart container as the server host in http calls). When we do the same with the scality/s3 server, it does not work.
The interesting part is that we have a test suite that uses the scality/s3 server from Python on the host (Windows 10), over the ports forwarded through Vagrant to the scality/s3 container in the docker-compose group. With the endpoint_url set to localhost it works perfectly.
In the error case (when frontend web service wants to write to S3) the "frontend" service always responds with:
botocore.exceptions.ClientError: An error occurred (InvalidURI) when calling the CreateBucket operation: Could not parse the specified URI. Check your restEndpoints configuration.
And the s3server always responds with HTTP 400 and the message:
s3server | {"name":"S3","clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","time":1521306054614,"req_id":"e385aae3c04d99fc824d","level":"info","message":"received request","hostname":"cdc8a2f93d2f","pid":83}
s3server | {"name":"S3","bytesSent":233,"clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","httpCode":400,"time":1521306054639,"req_id":"e385aae3c04d99fc824d","elapsed_ms":25.907569,"level":"info","message":"responded with error XML","hostname":"cdc8a2f93d2f","pid":83}
We are calling scality with this boto3 code:
import boto3

s3 = boto3.resource('s3',
                    aws_access_key_id='accessKey1',
                    aws_secret_access_key='verySecretKey1',
                    endpoint_url='http://s3server:8000')
s3_client = boto3.client('s3',
                         aws_access_key_id='accessKey1',
                         aws_secret_access_key='verySecretKey1',
                         endpoint_url='http://s3server:8000')
s3.create_bucket(Bucket='raw-data')  # here the exception is raised
bucket = s3.Bucket('raw-data')
This issue is quite common. In your config.json file, which you mount in your Docker container, I assume, there is a restEndpoints section, where you must associate a domain name with a default region. What that means is your frontend domain name should be specified in there, matching a default region.
Do note that that default region does not prevent you from using other regions: it's just where your buckets will be created if you don't specify otherwise.
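As a sketch, the relevant section of config.json could look like this (the region name is an assumption; the key point is that every hostname you use in endpoint_url must appear here):

"restEndpoints": {
    "localhost": "us-east-1",
    "s3server": "us-east-1"
}

With the compose service name s3server listed, requests to http://s3server:8000 can be parsed and the InvalidURI error should go away.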
In the future, I'd recommend you open an issue directly on the Zenko Forum, as this is where most of the community and core developers are.
Cheers,
Laure

Docker Swarm Service Discovery in index.html

I have two express web apps (server and client) that I am deploying in docker swarm using docker-compose and/or docker stack. They both have APIs that communicate with each other via their service names, as they are both connected to the same overlay network. A snippet of the config file that client uses to make REST calls to server follows:
"server": {
"url":"http://server:8085",
"endpoints": {
"devices": "/devices",
"temperature": "/temperature",
"mock": "/mock"
}
}
Finding the server by host name is no issue from the node side, as that code runs directly inside the docker container. However, both express apps also serve web pages. The css and js dependencies of client and server are almost identical, and I do not want to maintain each stylesheet twice. I'd rather serve a single copy from server that the index.html files of both server and client can use.
In the index.html of server I can use relative paths, because the host is the same and thus implied. But in the index.html of client I need a fully qualified URL. Something like:
<link rel="stylesheet" href="http://server:8085/style.css">
Obviously this would not work once I serve index.html from client to a browser, because the browser would look for http://server over the internet rather than in the docker overlay network.
I thought about downloading the files in client's node app before it serves index.html, but that's not the cleanest solution.
Is there an elegant way to accomplish this without binding server to a static IP / domain, or programmatically downloading these files first?
If your external users' browsers need to access files on both client and server, then you will need to publish both Swarm services on the external IPs of the Swarm nodes, point DNS names or an external LB at those IPs, and use only those URLs for remote connectivity.
When you do that, you'll likely need to bind both services to the same port (443). If that's the case, then you also need another layer of proxy that routes traffic to the proper container based on path or DNS name.
Both http://proxy.dockerflow.com/ and https://traefik.io/ work for that purpose.
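As a rough sketch of that routing layer in nginx terms (the client port and the /static prefix are assumptions; Docker Flow Proxy or Traefik achieve the same with their own configuration):

# route shared assets to server, everything else to client;
# server must serve its css/js under /static for this mapping
location /static/ {
    proxy_pass http://server:8085;
}
location / {
    proxy_pass http://client:3000;   # hypothetical client port
}

Both index.html files can then reference /static/style.css relative to the single published DNS name, and no container hostname ever reaches the browser.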

openshift wso2 api manager redirect error

I am currently trying to set up wso2 api manager on openshift. The problem I am running into is that when I browse the URL created by the openshift route, the application redirects me to the internally created IP address of the publisher app. However, when I launch the container without openshift, the application directs me to its intended API login page, which is the Mgt console URL.
I suspect this has to do with how the embedded HAProxy load balancer behaves. I was able to hack around the configuration by changing the default ports to 443, but that created a new set of issues, because changing the ports also required hardcoding container hostnames in carbon.xml. Hardcoding settings in the configuration files prevents me from scaling up the containers.
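For context, the carbon.xml settings in question are roughly these (the hostname is a placeholder for whatever the OpenShift route exposes; hardcoding a per-container value here is what blocks scaling):

<HostName>apim-route.apps.example.com</HostName>
<MgtHostName>apim-route.apps.example.com</MgtHostName>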
Any assistance on this will be much appreciated.

Multiple MVC projects to publish on single domain [duplicate]

Let's say we have two separate applications, a Web API application and an MVC application, both written in .NET 4.5. If you were to host the MVC application in IIS under the host header "https://www.mymvcapp.com/", would it be possible to host the Web API application separately in IIS under the host header "https://www.mymvcapp.com/api/"?
The processes running the two applications in IIS need to be separate. I know of the different methods of hosting: self-hosting and hosting in IIS. I would like to use IIS if at all possible.
Also, how would I host the two applications (an API and a web application) if each were on a separate server, so that I could still serve the API from http://www.mymvcapp.com/api?
There are at least four ways of doing what you want to do. The first two methods apply when you have one web server and both applications are served from that one web server running IIS. They also work if you have multiple web servers behind a load balancer, as long as the API and the web site run on the same server.
The other two methods use what's called a "reverse proxy": a server that routes incoming traffic to multiple internal servers depending on the kind of traffic it receives. This is for when your web servers run on one set of machines and your API on a different set. You can use any reverse-proxy software you want; I mention nginx and HAProxy because I've used both in the past.
Single Web Server running IIS
There are two ways to do it in IIS:
If your physical folder structure is as follows:
c:\sites\mymvcapp
c:\sites\mymvcapp\api
You can do the following:
Create a Child Application
Creating a child application will allow your "API" site to be reachable from www.mymvcapp.com/api, without any routing changes needed.
To do that:
Open IIS Manager
Click on the appropriate site in the "Sites" folder tree on the left side
Right Click on the API folder
click "Convert to Application"
The downside is that all Child Applications inherit the web config of their parent, and if you have conflicting settings in there, you'll see some runtime weirdness (if it works at all).
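If the inheritance bites, one common mitigation (worth verifying against your actual sections) is to wrap the parent's settings in a location element so child applications don't inherit them:

<!-- in the parent web.config -->
<location path="." inheritInChildApplications="false">
  <system.web>
    <!-- parent-only settings -->
  </system.web>
</location>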
Create a directory Junction
The second way keeps the applications separate, and again you don't have to change any routing.
Assuming two folder structures:
c:\sites\api
c:\sites\mvcapp
You can set up junctions in Windows. From the command line*:
cd c:\sites
mklink /J mymvcapp c:\sites\mvcapp
cd mymvcapp
mklink /J api c:\sites\api
Then go into IIS Manager and convert both to applications. This way, the API will be available at /api, but won't actually share its web.config settings with the parent.
Multiple Servers
If you use nginx or HAProxy as a reverse proxy, you can set it up to route calls to each app depending on the request path.
nginx Reverse Proxy settings
In your nginx.conf (best practice is to keep the conf in sites-available and symlink it into sites-enabled, so you can remove the symlink when deploying) do the following:
location / {
    proxy_pass http://mymvcapp.com:80;
}

location /api {
    proxy_pass http://mymvcapp.com:81;
}
and then you'd set the appropriate IIS bindings so that each site listens on port 80 (mymvcapp) and port 81 (api).
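For example, with appcmd (site names are assumptions; the same bindings can be added in IIS Manager):

%windir%\system32\inetsrv\appcmd set site /site.name:"mymvcapp" /+bindings.[protocol='http',bindingInformation='*:80:']
%windir%\system32\inetsrv\appcmd set site /site.name:"api" /+bindings.[protocol='http',bindingInformation='*:81:']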
HAProxy
acl acl_WEB hdr_beg(host) -i mymvcapp.com
acl acl_API path_beg -i /api

use_backend API if acl_API
use_backend WEB if acl_WEB

backend API
    server web mymvcapp.com:81

backend WEB
    server web mymvcapp.com:80
*I'm writing the junction commands from memory; I did this a few months ago, but not recently, so let me know if there are issues with them
NB: the config files are not meant to be complete config files -- only to show the settings necessary for reverse proxying. Depending on your environment there may be other settings you need to set.
