Is there a way to keep a single URL, domain, or IP for communication between Docker containers, and between localhost and containers? - docker

I am working on a web app where a single environment variable is used to specify a certain server (a REST API), like this:
.env:
...
URL_SERVER_API="http://localhost:8080"
...
The application runs inside a container, and it uses the server API variable for two things related to my problem:
It generates and serves dynamic HTML, appending URL_SERVER_API to build full API URLs, for example {{URL_SERVER_API}}/someendpoint
It calls the API directly from a (PHP) script using cURL, building the endpoint the same way as in 1 (sketched below)
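For clarity, here is roughly what that cURL part looks like; a minimal sketch, where the endpoint name is just an example:

    <?php
    // URL_SERVER_API comes from the .env above; /someendpoint is illustrative.
    $url = getenv('URL_SERVER_API') . '/someendpoint';

    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $response = curl_exec($ch);

    if ($response === false) {
        // With URL_SERVER_API=http://localhost:8080 this fails inside the
        // container, because localhost there is the container itself.
        echo 'cURL error: ' . curl_error($ch);
    }
    curl_close($ch);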
So I end up in a situation where, if I set URL_SERVER_API to localhost:8080, the main application builds valid URLs for the browser to call, because the API app (which also runs in a Docker container) is exposed on the corresponding host port, but the cURL calls fail because localhost:8080 does not point to the API from inside the container.
I also configured a bridge network and attached both apps to it, and I can ping the API from the main app successfully (e.g. ping api_docker). When I then set URL_SERVER_API=api_docker, the cURL calls to the API succeed, but the HTML returned by the main app is built with URLs that are unreachable from the browser, like http://api_docker/someendpoint
Hope you can see my issue
I am able to solve the issue by having two variables, URL_SERVER_API and URL_SERVER_API_INTERNAL, using the first for the served HTML and the second for the cURL calls, but adding new variables for people to remember doesn't seem like the best solution, and I am not the one in charge of that decision.
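To illustrate the setup and that two-variable workaround, a minimal docker-compose sketch; the service names, build paths, and ports are assumptions, only api_docker and the two variables match the situation described above:

    # docker-compose.yml (hypothetical; api_docker is the name pinged above)
    version: "3"
    services:
      webapp:
        build: ./webapp
        ports:
          - "80:80"
        environment:
          # Used in the served HTML; must be reachable from the browser.
          URL_SERVER_API: "http://localhost:8080"
          # Used by the PHP cURL calls; resolvable only inside the network.
          URL_SERVER_API_INTERNAL: "http://api_docker"
        networks:
          - appnet
      api_docker:
        build: ./api
        ports:
          - "8080:80"    # the API's container port 80, exposed as localhost:8080
        networks:
          - appnet
    networks:
      appnet:
        driver: bridge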
Thanks for the time taken to read

Related

RSpec: receive a POST from the tested application

I am testing a running application using RSpec/Capybara. I have a route I want to test that is supposed to talk to a secondary service via a provided URL.
Since the tests don't encapsulate the application but just talk to it, I can't use the normal methods of stubbing out API calls to make sure it's calling the service properly.
What I would like is to be able to give the route a URL, then have RSpec receive a POST back from the application. Is there a way to do this?
To be clear, I do NOT want RSpec to mock/stub the request, because this isn't running as a wrapper around the application.
I will assume the secondary service's response is somehow exposed back to you.
So hitting https://not-my-service.com?secondary-service=http://service-i-control.com results in something that contains the response (partial or complete) from http://service-i-control.com.
If this service is up and running in production, your secondary-service must also be exposed to the internet. You can consider using something like ngrok to expose a local Rack application that your testing environment spins up and that returns a specific response.
If you don't mind using external services you could also consider httpbin.org. For example, https://not-my-service.com?secondary-service=https://httpbin.org/ip will return a 200 OK with the IP of the origin that hit the server, so you could match that IP against https://not-my-service.com.
If you don't get any information back besides the fact that it calls the secondary-service, then I would suggest the following as part of the spec (see the sketch after this list):
Spin up a Rack application and expose it to the internet.
Hit the service, passing your local application as the parameter.
Wait until you get the request you are expecting, then stop the application: the test has succeeded.
Or it times out (say, after 30 seconds) and your test has failed (the service was never called).
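A minimal sketch of that flow, assuming Rack 2.x with WEBrick and a local ngrok tunnel; the port, the ngrok host, and the service URL are all placeholders:

    require "rack"
    require "webrick"
    require "net/http"
    require "timeout"

    RSpec.describe "secondary service callback" do
      it "POSTs back to the URL we provide" do
        received = Queue.new

        # Tiny Rack app that records the method of every request it gets.
        app = proc do |env|
          received << env["REQUEST_METHOD"]
          [200, { "Content-Type" => "text/plain" }, ["ok"]]
        end

        # Run it in the background (Rack 2.x handler API). Expose the port
        # to the internet beforehand, e.g. with `ngrok http 9292`.
        Thread.new do
          Rack::Handler::WEBrick.run(app, Port: 9292, AccessLog: [],
                                     Logger: WEBrick::Log.new(File::NULL))
        end

        # Hit the service under test, passing our public (ngrok) URL.
        Net::HTTP.get(URI("https://not-my-service.com/?secondary-service=https://abc123.ngrok.io/"))

        # Succeeds as soon as the callback arrives; a Timeout::Error after
        # 30 seconds means the service never called back, failing the test.
        expect(Timeout.timeout(30) { received.pop }).to eq("POST")
      end
    end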

Getting "ECONNREFUSED" error when trying to upload to Wolkenkit Blob Server

I'm currently developing a Wolkenkit application which runs on my local machine.
I want to upload a file from the Wolkenkit app to the blob server (as documented here).
When sending a POST request from the server to https://local.wolkenkit.io:3001/, Node.js gives me the error ECONNREFUSED.
I've tested the POST request with another program and it works there. Any idea why it doesn't work from the wolkenkit application itself?
Thanks!
The Storing files sample you linked to shows code that is to be run in the browser, not in the backend itself. Of course, both should work, but there are a few minor differences you need to watch out for.
Fixing the host name
First, I suppose that local.wolkenkit.io in your case maps to 127.0.0.1, which is the default for wolkenkit. That means that when you try to connect to this domain from within a Docker container, the container does not try to call out to the blob storage container, but stays within itself. So, the first thing that needs to be fixed is the host name.
Basically, there are two options for this: You can either set up local.wolkenkit.io so that it resolves to the external IP address of your machine. This would work, but is pretty cumbersome. The other option is to directly address the appropriate container that is responsible for blob storage, by its internal name. The internal name is <name-of-your-app>-depot-file. So you need to replace https://local.wolkenkit.io:3001/ with https://<...>-depot-file.wolkenkit.io:3001/.
Fixing the port
Second, the port is wrong. This is because the blob storage service runs on port 3000 internally, and on 3001 externally. So instead of https://<...>-depot-file.wolkenkit.io:3001/ you need to use https://<...>-depot-file.wolkenkit.io:3000/.
Once you have done this you should not get any more errors like ECONNREFUSED, since now the service can be found.
Fixing SSL issues
Third, since you are now connecting to the blob storage service using a different domain name, the SSL certificate doesn't match any more, since it was issued for local.wolkenkit.io. As a result, you will get SSL errors when trying to connect.
The simplest way to get around this is to disable any SSL checks (although this is also the most insecure way to handle it!). How to do this depends on the HTTP client module you are using. E.g., in request there is an option called strictSSL that you can set to false.
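Putting the three fixes together, a minimal sketch using the request module; <name-of-your-app> is a placeholder for your actual application name, and the body is for illustration only:

    const request = require('request');

    request.post({
      // Internal host name of the depot container, with the corrected port.
      url: 'https://<name-of-your-app>-depot-file.wolkenkit.io:3000/',
      // The certificate was issued for local.wolkenkit.io, so disable the
      // certificate check here (insecure; acceptable for local development).
      strictSSL: false,
      body: Buffer.from('...')
    }, (err, response) => {
      if (err) {
        return console.error(err);
      }
      console.log(response.statusCode);
    });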
Of course, what you actually should do is either use a custom certificate that includes this domain name as well, or write a function that handles the certificate check and accepts the presented certificate, especially in this case.
If you do all of this, things should work :-)
PS: I am one of the authors of wolkenkit. Thanks a lot for bringing up this issue, and we will take care of this in the future, to make storing blobs easier.

Twilio IP Messaging token issue

I'm setting up an iOS app to use the IP Messaging and video calling APIs. I'm able to connect, create channels, and set up a video call if I manually create hard-coded tokens for the app. However, if I want to use the PHP server (as described here: https://www.twilio.com/docs/api/ip-messaging/guides/quickstart-ios) then I always get an error and it can't connect anymore.
I'm attaching a screenshot of what I see when I hit the http://localhost:8080 address, which seems to produce a 500 Internal Server Error on this URL: https://cds.twilio.com/v2/Streams
Thanks so much!
After much time spent on this I decided to try the Node backend instead (listed under the other server-side languages, alongside the PHP one) and I had it running in 2 minutes! I used the exact same credentials as the ones I was using in the PHP config file, so either my PHP environment has something strange or the PHP backend needs some fixing. In any case, I'm able to move forward using the Node backend, so if you run into the same issue just try Node instead of PHP. Woohoo!

Swagger proxied by HAProxy can't execute requests

I have Swagger working behind HAProxy. I use the built-in Swagger support in WebSphere Liberty Profile (the apiDiscovery feature):
Browser -swagger.mydomain.com-> haproxy -swagger.intranet-> IBM Liberty server with Swagger
The first Swagger page is generated and shown correctly in the browser, but since the Liberty server gets the requests from HAProxy, not from my browser, and they are addressed to the intranet name (swagger.intranet), the Swagger code for executing GETs, POSTs, etc. is generated with that intranet host name. So when I try any of the methods, they won't work, because they reference this internal name from a browser outside that zone.
Can I configure HAProxy to set some header so that the generated code uses the original server name (swagger.mydomain.com) from the request? (That is the one to be used in the generated HTML/JavaScript code.)
Thanks.
Liberty trusts the Host: header and uses it to assemble self-referential links.
Where you define the backend, try setting http-request set-header Host swagger.mydomain.com to whatever the client will be using, or removing a similar stanza if you are already setting it to swagger.intranet.
(sorry, I'm not an HAProxy user. This is based on searching for 'HAProxy equivalent of ProxyPreserveHost')
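As a sketch, the backend stanza could look like this; the backend and server names are made up, and the backend port is an assumption (9080 is Liberty's default HTTP port):

    backend liberty_swagger
        # Present the public host name so that Liberty's self-referential
        # links are generated for swagger.mydomain.com, not swagger.intranet.
        http-request set-header Host swagger.mydomain.com
        server liberty1 swagger.intranet:9080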

Response time of web application using different URLs

I have a very basic doubt regarding web application URLs.
Suppose a web application is running locally on my machine.
Will there be any difference in the response time if I access the application using the two URLs below?
http://localhost:8080/SomeApplicationContext
http://hello:8080/SomeApplicationContext -- Assuming my machine name is hello
Depends on whether or not you have hello in your hosts file (the same place where localhost is defined). If it's not there, then yes, because your computer will have to check with DNS before it can access the resource, in which case the difference will be close to the round-trip latency of that lookup.
No difference in response time. You can use http://localhost:8080/SomeApplicationContext from your own machine only; if you would like to reach the application from another PC/system, you can use the http://hello:8080/SomeApplicationContext URL (you can use this URL from your own machine as well).
I hope this helps.
No. Why would there be? Response time depends on the server's ability to serve the content -- that is, the latency of processing the request -- plus the network latency. In your case both are the same. So, no difference.
Whether localhost or hello, both of them must be defined in your hosts file. Your OS looks in the hosts file (such as /etc/hosts on many Linux systems or %windir%\system32\drivers\etc\hosts on Windows); if it cannot find the name there, the OS asks a DNS server for the server IP.
In your situation, both must be defined in your hosts file. No change in network latency.
But if you mean different domains pointing to the same IP, then it depends on how your server application (Apache, nginx, IIS, etc.) handles different domain names.
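For reference, a hosts file covering both names would contain entries like these (hello taken from the question):

    # /etc/hosts (Linux/macOS) or %windir%\system32\drivers\etc\hosts (Windows)
    127.0.0.1    localhost
    127.0.0.1    hello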
