I've got a reverse proxy, a frontend, and a backend container. The backend runs Pyppeteer, and the reverse proxy is given an alias "servicename.localhost" in docker-compose.yml:
networks:
  default:
    aliases:
      - servicename.localhost
This way I'm able to curl --insecure https://servicename.localhost from the backend container, but unfortunately Chromium in the same container seems to ignore that setting, so "servicename.localhost" resolves to 127.0.0.1:
pyppeteer.errors.PageError: net::ERR_CONNECTION_REFUSED at https://servicename.localhost/login
How can I work around this?
It looks like it may be related to DNS prefetching or asynchronous DNS, but there doesn't seem to be a command line flag to disable either of them anymore.
Things I've tried which didn't change anything:
Adding "--host-rules='MAP servicename.localhost {}'".format(socket.gethostbyaddr('servicename.localhost')[-1][0]) to the pyppeteer.launch args list parameter.
Adding "--host-resolver-rules=[same as above] to the pyppeteer.launch args list parameter.
I worked around this by changing the TLD from "localhost" to "test".
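Chromium intentionally resolves *.localhost names to the loopback address regardless of DNS, which would explain why curl honours the alias while the browser doesn't. In compose terms, the workaround just means moving the alias off the .localhost TLD; a minimal sketch, assuming the same proxy service as above:

networks:
  default:
    aliases:
      - servicename.test

Pyppeteer then navigates to https://servicename.test/login instead.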
Apologies, I know this question is very common, but I have now come up against a brick wall and need a pointer in the right direction. Thanks.
I am struggling to use an API that is on localhost from within a Docker container. I have followed many guides but I seem to be missing something. My steps:
In the Windows command prompt, I use curl to fire a GET request to the API on localhost. The request succeeds and hits the breakpoint in my controller:
curl http://localhost:57888/api/reference
[HttpGet()]
public ActionResult CheckIfOnline()
{
    // Breakpoint hits here
    return Ok();
}
Now I would like to call this endpoint from inside my Docker container. I've tried to enable this in the compose file like so:
container-api:
  container_name: container-api
  build:
    context: ..
    dockerfile: Dockerfile
  ports:
    - "3007:3001"
  env_file:
    - ./file.env
  extra_hosts:
    - "host.docker.internal:host-gateway"
I assume from my research that this essentially means the container can now 'see' the host machine and could therefore reach services on the host's localhost? (Happy to be proven wrong.)
So when I create the container, I first ping host.docker.internal to see if it's available.
ping host.docker.internal
PING host.docker.internal (192.168.65.2) 56(84) bytes of data
As you can see, there is a response, but I am not entirely sure what the IP 192.168.65.2 is. Looking around the web, it is apparently a 'magic' IP that represents the host. I am not sure if this is right, as I don't see this IP using ipconfig, but for now I will continue. One explanation I found reads:
For Docker on Mac, there is a magic IP 192.168.65.2 in the Docker VM which represents the host machine; alternatively, you can just use host.docker.internal inside the Docker VM.
Lastly, I use curl inside the container to see if I can hit the API that I hit at the start of this post. However, I get this error:
# curl http://host.docker.internal:57888/api/reference
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN""http://www.w3.org/TR/html4/strict.dtd">
<HTML><HEAD><TITLE>Bad Request</TITLE>
<META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
<BODY><h2>Bad Request - Invalid Hostname</h2>
<hr><p>HTTP Error 400. The request hostname is invalid.</p>
</BODY></HTML>
Can anyone point me in the right direction please?
Thanks!
With curl http://host.docker.internal:57888/api/reference you are indeed reaching your API.
I know that because you get some HTML back. curl doesn't generate HTML when things go wrong, so the HTML must come from somewhere else: your API.
Maybe the API doesn't like being called with a Host: header containing host.docker.internal, and that's why it's returning the 400 error. To figure that out, we'd need more information on how the API is coded and hosted.
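One quick way to test that theory is to keep the host.docker.internal address but override the Host header that curl sends (the value here is just a guess at what the API's binding expects):

curl -H "Host: localhost:57888" http://host.docker.internal:57888/api/reference

If that returns a normal response instead of the 400 page, the hostname binding is indeed the problem.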
@Hans led me in the right direction. host.docker.internal was indeed connecting to my API, but the API didn't like the hostname, which caused the HTTP 400 error.
Therefore, in the Docker Compose file, I changed the extra_hosts to this:
container-api:
  container_name: container-api
  build:
    context: ..
    dockerfile: Dockerfile
  ports:
    - "3007:3001"
  env_file:
    - ./file.env
  extra_hosts:
    - "localhost:host-gateway"
Now the container uses http://localhost:57888/api/reference and it connects to the API successfully.
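For completeness, the same request from the start of the post then also works from inside the container (assuming curl is available in the image):

curl http://localhost:57888/api/reference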
I'm still early on my Docker journey and I'm trying to 'Dockerise' an existing UI/API/DB web app where the UI and the API/DB are on two different servers. It's gone fairly well so far and my docker-compose fires up services for the UI, API and DB (and phpMyAdmin) just fine. I know that those services can communicate with each other using service names as hostnames, thanks to Docker's default networking.
The UI (an Angular codebase) makes various calls to https://api.myproject.com, which works fine on the live sites but isn't ideal in my Docker version, as I want it to reach my API service in the container, not the live API site.
I know that I can edit the code in the UI container and replace calls to https://api.myproject.com with calls to 'api', but that's inconvenient and lacks portability if I want to redeploy on different servers (as it is now) in the future.
Is it possible for a container to redirect all POST/GET etc. requests for a URL to a container service? I thought --add-host might do something like this, but it seems to want an IP address rather than a service name.
Thanks in advance.
EDIT [clarification]
My issue lies in the UI page (HTML/JS) that the user sees in the browser. When the page loads, it sends some GET requests to the API URL, and those are what I was hoping to redirect to the container's API service.
You can use network aliases for a given container, and those can override an existing FQDN.
Here is a quick example crafted in a couple of minutes:
docker-compose.yml
---
services:
  search_engine:
    image: nginx:latest
    networks:
      default:
        aliases:
          - google.com
          - www.google.com
  client:
    image: alpine:latest
    command: tail -f /dev/null
You can start this project with:
docker-compose up -d
Then log into the client container with
docker-compose exec client sh
And try to reach your search_engine service with any of the following:
wget http://search_engine
wget http://google.com
wget http://www.google.com
In all cases you will get the default index.html page of a freshly installed nginx server.
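Applied to the question above, the same trick would mean giving the api service an alias for the live hostname (a sketch only; the real service and image names aren't shown in the question):

services:
  api:
    build: .
    networks:
      default:
        aliases:
          - api.myproject.com

Any other container on the default network that calls api.myproject.com then reaches the local api service instead of the live site.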
New to Couchbase; I've been using CouchDB, but I think the Couchbase data model will work better for my purposes.
I've set up a docker-compose file that uses the couchbase:community image:
version: "3"
services:
couchbase:
container_name: couchbase
image: couchbase:community
ports:
- "8091:8091"
- "8092:8092"
- "8093:8093"
- "8094:8094"
- "11210:11210"
networks:
- cbtemp
volumes:
- ../demodbs/cbdir:/opt/couchbase/var
networks:
cbtemp:
external:
name: cbtemp
(the cbtemp network is created beforehand so I can add a sync-gateway image separately)
It comes up fine, and accessing localhost:8091 in Chrome brings up the admin panel as expected.
But, if I try to 'curl http://localhost:8091', I get this response:
<!DOCTYPE ...>
<title>301 Moved Permanently</title>
...
The document has moved <a href="http://localhost:8091/ui/index.html>here<
...
If I curl the redirected URL, I get an HTML page (with some Angular stuff in it, no less - I'm presuming that's the admin page?)
If I 'curl http://localhost:8092', I get the expected response, but, of course, nothing wants to access Couchbase on :8092.
As an aside, bringing up the sync-gateway image accesses the :8091 url just fine and works as expected.
Not a deal-breaker (yet), but annoying.
You're accessing the root path when you go to just port 8091 with nothing else. Anything accessing Couchbase functionality is going to add a path, so this will be dealt with by internal routing. You can see those paths if you look at the REST api docs.
For whatever reason, they decided to host the admin UI off a base path starting with /ui. Hence the redirect: they're assuming that if you didn't supply any path, you want the UI.
It's not correct that nothing wants to access Couchbase through port 8092, either. Various services use different ports. 8092 is used for some forms of query and other purposes. You can find out more about the different ports and why you need them open in the Couchbase docs.
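For example, hitting an actual REST path rather than the bare root returns data directly with no redirect (a sketch; substitute your own admin credentials):

curl -u Administrator:password http://localhost:8091/pools/default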
I created a simple compose config to try Postgres BDR replication.
I expect the containers to have the service names I defined as their hostnames, and I expect one container to be able to resolve and reach another by that hostname. I expect this to be true because of:
https://docs.docker.com/compose/networking/
My config:
version: '2'
services:
  bdr1:
    image: bdr
    volumes:
      - /var/lib/postgresql/data1:/var/lib/postgresql/data
    ports:
      - "5001:5432"
  bdr2:
    image: bdr
    volumes:
      - /var/lib/postgresql/data2:/var/lib/postgresql/data
    ports:
      - "5002:5432"
But in reality both containers get rubbish hostnames and are not reachable by container names:
Creating network "bdr_default" with the default driver
Creating bdr_bdr1_1
Creating bdr_bdr2_1
Attaching to bdr_bdr1_1, bdr_bdr2_1
bdr1_1 | Hostname: 938e0585fee2
bdr2_1 | Hostname: 7153165f4d5b
Is it a bug, or did I do something wrong?
I use Ubuntu 14.04.4 LTS, Docker version 1.10.1, build 9e83765, docker-compose version 1.6.0, build d99cad6
docker-compose gives you the option of scaling services up or down, meaning you can launch multiple instances of the same service. That is at least one reason why the hostnames are not just service names. You will notice that if you scale bdr1 to 2 instances, you will then have bdr_bdr1_1 and bdr_bdr1_2 containers.
You can work around this inside the containers that were started up by docker-compose in at least two ways:
If a service refers to another service, you can use the links section, for example make bdr1 link to bdr2. In this case, when you are inside bdr1 you can reach the host bdr2 by name. I have not tried what happens when you scale up bdr2 in this case.
You can force the hostname of a container internally to the name you want by using the hostname section. For example, if you add hostname: bdr1 to bdr1, then you can internally connect to bdr1, which is itself (a combined sketch follows this list).
You can possibly achieve a similar result with the networks section, but I have not yet used it myself so I don't know for sure.
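A rough sketch combining the first two options against the compose file from the question (ports and volumes omitted; untested):

version: '2'
services:
  bdr1:
    image: bdr
    hostname: bdr1
    links:
      - bdr2
  bdr2:
    image: bdr
    hostname: bdr2

Inside bdr1 you can then connect to bdr2 through the link, and each container also sees its own name as its hostname.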
The hostname inside the container should be the short container id, so this is correct (note there was a bug with Compose 1.6.0 and the short container id, so you should use at least version 1.6.2). Also, /etc/hosts is no longer used; there is now an embedded DNS server that handles resolving names to container IP addresses.
The container is discoverable by other containers with 3 names: the container name, the container short id, and the service name.
However, the other container may not be available immediately when the first one starts. You can use depends_on to set the order.
If you are testing the discovery, try using ping, and make sure to retry, because the name may not resolve immediately.
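For the ordering part, a minimal sketch of depends_on applied to the services from the question (everything else omitted):

services:
  bdr1:
    image: bdr
  bdr2:
    image: bdr
    depends_on:
      - bdr1

Note that depends_on only controls start order, not readiness, which is why retrying the ping (or the database connection) is still worthwhile.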
I have setup a few docker-containers with docker-compose.
When I start them via docker-compose up I can access them via their exposed ports, e.g. localhost:9080 and localhost:9180.
I really would like to access them via hostnames: localhost:9180 should be accessible on my localhost via api.local, and localhost:9080 via webservice.local.
How can I achieve that? Is that something that docker-compose can do or do I have to use a reverse proxy on my localhost?
Currently my docker-compose.yml looks like this:
api:
  build: .
  ports:
    - "9180:80"
    - "9543:443"
  external_links:
    - mysql_mysql_1:mysql
  links:
    - booking-api
webservice:
  ports:
    - "9080:80"
    - "9443:433"
  image: registry.foo.bar:5000/webservice:latest
  volumes:
    - ~/.docker-history:/.bash_history
    - ~/.docker-bashrc:/.bashrc
    - ./:/var/www/virtual/webservice/current
No, you can't do this.
The /etc/hosts file resolves hostnames to IP addresses only; it cannot map a hostname to an IP and port. So if you add a line like
api.local 127.0.0.1:9180
it won't work.
The only thing you can do is to set up a reverse proxy (like nginx) on your host that listens for api.local and forwards the requests to localhost:9180.
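A rough sketch of that setup, assuming nginx runs directly on the host: point the name at 127.0.0.1 in /etc/hosts and let nginx forward based on the server name (only api.local shown; webservice.local would be analogous):

# /etc/hosts on the host machine
127.0.0.1 api.local

# minimal nginx server block
server {
    listen 80;
    server_name api.local;
    location / {
        proxy_pass http://127.0.0.1:9180;
    }
}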
You should check out the dory project. By adding a VIRTUAL_HOST environment variable, the container becomes accessible by domain name. For example, if you set VIRTUAL_HOST=web.docker, you can reach the container at http://web.docker.
The project home page has more info. It's a young project but under active development. Support for macOS is also planned now that Docker for Mac and dlite have emerged/matured.
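Wired into the question's compose file, that would look roughly like this (a sketch; dory's proxy still has to be running on the host, and the .docker domain here just follows the answer's example):

webservice:
  image: registry.foo.bar:5000/webservice:latest
  environment:
    - VIRTUAL_HOST=webservice.docker

The container is then reachable at http://webservice.docker instead of localhost:9080.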