Apologies, I know this question is very common, but I have now come up against a brick wall and need a pointer in the right direction. Thanks.
I am struggling to use an API that is on localhost from within a Docker container. I have followed many guides but seem to be missing something. My steps:
In the Windows command prompt, I use curl to fire a GET request at the API on localhost. The request succeeds and hits the breakpoint in this action:
curl http://localhost:57888/api/reference
[HttpGet()]
public ActionResult CheckIfOnline()
{
    // Breakpoint hits here
    return Ok();
}
Now I would like to call this endpoint from inside my Docker container. I've tried to enable that in the compose file like so:
container-api:
  container_name: container-api
  build:
    context: ..
    dockerfile: Dockerfile
  ports:
    - "3007:3001"
  env_file:
    - ./file.env
  extra_hosts:
    - "host.docker.internal:host-gateway"
From my research I assume this essentially means the container can now 'see' the host machine and could therefore reach services on its localhost? (Happy to be proven wrong.)
So when I create the container, I first ping host.docker.internal to see if it's available.
ping host.docker.internal
PING host.docker.internal (192.168.65.2) 56(84) bytes of data
As you can see, there is a response, but I am not entirely sure what the IP 192.168.65.2 is. Looking around the web, it is apparently a 'magic' IP that represents the host. I am not sure this is right, as I don't see this IP in the output of ipconfig, but for now I will continue. One post I found says:
For Docker on Mac, there is a magic IP 192.168.65.2 in the Docker VM which
represents the host machine; or you can just use host.docker.internal
inside the Docker VM and it will work.
Lastly, I use curl from a shell inside the container to see whether I can hit the API that I hit at the start of this post. However, I get this error:
# curl http://host.docker.internal:57888/api/reference
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<HTML><HEAD><TITLE>Bad Request</TITLE>
<META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
<BODY><h2>Bad Request - Invalid Hostname</h2>
<hr><p>HTTP Error 400. The request hostname is invalid.</p>
</BODY></HTML>
Can anyone point me in the right direction please?
Thanks!
With curl http://host.docker.internal:57888/api/reference you are indeed connected to your API.
I know that because you get some HTML back. curl doesn't generate HTML when things go wrong, so the HTML must come from somewhere else: your API.
Maybe the API doesn't like to be called with a Host: header containing host.docker.internal and that's why it's returning the 400 error. To figure that out, we'd need more information on how the API is coded and hosted.
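One quick way to test that theory is to send the same request from the container but override the Host header:

curl -H "Host: localhost" http://host.docker.internal:57888/api/reference

If that succeeds, the 400 is most likely coming from hostname validation on the server side (HTTP.sys/IIS rejects requests for hostnames it wasn't told to listen on with exactly this "Invalid Hostname" page), not from Docker networking.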
@Hans led me in the right direction. host.docker.internal was indeed connecting to my API, but the API didn't like the hostname, so it returned the HTTP 400 error.
Therefore, in the Docker Compose file, I changed extra_hosts to this:
container-api:
  container_name: container-api
  build:
    context: ..
    dockerfile: Dockerfile
  ports:
    - "3007:3001"
  env_file:
    - ./file.env
  extra_hosts:
    - "localhost:host-gateway"
Now the container uses http://localhost:57888/api/reference and connects to the API successfully.
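If you want to double-check what that extra_hosts entry actually did, you can print the container's hosts file (container name as defined in the compose file above):

docker exec container-api cat /etc/hosts

You should see an added localhost line mapping to the host gateway address, alongside the usual 127.0.0.1 entry.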
Related
I'm still early on my Docker journey and I'm trying to 'Dockerise' an existing UI/API/DB web app where the UI and API/DB are on two different servers. It's gone fairly well so far, and my docker-compose fires up services for UI, API and DB (and phpMyAdmin) just fine. I know that those services can communicate using service names as hostnames thanks to Docker's default networking.
The UI (an Angular codebase) makes various calls to https://api.myproject.com, which works fine on the live sites but isn't ideal in my Docker version, as I want it to reach the API service in the container, not the live API site.
I know that I can edit the code on the UI container and replace calls to https://api.myproject.com with calls to 'api' but that's inconvenient and lacks portability if I want to redeploy on different servers (as it is now) in future.
Is it possible for a container to redirect all POST/GET etc. for a URL to a container service? I thought the --add-host might do something like this but it seems to want an IP address rather than a service name.
Thanks in advance.
EDIT [clarification]
My issue lies in the UI page (HTML/JS) that the user sees in the browser. When the page is loaded it throws some GET requests to the API URL and that's what I was hoping to redirect to the container API service.
You can use network aliases for the given container, and those can override an existing FQDN.
Here is an example crafted in two minutes:
docker-compose.yml
---
services:
  search_engine:
    image: nginx:latest
    networks:
      default:
        aliases:
          - google.com
          - www.google.com
  client:
    image: alpine:latest
    command: tail -f /dev/null
You can start this project with:
docker-compose up -d
Then log into the client container with
docker-compose exec client sh
And try to reach your search_engine service with either of the following
wget http://search_engine
wget http://google.com
wget http://www.google.com
In all cases you will get the default index.html page of a freshly installed nginx server.
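Applied to the question above, the same idea would look something like this (the service name api and the compose layout are assumptions based on the question):

services:
  api:
    # ... image/build config for the API service ...
    networks:
      default:
        aliases:
          - api.myproject.com

Keep in mind that aliases only affect name resolution inside the compose network; requests made by the user's browser on the host still resolve the real domain, so this helps server-side calls rather than the in-browser GET requests mentioned in the edit.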
I want to spin up a database container (e.g., MongoDB) with docker-compose so that I can run some tests against the database.
This is my mongodb.yml docker-compose file.
version: '3.7'
services:
  mongo:
    image: mongo:latest
    restart: always
    ports:
      - 27017:27017
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=example
  mongo-express:
    image: mongo-express:latest
    restart: always
    ports:
      - 8081:8081
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME=root
      - ME_CONFIG_MONGODB_ADMINPASSWORD=example
    depends_on:
      - mongo
When I run it with docker-compose -f mongodb.yml up I can successfully connect to MongoDB on localhost. In other words, the following connection string is valid: "mongodb://root:example@localhost:27017/admin?ssl=false"
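(As a sanity check, the connection string can be tested from the host with the mongo shell; assuming the legacy mongo client is installed, newer installs ship mongosh instead:)

mongo "mongodb://root:example@localhost:27017/admin?ssl=false"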
I want to use the equivalent of an alias so that, instead of localhost, MongoDB is accessible through the hostname potato.
In GitLab CI/CD, with a Docker runner, I can spin up a mongo container and provide an alias without any issue. Like this:
my_ci_cd_job:
  stage: my_stage
  services:
    - name: mongo:latest
      alias: potato
  variables:
    MONGO_INITDB_ROOT_USERNAME: root
    MONGO_INITDB_ROOT_PASSWORD: example
This allows me to connect within GitLab CI/CD with "mongodb://root:example@potato:27017/admin?ssl=false"
I need the same locally, so that I can reuse my connection string.
Under image: mongo:latest, I've added the following
container_name: potato
But I cannot connect to the server potato. I've tried a few combinations with network, alias, etc. No luck. I don't even understand what I am doing anymore. Isn't there a simple way to give an alias to a container so that my C# app or MongoDB client can access it?
Even the documentation at https://docs.docker.com/compose/compose-file/#external_links is useless in my opinion; it mentions samples that are not defined anywhere else.
Any help is much appreciated!
I've tried..
I've tried the suggestions from 'How do I set hostname in docker-compose?' without success.
I have spent a few hours reading the Docker Compose documentation and it's extremely confusing. The fact that most of the snippets are shown out of context, without specific examples, does not help either, because filling in the gaps requires deeper knowledge.
SOLUTION
Thanks to the replies, it's clear this is a hacky approach that is not really recommended.
I went with the recommendations, and my connection string is now
mongodb://root:example@potato:27017/admin?ssl=false
by default. That way, I don't need to change anything for my GitLab CI/CD pipeline, which has the alias potato for MongoDB.
When I run the MongoDB container locally with Docker Compose, it is exposed on localhost, but I edited my hosts file (on Windows C:\Windows\System32\drivers\etc\hosts, on Linux /etc/hosts) to make any request to potato go to localhost, i.e. 127.0.0.1:
127.0.0.1 potato
And that way I can connect to MongoDB from my code or Robo3T as if it were running on a host called potato.
Feel free to comment with any better alternative if there is one. Ta.
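For what it's worth, if the connecting application itself also ran inside the same compose project, a network alias would give the mongo service the potato hostname without touching any hosts file (a sketch based on the mongodb.yml above):

services:
  mongo:
    image: mongo:latest
    networks:
      default:
        aliases:
          - potato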
If I understand you correctly, you want to bind a hostname (e.g., potato) to an IP address on your host machine.
Afaik this is not possible, but there are workarounds[1].
Every time you start your docker-compose project, a network is created for those containers, and there is no way for you to be sure which IP addresses they will get. These could be in 172.17.0.0/24 or 172.18.0.0/24 or anything else really.
The only thing you know for sure is that your host will have a service running at port 27017. So you could say that the hostname potato points to localhost on your host machine by adding 127.0.0.1 potato to the /etc/hosts file on your host.
That way the connection string "mongodb://root:example@potato:27017/admin?ssl=false" will point to the local port from the perspective of your host machine, while it will point to the Docker container from the perspective of the rest of your docker-compose services.
I do have to say that I find this a hacky approach, and as @DavidMaze said, it's normal to need different connection strings depending on the context you use them in.
[1] https://github.com/dlemphers/docker-local-hosts
New to Couchbase; I've been using CouchDB, but I think the Couchbase data model will work better for my purposes.
I've set up a docker-compose file that uses the couchbase:community image:
version: "3"
services:
couchbase:
container_name: couchbase
image: couchbase:community
ports:
- "8091:8091"
- "8092:8092"
- "8093:8093"
- "8094:8094"
- "11210:11210"
networks:
- cbtemp
volumes:
- ../demodbs/cbdir:/opt/couchbase/var
networks:
cbtemp:
external:
name: cbtemp
(the cbtemp network is created beforehand so I can add a sync-gateway image separately)
It comes up fine, and accessing localhost:8091 in Chrome brings up the admin panel without issue.
But if I try curl http://localhost:8091, I get this response:
<!DOCTYPE ...>
<title>301 Moved Permanently</title>
...
The document has moved <a href="http://localhost:8091/ui/index.html">here</a>
...
If I curl the redirected URL, I get an HTML page (with some Angular stuff in it, no less; I'm presuming that's the admin page?).
If I curl http://localhost:8092, I get the expected response, but of course nothing wants to access Couchbase on :8092.
As an aside, bringing up the sync-gateway image accesses the :8091 url just fine and works as expected.
Not a deal-breaker (yet), but annoying.
You're accessing the root path when you go to just port 8091 with nothing else. Anything accessing Couchbase functionality is going to add a path, so this will be dealt with by internal routing. You can see those paths if you look at the REST API docs.
For whatever reason, they decided to host the admin UI off a base path starting with /ui. Hence the redirect: they're assuming that if you didn't supply any path, you want the UI.
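If you just want curl to follow that redirect the way the browser does, the -L flag does it:

curl -L http://localhost:8091/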
It's not correct that nothing wants to access Couchbase through port 8092, either. Various services use different ports. 8092 is used for some forms of query and other purposes. You can find out more about the different ports and why you need them open in the Couchbase docs.
I've got a reverse proxy, frontend and backend container. The backend is running Pyppeteer and the reverse proxy is set up with an alias "servicename.localhost" in docker-compose.yml:
networks:
  default:
    aliases:
      - servicename.localhost
This way I'm able to curl --insecure https://servicename.localhost from the backend container, but unfortunately it seems Chromium on the same container ignores that setting and so "servicename.localhost" resolves to 127.0.0.1:
pyppeteer.errors.PageError: net::ERR_CONNECTION_REFUSED at https://servicename.localhost/login
How can I work around this?
It looks like it may be related to DNS prefetching or asynchronous DNS, but there doesn't seem to be a command line flag to disable either of them anymore.
Things I've tried which didn't change anything:
Adding "--host-rules='MAP servicename.localhost {}'".format(socket.gethostbyaddr('servicename.localhost')[-1][0]) to the pyppeteer.launch args list parameter.
Adding "--host-resolver-rules=[same as above] to the pyppeteer.launch args list parameter.
I worked around this by changing the TLD from "localhost" to "test".
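In compose terms, that's just the alias block from the question with a different TLD (a minimal sketch):

networks:
  default:
    aliases:
      - servicename.test

The likely reason this helps: Chromium special-cases the .localhost TLD and always resolves it to the loopback address, bypassing DNS and hosts entries, which matches the ERR_CONNECTION_REFUSED seen above.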
I'm using Docker beta on a Mac and have some services set up in service-a/docker-compose.yml:
version: '2'
services:
  service-a:
    # ...
    ports:
      - '4000:80'
I then set up the following in /etc/hosts:
::1 service-a.here
127.0.0.1 service-a.here
and I've got an nginx server running that proxies service-a.here to localhost:4000.
So on my Mac I can just run curl http://service-a.here. This all works nicely.
Now, I'm building another service, service-b/docker-compose.yml:
version: '2'
services:
service-b:
# ...
ports:
- '4001:80'
environment:
SERVICE_A_URL: service-a.here
service-b needs service-a for a couple of things:
It needs to redirect the user in the browser to the $SERVICE_A_URL
It needs to perform HTTP requests to service-a, also using the $SERVICE_A_URL
With this setup, only the redirection (1) works. HTTP requests (2) do not work, because the service-b container has no notion of service-a.here in its DNS.
I tried adding service-a.here using the extra_hosts configuration option, but I'm not sure what to set it to; localhost will of course not work.
Note that I really want to keep the docker-compose files separate (joining them would not fix my problem by the way) because they both already have a lot of services running inside of them.
Is there a way to access the host's DNS resolution from inside a Docker container, so that, for instance, curl service-a.here will work from inside a container?
You can use the links instruction in your docker-compose.yml file to automatically resolve the address of service-a from your container service-b.
service-b:
  image: blabla
  links:
    - service-a:service-a
service-a:
  image: blablabla
You will now have a line in the /etc/hosts of your service-b container saying:
service-a 172.17.0.X
And note that service-a will be created before service-b when composing your app. I'm not sure how you can specify a particular IP after that, but Docker's documentation is pretty well done. Hope that's what you were looking for.
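To check whether that hosts entry was actually created (behaviour varies between legacy links and user-defined networks), you can read the file from inside service-b:

docker-compose exec service-b cat /etc/hosts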