I have set up a few Docker containers with docker-compose.
When I start them via docker-compose up, I can access them via their exposed ports, e.g. localhost:9080 and localhost:9180.
I would really like to access them via hostnames: localhost:9180 should be reachable on my localhost as api.local, and localhost:9080 as webservice.local.
How can I achieve that? Is that something docker-compose can do, or do I have to use a reverse proxy on my localhost?
Currently my docker-compose.yml looks like this:
api:
  build: .
  ports:
    - "9180:80"
    - "9543:443"
  external_links:
    - mysql_mysql_1:mysql
  links:
    - booking-api
webservice:
  image: registry.foo.bar:5000/webservice:latest
  ports:
    - "9080:80"
    - "9443:443"
  volumes:
    - ~/.docker-history:/.bash_history
    - ~/.docker-bashrc:/.bashrc
    - ./:/var/www/virtual/webservice/current
No, you can't do this with /etc/hosts alone.
The /etc/hosts file maps hostnames to IP addresses only; it knows nothing about ports. So it can resolve api.local to 127.0.0.1, but a line like
api.local 127.0.0.1:9180
won't work.
The only thing you can do is set up a reverse proxy (like nginx) on your host that listens for api.local and forwards the requests to localhost:9180.
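For example, a minimal sketch (hostnames taken from the question; the nginx config is illustrative, not a drop-in file):

# /etc/hosts on the host machine
127.0.0.1 api.local
127.0.0.1 webservice.local

# nginx server block forwarding api.local to the published port
server {
    listen 80;
    server_name api.local;
    location / {
        proxy_pass http://127.0.0.1:9180;
        proxy_set_header Host $host;
    }
}

A second, analogous server block with server_name webservice.local and proxy_pass http://127.0.0.1:9080 covers the other container.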
You should check out the dory project. By adding a VIRTUAL_HOST environment variable, the container becomes accessible by domain name. For example, if you set VIRTUAL_HOST=web.docker, you can reach the container at http://web.docker.
The project home page has more info. It's a young project but under active development. Support for macOS is also planned now that Docker for Mac and dlite have emerged/matured.
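As a rough sketch of how that looks in a compose file (the service and image names here are illustrative, not from the question):

services:
  web:
    image: nginx
    environment:
      - VIRTUAL_HOST=web.docker

With dory running, http://web.docker then routes to this container.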
Let's assume we have two backend apps, where backend_for_frontend needs to fetch some data from api.
If both apps run in Docker, or api runs in Docker and backend_for_frontend runs locally, backend_for_frontend can use the address http://host.docker.internal:3001/api to connect to api.
If both apps run locally (not in Docker), then backend_for_frontend needs to use http://127.0.0.1:3001/api to connect to api.
The issue is that when we switch api between running in Docker and running locally, backend_for_frontend needs a different address, which has to be changed by hand because backend_for_frontend doesn't know how api is being run.
Is there a way to resolve this address automatically, or to supply it as an env variable that works in any case? Basically, I want to run backend_for_frontend and api in any combination while backend_for_frontend's connection URL still resolves without manual changes.
docker-compose example:
services:
  api:
    ports:
      - 3001:3001
  backend_for_frontend:
    ports:
      - 3002:3002
That's a very common configuration scenario and you'll usually solve it by setting an environment variable in backend_for_frontend to the URL of the API.
Let's call the environment variable API_URL. Then you can do
services:
  api:
    ports:
      - 3001:3001
  backend_for_frontend:
    ports:
      - 3002:3002
    environment:
      - API_URL=http://api:3001/
Then, when you run the API locally, you'd change it to http://host.docker.internal:3001/.
You'll need to change your backend_for_frontend code to fetch the URL from the environment variable. There's no universal way of doing it and it depends on what language your backend_for_frontend is coded in.
If there's a URL you want as the default, you can add an ENV statement to the backend_for_frontend Dockerfile to set it. Then you only need to specify it in your docker-compose file when you want to override it.
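For example, a sketch of that default (reusing the same URL as the compose file above):

# In backend_for_frontend's Dockerfile: default API location
ENV API_URL=http://api:3001/

The environment: entry in docker-compose.yml overrides this value whenever it is present.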
This can also be achieved by using a hostname instead of a direct IP address to access the API.
That gives you the flexibility of a single name regardless of whether the API runs locally or in Docker.
For example, you can use http://api:3001/api as the base URL for the API in backend_for_frontend, and then set up the Docker Compose file to define a hostname for the api service:
services:
  api:
    hostname: api
    ports:
      - 3001:3001
  backend_for_frontend:
    ports:
      - 3002:3002
It's just the same as if you didn't dockerize: call them by host and port.
Your first API is reachable at:
http://localhost:3001
and the second at:
http://localhost:3002
P.S. You can also put both services on a shared Docker network so they talk to each other internally, as sketched below.
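A minimal sketch of such a network (service names taken from the question; on it, each service reaches the other by service name, e.g. http://api:3001):

services:
  api:
    networks:
      - internal
    ports:
      - 3001:3001
  backend_for_frontend:
    networks:
      - internal
    ports:
      - 3002:3002
networks:
  internal: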
That's all
I'm trying to run Artifactory (Artifactory CE-C++, V7.6.1) behind a reverse proxy (Traefik v2.2, latest).
Both are official, unaltered Docker images. I'm starting them up with docker-compose.
My docker-compose.yml uses the following Traefik configuration for the Artifactory service:
image: docker.bintray.io/jfrog/artifactory-cpp-ce
[...]
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.artifactory.rule=Host(`localhost`) && PathPrefix(`/artifactory`)"
  - "traefik.http.routers.artifactory.middlewares=artifactory-stripprefix"
  - "traefik.http.middlewares.artifactory-stripprefix.stripprefix.prefixes=/"
  - "traefik.docker.network=docker-network"
Note: docker-network is just a simple (external) Docker network. I still have it in there from my Traefik v1 setup.
Artifactory is accessible at http://localhost/artifactory/ at first, but only during startup. As soon as Artifactory redirects me to its UI, it takes me to http://localhost/ui/ instead of (I guess?) http://localhost/artifactory/ui/, which is invalid.
I'm looking for either a way to tell Artifactory to account for the /artifactory prefix when redirecting, or a possibility in Traefik to rewrite Artifactory's redirect response so that the redirect URL matches the path.
I'm also using Jenkins with Traefik; there it was as simple as adding
JENKINS_OPTS: "--prefix=/jenkins"
Artifactory CE-C++ opens up two ports: 8081 and 8082. I suspect your reverse proxy points at the former, port 8081, doesn't it? The UI, AFAIK, is served on 8082. Did you try that port?
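If so, a Traefik v2 label along these lines (a sketch, untested against this setup) should point the router's service at the UI port instead:

labels:
  - "traefik.http.services.artifactory.loadbalancer.server.port=8082"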
I want to run a web app and a DB using Docker. Is there any way to connect the two containers (the web app container on one machine and the DB container on another) using docker-compose files, without swarm mode?
I mean two separate servers.
This is my MongoDB docker-compose file:
version: '2'
services:
  mongodb_container:
    image: mongo:latest
    restart: unless-stopped
    ports:
      - 27017:27017
    volumes:
      - mongodb_data_container:/data/db
# named volumes must be declared at the top level in compose v2
volumes:
  mongodb_data_container:
Here is my demowebapp docker-compose file:
version: '2'
services:
  demowebapp:
    image: demoapp:latest
    restart: unless-stopped
    volumes:
      - ./uploads:/app/uploads
    environment:
      - PORT=3000
      - ROOT_URL=http://localhost
      - MONGO_URL=mongodb://35.168.21.133/demodb
    ports:
      - 3000:3000
Can anyone suggest how to do this?
Using only one docker-compose.yml with compose version: 2, there is no way to deploy two services on two different machines. That's what version: 3, a stack.yml, and swarm mode are for.
You can, however, deploy to two different machines using two version-2 docker-compose.yml files, but you will have to connect them using hostnames/IPs different from the service names in the compose files.
You shouldn't need to change anything in the sample files you show: you have to connect to the other host's IP address (or DNS name) and the published ports:.
Once you're on a different machine (or in a different VM) none of the details around Docker are visible any more. From the point of view of the system running the Web application, the first system is running MongoDB on port 27017; it might be running on bare metal, or in a container, or port-forwarded from a VM, or using something like HAProxy to pass through from another system; there's literally no way to tell.
The configuration you have to connect to the first server's IP address will work. I'd set up a DNS system if you don't already have one (BIND, AWS Route 53, ...) to avoid needing to hard-code the IP address. You also might look at a service-discovery system (I have had good luck with Hashicorp's Consul in the past) which can send you to "the host system running MongoDB" without needing to know which one that is.
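For instance, the web app's compose file could then point at a name instead of the hard-coded address (the hostname below is hypothetical):

environment:
  - MONGO_URL=mongodb://mongo.internal.example.com:27017/demodb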
I am setting up a Spring application to run using Compose. The application needs to connect to ActiveMQ, either running locally for developers or to existing instances for staging/production.
I set up the following, which works great for local dev:
amq:
  image: rmohr/activemq:latest
  ports:
    - "61616:61616"
    - "8161:8161"
legacy-bridge:
  image: myco/myservice
  links:
    - amq
and in the application configuration I am declaring the AMQ connection as
broker-url=tcp://amq:61616
Running docker-compose up works great: ActiveMQ fires up locally and my application container starts and connects to it.
Now I need to set this up for staging/production, where the ActiveMQ instances run on existing hardware within the infrastructure. My thought is to either use Spring profiles to handle different configurations, in which case the broker-url=tcp://amq:61616 entry in the application configuration would become something like broker-url=tcp://some.host.here:61616, or to find some way to create a DNS entry within my production docker-compose.yml that points an amq entry at the associated staging or production queues.
What is the best approach here, and if it is DNS, how do I set that up in Compose?
Thanks!
Using the extra_hosts flag
First thing that comes to mind is using Compose's extra_hosts flag:
legacy-bridge:
  image: myco/myservice
  extra_hosts:
    - "amq:1.2.3.4"
This will not create a DNS record, but an entry in the container's /etc/hosts file, effectively allowing you to continue using tcp://amq:61616 as your broker URL in your application.
Using an ambassador container
If you're not content with directly specifying the production broker's IP address and would like to leverage existing DNS records, you can use the ambassador pattern:
amq-ambassador:
  image: svendowideit/ambassador
  command: ["your-amq-dns-name", "61616"]
  ports:
    - 61616
legacy-bridge:
  image: myco/myservice
  links:
    - "amq-ambassador:amq"
I'm using Docker beta on a Mac and have some services set up in service-a/docker-compose.yml:
version: '2'
services:
  service-a:
    # ...
    ports:
      - '4000:80'
I then set up the following in /etc/hosts:
::1 service-a.here
127.0.0.1 service-a.here
and I've got an nginx server running that proxies service-a.here to localhost:4000.
So on my Mac I can just run curl http://service-a.here. This all works nicely.
Now, I'm building another service, service-b/docker-compose.yml:
version: '2'
services:
  service-b:
    # ...
    ports:
      - '4001:80'
    environment:
      SERVICE_A_URL: service-a.here
service-b needs service-a for a couple of things:
It needs to redirect the user in the browser to the $SERVICE_A_URL
It needs to perform HTTP requests to service-a, also using the $SERVICE_A_URL
With this setup, only the redirection (1.) works. HTTP requests (2.) do not work, because the service-b container has no notion of service-a.here in its DNS.
I tried adding service-a.here using the extra_hosts configuration option, but I'm not sure what to set it to. localhost will of course not work.
Note that I really want to keep the docker-compose files separate (joining them would not fix my problem by the way) because they both already have a lot of services running inside of them.
Is there a way to have access to the DNS resolving on localhost from inside a docker container, so that for instance curl service-a.here will work from inside a container?
You can use the links instruction in your docker-compose.yml file to automatically resolve the address from your service-b container.
service-b:
  image: blabla
  links:
    - service-a:service-a
service-a:
  image: blablabla
You will now have a line in the /etc/hosts of your service-b container saying:
service-a 172.17.0.X
Note that service-a will be created before service-b when composing your app. I'm not sure how you can pin a specific IP after that, but Docker's documentation is pretty well done. Hope that's what you were looking for.