I have a few microservices, let's call them food-ms, receipt-ms, and ingredients-ms, plus a frontend, a React app, that consumes those microservices.
Each of them exposes an API at /api/[methods].
I would like to create environments for production and development using Docker and docker-compose, with the following properties:
The application should be available at a single host. In production the host should be, for example, http://food-app.test; for development it should ideally be localhost.
Each microservice and the frontend should share that single host, but at different paths. For example, the food-ms API should be at localhost/food/api, the receipt-ms API at localhost/receipt/api, etc. The frontend should be at the root path /.
Ideally, I would like to be able to run some services outside their containers for easy debugging, while still having them reverse-proxied and available at localhost/{service}/api.
I found the Traefik reverse proxy and experimented with it a bit, but got stuck on these issues:
How do I make the app available at a predictable domain like localhost? Currently I'm only able to proxy requests to a specific backend by specifying a strange host in the Host header, like <container-name>.<network-name>.docker.localhost.
The frontends described in traefik.toml seem to have no effect.
How do I route requests from one frontend to different backends depending on the path?
How do I route requests to an external IP and port (I would like to use this to run services outside a container for debugging)? Should I use the host network in Docker for this?
Thanks in advance.
Here is my traefik.toml:
defaultEntryPoints = ["http"]

[entryPoints]
  [entryPoints.http]
  address = ":80"

[file]

[frontends]
  [frontends.food]
  entrypoints = ["http"]
  backend = "food"
  [frontends.receipts]
  entrypoints = ["http"]
  backend = "receipts"
Those frontends don't seem to get applied, because the dashboard doesn't change if I comment them out.
After some time spent on this, I had a bit of success with my problem.
First of all, it is much easier to experiment with Traefik running as a local application rather than as a Docker container.
So I installed Traefik locally (brew install traefik) and ran it with the following command line:
traefik --web --configfile=./docker/traefik-local.toml --logLevel=INFO
The --web argument is deprecated but still works; it can also simply be omitted.
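(When --web is enabled, Traefik also serves its dashboard, by default on port 8080; the docker-compose file further below maps it to 8088.)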
Then I created a TOML file with the configuration:
defaultEntryPoints = ["http"]

[entryPoints]
  [entryPoints.http]
  address = ":80"

[file]

[frontends]
  [frontends.fin]
  entrypoints = ["http"]
  backend = "fin"
    [frontends.fin.routes.matchUrl]
    rule = "PathPrefixStrip:/api/fin"
    [frontends.fin.routes.rewriteUrl]
    rule = "AddPrefix:/api"
  [frontends.proj]
  entrypoints = ["http"]
  backend = "proj"
    [frontends.proj.routes.matchUrl]
    rule = "PathPrefixStrip:/api/proj"
    [frontends.proj.routes.rewriteUrl]
    rule = "AddPrefix:/api"

[backends]
  [backends.fin]
    [backends.fin.servers.main]
    url = "http://localhost:81"
  [backends.proj]
    [backends.proj.servers.main]
    url = "http://localhost:82"
The service names differ from those in the question above, but the idea should be clear.
First of all, the [file] directive before the frontend and backend descriptions is mandatory. It doesn't work without it, arghh :(
The services run in Docker containers and expose port 81 for fin and port 82 for proj. Since Traefik now works outside the isolated Docker network, it can reach both natively running applications and applications in containers.
Then the two frontends are described. Initially I also had an issue with the rules: PathPrefixStrip is a matcher, but it also modifies the path by removing the matched prefix.
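To make the combined effect of the two rules concrete, here is how a request travels through the fin frontend (the /users endpoint is just a hypothetical example):

GET localhost/api/fin/users
  -> PathPrefixStrip:/api/fin matches and strips the prefix, leaving /users
  -> AddPrefix:/api restores the prefix the service expects, giving /api/users
  -> the request is forwarded to http://localhost:81/api/users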
So now it works as I want when running locally, and it should be much easier to get it working in Docker.
Well, a bit more info about running all that stuff in Docker.
First of all, Traefik has a concept of configuration providers, from which it gets all the info about backends, frontends, rules, mappings, etc.
For Docker there are at least two ways: use labels on the services in docker-compose.yml, or use the file configuration provider.
Here I will use the file configuration provider.
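For comparison, the label-based approach would look roughly like this in docker-compose.yml (a sketch for Traefik v1; the rule and port values are assumptions mirroring the file-based config below):

financial-service:
  build:
    context: .
    dockerfile: ./docker/financial.Dockerfile
  labels:
    # Traefik's Docker provider reads routing rules from these labels
    - "traefik.enable=true"
    - "traefik.frontend.rule=PathPrefixStrip:/api/fin"
    - "traefik.port=80"  # the port the service listens on inside the container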
To use it you need to add a [file] section with the configuration below to the Traefik config, or use a separate file.
I used a separate file, enabling it and pointing to it by adding the --file --file.filename=/etc/traefik/traefik.file.toml command-line arguments.
Remember: if you use Windows and Docker Toolbox, you need to add a shared folder in VirtualBox and define the mapping relative to that folder. That's a pain, yes.
After that, the other things are easy.
To address services in the [backends] section of the Traefik config, use the service names from docker-compose.yml. To expose the proxy, use port mappings.
Here is my docker-compose.yaml:
version: "3"
services:
  financial-service:
    build:
      context: .
      dockerfile: ./docker/financial.Dockerfile
  project-service:
    build:
      context: .
      dockerfile: ./docker/project.Dockerfile
  traefik:
    image: traefik
    command: --web --docker --file --file.filename=/etc/traefik/traefik.file.toml --docker.domain=docker.localhost --logLevel=INFO --configFile=/etc/traefik/traefik.toml
    ports:
      - "80:80"
      - "8088:8080"
      # - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      # On Windows with docker-toolbox this should be mounted
      # as a shared folder in VirtualBox.
      # Mount it via the VirtualBox UI, and don't forget to restart the docker machine.
      # - /rd-erp/docker:/etc/traefik/
      # On a normal OS:
      - ./docker:/etc/traefik/
    depends_on:
      - project-service
      - financial-service
Here is traefik.file.toml for Docker:
[frontends]
  [frontends.fin]
  entrypoints = ["http"]
  backend = "fin"
    [frontends.fin.routes.matchUrl]
    rule = "PathPrefixStrip:/api/fin"
    [frontends.fin.routes.rewriteUrl]
    rule = "AddPrefix:/api"
  [frontends.proj]
  entrypoints = ["http"]
  backend = "proj"
    [frontends.proj.routes.matchUrl]
    rule = "PathPrefixStrip:/api/proj"
    [frontends.proj.routes.rewriteUrl]
    rule = "AddPrefix:/api"

[backends]
  [backends.fin]
    [backends.fin.servers.main]
    url = "http://financial-service"
  [backends.proj]
    [backends.proj.servers.main]
    url = "http://project-service"
The next step would be to run some services outside a container and still be able to reverse-proxy them from localhost.
And probably the last part: connecting from Docker, in our case from the Traefik container, to services running on the host machine.
Run a service on the host machine
In Docker 18.03+ use the special domain host.docker.internal, and don't forget to specify the protocol and the port.
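For example, to let the Traefik container reach the fin service running directly on the host, point the backend in the file-provider config at the host (a sketch, assuming the service listens on host port 81 as before):

[backends.fin]
  [backends.fin.servers.main]
  # host.docker.internal resolves to the host machine from inside the container
  url = "http://host.docker.internal:81"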
In earlier Docker versions you would probably need to use host network mode. That involves extra configuration of the services so they don't collide on ports that are already busy, but it probably wouldn't require changing their configuration when running them outside a container.
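A minimal sketch of host network mode for one service (Linux only; the container then shares the host's network stack, and ports: mappings are ignored):

financial-service:
  build:
    context: .
    dockerfile: ./docker/financial.Dockerfile
  # the service binds directly to the host's ports
  network_mode: host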
Run docker-compose without the service you would like to debug:
docker-compose up --no-deps traefik financial-service
Enjoy
Don't forget to remove the [file] section from traefik.toml if you keep the configuration in a separate file provided via --file.filename; it seems the [file] section takes precedence.
Related
I am writing a Docker Compose file for my web app. If I use links to connect services with each other, do I also need to include ports? And is depends_on an alternative to links? What is the best way to connect services with one another in a Compose file?
The core setup for this is described in Networking in Compose. If you do absolutely nothing, then one service can call another using its name in the docker-compose.yml file as a host name, using the port the process inside the container is listening on.
Up to startup-order issues, here's a minimal docker-compose.yml that demonstrates this:
version: '3'
services:
  server:
    image: nginx
  client:
    image: busybox
    command: wget -O- http://server/
    # Hack to make the example actually work:
    # command: sh -c 'sleep 1; wget -O- http://server/'
You shouldn't use links: at all. It was an important part of first-generation Docker networking, but it's not useful on modern Docker. (Similarly, there's no reason to put expose: in a Docker Compose file.)
You always connect to the port the process inside the container is running on. ports: are optional; if you have ports:, cross-container calls always connect to the second port number and the remapping doesn't have any effect. In the example above, the client container always connects to the default HTTP port 80, even if you add ports: ['12345:80'] to the server container to make it externally accessible on a different port.
depends_on: affects two things. Try adding depends_on: [server] to the client container in the example. If you look at the "Starting..." messages that Compose prints out when it starts, this forces server to start starting before client starts starting, but it is not a guarantee that server is up, running, and ready to serve requests (a very common problem with database containers). If you start only part of the stack with docker-compose up client, it also causes server to start with it.
A more complete typical example might look like:
version: '3'
services:
  server:
    # The Dockerfile COPYs static content into the image
    build: ./server-based-on-nginx
    ports:
      - '12345:80'
  client:
    # The Dockerfile installs
    # https://github.com/vishnubob/wait-for-it
    build: ./client-based-on-busybox
    # ENTRYPOINT and CMD will usually be in the Dockerfile
    entrypoint: wait-for-it.sh server:80 --
    command: wget -O- http://server/
    depends_on:
      - server
SO questions in this space seem to include a number of other unnecessary options, sketched after this paragraph. container_name: explicitly sets the name of the container for non-Compose docker commands, rather than letting Compose choose it, and provides an alternate name for networking purposes, but you don't really need it. hostname: affects the container's internal host name (what you might see in a shell prompt, for example) but has no effect on other containers. You can manually create networks:, but Compose provides a default network for you and there's no reason not to use it.
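A sketch of those options together; all of them can normally be left out:

services:
  server:
    image: nginx
    container_name: my-server   # fixed name for plain `docker` commands, plus an extra network alias
    hostname: server-internal   # only the container's own view of its host name
    networks:
      - my-network              # Compose's automatic default network usually suffices
networks:
  my-network: {}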
I am pretty new to Docker and trying to understand it. I have a docker-compose.yml file that contains certain things I am unclear about. (I received it from a client and am trying to run and understand it.) Please note that I am using Windows 10 and Docker version 3.0.
1) What does the following piece of docker-compose.yml mean? Will it build the vvv.payara image and then start Payara on port 4848? If yes, should I be able to open the admin page at localhost:4848 after doing docker-compose up?
payara:
  image: vvv.payara:rc1
  build: payara
  ports:
    - 4848:4848
    - 8080:8080
    - 8181:8181
2) What is the point of specifying three ports for Payara (4848, 8080, and 8181)? Does it mean that if the first is occupied, Payara starts on another one?
3) What does the line - ./deployments:/opt/payara41/deployments do? Why is an opt folder specified although I am using Windows 10? I assume the opt directory exists on Linux machines.
payara:
  image: vvv.payara:rc1
  build: payara
  ports:
    - 4848:4848
    - 8080:8080
    - 8181:8181
  volumes:
    - ./deployments:/opt/payara41/deployments
    - ./logs:/opt/payara41/glassfish/domains/payaradomain/logs
    - ./vvvConfiguration:/opt/vdz/config
  working_dir: /opt/payara41/bin/
  environment:
    - PAYARA_DOMAIN=payaradomain
The build parameter specifies the folder Docker will use to build the image (cf. the doc).
The list of ports publishes ports of the container on the host system. This way you can access ports 4848, 8080, and 8181 of the container on localhost.
These three ports are required to access all the components of Payara, and they will all be used for different Payara services, provided the ports are available on the host system (port 4848 is the admin HTTPS interface, 8080 the HTTP listener, and 8181 the HTTPS listener).
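As a quick check after starting the stack (a sketch; -k skips certificate verification, since the admin interface serves HTTPS with a self-signed certificate):

docker-compose up -d payara
curl -k https://localhost:4848   # admin interface
curl http://localhost:8080       # HTTP listener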
Those lines declare mount points, which behave like shared folders between the host and the container. The part before the : refers to the folder on the host, and the part after it to the folder inside the container it is linked to.
This means your deployments folder will be accessible inside the container at /opt/payara41/deployments.
(As far as I'm concerned, this is more of a development question than a server question, but it lies very much on the boundary of the two, so feel free to migrate it to serverfault.com if that's the consensus.)
I have a service, let's call it web, and it is declared in a docker-compose.yml file as follows:
web:
  image: webimage
  command: run start
  build:
    context: ./web
    dockerfile: Dockerfile
In front of this, I have a reverse-proxy server running Apache Traffic Server. There is a simple mapping rule in the URL remapping config file:
map / http://web/
So all incoming requests are mapped onto the web service described above. This works just peachily in docker-compose; however, when I move the service to Kubernetes with the following service description:
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: web
  name: web
spec:
  clusterIP: None
  ports:
    - name: headless
      port: 55555
      targetPort: 0
  selector:
    io.kompose.service: web
status:
  loadBalancer: {}
...Traffic Server complains because it cannot resolve the DNS name web.
I can resolve this by slightly changing the DNS behaviour of traffic server with the following config change:
CONFIG proxy.config.dns.search_default_domains INT 1
(see https://docs.trafficserver.apache.org/en/7.1.x/admin-guide/files/records.config.en.html#dns)
This config change is described as follows:
Traffic Server can attempt to resolve unqualified hostnames by expanding to the local domain. For example if a client makes a request to an unqualified host (e.g. host_x) and the Traffic Server local domain is y.com, then Traffic Server will expand the hostname to host_x.y.com.
Now everything works just great in kubernetes.
However, when running in docker-compose, Traffic Server complains about not being able to resolve web.
So I can get things working on both platforms, but this requires config changes to do so. I could fire a start-up script for Traffic Server that establishes whether we're running in Kubernetes or Docker and writes the config line above depending on where we are running, but ideally I'd like the DNS behaviour to be consistent across platforms. My understanding of DNS (and in particular of DNS default domains / local domains) is patchy.
Any pointers? Ideally, a local domain for docker-compose seems like the way to go here.
The default Kubernetes local domain is
default.svc.cluster.local
which means that the fully qualified name of the web service under Kubernetes is web.default.svc.cluster.local.
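This is easy to verify from any pod in the cluster (a sketch, assuming an image that ships nslookup, e.g. busybox):

nslookup web
# the pod's DNS search path expands "web" to web.default.svc.cluster.local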
So, in the docker-compose file, under the trafficserver config section, I can create an alias for web as web.default.svc.cluster.local with the following docker-compose.yml syntax:
version: "3"
services:
  web:
    # ...
  trafficserver:
    # ...
    links:
      - "web:web.default.svc.cluster.local"
and update the mapping config in trafficserver to:
map / http://web.default.svc.cluster.local/
and now the web service is reachable using the same domain name across docker-compose and kubernetes.
I found the same problem but solved it in another way, after much painful debugging.
With CONFIG proxy.config.dns.search_default_domains INT 1, Apache Traffic Server will append the names found under search in /etc/resolv.conf one by one until it gets a hit.
In my case resolv.conf points to company.intra, so I could name my services (all services used from Apache Traffic Server) accordingly:
version: '3.2'
services:
  # This hack is ugly, but we need to name this service
  # (and all other services called from Apache Traffic Server)
  # with the same name as found under search in /etc/resolv.conf.
  web.company.intra:
    image: web-image:1.0.0
With this change I don't need to make any changes to remap.config at all; the URL used can still be just "web", since it gets expanded to a name that matches both environments:
web.company.intra in docker-compose
web.default.svc.cluster.local in kubernetes
I am setting up a Spring application to run using Compose. The application needs to establish a connection to ActiveMQ, either running locally for developers or to existing instances for staging/production.
I set up the following, which works great for local dev:
amq:
  image: rmohr/activemq:latest
  ports:
    - "61616:61616"
    - "8161:8161"
legacy-bridge:
  image: myco/myservice
  links:
    - amq
and in the application configuration I am declaring the AMQ connection as
broker-url=tcp://amq:61616
Running docker-compose up works great: ActiveMQ is fired up locally, and my application container starts and connects to it.
Now I need to set this up for staging/production, where the ActiveMQ instances run on existing hardware within the infrastructure. My thought is to either use Spring profiles to handle the different configurations, in which case the application configuration entry broker-url=tcp://amq:61616 would become something like broker-url=tcp://some.host.here:61616, or to find some way to create a DNS entry within my production docker-compose.yml that points an amq DNS entry at the associated staging or production queues.
What is the best approach here and if it is DNS, how to I set that up in compose?
Thanks!
Using the extra_hosts flag
First thing that comes to mind is using Compose's extra_hosts flag:
legacy-bridge:
  image: myco/myservice
  extra_hosts:
    - "amq:1.2.3.4"
This will not create a DNS record but an entry in the container's /etc/hosts file, effectively allowing you to keep using tcp://amq:61616 as the broker URL in your application.
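The resulting entry in the legacy-bridge container's /etc/hosts looks like this:

1.2.3.4    amq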
Using an ambassador container
If you're not content with directly specifying the production broker's IP address and would like to leverage existing DNS records, you can use the ambassador pattern:
amq-ambassador:
  image: svendowideit/ambassador
  command: ["your-amq-dns-name", "61616"]
  ports:
    - 61616
legacy-bridge:
  image: myco/myservice
  links:
    - "amq-ambassador:amq"
I'm using Docker beta on a Mac and have some services set up in service-a/docker-compose.yml:
version: '2'
services:
  service-a:
    # ...
    ports:
      - '4000:80'
I then set up the following in /etc/hosts:
::1 service-a.here
127.0.0.1 service-a.here
and I've got an nginx server running that proxies service-a.here to localhost:4000.
So on my Mac I can just run curl http://service-a.here. This all works nicely.
Now, I'm building another service, service-b/docker-compose.yml:
version: '2'
services:
  service-b:
    # ...
    ports:
      - '4001:80'
    environment:
      SERVICE_A_URL: service-a.here
service-b needs service-a for a couple of things:
1. It needs to redirect the user in the browser to the $SERVICE_A_URL.
2. It needs to perform HTTP requests to service-a, also using the $SERVICE_A_URL.
With this setup, only the redirection (1) works. The HTTP requests (2) do not work, because the service-b container has no notion of service-a.here in its DNS.
I tried adding service-a.here using the extra_hosts configuration variable, but I'm not sure what to set it to; localhost will not work, of course.
Note that I really want to keep the docker-compose files separate (joining them would not fix my problem, by the way), because both already have a lot of services running inside them.
Is there a way to have access to the DNS resolving on localhost from inside a docker container, so that for instance curl service-a.here will work from inside a container?
You can use the links instruction in your docker-compose.yml file to automatically resolve the address from your service-b container:
service-b:
  image: blabla
  links:
    - service-a:service-a
service-a:
  image: blablabla
You will now have a line in the /etc/hosts of your service-b container saying:
service-a 172.17.0.X
And note that service-a will be created before service-b when composing your app. I'm not sure how you can specify a particular IP after that, but Docker's documentation is pretty well done. Hope that's what you were looking for.