As far as I'm concerned, this is more of a development question than a server question, but it lies very much on the boundary of the two, so feel free to migrate it to serverfault.com if that's the consensus.
I have a service, let's call it web, and it is declared in a docker-compose.yml file as follows:
web:
image: webimage
command: run start
build:
context: ./web
dockerfile: Dockerfile
In front of this, I have a reverse-proxy server running Apache Traffic Server. There is a simple mapping rule in the url remapping config file
map / http://web/
So all incoming requests are mapped onto the web service described above. This works just peachily in docker-compose; however, when I move the service to Kubernetes with the following service description:
apiVersion: v1
kind: Service
metadata:
labels:
io.kompose.service: web
name: web
spec:
clusterIP: None
ports:
- name: headless
port: 55555
targetPort: 0
selector:
io.kompose.service: web
status:
loadBalancer: {}
...Traffic Server complains because it cannot resolve the DNS name web.
I can fix this by slightly changing the DNS behaviour of Traffic Server with the following config change:
CONFIG proxy.config.dns.search_default_domains INT 1
(see https://docs.trafficserver.apache.org/en/7.1.x/admin-guide/files/records.config.en.html#dns)
This config change is described as follows:
Traffic Server can attempt to resolve unqualified hostnames by expanding to the local domain. For example if a client makes a request to an unqualified host (e.g. host_x) and the Traffic Server local domain is y.com, then Traffic Server will expand the hostname to host_x.y.com.
Now everything works just great in Kubernetes.
However, when running in docker-compose, Traffic Server now complains about not being able to resolve web.
So I can get things working on both platforms, but each requires its own config change. I could have a start-up script for Traffic Server detect whether we're running in Kubernetes or Docker and write the config line above accordingly, but ideally I'd like the DNS behaviour to be consistent across platforms. My understanding of DNS (and in particular of DNS default domains/local domains) is patchy.
Any pointers? Ideally, a local domain for docker-compose seems like the way to go here.
The default Kubernetes local domain (for the default namespace) is
default.svc.cluster.local
which means that the fully qualified name of the web service under Kubernetes is web.default.svc.cluster.local.
So, in the docker-compose file, under the trafficserver config section, I can create an alias for web as web.default.svc.cluster.local with the following docker-compose.yml syntax:
version: "3"
services:
web:
# ...
trafficserver:
# ...
links:
- "web:web.default.svc.cluster.local"
and update the mapping config in trafficserver to:
map / http://web.default.svc.cluster.local/
and now the web service is reachable using the same domain name across docker-compose and kubernetes.
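If you'd prefer to avoid the legacy links: option, the same effect can be had with a network alias instead; a minimal sketch, assuming the services share the default Compose network:
version: "3"
services:
  web:
    # ...
    networks:
      default:
        aliases:
          - web.default.svc.cluster.local
  trafficserver:
    # ...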
I found the same problem but solved it in another way, after much painful debugging.
With CONFIG proxy.config.dns.search_default_domains INT 1, Apache Traffic Server will append the names found under search in /etc/resolv.conf one by one until it gets a hit.
In my case resolv.conf lists company.intra, so I could name my services (all services reached from Apache Traffic Server) accordingly:
version: '3.2'
services:
  # This hack is ugly, but this service (and every other
  # service called from ATS) must be named to match a domain
  # listed under search in /etc/resolv.conf.
  web.company.intra:
    image: web-image:1.0.0
With this change I don't need to modify remap.config at all; the URL can still be just "web", since it gets expanded to a name that resolves in both environments:
web.company.intra in docker-compose
web.default.svc.cluster.local in Kubernetes
Related
We were using Docker 18.09.8-dind. DinD (Docker-in-Docker) means running Docker in a separate container: we send requests to this container to build our images, instead of executing Docker on the machine that wants the built image.
We needed to upgrade from 18.09.8-dind to 20.10.14-dind. Since we use Kubernetes, we just updated the image version in some YAML files:
spec:
containers:
- name: builder
- image: docker:18.09.8-dind
+ image: docker:20.10.14-dind
args: ["--storage-driver", "overlay2", "--mtu", "1460"]
imagePullPolicy: Always
resources:
Alas, things stopped working after that. Builds failed, and we found error messages like this in the code that reaches out to our Docker builder:
{"errno":-111,"code":"ECONNREFUSED","syscall":"connect","address":"123.456.789.10","port":2375}
Something went wrong and the entire build was interrupted due to an incorrect configuration file or build step,
check your source code.
What can be going on?
We checked the logs in the Docker pod, and found this message at the end:
API listen on [::]:2376
Well, the error message above shows that we tried to connect to port 2375, which used to work. Why has the port changed?
Docker enables TLS by default from version 19.03 onwards. When Docker uses TLS, it listens on port 2376.
We had three alternatives here:
change the daemon's port back to 2375 (which sounds like a bad idea: we would be using the default plain-text port for TLS communication, a very confusing setup);
connect to the new port; or
disable TLS.
In general, connecting to the new port is probably the best solution. However, for reasons specific to us, we chose to disable TLS, which only requires an environment variable in yet another YAML file:
- name: builder
image: docker:20.10.14-dind
args: ["--storage-driver", "overlay2", "--mtu", "1460"]
+ env:
+ - name: DOCKER_TLS_CERTDIR
+ value: ""
imagePullPolicy: Always
resources:
requests:
In most scenarios, though, it is probably better to have TLS enabled and change the port in the client.
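For reference, keeping TLS (option 2) would roughly amount to pointing the client at port 2376 and handing it the certificates the DinD daemon generates under DOCKER_TLS_CERTDIR; a sketch, assuming the certificates are shared through a volume and tcp://<dind-host> is whatever address you already use:
- name: builder-client            # hypothetical client container
  env:
    - name: DOCKER_HOST
      value: tcp://<dind-host>:2376
    - name: DOCKER_TLS_VERIFY
      value: "1"
    - name: DOCKER_CERT_PATH
      value: /certs/client        # docker:dind writes client certs here when DOCKER_TLS_CERTDIR=/certs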
(Sharing this in the spirit of "Can I answer my own question?", because it took us some time to piece the parts together. Hopefully having this information alongside the error message will make it easier for other affected people to find.)
I've seen many examples of Docker Compose, and they make perfect sense to me, but they all bundle the frontend and backend as separate containers in the same composition. In my use case I've developed a backend (in Django) and a frontend (in React) for a particular application. However, I want to be able to allow my backend API to be consumed by other client applications down the road, and thus I'd like to isolate them from one another.
Essentially, I envision it looking something like this. I would have a docker-compose file for my backend, which would consist of a PostgreSQL container and a webserver (Apache) container with a volume to my source code. I won't get into implementation details, but because containers in the same composition exist on the same network, I can refer to the DB in the source code using its alias from the file. That is one environment with two containers.
For my frontend, and any other future client applications that consume the backend, I would have a webserver (Apache) container serving the compiled static build of the React source. That of course exists in its own environment, so my question is: how do I converge the two, such that I can refer to the backend alias in my base URL (axios, fetch, etc.)? How do you ship both "environments" to a registry and then deploy from that registry such that they can continue to communicate?
I feel like I'm probably missing the mark on how the Docker architecture works at large, but to my knowledge there is a default network, and Docker will run the composition on that default network unless otherwise specified or it's already in use. However, two separate compositions are two separate networks, no? I'd very much appreciate a lesson on the semantics, and thank you in advance.
There are a couple of ways to get multiple Compose files to connect together. The easiest is just to declare that one project's default network is the other's:
networks:
default:
external:
name: other_default
(docker network ls will tell you the actual name once you've started the other Compose project.) This is also suggested in the Docker Networking in Compose documentation.
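Compose names a project's default network <project>_default, so other_default assumes the other project is called other. If you're on Compose file format 3.5 or newer, the nested external: name: form is deprecated and the same declaration can be written with a top-level name key; an equivalent sketch:
networks:
  default:
    external: true
    name: other_default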
An important architectural point is that your browser application will never be able to use the Docker hostnames. Your fetch() call runs in the browser, not in Docker, and so it needs to reach a published port. The best way to set this up is to have the Apache server that's serving the built UI code also run a reverse proxy, so that you can use a same-server relative URL /api/... to reach the backend. The Apache ProxyPass directive would be able to use the Docker-internal hostnames.
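A minimal sketch of what that could look like in the frontend image's Apache configuration, assuming mod_proxy and mod_proxy_http are enabled and that the backend service is named backend and listens on port 3000 as in the skeleton below (the paths are illustrative):
ProxyPreserveHost On
ProxyPass        /api/ http://backend:3000/api/
ProxyPassReverse /api/ http://backend:3000/api/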
You also mention "volume with your source code". This is not a Docker best practice. It's frequently used to make Docker simulate a local development environment, but it's not how you want to deploy or run your code in production. The Docker image should be self-contained, and your docker-compose.yml generally shouldn't need volumes: or a command:.
A skeleton layout for what you're proposing could look like:
version: '3'
services:
db:
image: postgres:12
volumes:
- pgdata:/var/lib/postgresql/data
backend:
image: my/backend
environment:
PGHOST: db
# No ports: (not directly exposed) (but it could be)
# No volumes: or command: (use what's in the image)
volumes:
pgdata:
And a separate docker-compose.yml for the frontend project:
version: '3'
services:
frontend:
image: my/frontend
environment:
BACKEND_URL: http://backend:3000
ports:
- 8080:80
networks:
default:
external:
name: backend_default
I have a few microservices; let's call them food-ms, receipt-ms and ingredients-ms, plus a frontend React app that consumes those microservices.
Each of them exposes an API at /api/[methods].
I would like to create production and development environments using Docker and docker-compose, with the following properties:
The application should be available at a single host. In production that host would be, for example, http://food-app.test; for development it should (ideally) be localhost.
Each microservice and the frontend should be on that single host, but at different paths. For example, the food-ms API should be at localhost/food/api, the receipt-ms API at localhost/receipt/api, etc. The frontend should be at the root / path.
Ideally, I would like to be able to run some services outside their containers for easier debugging, but still have them reverse-proxied and reachable at localhost/{service}/api.
I found the Traefik reverse proxy and experimented with it a bit, but got stuck on these issues:
How to make the app available at a predictable domain, like localhost. Currently I'm only able to proxy requests to a specific backend by specifying a strange host in the Host header, like <container-name>.<network-name>.docker.localhost.
The frontends described in traefik.toml don't seem to have any effect.
How to route requests from one frontend to different backends depending on the path?
How to route requests to an external IP and port (I would like to use this to run services outside the container for debugging)? Should I use the host network in Docker for this?
Thanks in advance.
Here is my traefik.toml
defaultEntryPoints = ["http"]
[entryPoints]
[entryPoints.http]
address = ":80"
[file]
[frontends]
[frontends.food]
entrypoints = ["http"]
backend="food"
[frontends.receipts]
entrypoints = ["http"]
backend="receipts"
Those frontends don't seem to get applied, because the dashboard doesn't change if I comment them out.
After spending some time on this, I've had a bit of success with my problem.
First of all, it is much easier to experiment with Traefik running as a local application rather than as a Docker container.
So I installed Traefik locally (brew install traefik) and ran it with the following command line:
traefik --web --configfile=./docker/traefik-local.toml --logLevel=INFO
(The --web argument is deprecated but still works; it could also be omitted.)
Then I created a TOML file with the following configuration:
defaultEntryPoints = ["http"]
[entryPoints]
[entryPoints.http]
address = ":80"
[file]
[frontends]
[frontends.fin]
entrypoints = ["http"]
backend="fin"
[frontends.fin.routes.matchUrl]
rule="PathPrefixStrip:/api/fin"
[frontends.fin.routes.rewriteUrl]
rule = "AddPrefix: /api"
[frontends.proj]
entrypoints = ["http"]
backend="proj"
[frontends.proj.routes.matchUrl]
rule="PathPrefixStrip: /api/proj"
[frontends.proj.routes.rewriteUrl]
rule = "AddPrefix: /api"
[backends]
[backends.fin]
#
[backends.fin.servers.main]
url = "http://localhost:81"
[backends.proj]
#
[backends.proj.servers.main]
url = "http://localhost:82"
The service names differ from the ones used above, but the idea should be clear.
First of all, the [file] directive is mandatory before describing frontends and backends. It doesn't work without it, arghh :(
The services run in Docker containers and expose port 81 for fin and port 82 for proj. Since Traefik now runs outside Docker's isolated network, it can proxy both natively running applications and applications in containers.
Then the two frontends are described. Initially I also had an issue with the rules: PathPrefixStrip is a matcher, but it also modifies the path by removing the matched prefix.
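To make the rule pair concrete, a hypothetical request to /api/fin/balance flows like this:
GET /api/fin/balance
  -> PathPrefixStrip:/api/fin   leaves /balance
  -> AddPrefix: /api            gives /api/balance
  -> forwarded to http://localhost:81/api/balance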
So now it works the way I want when running locally, and it should be much easier to get it working in Docker.
Well, a bit more info about running all that stuff in Docker.
First of all, Traefik has a concept of configuration providers, from which it gets all the information about backends, frontends, rules, mappings, etc.
For Docker there are at least two options: use labels on services in docker-compose.yml, or use the file configuration provider.
Here I will use the file configuration provider.
To use it, you need to add a [file] section and the configuration below to the Traefik config, or use a separate file.
I used a separate file, enabling it and pointing to it with the --file --file.filename=/etc/traefik/traefik.file.toml command-line arguments.
Remember that if you use Windows and docker-toolbox you need to add a shared folder in VirtualBox and mount paths relative to that folder; that's a pain, yes.
After that, the rest is easy.
To address services in the [backends] section of the Traefik config, use the service names from docker-compose.yml. To expose the proxy, use port mappings.
Here is my docker-compose.yaml:
version: "3"
services:
financial-service:
build:
context: .
dockerfile: ./docker/financial.Dockerfile
project-service:
build:
context: .
dockerfile: ./docker/project.Dockerfile
traefik:
image: traefik
command: --web --docker --file --file.filename=/etc/traefik/traefik.file.toml --docker.domain=docker.localhost --logLevel=INFO --configFile=/etc/traefik/traefik.toml
ports:
- "80:80"
- "8088:8080"
# - "44:443"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
# On Windows with docker-toolbox:
# this should be mounted as a shared folder in VirtualBox.
# Mount via VB UI, don't forget to restart docker machine.
# - /rd-erp/docker:/etc/traefik/
# On normal OS
- ./docker:/etc/traefik/
depends_on:
- project-service
- financial-service
Here is traefik.file.toml for Docker:
[frontends]
[frontends.fin]
entrypoints = ["http"]
backend="fin"
[frontends.fin.routes.matchUrl]
rule="PathPrefixStrip:/api/fin"
[frontends.fin.routes.rewriteUrl]
rule = "AddPrefix: /api"
[frontends.proj]
entrypoints = ["http"]
backend="proj"
[frontends.proj.routes.matchUrl]
rule="PathPrefixStrip: /api/proj"
[frontends.proj.routes.rewriteUrl]
rule = "AddPrefix: /api"
[backends]
[backends.fin]
#
[backends.fin.servers.main]
url = "http://financial-service"
[backends.proj]
#
[backends.proj.servers.main]
url = "http://project-service"
The next step would be to run some services outside their containers and still be able to reverse-proxy them from localhost.
And probably the last part: connecting from Docker, in our case from the Traefik container, to services running on the host machine.
Run a service on the host machine
In Docker 18.03+ use the special hostname host.docker.internal, and don't forget to specify the protocol and port.
With earlier Docker versions you would probably need to use host network mode. That involves extra configuration so services don't clash over ports that are already in use, but it probably wouldn't require changing the configuration of services running outside containers.
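For example, to have the Traefik container proxy to a service started directly on the host, the backend URL in the file provider can point at that name; a sketch (port 8081 is just a placeholder for wherever the host-side service listens):
[backends.fin.servers.main]
url = "http://host.docker.internal:8081"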
Run docker-compose without the service you would like to debug:
docker-compose up --no-deps traefik financial-service
Enjoy
Don't forget to remove the [file] section from traefik.toml if you keep the configuration in a separate file provided via --file.filename; it seems the [file] section takes precedence.
I have multiple Java Spring Boot services (around 20 of them) using the Amazon SDKs for S3, SQS, DynamoDB, etc.
Currently, to use Amazon Web Services I only need to specify my AWS key and secret:
ACCESS_AWS_KEY=<MY_KEY>
ACCESS_AWS_SECRET=<MY_SECRET>
However, I wanted to set up an offline dev environment, so I started to dockerize my services into a single multi-container setup, where localstack is used instead of the remote AWS services to allow completely offline development.
docker-compose.yml looks something like this
version: '3'
services:
service_1:
build: ./repos/service_1
links:
- service_2:
- localstack
service_2:
build: ./repos/service_2
links:
- localstack
service_3:
build: ./repos/service_3
links:
- localstack
localstack:
image: localstack/localstack
The Amazon SDK provides an AWS_REGION env variable, but not an endpoint environment variable that I could easily use in all services.
I also don't want to make code changes in my services to accommodate the new non-default endpoint.
I want a generic solution to forward requests like this:
dynamodb.eu-west-1.amazonaws.com => localstack_1:4569
s3-eu-west-1.amazonaws.com => localstack_1:4572
where localstack_1 is the linked localstack Docker container, reachable by the other containers.
I came across extra_hosts: in docker-compose, but it only maps hostnames to fixed IPs and doesn't do any hostname resolution.
Also, notice that localstack exposes a whole range of ports, from 4569 to 4582.
I thought about running a script on each machine to set up a vhost in some way, or forwarding all outgoing connections from all containers to a centralized request-forwarder service, but I have no clue where to start.
This will only be used as an offline dev environment and will not receive any real traffic.
OK, I was finally able to find a solution for this. I actually had to go through the localstack code base to find it.
A couple of quick things:
Localstack is not integrated with IAM, so it simply ignores the secret key/password.
If you're using IAM, you now need a flag to override the endpoint. You could use a flag that indicates localstack mode.
A couple of classes which I found helpful for debugging issues:
https://github.com/atlassian/localstack/blob/master/localstack/ext/java/src/test/java/com/atlassian/localstack/SQSMessagingTest.java
https://github.com/atlassian/localstack/blob/master/localstack/ext/java/src/test/java/com/atlassian/localstack/TestUtils.java
Now for the solution:
AwsClientBuilder.EndpointConfiguration endpoint = new AwsClientBuilder.EndpointConfiguration("http://localhost:4576/", awsRegion.getName());
AmazonSQSClient client = (AmazonSQSClient) AmazonSQSClientBuilder.standard()
.withCredentials(configuration.getSqsConfiguration().getCredentialsProvider())
.withEndpointConfiguration(endpoint)
.build();
Here http://localhost:4576/ is where localstack runs SQS. Don't miss the trailing slash.
Using this from a Camel route is the same as using the real AWS resources. Hopefully this helps!
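The same pattern works for the other emulated services; a sketch for S3 against the port mapping from the question (4572), assuming some credentialsProvider is available (localstack ignores the actual credentials anyway):
AwsClientBuilder.EndpointConfiguration s3Endpoint =
        new AwsClientBuilder.EndpointConfiguration("http://localhost:4572/", awsRegion.getName());

AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withCredentials(credentialsProvider)
        .withEndpointConfiguration(s3Endpoint)
        .withPathStyleAccessEnabled(true) // localstack's S3 is usually addressed path-style
        .build();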
I am setting up a Spring application to run using Compose. The application needs to establish a connection to ActiveMQ, either running locally for developers or to existing instances for staging/production.
I set up the following, which works great for local dev:
amq:
image: rmohr/activemq:latest
ports:
- "61616:61616"
- "8161:8161"
legacy-bridge:
image: myco/myservice
links:
- amq
and in the application configuration I am declaring the AMQ connection as
broker-url=tcp://amq:61616
Running docker-compose up works great: ActiveMQ is fired up locally, and my application container starts and connects to it.
Now I need to set this up for staging/production, where the ActiveMQ instances run on existing hardware within the infrastructure. My thought is to either use Spring profiles to handle different configurations, in which case the application configuration entry broker-url=tcp://amq:61616 would become something like broker-url=tcp://some.host.here:61616, or to find some way to create a DNS entry within my production docker-compose.yml that points the amq hostname at the associated staging or production brokers.
What is the best approach here, and if it is DNS, how do I set that up in Compose?
Thanks!
Using the extra_hosts flag
The first thing that comes to mind is using Compose's extra_hosts flag:
legacy-bridge:
image: myco/myservice
extra_hosts:
- "amq:1.2.3.4"
This will not create a DNS record, but an entry in the container's /etc/hosts file, effectively allowing you to continue using tcp://amq:61616 as your broker URL in your application.
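If you'd rather not hard-code the address in the Compose file, Compose's variable substitution lets you inject it per environment; a sketch (AMQ_HOST_IP is a made-up variable name):
legacy-bridge:
  image: myco/myservice
  extra_hosts:
    - "amq:${AMQ_HOST_IP}"   # e.g. export AMQ_HOST_IP=1.2.3.4 before docker-compose up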
Using an ambassador container
If you're not content with directly specifying the production broker's IP address and would like to leverage existing DNS records, you can use the ambassador pattern:
amq-ambassador:
image: svendowideit/ambassador
command: ["your-amq-dns-name", "61616"]
ports:
- 61616
legacy-bridge:
image: myco/myservice
links:
- "amq-ambassador:amq"