I'm looking for a way to change what the reverse DNS resolves to in Docker.
If I set my container's FQDN to foo.bar I expect a reverse DNS lookup for its IP to resolve to foo.bar, but it always resolves to <container_name>.<network_name>.
Is there a way I can change that?
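Roughly what I'm doing, if it helps (the network name, container name, and IP below are just an example):

docker network create mynet
docker run -d --name web --hostname foo.bar --network mynet nginx
docker run --rm --network mynet busybox nslookup 172.18.0.2
# expected: ... name = foo.bar
# actual:   ... name = web.mynet   (<container_name>.<network_name>)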
Docker's DNS support is designed to support container discovery within a cluster. It's not an application traffic management solution, so its features are limited.
For example, it's possible to configure a DNS wildcard that resolves "*.foo.bar" names to a server running a container-aware load balancer (a load balancer that knows where the containers associated with each application are located and running).
That load balancer can then route traffic based on the incoming "Host" HTTP header (see the sketch below):
"app1.foo.bar" -> "App1 Container1", "App1 Container2"
"app2.foo.bar" ->
"App2 Container1", "App2 Container2", "App2 Container3"
For a practical implementation, take a look at how Kubernetes does load balancing (this is an advanced topic):
http://kubernetes.io/docs/user-guide/ingress/
I am working on a backend server launched on an ECS cluster, hosted on an EC2 instance using Docker.
The ECS service is running great, exposed by IP address and port, but to be used with my iOS app it needs to be served over HTTPS.
How do I serve my ECS container over HTTPS? I have read a couple of things about using a load balancer, but the tutorials are outdated and I can't find one that shows the configuration after the ECS cluster has already been created.
Please point me in the right direction so I can get it served over HTTPS.
You need to have the following resources:
DNS address
Valid SSL Certificate
Load Balancer
Load balancer security group
Target Group
The target group will mediate between your server and your load balancer.
Also, in the load balancer's security group, define all the rules you currently have in the server's security group, and in the server's security group add a rule that is open to all traffic on all ports, using the load balancer's security group as the source instead of an IP.
This guide can help you: https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-create-https-ssl-load-balancer.html
(look at Create an HTTPS/SSL load balancer using the console)
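For reference, a rough sketch of the same flow with the AWS CLI, using an Application Load Balancer (elbv2); every name, ID, and ARN below is a placeholder you would need to replace:

# target group that the load balancer forwards to (your ECS containers register here)
aws elbv2 create-target-group --name ecs-tg --protocol HTTP --port 80 \
  --vpc-id vpc-0123456789abcdef0 --target-type instance

# the load balancer itself, in at least two subnets, with the load balancer security group
aws elbv2 create-load-balancer --name ecs-alb \
  --subnets subnet-aaaa1111 subnet-bbbb2222 --security-groups sg-lb111111

# HTTPS listener that terminates TLS with an ACM certificate and forwards to the target group
aws elbv2 create-listener --load-balancer-arn <alb-arn> --protocol HTTPS --port 443 \
  --certificates CertificateArn=<acm-certificate-arn> \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>

After that, attach the target group to your ECS service (depending on the ECS version this may mean recreating the service) and point your DNS address at the load balancer.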
I have several Docker containers with some web applications running via Docker Compose. One of the containers is a custom DNS server with Bind and Webmin installed. Webmin gives a nice web UI that lets me update the Bind DNS configuration without directly modifying the files or SSHing into the container. I have Docker set up to look up DNS in this order:
my Docker DNS server
my company's internal DNS server
Google's DNS server
I have one master zone file for the domain "example.com" defined on DNS server 1 (my Docker DNS server). I added an address for server1.example.com and DNS resolves it correctly. I want other subdomains to be resolved by my company's internal DNS server.
server1.example.com - resolves correctly
server2.example.com - this host is not referenced in the zone file on the Docker DNS server. I would like to somehow delegate it to my company's DNS server (the second server in the list)
The goal is that I should be able to do software development for web applications and deploy them in my Docker containers. The code makes internal calls to other "example.com" hosts, and I want some of those calls to be directed back to other Docker containers rather than the real servers, because I am developing code on both and want to test it end to end.
I don't want to (and can't) modify my company's DNS configuration. I am not an expert in Bind or DNS setup and am looking for the simplest solution.
What configuration can achieve this?
I guess the workaround is to use the fully qualified name when creating the zone file. Instead of creating a master zone "example.com" and listing server1 inside that zone, I am creating a master zone for "server1.example.com". It means I have to create a zone file for every server, but I guess that's OK to manage with a smaller number of hosts. server2.example.com then doesn't fall inside any zone and gets resolved using the next DNS server in the chain.
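For illustration, roughly what that looks like in Bind (file paths, addresses, and SOA values are placeholders):

// named.conf.local - one master zone per Docker-hosted name
zone "server1.example.com" {
    type master;
    file "/etc/bind/db.server1.example.com";
};

; /etc/bind/db.server1.example.com
$TTL 300
@   IN  SOA ns1.example.com. admin.example.com. ( 1 3600 600 86400 300 )
@   IN  NS  ns1.example.com.
@   IN  A   172.20.0.10   ; address of the Docker container

A query for server2.example.com then matches no zone on this server and gets answered by the next DNS server in the chain.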
We operate a docker cluster with several workers and a manager.
Our current problem:
We have the jwilder nginx-proxy running on all nodes, which does not cause any problems. What does cause us problems is when we run a service, e.g. Grav.
That service is only reachable if the domain points to the IP address of the node on which it is running at that time.
My question now:
Is there a way to route the domain in such a way that we only have to set an A record and Docker does the internal routing to the respective node where the website is running?
If yes, how would we realize this, or are there other alternatives that would be easier to implement?
Useful information:
1 manager
4 workers
a total of 5 IP addresses (public)
all bare-metal servers with Docker (without Kubernetes etc.)
1 decentralized data server with NVMe
Current state: the website can be reached if the domain points to the respective worker. Target: one public IP for all domains, with failover, including internal routing to the respective workers.
Resources:
Resources are not an issue for implementing this; other servers could also be used for this scenario.
P.S.: You can also contact me through other channels for testing purposes, etc.
I'm currently using GKE (Kubernetes) with an nginx container to proxy different services. My goal is to block some countries. I'm used to doing that with nginx and its useful geoip module, but as of now, kubernetes doesn't forward the real customer ip to the containers, so I can't use it.
What would be the simplest/cheapest solution to filter out countries until Kubernetes actually forwards the real IP?
External service?
A simple Google Compute Engine server with only nginx, filtering countries and forwarding to Kubernetes (not great in terms of price and reliability)?
Modify the kube-proxy (as I've seen here and there, but it seems a bit odd)?
Frontend geoip filtering (hmm, the worst idea by far)?
thank you!
You can use a custom nginx image and use a map to create a filter:
# this goes in the http section
map $geoip_country_code $allowed_country {
    default yes;
    UY no;
    CL no;
}
and
# this goes inside any location where you want to apply the filter
if ($allowed_country = no) {
    return 403;
}
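Note that $geoip_country_code is only populated when nginx is built with the geoip module and a country database is configured in the http section, e.g. (the path is an assumption, adjust to your image):

geoip_country /usr/share/GeoIP/GeoIP.dat;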
First on GKE if you're using the nginx ingress controller, you should turn off the default GCE controller: https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/BETA_LIMITATIONS.md#disabling-glbc, otherwise they'll fight.
kubernetes doesn't forward the real customer ip to the containers
That's only true if you're going through kube-proxy with a service of type NodePort and/or LoadBalancer. With the nginx ingress controller you're running with hostPort, so it's actually the docker daemon that's hiding the source ip. I think later versions of docker default to the iptables mode, which shows you the source ip once again.
In the meantime you can get the source IP by running the nginx controller like this: https://gist.github.com/bprashanth/a4b06004a0f9c19f9bd41a1dcd0da0c8
That, however, uses host networking, which is not the greatest option. Instead, you can use the proxy protocol to get the source IP: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx#proxy-protocol
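For reference, the plain-nginx version of what proxy protocol gives you looks roughly like this (illustrative only, not the controller's generated config; the trusted range is a placeholder):

server {
    listen 80 proxy_protocol;          # accept the PROXY protocol header from the upstream LB
    set_real_ip_from 130.211.0.0/22;   # trust the load balancer's source range
    real_ip_header proxy_protocol;     # take the client IP from the PROXY header
    server_name _;
    location / {
        return 200 "$remote_addr\n";   # now reflects the real client IP
    }
}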
Also (in case you didn't already realize) the nginx controller has the geoip module enabled by default: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx#nginx-status-page
Please open an issue if you need more help.
EDIT: proxy protocol is possible through the SSL proxy, which is currently in alpha: https://cloud.google.com/compute/docs/load-balancing/tcp-ssl/#proxy_protocol_for_retaining_client_connection_information
The new version of Docker (version 1.10) includes a DNS server to pass alias information from other hosts on the same network. There used to be hosts file entries for resolving linked containers (or containers on the same network). I am wondering if it is possible to use this embedded DNS server on an overlay network? I have looked in the documentation (and in issues) and cannot find information about this.
So the way the new embedded DNS "server" works is that it isn't a formal server. It's just an embedded listener for traffic to 127.0.0.11:53 (UDP, of course). When Docker sees that query traffic on the container's network interface, it steps in with its embedded DNS server and replies with any answers it might have to the query.

The documentation has some options you can set to affect how this DNS server behaves, but since it only listens for query traffic on that localhost address, there is no way to expose it to an overlay network in the way that you are thinking.

However, this seems to be a moving target, and I have seen this question before in IRC, so it may one day be the case that this embedded DNS server at least becomes pluggable, or possibly exposable in the way you would like.
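If you want to see the embedded resolver in action, something like this works (the image names and resulting IP are just examples):

docker network create testnet
docker run -d --name web --network testnet nginx
docker run --rm --network testnet busybox sh -c 'cat /etc/resolv.conf; nslookup web'
# resolv.conf inside the container points at the embedded listener:
#   nameserver 127.0.0.11
# and the lookup for "web" answers with that container's address on testnet, e.g. 172.19.0.2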