Can anyone tell me whether it is possible to override individual box.cfg parameters on a running instance, for example to add a replica? For several days I have been trying to deploy three replicas on three hosts via a docker service stack.
When I bring the instances up by hand on each server, everything works; when they are deployed through the stack, they do not see each other and crash. I've tried all sorts of approaches: I put an endpoint on the target nodes that, when queried, returns the IP of the machine the container is coming up on, and if that IP matches one of the addresses listed in SEED, it substitutes the container's internal IP instead (otherwise the instance cannot connect to itself).
In theory this all works as described, but I suspect the real problem is that the instance does not bind its address until box.cfg is declared. Unfortunately, I cannot get inside the container, because it never comes up. My idea is to start all three nodes with minimal settings, have them listen on the subnet once they are up, and as soon as a node finds another one, write it into replication and override box.cfg. Please correct me, anyone who has experience with this.
Some of the box.cfg parameters are dynamic, for example box.cfg{listen=}: you can change it from Lua code whenever you like. In your case, if the container gets its IP address later, specify only the port in listen; that way Tarantool will listen on all available interfaces.
The replication_source parameter is a bit trickier. You can change it dynamically, but your first (initializing) call to box.cfg must already include it: an instance that is initialized without this parameter creates its own replica set, which makes it impossible to join it to another replica set later.
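A minimal sketch of what that first call could look like (the user, password, and host names below are placeholders, not taken from your setup):

    -- First (initializing) call: listen on a port only, so Tarantool binds to
    -- all interfaces, and join the existing replica set right away.
    box.cfg{
        listen = 3301,
        replication_source = 'replicator:secret@tarantool-1:3301',  -- placeholder URI
    }

    -- Dynamic parameters can also be changed later on the running instance:
    box.cfg{ replication_source = 'replicator:secret@tarantool-2:3301' }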
You can read more about Tarantool replication architecture here: https://www.tarantool.io/en/doc/latest/book/replication/repl_architecture/
I'm trying to get Erlang and Docker Swarm to play together, and I'm having a hard time. The main issue appears to be that Erlang requires my nodes to have fully qualified domain names, or at least something that looks like it (hostnames without periods tend to cause errors).
Previously, I used a docker overlay network, but not stack/services, and I would run containers manually named "xxx.yyy". Docker would resolve "xxx.yyy", even if the nodes were on different hosts, and Erlang was happy. Now I would like to switch to using docker stack, but in this case Docker no longer resolves by container name but by service name. There is just one issue: periods are illegal in service names, but if I try to name my Erlang nodes 'nodename@hostname' (without periods), this causes problems on the Erlang side.
I noticed that I can set the "hostname:" property for a service, and there I can include periods, but this name only resolves inside the container itself; it is not recognized over the network.
Is there some way I can set a custom address for a Docker service, which can contain periods, which will resolve properly on nodes of other services, and which remains stable as the service replaces its containers?
NB: I have tried using Erlang's "-sname", but this seems to use the container id as hostname, which still needs to be included when connecting to other nodes over the network, and this is not stable as the service updates/restarts.
NB2: I also found info here that it might be possible to use the "links:" property, but it looks like that doesn't work on swarm (perhaps only on compose?).
I'm encountering a strange issue with docker-compose on one of my systems.
I have two TICK (Telegraf, InfluxDB, Chronograf, Kapacitor) docker-compose "projects" on the same machine, using the following docker-compose.yml.
Since both services are proxied behind the same NGINX SSL instance, they both join a common nginx_proxy external network.
The issue is that as soon as I start the second compose stack, the first stack starts misbehaving: some (around 20%) of the requests from the first Chronograf instance targeting the InfluxDB service via the influxdb hostname are somehow redirected to the InfluxDB instance from the second stack.
I understand that since they are on the same nginx network they can communicate, but how can I force the first instance to always target its own service and not the one from the other compose project? I tried specifying links, but it did not work.
Is there any configuration I could set up to achieve this isolation without having to rename all my services?
I wouldn't expect the embedded DNS server to do what you want automatically, as the setup seems ambiguous to me (the docker-compose file doesn't express what you expressed in natural language here, which is perfectly clear and unambiguous). In fact, I'm even surprised it runs at all; it seems to be doing round-robin across the containers, which would explain why only some requests are misrouted.
To answer your question, I would simply use unambiguous aliases, or just unambiguous service names, though there might be other solutions.
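For example, something along these lines (the service names, image tags, and the Chronograf variable name are my assumptions, adapt them to your files): give the first stack's InfluxDB a stack-specific alias on the shared proxy network and point Chronograf at that alias instead of the ambiguous influxdb name.

    # fragment of the first stack's docker-compose.yml (names are examples)
    version: "3"
    services:
      influxdb:
        image: influxdb:1.8
        networks:
          nginx_proxy:
            aliases:
              - influxdb-tick1          # unique alias on the shared proxy network
      chronograf:
        image: chronograf:1.8
        environment:
          INFLUXDB_URL: http://influxdb-tick1:8086   # target the alias, not "influxdb"
        networks:
          - nginx_proxy
    networks:
      nginx_proxy:
        external: true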
I'm totally new to Docker and started yesterday to work through some tutorials. I want to build a small test application consisting of several different services (replicated and so on) that interact with each other, and I ran into a problem regarding 'service discovery'. I started with the get-started tutorials on docker.com, and at the moment I'm not really sure what the best practice is, in the Docker world, for letting the different containers in a network get to know each other...
As this is a rather vague problem description, let me try to make it more precise. I want to use a few independent services (e.g. Postgres, MongoDB, Redis and RabbitMQ) together with a set of worker nodes to which work is assigned by a dedicated master node. Since it seems quite convenient, I wanted to use a docker-compose.yml file to define all my services and deploy them as a stack.
Moreover, I created a custom network, and since it does not seem to be possible to attach a stacked service to a bridge network, I created an attachable overlay network.
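For reference, the network was created roughly like this (the name is just an example):

    docker network create --driver overlay --attachable app-net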
To finally get to the point: even though the services are deployed correctly, their actual container names are random, and without some kind of service registry I'm not able to resolve their addresses.
A simple solution would be to use single containers with fixed container names; however, this does not seem to be a best-practice solution (even though it is actually just a Docker-based DNS that resolves container names rather than domain names). Another problem is that the randomly generated container names contain underscores, which makes them invalid addresses that cannot be resolved...
best regards
Have you looked at something like Kubernetes? To quote from the home page:
It groups containers that make up an application into logical units for easy management and discovery.
Can I launch a bastion host through an Auto Scaling group, setting "MinSize": 1 and "DesiredCapacity": 1?
I understand that normally an ASG is used along with an ELB or SQS and CloudWatch for load-balancing or scaling purposes, and I feel my purpose here is different: I want to keep my bastion machine up and running, and once it goes down, I want to bring it back as soon as possible. (I don't need the bastion host to be "HA", but I'd like it to recover automatically, say within 3 minutes.)
Is this a valid use case for an Auto Scaling group?
Yes, using an Auto Scaling group in this fashion will ensure that a host is automatically replaced if it fails EC2 health checks.
However, this is no longer the most up-to-date way to solve your problem. EC2 has supported Auto Recovery since last year. Recovery can be configured to perform a variety of actions when an instance fails EC2 health checks, and the advantage over Auto Scaling is that things like Elastic IPs can be migrated over to the new instance. The docs contain all the information you'll need to set this up.
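As a rough sketch, auto recovery is just a CloudWatch alarm whose action is the EC2 recover action (the instance ID, region, and alarm periods below are placeholders/assumptions):

    # Recover the instance if the system status check fails for 2 minutes.
    aws cloudwatch put-metric-alarm \
      --alarm-name bastion-autorecover \
      --namespace AWS/EC2 \
      --metric-name StatusCheckFailed_System \
      --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
      --statistic Minimum \
      --period 60 \
      --evaluation-periods 2 \
      --threshold 0 \
      --comparison-operator GreaterThanThreshold \
      --alarm-actions arn:aws:automate:us-east-1:ec2:recover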
Yes, that's a valid use case.
Auto Scaling groups force you to set up instances that can be created automatically: you define a launch configuration that specifies things like the instance type and the image you want to launch, and you set the number of instances in the group.
When you set the desired capacity to 1, the Auto Scaling group (ASG) will start enforcing that exactly one instance is running.
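A rough sketch with the CLI (the AMI, key pair, security group and subnet IDs below are placeholders):

    # Launch configuration describing the bastion instance to (re)create.
    aws autoscaling create-launch-configuration \
      --launch-configuration-name bastion-lc \
      --image-id ami-0123456789abcdef0 \
      --instance-type t2.micro \
      --key-name my-key \
      --security-groups sg-0123456789abcdef0

    # Group that keeps exactly one such instance alive.
    aws autoscaling create-auto-scaling-group \
      --auto-scaling-group-name bastion-asg \
      --launch-configuration-name bastion-lc \
      --min-size 1 --max-size 1 --desired-capacity 1 \
      --vpc-zone-identifier subnet-0123456789abcdef0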
Problem: the instances get assigned a different IP each time they boot, so you won't know where to reach them.
There are two ways around this:
- use an ELB so you can always reach the instance at the ELB's address. When only running one instance, this is kind of overkill
- make the instance associate an Elastic IP when it boots. I don't think that Amazon supports this out of the box yet, but you can find scripts that do this for you on the web (see the sketch just below this list)
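Such a boot script could look roughly like this (a user-data sketch; the allocation ID and region are placeholders, and the instance profile needs permission to call ec2:AssociateAddress):

    #!/bin/bash
    # Look up this instance's ID from the metadata service and attach a
    # pre-allocated Elastic IP to it.
    INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    aws ec2 associate-address \
      --instance-id "$INSTANCE_ID" \
      --allocation-id eipalloc-0123456789abcdef0 \
      --region us-east-1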
Note that this setup won't prevent failure, but once an instance fails, it's just a matter of terminating it, and a new one will be back up in 5 minutes or so.
Refer to the following link from Amazon on the architecture and best practices for a bastion host: http://docs.aws.amazon.com/quickstart/latest/linux-bastion/architecture.html
I'm having great success so far using Mesos, Marathon, and Docker to manage a fleet of servers and the containers I'm placing on them. However, I'd now like to go a bit further and start doing things like automatically linking an haproxy container to each main Docker service that starts, or providing other daemon-based, containerized services that are linked to, and only available to, the single parent container.
Normally, I'd start up the helper service first with some name, then when I started the real service, I'd link it to the helper and everything would be fine. How does this model fit into Marathon and Mesos, though? It seems, for now at least, that the containerization assumes a single container.
I had one idea to start the helper service first, on whatever host it could find, then add a constraint to the real service that the hostname = helper service's hostname, but that seems like it'd cause issues with resource offers and race conditions for those resources.
I've also thought about adding an "embed" or "deep-link" functionality to Docker, or to the executor scripts that start the Docker containers.
Before I head down any of these paths, I wanted to find out if someone else had solved this problem, or if I was just horribly overthinking things.
Thanks!
you're wandering in uncharted territory! ☺
There are multiple approaches here; and none of them is perfect, but the situation will improve in future versions of Docker, thanks to orchestration hooks.
One way is to use good old service discovery and registration. I.e., when a service starts, it figures out its publicly reachable address and registers itself in e.g. ZooKeeper, etcd, or even Redis. Since it's not trivial for a service to figure out its publicly reachable address (unless you adopt some conventions, e.g. always mapping port X:X instead of letting Docker assign random ports), you might want to do the registration from the outside. That means that your orchestration layer (Mesos in that case) would start the container, then figure out the host and port, and put that in your service discovery system. I'm not extremely familiar with Marathon, but you should be able to register a hook for that. Then, other containers will just look up the endpoint address in the service discovery registry, plain and simple.
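As a rough sketch of that registration step (the etcd address, key layout, and the v2 HTTP API are assumptions on my part):

    # Register where this instance of the "web" service ended up, with a 60 s TTL
    # so stale entries expire if the hook stops refreshing them.
    curl -s -X PUT \
      http://etcd.internal:2379/v2/keys/services/web/instance-1 \
      -d value='10.0.0.5:32768' \
      -d ttl=60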
You could also look at Skydock, which automatically registers DNS names for your containers with Skydns. However, it's currently single-host, so if you like that idea, you'll have to extend it somehow to support multiple hosts, and maybe SRV records.
Another approach is to use "well-known entry points". This is actually a simplified case of service discovery: you make sure that your services always run on pre-set hosts and ports, so that you can use those addresses statically. Of course, this is bad (it will make your life harder when you want to reproduce the environment for testing/staging purposes), but if you have no clue at all about service discovery, well, it could be a start.
You could also use Pipework to create one (or multiple) virtual networks spanning multiple hosts, and bind your containers together. Pipework lets you assign IP addresses manually, or automatically through DHCP. I wouldn't recommend this approach in general, but it's a good fit if you also want to plug your containers into an existing network architecture (e.g. VLANs...).
No matter which solution you decide to use, I highly recommend "pretending" that you're using links. I.e., instead of hard-coding your app configuration to connect to (random example) my-postgresql-db:5432, use the environment variables DB_PORT_5432_TCP_ADDR and DB_PORT_5432_TCP_PORT (as if it were a link), and set those variables when starting the container. That way, if you "fold down" your containers into a simpler environment without service discovery etc., you can easily fall back on links without effort.
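For example, starting the app container could then look like this (a sketch with a made-up image name and address):

    # Inject the same variables a real link would have created, pointing at the
    # address found via service discovery (or at a static well-known address).
    docker run -d \
      -e DB_PORT_5432_TCP_ADDR=10.0.0.42 \
      -e DB_PORT_5432_TCP_PORT=5432 \
      myapp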