Running some Docker services locally and others remotely - docker

How can I configure docker-compose to use multiple containers where some of them (especially those under active development) run on my local machine and the other services run as containers on remote servers?
In docker-compose.yml
rails:
  build: some_path
  volumes: some_volumes
mysql:
  image: xxx
  build: xxxx
nginx:
  image: xxx
  build: xxxx
other_services:
Currently I have all the containers running locally and it works fine, but I've noticed that performance is slow. What if I ran, for example, nginx and other_services remotely - how would I do that? If there is a tutorial link, kindly let me know, since I didn't find one with Google.

Use Docker Swarm. You can create a swarm with multiple nodes (one being your local machine, another the remote server) and then deploy your application to those machines with docker stack deploy.
This is the tutorial.
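A minimal sketch of what that looks like on the command line, assuming the remote server is reachable from your local machine; the IP address, join token and stack name are placeholders:
# on the local machine, turn it into a swarm manager
docker swarm init --advertise-addr <local-machine-ip>
# on the remote server, join the swarm as a worker using the token printed above
docker swarm join --token <worker-token> <local-machine-ip>:2377
# back on the manager, deploy the compose file as a stack across the swarm
docker stack deploy -c docker-compose.yml myapp
Note that docker stack deploy ignores the build: keys, so the images have to be built and pushed to a registry that all nodes can pull from.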

Related

Spark master isn't accessible remotely when cluster is published using docker stack deploy

I have a Spark cluster running on a remote server, that is set up using a docker-compose.yml on top of bitnami's Spark image.
When I spin up the containers using docker-compose up, I can submit jobs to the remote cluster from my machine using the spark://host-ip:port, and everything works fine. However, if I use docker stack deploy to deploy the spark cluster as a swarm service, I get connection refused when I try to submit to the remote cluster.
Here's my partial docker-compose.yml file, which includes the parts that seem relevant to the problem.
version: '3.7'
services:
  spark:
    image: bitnami/spark:3.2.0
    ports:
      - 10001:8080
      - 10000:7077
  spark-worker:
    image: bitnami/spark:3.2.0
    environment:
      - SPARK_MODE=worker
      - SPARK_MASTER_URL=spark://spark:7077
The Spark UI port mapping works fine with either method; I can access the Spark UI both when it's deployed as a stack and when I've started it with docker-compose up. The workers also connect to the master in both scenarios.
By the way, I haven't defined any network in the docker-compose file, and it's using the default network in both cases.
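For reference, a rough sketch of the two invocations being compared; the stack name and the job file are placeholders:
# works: cluster started with compose on the remote server
docker-compose up -d
# then, from my machine:
spark-submit --master spark://<host-ip>:10000 my_job.py
# fails with "connection refused": the same file deployed as a swarm stack
docker stack deploy -c docker-compose.yml spark
spark-submit --master spark://<host-ip>:10000 my_job.py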

How should I connect to a container using the host rather than the service name?

I'd like to be able to connect to localstack using the host rather than the service name. I have added the localstack image to my docker-compose file and set network_mode: "host". I can connect to http://localhost:8080 from my other containers, but I cannot connect to http://localhost:8080 from my host machine. How can I connect to a container using localhost rather than the service name? I'm not sure if I have misunderstood what network_mode: "host" does.
version: "3"
services:
localstack:
image: localstack/localstack:latest
network_mode: "host"
ports:
- "4567-4584:4567-4584"
- "${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}"
environment:
- AWS_REGION=us-east-1
- SERVICES=sqs
The problem is that I'm using CircleCI to run some component tests, and it seems that in CircleCI you can only reference other services on localhost and not via the service name. This means there are some differences between my local environment and test environment configs. I tried running docker-compose in CircleCI, but it seems to freak out locally when doing that. So I wanted to see if I can reference localhost between services in docker-compose.
This happens because Docker for Mac runs inside a virtual machine using the xhyve hypervisor, not natively on macOS.
When you run a container with network_mode: "host" you are actually using the network of that VM, not the one of your local machine.
This is a known limitation of Docker for Mac, given the nature of how it works.
The only way to access a container is through port mapping, so if you remove network_mode: "host" from your docker-compose file it should work, since you are already mapping ports.
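In other words, the service definition would look roughly like this sketch (the compose file from the question with network_mode removed and everything else unchanged):
version: "3"
services:
  localstack:
    image: localstack/localstack:latest
    # no network_mode: "host" - rely on the published ports instead
    ports:
      - "4567-4584:4567-4584"
      - "${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}"
    environment:
      - AWS_REGION=us-east-1
      - SERVICES=sqs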

How do I access Mopidy running in a Docker container from another container

To start, I am more familiar running Docker through Portainer than I am with doing it through the console.
What I'm Doing:
Currently, I'm running Mopidy through a container, which is being accessed by other machines through the default Mopidy port. In another container, I am running a Slack bot using the Limbo repo as a base. Both of them are running on Alpine Linux.
What I Need:
What I want to do is for my Slack bot to be able to call MPC commands, such as muting the volume, etc. This is where I am stuck. What is the best way to make this work?
What I've tried:
I could ssh into the other container to send a command, but it doesn't make sense to do this since they're both running on the same server machine.
The best way to connect a bunch of containers is to define a service stack in a docker-compose.yml file and launch all of them with docker-compose up. This way all the containers are connected via a single user-defined bridge network, which makes all their ports accessible to each other without you explicitly publishing them. It also allows the containers to discover each other by service name via DNS resolution.
Example of docker-compose.yml:
version: "3"
services:
service1:
image: image1
ports:
# the following only necessary to access port from host machine
- "host_port:container_port"
service2:
image: image2
In the above example, any application in the service2 container can reach a port on service1 just by using the address service1:port.
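Applied to the Mopidy case, a hedged sketch could look like the following; the image names are placeholders, and it assumes Mopidy's MPD frontend is listening on its default port 6600:
version: "3"
services:
  mopidy:
    image: my-mopidy-image        # placeholder for your Mopidy image
    ports:
      - "6600:6600"               # only needed for access from other machines
  slackbot:
    image: my-limbo-bot-image     # placeholder for your Limbo-based bot image
    depends_on:
      - mopidy
From inside the slackbot container the bot can then run MPC commands against the service name, for example mpc -h mopidy -p 6600 volume 0, without any SSH involved.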

Docker workflow

I am developing a small social-media project using nodejs, postgresql and nginx on the backend.
Locally, I have worked with Docker as a replacement for Vagrant: I have all the components split into separate containers and combined them via docker-compose.
I do not have production experience with Docker. How should I package the result of docker-compose and deploy it?
You can build and publish the individual Docker images and run the same docker-compose setup on your production servers. Of course, the servers have to be logged into the registry if it is a private one.
Sample:
version: '2'
services:
  application1:
    image: your.docker.registry/image-application1
  application2:
    image: your.docker.registry/image-application2
    depends_on:
      - application1
The images can be built and pushed to a registry as part of your regular build process.
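As a rough sketch of that build-and-push step (the build context paths and tags are placeholders):
# build and tag each image, then push it to the registry
docker build -t your.docker.registry/image-application1:latest ./application1
docker push your.docker.registry/image-application1:latest
# on the production server: log in (if the registry is private) and bring the stack up
docker login your.docker.registry
docker-compose pull
docker-compose up -d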
You do not need to modify containers to make them production ready, other than what is described here. What you need to do is ensure you are deploying them to a high-availability system that can respond to failures by respawning processes. Here are some examples:
Amazon Elastic Container Service
Kubernetes
Google Container Engine
Weave

How to put docker container for database on a different host in production?

Let's say we have a simple web app stack, something like the one described in the docker-compose docs. Its docker-compose.yml looks like this:
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
This is great for development on a laptop. In production, though, it would be useful to require the db container to be on its own host. Tutorials I'm able to find use docker-swarm to scale out the web container, but pay no attention to the fact that the instance of db and one instance of web run on the same machine.
Is it possible to require a specific container to be on its own machine (or, even better, on a specific machine) using Docker? If so, how? If not, what is the Docker way to deal with databases in multi-container apps?
In my opinion, databases sit on the edge of the container world: they're useful for development and testing, but production databases are often not very ephemeral or portable things by nature. Flocker certainly helps, as do scalable types of databases like Cassandra, but databases can have very specific requirements that might be better treated as a service that sits behind your containerised app (RDS, Cloud SQL, etc.).
In any case you will need a container orchestration tool.
You can apply manual scheduling constraints for Compose + Swarm to dictate the docker host a container can run on. For your database, you might have:
environment:
  - "constraint:storage==ssd"
Otherwise you can set up a more static Docker environment with Ansible, Chef, or Puppet
Use another orchestration tool that supports Docker: Kubernetes, Mesos, Nomad
Use a container service: Amazon ECS, Docker Cloud/Tutum
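For what it's worth, the constraint: environment-variable syntax above is the legacy standalone-Swarm form; with current Swarm mode and the version 3 compose format, the rough equivalent is a placement constraint under deploy (the storage node label is an assumption, you would have to add it to the target node yourself, e.g. with docker node update --label-add storage=ssd <node-name>):
version: '3'
services:
  db:
    image: postgres
    deploy:
      placement:
        constraints:
          # assumed label set on the node that should host the database
          - node.labels.storage == ssd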
