How to expose files from a docker container through a webserver - docker

I have this website which uses Angular for the frontend and has a Node.js backend. The backend serves the Angular files and handles client calls.
As it is now, they are both packaged and deployed as one Docker image, meaning that if I change the frontend, I also need to build the backend in order to create a new image. So it makes sense to separate them.
But if I create one image for the backend and one for the frontend, how can the backend serve files from the frontend container?
Is this the right approach?
I think I would like to have the frontend inside a Docker image, so I can do things like roll back easily (which is not possible with Docker volumes, for example).

Yes! Containerizing them into their own containers is the way to go. It lets you deploy/deliver faster and also separates the build pipelines, which makes the steps clearer to everyone involved.
I wouldn't bother having the backend serve the frontend files. I usually build my frontend image with a webserver (e.g. nginx:alpine), since the frontend and backend can then be deployed separately to different machines or systems. And don't forget to use multi-stage builds to minimize image size.
But if you must do that, I guess you can use docker-compose to put them on one network and then forward requests for those static files from the backend to the frontend webserver. (Just a hack; there must be a better way to handle this from more advanced people here :P)
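For illustration, a minimal multi-stage Dockerfile for the Angular frontend might look like the following. This is only a sketch: the Node version, build command and dist path are assumptions that depend on your project.
# build stage: compile the Angular app (node:18-alpine is an assumption)
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# serve stage: copy only the compiled assets into a small nginx image
FROM nginx:alpine
# the dist subfolder depends on the project name in angular.json
COPY --from=build /app/dist/my-app /usr/share/nginx/html
EXPOSE 80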

I have something similar: an Ember.js app running in one Docker container that connects to a Node.js app running in its own container (not to mention the DB that runs in a third container). It all works rather well.
I would recommend that you create your containers using docker-compose, which will automatically create the network so that both containers can talk to each other using <service-name>:<port>.
I also set it up so that the code is mapped from a folder on my machine to a folder in the container. This allows me to easily change stuff, work with Git, etc.
Here is a snippet of my docker-compose file as an example:
version: "3"
services:
....
ember_gui:
image: danlynn/ember-cli
container_name: ember_dev
depends_on:
- node_server
volumes:
- ./Ember:/myapp
command: ember server
ports:
- "4200:4200"
- "7020:7020"
- "7357:7357"
Here I create an ember_gui service which creates a container named ember_dev based on an existing image from Docker Hub. It then tells Docker that this container depends on another container that needs to be started first, which I do not show in the snippet but which is defined in the same docker-compose file (node_server). After that, I map the ./Ember directory to the /myapp folder in the container so that I can share the code. Finally, I start the Ember server and publish some ports.
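As an illustration of the service-name addressing, a purely hypothetical node_server service could look like the following (the real one is not shown above; the image, folders and port 3000 are made up for the example):
node_server:
  image: node:18-alpine          # hypothetical backend image
  container_name: node_dev
  working_dir: /api
  volumes:
    - ./Api:/api                 # hypothetical source folder
  command: npm start
  ports:
    - "3000:3000"                # hypothetical API port
Since docker-compose puts both services on the same network, the Ember container can reach the backend at http://node_server:3000, for example by starting the dev server with ember server --proxy http://node_server:3000.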

Related

Containers with pipelines: should/can you keep your data separate from the container

I am very new to containers and I was wondering if there is a "best practice" for the following situation:
Let's say I have developed a general pipeline using multiple software tools to analyze next generation sequencing data (I work in science). I decided to make a container for this pipeline so I can share it easily with colleagues. The container would have the required tools and their dependencies installed, as well as all the scripts to run the pipeline. There would be some wrapper/master script to run the whole pipeline, something like: bash run-pipeline.sh -i input_data.txt
My question is: if you are using a container for this purpose, do you need to place your data INSIDE the container, OR can you run the pipeline on data that is placed outside your container? In other words, do you have to place your input data inside the container in order to run the pipeline on it?
I'm struggling to find a case example.
Thanks.
To me the answer is obvious - the data belongs outside the image.
The reason is that if you build an image with the data inside, how are your colleagues going to use it with their data?
It does not make sense to talk about the data being inside or outside the container: at run time the data will be inside the container. The only question is how it got there.
My recommended process is something like:
Create an image with all your scripts, required tools, dependencies, etc., but no data. For simplicity, let us name this image pipeline.
Bind mount the data into the container:
docker container create --mount type=bind,source=/path/to/data/files/on/host,target=/srv/data,readonly=true pipeline
Of course, replace /path/to/data/files/on/host with the appropriate path. You can store your data in one place and your colleagues in another. You make a substitution appropriate for you and they will have to make a substitution appropriate for them.
However inside the container, the data will be at /srv/data. Your scripts can just assume that it will be there.
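As a usage sketch (the host paths, the extra /srv/output mount and the input file name are assumptions for illustration):
docker run --rm \
  --mount type=bind,source=$HOME/sequencing-data,target=/srv/data,readonly \
  --mount type=bind,source=$HOME/pipeline-results,target=/srv/output \
  pipeline \
  bash run-pipeline.sh -i /srv/data/input_data.txt
Your colleagues only change the source= paths; inside the container the scripts always see the data at /srv/data.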
To handle the described scenario, I would recommend using files to exchange data between your processing steps. To bring the files into your container, you can mount a local directory into the container; that also gives your containers some kind of persistence. How to mount the local file system into your container is shown in the following example.
version: '3.2'
services:
  container1:
    image: "your.image1"
    volumes:
      - "./localpath:/container/internal"
  container2:
    image: "your.image2"
    volumes:
      - "./localpath:/container/internal"
  container3:
    image: "your.image3"
    volumes:
      - "./localpath:/container/internal"
The example uses a Docker Compose file to describe the dependencies between your containers. You can do the same without docker-compose; then you have to specify the mounts in your docker run command.
https://docs.docker.com/engine/reference/commandline/run/
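Without docker-compose, the equivalent bind mount in a plain docker run command might look like this (the container name and detach flag are just for illustration):
docker run -d --name container1 \
  -v "$(pwd)/localpath:/container/internal" \
  your.image1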

centos/apache/php/mongodb - can't get this to work together

It's been a few days that I've been trying to get a Docker container up and running, and something always goes wrong.
I need (mostly) a LAMP stack, only with MySQL replaced by MongoDB.
Of course I started by looking on Docker Hub and trying to compose an image from others, and googled for configs. The simplest one couldn't get past setting MONGODB_ADMIN_USER and MONGODB_ADMIN_PASSWORD and always exited with code 1, even though those variables were set in the yml.
I tried to start with just the centos/mongodb image, install Apache, PHP and whatnot, commit it, and work on my own image, but without a kernel it's hard to properly install and run Apache within a Docker container.
So I tried once more and found a promising project here: https://github.com/akhomy/docker-compose-lamp
but I can't attach to the container and can't reach localhost with the default settings, though the compose stage apparently goes OK.
Does anyone of you, by chance, have a working set of Dockerfiles / a docker-compose setup?
Or some helpful hint? Really, it looks like a straightforward task: take two images from Docker Hub, write a docker-compose.yml, run docker-compose up, case closed. I can't wrap my head around this :|
The Docker approach is not to put all services into one container but to have a single container per service. All Docker tools are aligned with this.
To get your LAMP stack started, you just have to download docker-compose, create a docker-compose.yml file with the three services defined, and run docker-compose up.
Docker Compose is an orchestration tool for containers, suited for a single machine.
You should take at least a small tour of this tool; as an example, here is a sample config file:
docker-compose.yml
version: '3'
services:
  apache:
    image: bitnami/apache:latest
    # .. here goes the apache config ...
  db:
    image: mongo
    # .. here goes the mongo config ...
  php:
    image: php
    # .. here goes the php config ...
After you start this with docker-compose up, a network is created automatically and all services join it. They will see each other under their service names (say, to connect to the database from php you would use db as the host name).
To connect to this stack from the host PC, you will need to expose the ports explicitly.
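As a slightly fuller sketch with ports published to the host (the image tags, ports and mounts are assumptions; the Bitnami Apache image listens on 8080/8443 by default):
version: '3'
services:
  apache:
    image: bitnami/apache:latest
    ports:
      - "80:8080"            # host port 80 -> Apache's non-root port 8080
    depends_on:
      - php
      - db
  php:
    image: php:fpm
    volumes:
      - ./src:/var/www/html
  db:
    image: mongo
    # from the php service, MongoDB is reachable at mongodb://db:27017
    ports:
      - "27017:27017"        # only needed if the host must reach MongoDB directly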

Simple docker containers: Build dedicated image or mount config as volume?

I'm putting together a docker-compose.yml file to run the multiple services for a project I'm working on. This project has a Magento and Wordpress website residing under the same domain, with that "same domain" aspect requiring a very simple nginx container to route requests to either service.
So I have this architected as 4 containers (visualisation):
A "magento" container, using an in-house project-specific image.
A "wordpress" container, using an in-house project-specific image.
A "db" container running mysql:5.6, with the init db dumps mounted at /docker-entrypoint-initdb.d.
A "router" container running nginx:alpine with a custom config mounted at /etc/nginx/nginx.conf. This functions as a reverse-proxy with two location directives set up. location / routes to "magento", and location /blog routes to "wordpress".
I want to keep things simple and avoid building unnecessary custom images, but in the context of the "router" I'm not sure what I'm doing is the best approach, or if that would be better off as a project-specific image.
I'm leaning toward my current approach of mounting a custom config into the nginx:alpine container, because the configuration is specific to the stack that's running – it wouldn't make sense as a single standalone container.
So, for the two methods: without a custom image, we have the following in docker-compose.yml:
router:
  image: nginx:alpine
  networks:
    - projectnet
  ports:
    - "80:80"
  volumes:
    - "./router/nginx.conf:/etc/nginx/nginx.conf"
Otherwise, we have a Dockerfile containing the following, as I've seen suggested across the internet and in other StackOverflow responses.
FROM nginx:alpine
ADD nginx.conf /etc/nginx/
Does anybody have arguments for/against either approach?
If you 'bake in' the nginx config (your second approach)
ADD nginx.conf /etc/nginx/
it makes your Docker containers more portable, i.e. they can be downloaded and run on any server capable of running Docker and they will just work.
If you use option 1, mounting the config file at run time, then you are moving one of your dependencies outside of your container. This makes it a dependency that must be managed outside of Docker.
In my opinion it is best to put as many dependencies as possible inside your Dockerfiles, because it makes the images more portable and more automated (great for CI pipelines, for example).
There are reasons for mounting files at run time, and these usually centre around environment-specific settings (although these can largely be handled within Docker too) or 'sensitive' files that application developers shouldn't or can't have access to, for example SSL certificates, database passwords, etc.
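A sketch that combines the two ideas, baking the stack-specific nginx.conf into a small project image while mounting the environment-specific certificates at run time (the file names and paths are illustrative):
Dockerfile (in ./router):
FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf

docker-compose.yml:
router:
  build: ./router
  ports:
    - "443:443"
  volumes:
    - "./certs/server.crt:/etc/nginx/certs/server.crt:ro"
    - "./certs/server.key:/etc/nginx/certs/server.key:ro"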

Use docker-compose with multiple repositories

I'm currently struggling with the deployment of my services, and I wanted to ask what the proper way is to deal with multiple repositories. The repositories are independent, but to run in production, everything needs to be launched.
My Setup:
Git Repository Backend:
Backend Project Rails
docker-compose: backend(expose 3000), db and redis
Git Repository Frontend
Express.js server
docker-compose: (expose 4200)
Both can be run independently, and tests can be executed by CI
Git Repository Nginx for Production
Needs to connect to the other two services (same docker network)
forwards requests to the right service
I have already tried to include the two services as submodules into the Nginx repository and use the docker-compose of the nginx repo, but I'm not really happy with it.
You can have your CI build and push images for each service you want to run, and have the production environment run all 3 containers.
Then, your production docker-compose.yml would look like this:
lb:
  image: nginx
  depends_on:
    - rails
    - express
  ports:
    - "80:80"
rails:
  image: yourorg/railsapp
express:
  image: yourorg/expressapp
Note that docker-compose isn't recommended for production environments; you should be looking at using Distributed Application Bundles (this is still an experimental feature, which will be released to core in version 1.13).
Alternatively, you can orchestrate your containers with a tool like Ansible or a bash script; just make sure you create a Docker network and attach all three containers to it so they can find each other.
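A minimal sketch of that manual approach, reusing the image names from the compose file above (the network and container names are arbitrary, and the nginx config would still have to reference rails and express by those names):
docker network create appnet
docker run -d --name rails   --network appnet yourorg/railsapp
docker run -d --name express --network appnet yourorg/expressapp
docker run -d --name lb      --network appnet -p 80:80 nginx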
Edit: since Docker v17 and the deprecation of DABs in favour of the Compose file format v3, it seems that for single-host environments docker-compose is a valid way of running multi-service applications. For multi-host/HA/clustered scenarios you may want to look into either Docker Swarm for a self-managed solution, or Docker Cloud for a more PaaS-like approach. In any case, I'd advise you to try it out in Play with Docker, the official online sandbox where you can spin up multiple hosts and play around with a swarm cluster without needing to provision your own boxes.
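For a single-node swarm, running the same version 3 compose file as a stack is roughly (the stack name is arbitrary):
docker swarm init
docker stack deploy -c docker-compose.yml mystack
docker stack services mystack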

docker-compose: where to store configuration for services?

I'm building an ELK (Elasticsearch/Logstash/Kibana) stack using docker-compose/docker-machine. The plan is to deploy it to a DigitalOcean droplet and, if needed, use Swarm to scale it.
It works really well, but I'm a bit confused about where I should store the configuration for the services (e.g. configuration files for Logstash, or the SSL certs for nginx).
At first, I just mounted a host directory as a volume. The problem with that is that all the configuration files have to be available on the Docker host, so I have to sync them to the DigitalOcean droplet.
Then I thought I had a very smart idea: create a data container with all the configuration, and let the other services access it using volumes_from:
config:
  volumes:
    - /conf
  build:
    context: .
    # this just copies the conf folder into the image
    dockerfile: /dockerfiles/config/Dockerfile
logstash:
  image: logstash:2.2
  volumes_from:
    - config
The problem with this approach became obvious quite fast: every time I change any configuration, I need to stop all containers that are linked to the config container, recreate the config image and container, and then start up the services again. Not great for uptime :(.
So, what's the best way? Ideally, the configuration files would be inside a container, so I can just ship it to wherever.
One common solution to this problem is to put a load balancer in front of the services. That way, when you want to change the configuration, you start a new container with the new config, the load balancer picks it up, and then you stop the old container. No downtime, and it lets you reload the config.
Another option might be to use a named volume. Then you can just modify the contents of the named volume and any containers using it will see the new files. However, if you are using multiple nodes with Swarm, you'll need a volume driver that supports multi-host volumes.
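A minimal sketch of the named-volume option, assuming the version 2+ compose format and a hypothetical logstash_conf volume mounted where Logstash expects its pipeline configuration:
version: '2'
services:
  logstash:
    image: logstash:2.2
    volumes:
      - logstash_conf:/etc/logstash/conf.d
volumes:
  logstash_conf:
The contents of the volume can then be updated (for example with docker cp, or from a short-lived container that mounts the same volume) without rebuilding or recreating the logstash container.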
Did you consider using the extension mechanism and overriding a settings file? Put a second docker-compose.override.yml in the same directory as the main compose file, or use explicit extension within the compose file. See:
https://docs.docker.com/compose/extends/
That way you can integrate a configuration file in a transparent way, or control the parameters you want to change via environment variables that differ in the overriding composition.
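For example, a docker-compose.override.yml matching the (version 1) format of the file above might simply mount a host directory over the Logstash config path (the paths are assumptions):
docker-compose.override.yml:
logstash:
  volumes:
    - ./conf/logstash:/etc/logstash/conf.d
docker-compose automatically merges this file with the main docker-compose.yml when you run docker-compose up.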
