Context
I want to run a Docker Compose application on Windows 8. I built it under Ubuntu 16.04, where it works perfectly.
This Docker Compose setup runs:
nginx
php-fpm
Both containers use volumes.
Files
My .env file:
COMPOSE_CONVERT_WINDOWS_PATHS=1
APPLICATION_PATH=//C/Users/my_user/Documents/Development/my_application
My docker-compose.yml file:
version: '2'
services:
  web:
    build: ../application-web/
    ports:
      - "80:80"
    tty: true
    # Add a volume to link php code on the host and inside the container
    volumes:
      - ${APPLICATION_PATH}:/usr/share/nginx/html/application
      - ${APPLICATION_PATH}/docker_files/docker-assistant:/usr/share/nginx/html/assistant
    # Add hostnames to allow devs to call special url to open sites
    extra_hosts:
      - "localhost:127.0.0.1"
      - "assistant.docker:127.0.0.1"
      - "application.dev:127.0.0.1"
    depends_on:
      - custom-php
    links:
      - custom-php:custom-php
  custom-php:
    build: ../application-php/
    ports:
      - "50:50"
    volumes:
      - ${APPLICATION_PATH}:/usr/share/nginx/html/application
      - ${APPLICATION_PATH}/docker_files/docker-assistant:/usr/share/nginx/html/assistant
Problem
When I run docker-compose up, everything goes well and the containers start.
But when I try to reach http://192.168.99.100 in my web browser, I get a 403 error.
My investigation shows that no volumes are mounted in the nginx and php containers:
docker exec -it compose_web_1 bash
ls -la /usr/share/nginx/html/assistant/
shows
drwxr-xr-x 2 root root   80 May 18 15:30 .
drwxr-xr-x 2 root root 4096 May 18 16:10 ..
It seems that Docker cannot mount volumes. Why?
Other information
I am using the Docker Toolbox: https://www.docker.com/products/docker-toolbox
I know this is the right IP address because when I reach it in my web browser, my nginx container displays logs.
Setting the environment variable APPLICATION_PATH to //C:/Users/my_user/Documents/Development/my_application cannot work, because Docker uses the ":" character as a separator in volume declarations:
ERROR: Volume //C:/Users/my_user/Documents/Development/my_application://C:/Users/my_user/Documents/Development/my_application has incorrect format, should be external:internal[:mode]
It's not an nginx problem, because when I create an index.phtml file in the folder, I am able to run it:
<?php
echo 'Hello world!';
Ok, I finally did it!
TL;DR
Follow these instructions to be able to access C:\ inside your containers.
1. Install the Docker Toolbox
Go get it here: https://www.docker.com/products/docker-toolbox
Install it.
2. Run a Hello world
Open a Docker Quickstart Terminal.
Run in it:
docker run hello-world
3. Share C:\ with Docker
Open VirtualBox.
Open the configuration of the default virtual machine and go to Shared Folders.
Modify or create a new shared folder by clicking the buttons on the right. Set the options to:
Folder Path: C:\
Folder Name: C
Auto-mount: checked
Make Permanent: checked
Then validate.
4. Activate sharing
Shut down the default virtual machine, then restart it.
5. Set your paths
e.g. if you have a .env file:
COMPOSE_CONVERT_WINDOWS_PATHS=1
APPLICATION_PATH=//C/path_from_C_to_the_folder_you_want_to_share_on_the_volume
/!\ you need to set COMPOSE_CONVERT_WINDOWS_PATHS to 1!
6. Start your Compose
In the Docker Quickstart Terminal:
Go to your Docker Compose folder, then start it:
cd /path_to_your_compose_folder
docker-compose up
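Once the stack is up, you can re-run the check from the investigation above to confirm that the volumes are now mounted (compose_web_1 is the container name used earlier in this post; yours may differ):
# Re-run the earlier check; the assistant folder should no longer be empty.
docker exec -it compose_web_1 bash
ls -la /usr/share/nginx/html/assistant/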
Why do I have to do all that? It's so complicated!
Docker relies on Linux namespaces; without Linux, it cannot work. To allow the use of Docker on Windows, Docker needs to install a Linux virtual machine. All the containers run inside it.
The default virtual machine is created and runs within VirtualBox; that's why you have to share your folders using VirtualBox.
After sharing, the default virtual machine will have a folder mounted in it under a custom name (in the example above it's C, but it could be elephant or whatever).
Finally, Docker mounts volumes from the default virtual machine into the containers: you have to use the name of the default machine's shared folder in your volume declaration (in the example above it's C, but it could be elephant or whatever).
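If you want to double-check that the share is actually visible inside the default virtual machine, you can SSH into it. The mount point below is an assumption based on the .env example above (a share named C mounted at /C):
# Assumption: the shared folder named "C" is mounted at /C inside the default VM.
docker-machine ssh default
ls /C/Users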
Related
I have a Docker container built from the debian:latest image.
I need to execute a bash script that will start several services.
My host machine is Windows 10 and I'm using Docker Desktop. I found the configuration files in the docker-desktop-data WSL2 drive, under data\docker\containers\<container_name>.
There are 2 config files there: config.v2.json and hostconfig.json.
I edited the first of them and replaced "Entrypoint":null with "Entrypoint":["/bin/bash", "/opt/startup.sh"].
I did this while the container was down, but when I restarted it the script was not executed. When I opened the config.v2.json file again, the Entrypoint was set to null again.
I need to run this script at every container start.
An additional strange thing is that this container doesn't have any volume appearing in Docker Desktop. I can check out this container and start another one, but I need to preserve the current state of this container (installed packages, files, DB content). How can I change the entrypoint or run the script in some other way?
Is there any way to export the container to an image along with its configuration? I need to expose several ports and run the startup script. Is there any way to make every new container created from the image exported from the current container expose the same ports and run the same startup script?
Docker's typical workflow involves containers that only run a single process, and are intrinsically temporary. You'd almost never create a container, manually set it up, and try to persist it; instead, you'd write a script called a Dockerfile that describes how to create a reusable image, and then launch some number of containers from that.
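For the case described above, a minimal sketch of such a Dockerfile might look like the following. The base image and the /opt/startup.sh script come from the question; the package names and port numbers are placeholders to replace with your own:
# Sketch only: replace the packages and ports with the ones your services need.
FROM debian:latest
RUN apt-get update \
 && apt-get install -y --no-install-recommends procps \
 && rm -rf /var/lib/apt/lists/*
# The startup script from the question, baked into the image.
COPY startup.sh /opt/startup.sh
# Hypothetical port numbers; EXPOSE only documents them, ports are still
# published with -p or the ports: key at run time.
EXPOSE 8080 9090
ENTRYPOINT ["/bin/bash", "/opt/startup.sh"]
Every container created from an image built this way runs the same startup script, which addresses the "run the same startup script for every new container" part of the question.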
It's almost always preferable to launch multiple single-process containers rather than trying to run multiple processes in a single container. You can use a tool like Docker Compose to describe the multiple containers and record the various options you'd need to start them:
# docker-compose.yml
# Describe the file version. Required with the stable Python implementation
# of Compose. Most recent stable version of the file format.
version: '3.8'
# Persistent storage managed by Docker; will not be accessible on the host.
volumes:
  dbdata:
# Actual containers.
services:
  # The database.
  db:
    # Use a stock Docker Hub image.
    image: postgres:15
    # Persist its data.
    volumes:
      - dbdata:/var/lib/postgresql/data
    # Describe how to set up the initial database.
    environment:
      POSTGRES_PASSWORD: passw0rd
    # Make the container accessible from outside Docker (optional).
    ports:
      - '5432:5432' # first port: any available host port
                    # second port: MUST be the standard PostgreSQL port 5432
  # Reverse proxy / static asset server
  nginx:
    image: nginx:1.23
    # Get static assets from the host system.
    volumes:
      - ./static:/usr/share/nginx/html
    # Make the container externally accessible.
    ports:
      - '8000:80'
You can check this file into source control with your application. Also consider adding a third service that uses build: to build an image containing the actual application code; that service probably will not need volumes:.
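A sketch of what that third service could look like, to be added under services: in the file above. The build context, published port, and environment variable values are assumptions, not part of the original file:
  # Hypothetical application service; adjust the build context and port.
  app:
    build: ./app              # directory containing your application's Dockerfile
    environment:
      # Reach the database by its Compose service name, not localhost.
      PGHOST: db
      PGPASSWORD: passw0rd
    ports:
      - '3000:3000'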
docker-compose up -d will start this stack of containers (without -d, in the foreground). If you make a change to the docker-compose.yml file, re-running the same command will delete and recreate containers as required. Note that you are never running an unmodified debian image, nor are you manually running commands inside a container; the docker-compose.yml file completely describes the containers, their startup sequences (if not already built into the images), and any required runtime options.
Also see Networking in Compose for some details about how to make connections between containers: localhost from within a container will call out to that same container and not one of the other containers or the host system.
For a service I've defined a volume as follows (an extract of my yml file):
services:
  wordpress:
    volumes:
      - wp_data:/var/www/html
    networks:
      - wpsite
networks:
  wpsite:
volumes:
  wp_data:
    driver: local
I'm aware that on a Windows 10 filesystem the WP volumes won't be readily visible to me, as they'll exist within the Linux VM. Alternatively, I'd have to provide a path argument to be able to view my WP installation, e.g.:
volumes:
  - ./mysql:/var/lib/mysql
But my question is: what is the point of the driver: local option, and is it the default? I've tried with and without this option and can't see any difference.
Secondly, what does this do? In my yml file I've commented it out with no ill effect that I can see:
networks:
  wpsite:
First question:
The --driver (or -d) option defaults to local, so driver: local is redundant. On Windows, the local driver does not support any options. If you were running Docker on a Linux machine, you would have some options; see the official documentation: https://docs.docker.com/engine/reference/commandline/volume_create/
Second question:
In each section networks:/volumes:/services: you basically declare the resources you need for your deployment.
In other words, to draw an analogy with a virtual machine, you can think about it like this: you need to create a virtual disk named wp_data and a virtual network named wpsite.
Then, you want your wordpress service to mount the wp_data disk under /var/www/html and to connect to the wpsite subnet.
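Behind the scenes, those declarations are roughly what you would otherwise create by hand with the Docker CLI (the names below match the compose file; Compose will actually prefix them with your project/directory name):
# Roughly what Compose creates for you from the compose file above.
docker volume create --driver local wp_data
docker network create wpsite
docker run -d --network wpsite -v wp_data:/var/www/html wordpress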
You can use the following docker commands to display the resources that are created behind the scenes by your compose file:
docker ps - show containers
docker volume ls - show docker volumes
docker network ls - show docker networks
Hint: once you have created a network or a volume, it will not be destroyed automatically unless you manually delete it. You can clean up the resources manually and experiment by removing/adding resources in your compose file.
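For example, a quick way to clean up everything this compose file created (note that the -v flag also deletes the named volumes, so your WordPress data would be lost):
# Stop and remove the containers and the network created by this compose file.
docker-compose down
# Same, but also remove the named volumes (wp_data); the data is lost.
docker-compose down -v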
Updated to answer question in comment:
If you run Docker on a Windows host, you probably enabled Hyper-V. This allows Windows to create a Linux VM, on top of which your Docker engine is running.
With the Docker engine installed, Docker can then create "virtual resources" such as virtual networks, virtual disks (volumes), containers (people often compare containers to VMs), services, etc.
Let's look at the following section from your compose file:
volumes:
  wp_data:
    driver: local
This will create a virtual disk managed by Docker, named wp_data. The volume is not created directly on your Windows host file system; instead, it is created inside the Linux VM that runs on top of Hyper-V on your Windows host. If you want to know precisely where, you can either execute docker inspect <containerID> and look at the mounts on that container, or run docker volume ls followed by docker volume inspect <volumeID> and look for the "Mountpoint" key to get the actual location.
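For example (the volume name is an assumption: Compose usually prefixes it with the project/directory name, so it might be something like myproject_wp_data):
docker volume ls
# Print only the mount point; adjust the volume name to what docker volume ls shows.
docker volume inspect myproject_wp_data --format '{{ .Mountpoint }}'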
Is there a better way to avoid folder permission issues when a relative folder is set in a Docker Compose file on Manjaro?
For instance, take the bitnami/elasticsearch:7.7.0 image as an example:
This image will always throw the following error: ElasticsearchException[failed to bind service]; nested: AccessDeniedException[/bitnami/elasticsearch/data/nodes];
I can get around it by:
creating the data directory with sudo, followed by chmod 777
attaching a docker volume
But I am looking for a solution that is a bit easier to manage, similar to the Docker experience on Ubuntu and OS X, where I do not have to first create a directory as root in order for folder mapping to work.
I have made sure that my user is in the docker group by following the post-install instructions in the Docker docs. I have no permission issues accessing docker info or the socket.
docker-compose.yml
version: '3.7'
services:
  elasticsearch:
    image: bitnami/elasticsearch:7.7.0
    container_name: elasticsearch
    ports:
      - 9200:9200
    networks:
      - proxy
    environment:
      - ELASTICSEARCH_HEAP_SIZE=512m
    volumes:
      - ./data/:/bitnami/elasticsearch/data
      - ./config/elasticsearch.yml:/opt/bitnami/elasticsearch/config/elasticsearch.yml
networks:
  proxy:
    external: true
I am hoping for a more seamless experience when using my compose files from git, which work fine on other systems, but I am running into this permission issue with the data folder on Manjaro.
I did check other posts on SO; some solutions are temporary, like disabling SELinux, while others require running Docker with the --privileged flag, but I am trying to do this from Compose.
This has nothing to do with the Linux distribution but is a general problem with Docker and bind mounts. A bind mount is when you mount a directory of your host into a container. The problem is that the Docker daemon creates the directory under the user it runs with (root) and the UID/GIDs are mapped literally into the container.
Not that it is advisable to run as root, but depending on your requirements, the official Elasticsearch image (elasticsearch:7.7.0) runs as root and does not have this problem.
Another solution that would work for the bitnami image is to make the ./data directory owned by group root and group writable, since it appears the group of the Elasticsearch process is still root.
A third solution is to change the GID of the bitnami image to whatever group you had the data created with and make it group writable.
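A minimal sketch of the second option, assuming the Bitnami Elasticsearch process really does run with group root (GID 0) as described above:
# Assumption: the container process runs with group root (GID 0).
mkdir -p ./data
sudo chgrp 0 ./data
chmod g+rwX ./data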
I am attempting to mount a volume from C:/Users into a container running on a docker-machine using the hyperv driver on Docker for Windows (Windows 10 Pro). I am using the latest Docker (1.13.1) and the same version on the Hyper-V machine. I have tried switching to a local account, shared the drive in the Docker settings menu, and I've pretty much tried everything I could find on Google.
Running the test volume run command in the settings menu works for me. At this point I presume Hyper-V does not support mounting volumes from the host; however, I can't find anywhere that explicitly says volume mounting will not work with Hyper-V.
This is my docker-compose config:
networks: {}
services:
  app:
    build:
      context: C:\users\deep\projects\chat\app
    command: sleep 3600
    image: app
    links:
      - rethinkdb
      - redis
    ports:
      - 4005:4005
      - 4007:4007
    volumes:
      - /c/users/deep/projects/chat/app:/usr/src/app:rw
  redis:
    image: redis
  rethinkdb:
    image: rethinkdb:2.3.5
version: '2.0'
volumes: {}
In my Dockerfile I copy files into the container at /usr/src/app. When I bring the services up with the volume specified in the compose file, the directory is emptied; however, if I omit this volume mount, I can see the files that I copied into the container from the Dockerfile.
Running with verbose output when starting my services, I can see a volumes path specified as 'Binds': [u'/c/users/deep/projects/chat/app:/usr/src/app:rw']. However, when I inspect the container using docker-compose inspect app, I see volumes set to null: "Volumes": null.
I presume at this point that mounting volumes into a container running inside a Hyper-V VM is not supported? Can someone confirm so that I can RIP :)
I think you just need to share the drive (C:) containing the folder from the Docker app settings.
See the "Shared Drives" paragraph from the getting started guide
I'm using mounted folders with a similar configuration and it works fine once the drive has been shared.
As stupid as it seems, this happens to me often. The solution is to un-check the C drive in "Docker for Windows" -> Settings -> Shared Drives, apply, then check it again and apply.
You should use /c/Users, with a capital "U".
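With that change, the volume line in the compose file above becomes:
    volumes:
      - /c/Users/deep/projects/chat/app:/usr/src/app:rw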
I have a Node.js web-application that connects to a Neo4j database. I would like to encapsulate these in a single Docker image (using also a Neo4j Docker container), but I'm a docker novice and can't seem to figure this out. What's the recommended way to do it in the latest Docker versions?
My intuition would be to run the Neo4j container nested inside the app container. But from what I've read, I think the supported / recommended approach is to link the containers together. What I need is pretty well illustrated in this image. But the article where the image comes from isn't clear to me. Anyway, it's using the soon-to-be-deprecated legacy container linking, while networking is recommended these days. A tutorial or explanation would be much appreciated.
Also, how does docker-compose fit into all this?
Running a container within another container would imply running a Docker engine within a Docker container. This is referred to as dind, for Docker-in-Docker, and I would strongly advise against it. You can search for 'dind' online and discover why in most cases it is a bad idea, but as it is not the main object of your question I won't extend this subject any further.
Running both a node.js process and a neo4j process in the same container
While most people will tell you to refrain from running more than one process within a Docker container, nothing prevents you from doing so. If you want to follow this path, take a look at the Using Supervisor with Docker article on the Docker documentation website, or at the Phusion baseimage Docker image.
Just be aware that this way of doing things will make your Docker image more and more difficult to maintain over time.
Linking containers
As you found out, keeping Docker images as simple as you can (i.e. running one and only one app within a Docker container) will make your life easier in the long term.
Linking containers together is trivial when both containers run on the same Docker engine. It is just a matter of:
having your neo4j container expose the port its service listens on
running your node.js container with the --link <neo4j container name>:<alias> option
within the node.js application configuration, set the neo4j host to the <alias> hostname; Docker will take care of forwarding that connection to the IP it assigned to the neo4j container (a minimal example follows this list)
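A minimal sketch of that legacy approach; <your nodejs image> is a placeholder, and NEO4J_URL is the environment variable used by the example app later in this answer:
# Legacy linking (deprecated): start neo4j first, then link the app to it.
docker run -d --name myneo4j neo4j
# Inside the app container, the alias "neo4j" resolves to the neo4j container.
docker run -d --name mynodejs --link myneo4j:neo4j -e NEO4J_URL=http://neo4j:7474 <your nodejs image>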
When you want to run those two containers on different hosts, things get more difficult.
With Docker Compose, you have to use the links: key to define your links.
The new Docker network feature
You also discovered that linking containers won't be supported in the future and that the new way of making multiple Docker containers communicate is to create a virtual network and attach those 2 containers to that network.
Here's how to proceed:
docker network create mynet
docker run --detach --name myneo4j --net mynet neo4j
docker run --detach --name mynodejs --net mynet <your nodejs image>
Your node application configuration should then use myneo4j as the host to connect to.
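If your node application reads its connection string from an environment variable (such as the NEO4J_URL variable used by the example app later in this answer), you can pass it at run time, for example:
docker run --detach --name mynodejs --net mynet --env NEO4J_URL=http://myneo4j:7474 <your nodejs image>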
To tell Docker Compose to use the new network feature, you would have to use the --x-networking option. Also you would not use the links: key.
Using the new networking feature also means that you won't be able to define any alias for the db. As a result you have to use the container name. Beware that unless you use the container_name: key in your docker-compose.yml file, Compose will create container names based on the directory which contains your docker-compose.yml file, the service name as found in the yml file and a number.
For instance, the following docker-compose.yml file, if placed within a directory named "foo", would create two containers named foo_web_1 and foo_db_1:
web:
  build: .
  ports:
    - "8000:8000"
db:
  image: postgres
When started with docker-compose --x-networking up, the web app configuration should then use foo_db_1 as the db hostname.
While if you use container_name:
web:
  build: .
  ports:
    - "8000:8000"
db:
  image: postgres
  container_name: mydb
When started with docker-compose --x-networking up, the web app configuration should then use mydb as the db hostname.
Example of using Docker Compose to run a web app using nodeJS and neo4j
In this example, I will show how to dockerize the example app from the GitHub project aseemk/node-neo4j-template, which uses nodejs and neo4j.
I assume you already have Docker 1.9.0+ and Docker Compose 1.5+ installed.
This project will use 2 docker containers, one to run the neo4j database and one to run the nodeJS web app.
Dockerizing the web app
We need to build a Docker image from which Docker compose will run a container. For that, we will write a Dockerfile.
Create a file named Dockerfile (mind the capital D) with the following content:
FROM node
RUN git clone https://github.com/aseemk/node-neo4j-template.git
WORKDIR /node-neo4j-template
RUN npm install
# ugly 20s sleep to wait for neo4j to initialize
CMD sleep 20s && node app.js
This Dockerfile describes the steps the Docker engine will have to follow to build a docker image for our web app. This docker image will:
be based on the official node docker image
clone the nodeJS example project from Github
change the working directory to the directory containing the git clone
run the npm install command to download and install the nodeJS app dependencies
instruct docker which command to use when running a container of that image
A quick review of the nodeJS code reveals that the author allows us to configure the URL to use to connect to the neo4j database using the NEO4J_URL environment variable.
Dockerizing the neo4j database
Well people took care of that for us already. We will use the official Docker image for neo4j which can be found on the Docker Hub.
A quick review of the readme tells us to use the NEO4J_AUTH environment variable to change the neo4j password. And setting this variable to none will disable the authentication altogether.
Setting up Docker Compose
In the same directory as the one containing our Dockerfile, create a docker-compose.yml file with the following content:
db:
  container_name: my-neo4j-db
  image: neo4j
  environment:
    NEO4J_AUTH: none
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db:7474
  ports:
    - 80:3000
This Compose configuration file describes 2 services: db and web.
The db service will produce a container named my-neo4j-db from the official neo4j docker image and will start that container with the NEO4J_AUTH environment variable set to none.
The web service will produce a container named at Docker Compose's discretion, using a docker image built from the Dockerfile found in the current directory (build: .). It will start that container with the environment variable NEO4J_URL set to http://my-neo4j-db:7474 (note how we use the name of the neo4j container, my-neo4j-db). Furthermore, Docker Compose will instruct the Docker engine to expose the web container's port 3000 on the Docker host's port 80.
Firing it up
Make sure you are in the directory that contains the docker-compose.yml file and type: docker-compose --x-networking up.
Docker compose will read the docker-compose.yml file, figure out it has to first build a docker image for the web service, then create and start both containers and finally will provide you with the logs from both containers.
Once the log shows web_1 | Express server listening at: http://localhost:3000/, everything is ready and you can point your web browser to http://<ip of the docker host>/.
To stop the application, hit Ctrl+C.
If you want to start the app in the background, use docker-compose --x-networking up -d instead. Then in order to display the logs, run docker-compose logs.
To stop the service: docker-compose stop
To delete the containers: docker-compose rm
Making neo4j storage persistent
The official neo4j docker image readme says the container persists its data on a volume at /data. We then need to instruct Docker Compose to mount that volume to a directory on the docker host.
Change the docker-compose.yml file with the following content:
db:
  container_name: my-neo4j-db
  image: neo4j
  environment:
    NEO4J_AUTH: none
  volumes:
    - ./neo4j-data:/data
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db:7474
  ports:
    - 80:3000
With that config file, when you run docker-compose --x-networking up, Docker Compose will create a neo4j-data directory and mount it into the container at /data.
Starting a 2nd instance of the application
Create a new directory and copy over the Dockerfile and docker-compose.yml files.
We then need to edit the docker-compose.yml file to avoid a name conflict for the neo4j container and a port conflict on the docker host.
Change its content to:
db:
  container_name: my-neo4j-db2
  image: neo4j
  environment:
    NEO4J_AUTH: none
  volumes:
    - ./neo4j-data:/data
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db2:7474
  ports:
    - 81:3000
Now it is ready for the docker-compose --x-networking up command. Note that you must be in the directory with that new docker-compose.yml file to start the 2nd instance up.