I'm trying to configure a simple LAMP app.
Here is my Dockerfile:
FROM ubuntu
# ...
RUN apt-get update
RUN apt-get -yq install apache2
# ...
WORKDIR /data
And my docker-compose.yml:
db:
  image: mysql
web:
  build: .
  ports:
    - 80:80
  volumes:
    - .:/data
  links:
    - db
  command: /data/run.sh
After docker-compose build & up I was expecting to find db added to /etc/hosts (inside the web container), but it's not there.
How can this be explained? What am I doing wrong?
Note 1: At up time I see only Attaching to myapp_web_1; shouldn't I also see myapp_db_1?
Note 2: I'm using boot2docker
Following @Alexandru_Rosianu's comment, I checked:
$ docker-compose logs db
error: database is uninitialized and MYSQL_ROOT_PASSWORD not set
Did you forget to add -e MYSQL_ROOT_PASSWORD=... ?
Now that I've set the variable MYSQL_ROOT_PASSWORD:
$ docker-compose up
Attaching to myapp_db_1, myapp_web_1
db_1 | Running mysql_install_db
db_1 | ...
I can see the whole db log, and the db host is effectively set in web's /etc/hosts.
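For completeness: instead of exporting the variable in the shell, it can also be set in the compose file itself. A minimal sketch against the compose file above (the password value is a placeholder, not a real secret):

```yaml
db:
  image: mysql
  environment:
    - MYSQL_ROOT_PASSWORD=changeme  # placeholder; use a real secret
```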
My docker-compose.yml:
solr:
  image: solr:8.6.2
  container_name: myproject-solr
  ports:
    - "8983:8983"
  volumes:
    - ./data/solr:/var/solr/data
  networks:
    static-network:
      ipv4_address: 172.20.1.42
After bringing up the containers with docker-compose up -d --build, the solr container is down and the log (docker logs myproject-solr) shows this:
Copying solr.xml
cp: cannot create regular file '/var/solr/data/solr.xml': Permission denied
I've noticed that if I give full permissions to the data directory on my machine (sudo chmod 777 ./data/solr/ -R) and run Docker again, everything is fine.
I guess the issue is that the solr user inside the container doesn't exist on my machine, and Docker creates the data/solr folder as root:root. Since my ./data folder is gitignored, I cannot track these folder permissions.
I'd like to know a workaround to manage permissions properly, with the purpose of persisting data.
It's a known "issue" with docker-compose: all files created by the Docker engine are owned by root:root. It's usually solved in one of two ways:
Create the volume in advance. In your case, create the ./data/solr directory beforehand with appropriate permissions. You might make it accessible to anyone or, better, change its owner to the solr user. The solr user and group ids are hardcoded inside the solr image: 8983 (Dockerfile.template).
mkdir -p ./data/solr
sudo chown 8983:8983 ./data/solr
If you want to avoid running additional commands before docker-compose, you can create an additional container which fixes the permissions:
version: "3"
services:
  initializer:
    image: alpine
    container_name: solr-initializer
    restart: "no"
    entrypoint: |
      /bin/sh -c "chown 8983:8983 /solr"
    volumes:
      - ./data/solr:/solr
  solr:
    depends_on:
      - initializer
    image: solr:8.6.2
    container_name: myproject-solr
    ports:
      - "8983:8983"
    volumes:
      - ./data/solr:/var/solr/data
    networks:
      static-network:
        ipv4_address: 172.20.1.42
There is a docker-compose-only solution :)
Problem
Docker mounts local folders with root permissions.
In Solr's Docker image the default user is solr, for a good reason: Solr commands should be run by this user (you can force them to run as root, but that is not recommended).
Most Solr commands require write permissions to /var/solr/, for data and log storage.
In this context, when you run a Solr command as the solr user, it fails because it doesn't have write permission to /var/solr/.
Solution
What you can do is start the container as root first to change the permissions of /var/solr/, then switch to the solr user to run all the necessary Solr commands and start the Solr server.
In the example below, we use solr-precreate to create a default core and start solr.
version: '3.7'
services:
  solr:
    image: solr:8.5.2
    volumes:
      - ./mnt/solr:/var/solr
    ports:
      - 8983:8983
    user: root # run as root to change the permissions of the solr folder
    # Change permissions of the solr folder, create a default core and start solr as solr user
    command: bash -c "
      chown -R 8983:8983 /var/solr
      && runuser -u solr -- solr-precreate default-core"
Set with a Dockerfile
It's possibly not exactly what you want, as the files aren't persisted when rebuilding the container, but it solves the rights problem. Copy the files over and chown them with a Dockerfile:
FROM solr:8.7.0
COPY --chown=solr ./data /var/solr/data
This is more useful if you're trying to initialise a single core:
FROM solr:8.7.0
COPY --chown=solr ./core /var/solr/data/someCollection
It also has the advantage that you can create an image for reuse.
With a named volume
For persistence, you can also create a volume (in this case core) and copy the contents of a directory (also called core here), assigning the rights to the files on the way:
docker container create --name temp -v core:/data tianon/true || exit $?
tar -cf - --directory core --owner 8983 --group 8983 . | docker cp - temp:/data
docker rm temp
This was adapted from these answers:
https://github.com/moby/moby/issues/25245#issuecomment-365980572
https://stackoverflow.com/a/52446394
Then you can mount the named volume in your Docker Compose file:
version: '3'
services:
  solr:
    image: solr:8.7.0
    networks:
      - internal
    ports:
      - 8983:8983
    volumes:
      - core:/var/solr/data/someCollection
volumes:
  core:
    external: true
This solution persists the data without overriding the data on the host, it doesn't need the extra build step, and it can obviously be adapted to mount the entire /var/solr/data folder.
It doesn't seem to matter that the mounted volume/directory doesn't have the correct rights (/var/solr/data/someCollection is owned by root:root).
There are several issues similar to this one, such as:
Redis is configured to save RDB snapshots, but it is currently not able to persist on disk - Ubuntu Server
MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled
but none of these solves my problem.
The problem is that I am running my Redis in docker-compose and cannot work out how to apply this fix at docker-compose startup.
The redis docs say this is the fix:
echo 1 > /proc/sys/vm/overcommit_memory
And this works when Redis is installed outside of docker. But how do I run this command with docker-compose?
I tried the following:
1) adding the command:
services:
  cache:
    image: redis:5-alpine
    command: ["echo", "1", ">", "/proc/sys/vm/overcommit_memory", "&&", "redis-server"]
    ports:
      - ${COMPOSE_CACHE_PORT:-6379}:6379
    volumes:
      - cache:/data
this doesn't work:
docker-compose up
Recreating constructor_cache_1 ... done
Attaching to constructor_cache_1
cache_1 | 1 > /proc/sys/vm/overcommit_memory && redis-server
constructor_cache_1 exited with code 0
2) Mounting the /proc/sys/vm/ directory.
This failed: it turns out I cannot mount onto the /proc/ directory.
3) Overriding the entrypoint:
custom-entrypoint.sh:
#!/bin/sh
set -e

echo 1 > /proc/sys/vm/overcommit_memory

# first arg is `-f` or `--some-option`
# or first arg is `something.conf`
if [ "${1#-}" != "$1" ] || [ "${1%.conf}" != "$1" ]; then
  set -- redis-server "$@"
fi

# allow the container to be started with `--user`
if [ "$1" = 'redis-server' -a "$(id -u)" = '0' ]; then
  find . \! -user redis -exec chown redis '{}' +
  exec su-exec redis "$0" "$@"
fi

exec "$@"
docker-compose.yml:
services:
  cache:
    image: redis:5-alpine
    ports:
      - ${COMPOSE_CACHE_PORT:-6379}:6379
    volumes:
      - cache:/data
      - ./.cache/custom-entrypoint.sh:/usr/local/bin/custom-entrypoint.sh
    entrypoint: /usr/local/bin/custom-entrypoint.sh
This doesn't work either.
How can I fix this?
TL;DR: Your Redis is not secure.
UPDATE:
Use expose instead of ports so the service is only available to linked services
Expose ports without publishing them to the host machine - they’ll
only be accessible to linked services. Only the internal port can be
specified.
expose:
  - 6379
ORIGINAL ANSWER:
long answer:
This is possibly due to an unsecured redis-server instance. The default Redis image in a Docker container is unsecured.
I was able to connect to Redis on my web server using just redis-cli -h <my-server-ip>.
To sort this out, I went through this DigitalOcean article and many others, and was able to close the port.
You can pick a default redis.conf from here
Then update your docker-compose redis section to the following (update file paths accordingly):
redis:
  restart: unless-stopped
  image: redis:6.0-alpine
  command: redis-server /usr/local/etc/redis/redis.conf
  env_file:
    - app/.env
  volumes:
    - redis:/data
    - ./app/conf/redis.conf:/usr/local/etc/redis/redis.conf
  ports:
    - "6379:6379"
The path to redis.conf in command and volumes should match.
Rebuild redis, or all the services, as required.
Try redis-cli -h <my-server-ip> to verify (it stopped working for me).
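A side note on the original overcommit warning: /proc/sys/vm/overcommit_memory is a kernel setting of the Docker host, not of the container, which is why writing it from inside an unprivileged container fails. If you control the host, one option (a sketch, assuming root access on the host) is to make the setting persistent there:

```
# /etc/sysctl.conf on the Docker host (not inside the container)
vm.overcommit_memory = 1
```

Apply it with sudo sysctl -p (or reboot); the warning inside the Redis container should then disappear.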
I'm a beginner with Docker. I created a docker-compose file that provides our production environment, and I want to use it on our clients' servers as well as locally, without internet access.
I have the Docker and Docker Compose binaries, plus saved images that I want to load onto a server without internet access. This is my init bash script on Linux:
#!/bin/sh -e

# docker
tar xzvf docker-18.09.0.tgz
sudo cp docker/* /usr/bin/
sudo dockerd &

# docker-compose
cp docker-compose-Linux-x86_64 /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

# load images
docker load --input images.tar
My structure (custom.ini sits under phpfpm/, as referenced by the compose file):
code/*
nginx/
    site.conf
    logs/
phpfpm/
    custom.ini
postgres/
    data/
.env
docker-compose.yml
docker-compose file:
version: '3'
services:
  web:
    image: nginx:1.15.6
    ports:
      - "8080:80"
    volumes:
      - ./code:/code
      - ./nginx/site.conf:/etc/nginx/conf.d/default.conf
      - ./nginx/logs:/var/log/nginx
    restart: always
    depends_on:
      - php
  php:
    build: ./phpfpm
    restart: always
    volumes:
      - ./phpfpm/custom.ini:/opt/bitnami/php/etc/conf.d/custom.ini
      - ./code:/code
  db:
    image: postgres:10.1
    volumes:
      - ./postgres/data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    ports:
      - 5400:5432
There are some questions:
Why doesn't docker appear in the list of Linux services? (When I install Docker via apt-get, it does appear in the services list.) How can I set Docker up as a service and enable it to load on startup?
How can I set up docker-compose as a Linux service so it runs on system startup?
Install Docker from the package with sudo dpkg -i /path/to/package.deb; packages can be downloaded from https://download.docker.com/linux/ubuntu/dists/.
Then, as a post-install step, run sudo systemctl enable docker. This starts Docker at system boot; combined with restart: always, the containers from your previous compose file will be restarted automatically.
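To also bring the compose project itself up at boot, one common pattern is a small systemd unit that wraps docker-compose. A minimal sketch (the unit name, WorkingDirectory, and binary path are assumptions about your layout):

```ini
# /etc/systemd/system/myapp.service (hypothetical name and paths)
[Unit]
Description=My docker-compose application
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/myapp
ExecStart=/usr/local/bin/docker-compose up -d
ExecStop=/usr/local/bin/docker-compose down

[Install]
WantedBy=multi-user.target
```

Enable it with sudo systemctl enable myapp, the same way as the docker service itself.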
Running dockerd starts the daemon, but you have to enable the service so it starts at boot:
$ sudo systemctl enable docker
Add restart: always to your db container.
How the docker restart policies work
I'm having an issue with my travis-ci before_script while trying to connect to my docker postgres container:
Error starting userland proxy: listen tcp 0.0.0.0:5432: bind: address already in use
I've seen this problem raised but never fully addressed around SO and GitHub issues, and I'm not clear whether it is specific to Docker or Travis. One linked issue (below) works around it by using 5433 as the host postgres port, but I'd like to know for sure what is going on before I jump into something.
My travis.yml:
sudo: required

services:
  - docker

env:
  DOCKER_COMPOSE_VERSION: 1.7.1
  DOCKER_VERSION: 1.11.1-0~trusty

before_install:
  # list docker-engine versions
  - apt-cache madison docker-engine
  # upgrade docker-engine to specific version
  - sudo apt-get -o Dpkg::Options::="--force-confnew" install -y docker-engine=${DOCKER_VERSION}
  # upgrade docker-compose
  - sudo rm /usr/local/bin/docker-compose
  - curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-`uname -s`-`uname -m` > docker-compose
  - chmod +x docker-compose
  - sudo mv docker-compose /usr/local/bin

before_script:
  - echo "Before Script:"
  - docker-compose -f docker-compose.ci.yml build
  - docker-compose -f docker-compose.ci.yml run app rake db:setup
  - docker-compose -f docker-compose.ci.yml run app /bin/sh

script:
  - echo "Running Specs:"
  - rake spec
My docker-compose.yml for CI:
postgres:
  image: postgres:9.4.5
  environment:
    POSTGRES_USER: web
    POSTGRES_PASSWORD: yourpassword
  expose:
    - '5432' # added this as an attempt to open the port
  ports:
    - '5432:5432'
  volumes:
    - web-postgres:/var/lib/postgresql/data
redis:
  image: redis:3.0.5
  ports:
    - '6379:6379'
  volumes:
    - web-redis:/var/lib/redis/data
web:
  build: .
  links:
    - postgres
    - redis
  volumes:
    - ./code:/app
  ports:
    - '8000:8000'
  # env_file: # setting these directly in the environment
  #   - .docker.env # (they work fine locally)
sidekiq:
  build: .
  command: bundle exec sidekiq -C code/config/sidekiq.yml
  links:
    - postgres
    - redis
  volumes:
    - ./code:/app
Docker & Postgres: Failed to bind tcp 0.0.0.0:5432 address already in use
How to get Docker host IP on Travis CI?
It seems that the Postgres service is enabled by default in Travis CI, so you could:
Try to disable the Postgres service in your Travis config. See How to stop services on Travis CI running by default?. See also https://docs.travis-ci.com/user/database-setup/#PostgreSQL .
Or
Map your postgres container to another host port (!= 5432), like -p 5455:5432.
It could also be useful to check whether the service is already running: Check If a Particular Service Is Running on Ubuntu
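For the first option, the default service can be stopped before anything binds the port. A sketch for the travis.yml above (per the Travis CI docs, default services are controlled with sudo service):

```yaml
before_install:
  # free port 5432 so the postgres container can bind it
  - sudo service postgresql stop
```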
Do you use Travis' Postgres?
services:
  - postgresql
It would be easier if you provided your travis.yml.
I have a Rails application with MongoDB, in the development environment.
I am unable to connect to MongoDB from Docker, although I can connect to my local MongoDB with the same mongoid config. I tried changing the host from localhost to 0.0.0.0, but it did not work.
What is missing in the settings?
My suspicion is that mongo inside Docker hasn't started or bound to the port. If I change the mongoid config to read: :nearest, it says no nodes found.
The error message is:
Moped::Errors::ConnectionFailure in Product#index
Could not connect to a primary node for replica set #]>
Dockerfile
#FROM ruby:2.2.1-slim
FROM rails:4.2.1
MAINTAINER Sandesh Soni <my@email.com>

RUN apt-get update -qq && apt-get install -y build-essential libpq-dev
RUN mkdir /gmv
WORKDIR /gmv

# Add db directory to /data/db
ADD Gemfile /gmv/Gemfile
RUN bundle install
ADD ./database /data/db
ADD . /gmv
docker-compose.yml
web:
  build: .
  command: bundle exec rails s -p 3000 -b '0.0.0.0'
  volumes:
    - .:/gmv
  ports:
    - "3000:3000"
  links:
    - db
db:
  image: mongo
  command: "--smallfiles --bind_ip 0.0.0.0 --port 27027 -v"
  volumes:
    - data/mongodb:/data/db
  ports:
    - "27017:27017"
On your host machine execute docker run yourapp env, then in the output look for the IP address related to your database. That IP address and port are what you need to use to connect to the database running in the container (a container's IP can also be found with docker inspect).
A similar question was asked here.