Using the official Redmine Docker guide it is possible to start a Redmine server connected to a PostgreSQL database. I used the following commands:
docker run -d --name redmine_db -p 6543:5432 -e POSTGRES_PASSWORD=secret -e POSTGRES_USER=redmine postgres:9.5.1
docker run -d --name redmine_web -p 3001:3000 --link redmine_db:postgres redmine:3.2.3
But I am having difficulty running the same configuration with Docker Compose. Here is the first docker-compose.yml file I used:
version: '2'
services:
  webserver:
    image: redmine:3.2.3
    ports:
      - "3001:3000"
    links:
      - database:postgres
  database:
    image: postgres:9.5.1
    ports:
      - "6543:5432"
    environment:
      - POSTGRES_PASSWORD="secret"
      - POSTGRES_USER="redmine"
With Docker Compose the Redmine server starts correctly, but it ignores the Postgres database container and uses its internal SQLite database instead.
I've also tried adding the following environment variables to the webserver configuration:
environment:
  - POSTGRES_PORT_5432_TCP="5432"
  - POSTGRES_ENV_POSTGRES_USER="redmine"
  - POSTGRES_ENV_POSTGRES_PASSWORD="secret"
But without success. This time the redmine container does not start at all and displays the following error message:
[!] There was an error parsing `Gemfile`: (<unknown>): did not find expected key while parsing a block mapping at line 2 column 3. Bundler cannot continue.
# from /usr/src/redmine/Gemfile:64
# -------------------------------------------
# if File.exist?(database_file)
> database_config = YAML::load(ERB.new(IO.read(database_file)).result)
# adapters = database_config.values.map {|c| c['adapter']}.compact.uniq
# -------------------------------------------
From docker-compose version 2 docs:
links with environment variables: As documented in the environment variables reference, environment variables created by links have been deprecated for some time. In the new Docker network system, they have been removed. You should either connect directly to the appropriate hostname or set the relevant environment variable yourself, using the link hostname:
web:
  links:
    - db
  environment:
    - DB_PORT=tcp://db:5432
In docker-compose v2 you should use networks instead of links. All services in a Compose file are placed on one common default network, so in your config the webserver container can already resolve the database container by the hostname database without any links.
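To make that concrete, here is a minimal sketch of a Compose file relying only on the default network. The REDMINE_DB_* variable names are the ones documented for current Redmine images and are an assumption for the 3.2.3 tag; older entrypoints may instead expect the POSTGRES_ENV_* variables from the question, set without the surrounding quotes:
version: '2'
services:
  webserver:
    image: redmine:3.2.3
    ports:
      - "3001:3000"
    environment:
      # Assumed variable names; verify against the redmine image docs for your tag.
      # "database" resolves over the default Compose network, so no links are needed.
      - REDMINE_DB_POSTGRES=database
      - REDMINE_DB_USERNAME=redmine
      - REDMINE_DB_PASSWORD=secret
  database:
    image: postgres:9.5.1
    environment:
      # No quotes here: with the list syntax the quotes would become part of the value.
      - POSTGRES_PASSWORD=secret
      - POSTGRES_USER=redmine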
I'm trying to dockerize an existing Rails app that uses PostgreSQL 9.5 as its database. After a successful docker-compose build I can run docker-compose up and everything appears to connect, but when I navigate to localhost I get the following error.
PG::ConnectionBad
could not connect to server: No such file or directory
Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Here is what is in my docker-compose.yml
version: '2'
services:
  db:
    image: postgres:9.5
    restart: always
    volumes:
      - ./tmp/db:/var/lib/postgresql/9.5/main
    environment:
      POSTGRES_USER: username
      POSTGRES_PASSWORD: password
      POSTGRES_DB: hardware_development
  web:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
From what I've seen I need to add some configuration somewhere in my Dockerfile or the docker-compose.yml, but whatever I try either makes no difference or lands me back at the same error.
Following Docker's own docs I've been able to use Docker Compose to create a new Rails app with Postgres and see the "Yay! You're on Rails!" page, but with my own code I can't see anything. Running the app outside of Docker shows me the test page as well, so it's not the code within my Rails app or the Postgres environment outside of Docker.
Your db docker-compose entry isn't exposing any ports. It needs to expose 5432. Add a ports line for that just like you have for web.
Edit: also I don't know why you added restart: always to your database container, but I wouldn't recommend that for rails or pretty much anything.
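Concretely, a minimal sketch of the db service with the port published (and restart: always dropped, per the note above):
db:
  image: postgres:9.5
  ports:
    - "5432:5432"   # publish Postgres, just like web publishes 3000
  volumes:
    - ./tmp/db:/var/lib/postgresql/9.5/main
  environment:
    POSTGRES_USER: username
    POSTGRES_PASSWORD: password
    POSTGRES_DB: hardware_development
If the error still mentions the Unix domain socket, it is also worth checking that config/database.yml sets host: db, so Rails connects to the db service over TCP instead of looking for a local socket.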
The docker-compose.yml example contained in the docs works great for local development, where people probably run multiple services with Docker:
version: '3'
services:
  prisma:
    image: prismagraphql/prisma:__LATEST_PRISMA_VERSION__
    restart: always
    ports:
      - "4466:4466"
    environment:
      PRISMA_CONFIG: |
        managementApiSecret: __YOUR_MANAGEMENT_API_SECRET__
        port: __YOUR_PRISMA_SERVER_PORT__
        databases:
          default:
            connector: __YOUR_DATABASE_CONNECTOR__
            migrations: __ENABLE_DB_MIGRATIONS__
            host: __YOUR_DATABASE_HOST__
            port: __YOUR_DATABASE_PORT__
            user: __YOUR_DATABASE_USER__
            password: __YOUR_DATABASE_PASSWORD__
  mongo:
    image: mongo:__LATEST_PRISMA_VERSION__
    restart: always
    ports:
      - "27017:27017"
    volumes:
      - mongo:/var/lib/mongo
However, in production there is no database running in a container: we're using a DBaaS to host our database, and that database is Mongo.
Thus we need to switch from docker-compose up to docker run... for the Prisma server.
The problem we're facing is how to set all the environment variables needed for the Prisma container to run.
I see two possible options:
1. Pass down single variables to the container
2. Pass down just the variable PRISMA_CONFIG_PATH that points to a YAML config file, for example prisma_config.yml
Option 1
In the docker-compose.yml the PRISMA_CONFIG variable is passed as a multi-line string.
Searching on the Internet I found that the corresponding list of single variables should be:
PORT: $PORT
SCHEMA_MANAGER_SECRET: $SCHEMA_MANAGER_SECRET
SCHEMA_MANAGER_ENDPOINT: $SCHEMA_MANAGER_ENDPOINT
SQL_CLIENT_HOST_CLIENT1: $SQL_CLIENT_HOST
SQL_CLIENT_HOST_READONLY_CLIENT1: $SQL_CLIENT_HOST
SQL_CLIENT_HOST: $SQL_CLIENT_HOST
SQL_CLIENT_PORT: $SQL_CLIENT_PORT
SQL_CLIENT_USER: $SQL_CLIENT_USER
SQL_CLIENT_PASSWORD: $SQL_CLIENT_PASSWORD
SQL_CLIENT_CONNECTION_LIMIT: 10
SQL_INTERNAL_HOST: $SQL_INTERNAL_HOST
SQL_INTERNAL_PORT: $SQL_INTERNAL_PORT
SQL_INTERNAL_USER: $SQL_INTERNAL_USER
SQL_INTERNAL_PASSWORD: $SQL_INTERNAL_PASSWORD
SQL_INTERNAL_DATABASE: $SQL_INTERNAL_DATABASE
CLUSTER_ADDRESS: $CLUSTER_ADDRESS
SQL_INTERNAL_CONNECTION_LIMIT: 10
CLUSTER_PUBLIC_KEY: $CLUSTER_PUBLIC_KEY
BUGSNAG_API_KEY: ""
ENABLE_METRICS: "0"
JAVA_OPTS: "-Xmx1G"
All those variables are set in an env file and passed to docker run:
docker run -p 4466:4466 --env-file prisma.env prismagraphql/prisma:1.25
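One thing to watch: docker run --env-file expects plain KEY=value lines rather than the YAML-style key: value pairs listed above. A minimal prisma.env sketch under that assumption (all values are placeholders):
# prisma.env: one KEY=value per line, no quotes needed
PORT=4466
SQL_CLIENT_HOST=your-db-host.example.com
SQL_CLIENT_PORT=5432
SQL_CLIENT_USER=prisma
SQL_CLIENT_PASSWORD=secret
SQL_CLIENT_CONNECTION_LIMIT=10
CLUSTER_ADDRESS=http://localhost:4466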
They seem to work fine for SQL databases, but not for Mongo.
Either I set the wrong values or the variable names differ when using Mongo.
Option 2
Passing a single env var like PRISMA_CONFIG_PATH=prisma.config.yml when running the container:
docker run -p 4466:4466 -e PRISMA_CONFIG_PATH=prisma.config.yml prismagraphql/prisma:1.25
I'm getting the following error:
Exception in thread "main" java.lang.RuntimeException: Unable to load Prisma config: java.io.FileNotFoundException: prisma_config.yml (No such file or directory)
I do not know what the working directory for the Prisma Docker image is. I guess knowing that would solve it.
EDIT
I was able to make it work with Option #2:
docker run \
  -e PRISMA_CONFIG_PATH="/prisma.yml" \
  -v "$(pwd)/env/prisma_config.yml":"/prisma.yml" \
  -p 4466:4466 \
  prismagraphql/prisma:1.25
prisma_config.yml
port: 4466
databases:
  default:
    connector: mongo
    uri: __YOUR_MONGO_URI__
    database: __YOUR_MONGO_DB__
I am stuck trying to configure Docker volumes to share files between my host and my container so that the container can use these files. Let me explain.
I have a Rails Docker app with Puma as the web server. I want Puma to be able to see and use the SSL .key and .crt files. For this project I am also using docker-compose in "production mode", but I do not know how to make this work.
My setup is this:
The Ubuntu 18.04 production server host has the SSL files inside /home/ubuntu/my_app_keys; the containers also run on this host.
/home/ubuntu/docker-compose.yml
version: '3'
services:
  postgres:
    image: postgres:10.5
    environment:
      POSTGRES_DB: my_app_production
    env_file:
      - ~/production.env
  redis:
    image: redis:4.0.11
  web:
    image: my_app:latest
    command: bundle exec rails server -p 3000 -b 'ssl://127.0.0.1:3000?key=/home/ubuntu/my_app_keys/server.key&cert=/home/ubuntu/my_app_keys/server.crt' -e production
    ports:
      - '3000:3000'
    volumes:
      - /home/ubuntu/my_app_keys
    depends_on:
      - postgres
      - redis
    env_file:
      - ~/production.env
    restart: always
  sidekiq:
    image: my_app_sidekiq:latest
    command: bundle exec sidekiq -C config/sidekiq.yml
    depends_on:
      - postgres
      - redis
    env_file:
      - ~/production.env
    restart: always
So, as you can see, the command bundle exec rails server -p 3000 -b 'ssl://127.0.0.1:3000?key=/home/ubuntu/my_app_keys/server.key&cert=/home/ubuntu/my_app_keys/server.crt' is looking for the SSL files in /home/ubuntu/my_app_keys. When I execute docker-compose up, Puma cannot find the SSL files and exits with:
/usr/local/bundle/gems/puma-3.9.1/lib/puma/minissl.rb:180:in `key=': No such key file '/home/ubuntu/my_app_keys/server.key' (ArgumentError)
I think this is because key=/home/ubuntu/my_app_keys/server.key&cert=/home/ubuntu/my_app_keys/server.crt point to paths in the container context, but the cert and key are in my host context.
So I included a volume in docker-compose in order to bind-mount the files:
volumes:
  - /home/ubuntu/my_app_keys
but without luck, same error.
In the container context my app lives in the /var/www/my_app directory, so I tried to specify an absolute path (I imagined the problem was that the SSL files could not be shared because they were not in the same directory where my app lived), so I added, as the compose-file docs say:
volumes:
  - /home/ubuntu/my_app_keys:/var/www/my_app
and changed the command in the compose file to:
command: bundle exec rails server -p 3000 -b 'ssl://127.0.0.1:3000?key=server.key&cert=server.crt' -e
When I execute docker-compose up, my web service exits with the error:
web | Could not locate Gemfile or .bundle/ directory
The only way the web service runs is with the following (but then no SSL files exist in the container):
volumes:
  - /home/ubuntu/my_app_keys
So, I do not know what to do now. Any help?
When your Docker Compose YAML file says:
volumes:
  - /home/ubuntu/my_app_keys
It means, "make /home/ubuntu/my_app_keys in container space persist across restarts of the container; it will start off empty unless the Dockerfile did something special; it's not connected to any specific host content".
When you say:
volumes:
  - /home/ubuntu/my_app_keys:/var/www/my_app
It means, "totally replace the contents of /var/www/my_app in container space with the contents of /home/ubuntu/my_app_keys on the host". (The path names in host and container space don't need to be the same.)
As a bonus observation, when you say:
rails server -b 'ssl://127.0.0.1:3000?...'
It means, "only listen for inbound connections on port 3000 initiated from within this Docker container; don't accept any connections from outside the container at all, whether from the same physical host, other containers, or elsewhere."
I am running a Java app inside a Docker container which is supposed to connect to MySQL in another container. I have tried multiple options suggested in the forums, but nothing really works. Here is my Docker Compose file:
version: "3"
services:
app:
build:
context: ./
dockerfile: /src/main/docker/Dockerfile
image: app1
environment:
- DB_HOST=Imrans-MacBook-Pro.local
- DB_PORT=3306
ports:
- 8080:8080
networks:
- backend
depends_on:
- mysql
mysql:
image: mysql:5.7.20
hostname: mysql
environment:
- MYSQL_USER=root
- MYSQL_ALLOW_EMPTY_PASSWORD=yes
- MYSQL_DATABASE=app1
ports:
- 3306:3306
command: mysqld --lower_case_table_names=1 --skip-ssl --character_set_server=utf8 --explicit_defaults_for_timestamp
networks:
- backend
networks:
backend:
driver: bridge
Here DB_HOST=Imrans-MacBook-Pro.local is my laptop's name, which did not work. Some suggest that the container name can be used, so I tried DB_HOST=mysql, but that never worked either.
The only thing that works from time to time is passing the laptop's IP address, which is not what I want to do. So, what is a good way to create communication between those containers?
MySQL is running in a container, so there are two things that you should consider here:
If MySQL is running in a container then you will need to link the app container to the mysql container. This will allow them to talk to each other using Docker's inter-container communication. The containers talk to each other using hostnames to resolve their respective internal IP addresses. Later in my answer I will show you how to get the two containers to communicate with each other using a compose file.
The mysql container should make use of a Docker volume to store the database. This will allow you to store the database and related files on the file system of the host (the server or machine where the containers are running). The Docker volume will then be mounted as a directory in the container, so the container can read and write to a directory on the machine where the containers run. This means that even if the containers are all deleted or removed, the database data will still persist. Here is a nice beginner-friendly article on Docker volumes and using them with MySQL:
https://severalnines.com/blog/mysql-docker-containers-understanding-basics
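As a minimal sketch of that second point (the volume name db_data is an arbitrary choice; the standard MySQL image keeps its data under /var/lib/mysql):
services:
  mysql:
    image: mysql:5.7.20
    volumes:
      # named volume managed by Docker; the data survives container removal
      - db_data:/var/lib/mysql

volumes:
  db_data: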
Container communication using only docker without compose:
You have container "app" and "mysql", you want to be able to access "app" on localhost and you want "app" to be able to connect to mysql. How are you gonna do this?
1. You need to expose a port for container "app" so we can access it on localhost. The docker containers have their own internal network and it is closed to you unless you expose some ports with docker.
2. You need to link the "mysql" container to "app" without exposing "mysql"'s ports to the rest of the world.
This config should work for what you want to achieve:
version: "2"
services:
app:
build:
context: ./
dockerfile: /src/main/docker/Dockerfile
image: app1:latest
links:
- mysql
environment:
- DB_HOST=mysql
# This is the hostname that app will reach the mysql container on.
# If you do with app container:
# docker exec -it <app container id> bash
# # apt-get update -y && apt-get install iputils-ping -y
#
# Then you should be able to ping mysql container with:
#
# # ping -c 2 mysql
- DB_PORT=3306
ports:
- 8080:8080
# You will access "app" on localhost:8080 in your browser. If this is running on your own machine.
mysql: #hostname actually gets set here so no need to set it later
image: mysql:5.7.20
environment:
- MYSQL_USER=root
- MYSQL_ALLOW_EMPTY_PASSWORD=yes
- MYSQL_DATABASE=app1
# Remember to use a volume if you would like this container's data to persist or if you would like
# to restore a database backup.
command: mysqld --lower_case_table_names=1 --skip-ssl --character_set_server=utf8 --explicit_defaults_for_timestamp
Now you can just start it up with:
$ docker-compose up
If you ran this before then just make sure to run this first before running docker-compose up:
$ docker-compose down
Let me know if that helps.
I have, in the past, gotten this to work without explicitly setting the host networking part in Docker Compose. Because Docker images inside a Docker Compose File are put into a Docker Network with each other, you really shouldn't have to do anything to get this to work: by default you should be able to attach into the container for your Spring app and be able to ping mysql and have it work out.
DB host should be localhost or 127.0.0.1
I have found the official Sentry image on Docker Hub, but the documentation is incomplete and I can't set up the environment step by step.
We have to set up the database container first, but the docs don't say how to do that. Specifically, I don't know what username and password Sentry will use.
And I also get the following error when I run the sentry container:
sudo docker run --name some-sentry --link some-mysql:mysql -d sentry
e888fcf2976a9ce90f80b28bb4c822c07f7e0235e3980e2a33ea7ddeb0ff18ce
sudo docker logs some-sentry
Traceback (most recent call last):
  File "/usr/local/bin/sentry", line 9, in <module>
    load_entry_point('sentry==6.4.4', 'console_scripts', 'sentry')()
  File "/usr/local/lib/python2.7/site-packages/sentry/utils/runner.py", line 310, in main
    initializer=initialize_app,
  File "/usr/local/lib/python2.7/site-packages/logan/runner.py", line 167, in run_app
    configure_app(config_path=config_path, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/logan/runner.py", line 89, in configure_app
    raise ValueError("Configuration file does not exist at %r" % (config_path,))
ValueError: Configuration file does not exist at '/.sentry/sentry.conf.py'
UPDATE circa version 21
They don't seem to want to build the official image for us any more, as per the deprecation notice on Docker Hub. The good news: https://develop.sentry.dev/self-hosted/#getting-started supplies an install script and an official docker-compose setup. Kafka and ZooKeeper now seem to be required too, so follow the docs to stay up to date.
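A rough sketch of that flow, assuming the repository name and install script on the page above are still current (check the docs before relying on them):
# clone the self-hosted repository and run the bundled installer
git clone https://github.com/getsentry/self-hosted.git
cd self-hosted
./install.sh
# then bring the whole stack (web, workers, Kafka, ZooKeeper, ...) up
docker compose up -d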
This is a moving target. I suggest checking https://hub.docker.com/_/sentry/ for updates as their documentation is pretty good.
Circa version 8 you can easily convert those instructions to use docker-compose
docker-compose.yml
version: "2"
services:
redis:
image: redis:3.0.7
networks:
- sentry-net
postgres:
image: postgres:9.6.1
environment:
- POSTGRES_USER:sentry
- POSTGRES_PASSWORD:sentry
# volumes:
# - ./data:/var/lib/postgresql/data:rw
networks:
- sentry-net
sentry:
image: sentry:${SENTRY_TAG}
depends_on:
- redis
- postgres
environment:
- SENTRY_REDIS_HOST=redis
- SENTRY_SECRET_KEY=${SECRET}
- SENTRY_POSTGRES_HOST=postgres
ports:
- 9000:9000
networks:
- sentry-net
sentry_celery_beat:
image: sentry:${SENTRY_TAG}
depends_on:
- sentry
environment:
- SENTRY_REDIS_HOST=redis
- SENTRY_SECRET_KEY=${SECRET}
- SENTRY_POSTGRES_HOST=postgres
command: "sentry run cron"
networks:
- sentry-net
sentry_celery_worker:
image: sentry:${SENTRY_TAG}
depends_on:
- sentry
environment:
- SENTRY_REDIS_HOST=redis
- SENTRY_SECRET_KEY=${SECRET}
- SENTRY_POSTGRES_HOST=postgres
command: "sentry run worker"
networks:
- sentry-net
networks:
sentry-net:
.env
SENTRY_TAG=8.10.0
Run docker run --rm sentry:8.10.0 config generate-secret-key and add the secret
.env updated
SENTRY_TAG=8.10.0
SECRET=somelongsecretgeneratedbythetool
First boot:
docker-compose up -d postgres
docker-compose up -d redis
docker-compose run sentry sentry upgrade
Full boot
docker-compose up -d
Debug
docker-compose ps
docker-compose logs --tail=10
Take a look at the sentry.conf.py file that is part of the official sentry docker image. It gets a bunch of properties from the environment e.g. SENTRY_DB_NAME, SENTRY_DB_USER. Below is an excerpt from the file.
os.getenv('SENTRY_DB_PASSWORD')
or os.getenv('MYSQL_ENV_MYSQL_PASSWORD')
or os.getenv('MYSQL_ENV_MYSQL_ROOT_PASSWORD')
So as for your question about how to specify the database password: it must be set via environment variables. You can do this by running:
sudo docker run --name some-sentry --link some-mysql:mysql \
-e SENTRY_DB_USER=XXX \
-e SENTRY_DB_PASSWORD=XXX \
-d sentry
As for your issue with the exception: you seem to be missing a config file (Configuration file does not exist at '/.sentry/sentry.conf.py'). That file is copied to /home/user/.sentry/sentry.conf.py inside the container. I am not sure why your Sentry install is looking for it at /.sentry/sentry.conf.py; there may be an environment variable or a setting that controls this, or it may just be a bug in the container.
This works for me: https://github.com/slafs/sentry-docker, and with it we don't have to set up the database or other services ourselves. I will learn more about the configuration in detail later.
Here is my docker-compose.yml, using the official image from https://hub.docker.com/_/sentry/:
https://gist.github.com/ebuildy/270f4ef3abd41e1490c1
Run:
docker-compose -p sw up -d
docker exec -ti sw_sentry_1 sentry upgrade
That's it!