I'm going through the getting-started tutorial (https://www.docker.com/101-tutorial - Docker Desktop) from Docker, and it includes this docker-compose file:
version: "3.7"
services:
app:
image: node:12-alpine
command: sh -c "yarn install && yarn run dev"
ports:
- 3000:3000
working_dir: /app
volumes:
- ./:/app
environment:
MYSQL_HOST: mysql
MYSQL_USER: root
MYSQL_PASSWORD: secret
MYSQL_DB: todos
mysql:
image: mysql:5.7
volumes:
- todo-mysql-data:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: secret
MYSQL_DATABASE: todos
volumes:
todo-mysql-data:
The problem is that MySQL is not creating the "todos" database.
My application then can't connect to it, giving me this error:
app_1 | Error: ER_HOST_NOT_PRIVILEGED: Host '172.26.0.2' is not allowed to connect to this MySQL server
app_1 | at Handshake.Sequence._packetToError (/app/node_modules/mysql/lib/protocol/sequences/Sequence.js:47:14)
app_1 | at Handshake.ErrorPacket (/app/node_modules/mysql/lib/protocol/sequences/Handshake.js:123:18)
app_1 | at Protocol._parsePacket (/app/node_modules/mysql/lib/protocol/Protocol.js:291:23)
app_1 | at Parser._parsePacket (/app/node_modules/mysql/lib/protocol/Parser.js:433:10)
app_1 | at Parser.write (/app/node_modules/mysql/lib/protocol/Parser.js:43:10)
app_1 | at Protocol.write (/app/node_modules/mysql/lib/protocol/Protocol.js:38:16)
app_1 | at Socket.<anonymous> (/app/node_modules/mysql/lib/Connection.js:91:28)
app_1 | at Socket.<anonymous> (/app/node_modules/mysql/lib/Connection.js:525:10)
app_1 | at Socket.emit (events.js:310:20)
app_1 | at addChunk (_stream_readable.js:286:12)
app_1 | --------------------
app_1 | at Protocol._enqueue (/app/node_modules/mysql/lib/protocol/Protocol.js:144:48)
app_1 | at Protocol.handshake (/app/node_modules/mysql/lib/protocol/Protocol.js:51:23)
app_1 | at PoolConnection.connect (/app/node_modules/mysql/lib/Connection.js:119:18)
app_1 | at Pool.getConnection (/app/node_modules/mysql/lib/Pool.js:48:16)
app_1 | at Pool.query (/app/node_modules/mysql/lib/Pool.js:202:8)
app_1 | at /app/src/persistence/mysql.js:35:14
app_1 | at new Promise (<anonymous>)
app_1 | at Object.init (/app/src/persistence/mysql.js:34:12)
app_1 | at processTicksAndRejections (internal/process/task_queues.js:97:5) {
app_1 | code: 'ER_HOST_NOT_PRIVILEGED',
app_1 | errno: 1130,
app_1 | sqlMessage: "Host '172.26.0.2' is not allowed to connect to this MySQL server",
app_1 | sqlState: undefined,
app_1 | fatal: true
app_1 | }
If I run this command alone to spin up MySQL, the "todos" database is created:
docker run -d --network todo-app --network-alias mysql -v todo-mysql-data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=todos mysql:5.7
Is there any command that was updated, or that doesn't work properly on Windows with docker-compose?
TL;DR:
Run the command
docker-compose down --volumes
to remove any problematic volume created during the tutorial's early phases, then resume your tutorial at the step Running our Application Stack.
I suppose that the tutorial you are following is this one.
If you followed it piece by piece and tried some docker-compose up -d in step 1 or 2, then you've probably created a volume without your todos database.
Just running docker-compose down with your existing docker-compose.yml won't suffice, because persistence is exactly what volumes are made for: the volume is Docker's permanent storage layer.
By default all files created inside a container are stored on a writable container layer. This means that:
The data doesn’t persist when that container no longer exists, and it can be difficult to get the data out of the container if another process needs it.
A container’s writable layer is tightly coupled to the host machine where the container is running. You can’t easily move the data somewhere else.
Writing into a container’s writable layer requires a storage driver to manage the filesystem. The storage driver provides a union filesystem, using the Linux kernel. This extra abstraction reduces performance as compared to using data volumes, which write directly to the host filesystem.
Docker has two options for containers to store files in the host machine, so that the files are persisted even after the container stops: volumes, and bind mounts. If you’re running Docker on Linux you can also use a tmpfs mount. If you’re running Docker on Windows you can also use a named pipe.
Source: https://docs.docker.com/storage/
In order to remove that volume you probably created without your database, there is an extra flag you can add to docker-compose down: the flag --volumes, or -v for short.
-v, --volumes    Remove named volumes declared in the `volumes`
                 section of the Compose file and anonymous volumes
                 attached to containers.
Source: https://docs.docker.com/compose/reference/down/
So your fix should be as simple as:
docker-compose down --volumes
docker-compose up -d, which puts you back in the tutorial at the step Running our Application Stack
docker-compose logs -f as prompted in the rest of the tutorial
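Putting those steps together, a minimal sketch of the whole recovery sequence, run from the directory containing your docker-compose.yml:

# Stop the stack and delete its named volumes, including todo-mysql-data
docker-compose down --volumes

# Recreate everything; MySQL re-initializes and creates the todos database
docker-compose up -d

# Follow the logs to confirm the app connects
docker-compose logs -f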
Currently, your todos database is created inside your mysql container when you launch docker-compose up.
In fact, your issue comes from MySQL user permissions.
Add the line below at the end of the file that initializes the todos database:
CREATE USER 'newuser'@'%' IDENTIFIED BY 'user_password';
That line creates a user newuser and gives it access from any host (%) with the password user_password.
Follow it with this line:
GRANT ALL PRIVILEGES ON *.* TO 'newuser'@'%';
It grants newuser all permissions on every database and every table you have, from any host.
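As a hedged aside: CREATE USER and GRANT statements take effect immediately, but if you ever apply changes by editing the grant tables directly, you must also reload them:

FLUSH PRIVILEGES;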
Finally, change your app's MYSQL_USER and MYSQL_PASSWORD environment variables to the new user you just created:
version: "3.7"
services:
app:
image: node:12-alpine
command: sh -c "yarn install && yarn run dev"
ports:
- 3000:3000
working_dir: /app
volumes:
- ./:/app
environment:
MYSQL_HOST: mysql
MYSQL_USER: newuser
MYSQL_PASSWORD: user_password
MYSQL_DB: todos
mysql:
image: mysql:5.7
volumes:
- todo-mysql-data:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: secret
MYSQL_DATABASE: todos
volumes:
todo-mysql-data:
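If you don't already have such an init file, note that the official mysql image executes any *.sql, *.sh, or *.sql.gz files placed in /docker-entrypoint-initdb.d/ the first time the data directory is initialized. A minimal sketch, assuming you save the two statements above as ./db-init/create-user.sql (the directory and file names are hypothetical):

mysql:
  image: mysql:5.7
  volumes:
    - todo-mysql-data:/var/lib/mysql
    # Any .sql files here run once, on first initialization only
    - ./db-init:/docker-entrypoint-initdb.d:ro
  environment:
    MYSQL_ROOT_PASSWORD: secret
    MYSQL_DATABASE: todos

Because these scripts only run against an empty data directory, run docker-compose down --volumes first so MySQL re-initializes.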
I have this docker compose file:
version: "2.4"
services:
mysql:
image: mysql:8.0
environment:
- MYSQL_ROOT_PASSWORD=mypasswd
healthcheck:
test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
timeout: 20s
retries: 10
phpmyadmin:
image: phpmyadmin/phpmyadmin:latest
ports:
- 8080:80
environment:
- PMA_HOST=mysql
depends_on:
mysql:
condition: service_healthy
app:
stdin_open: true
tty: true
build:
context: .
dockerfile: Dockerfile.dev
volumes:
- ./src:/usr/app/src
depends_on:
mysql:
condition: service_healthy
The app service is just a node image running some tests with jest. The CMD of that image is jest --watchAll
I would like it to be interactive and respond to my key presses, but I cannot get it to work. This is the output I get when I spin up the containers with docker-compose up:
PASS src/test.test.ts
Can connect to the database
✓ Can connect to the database (1 ms)
app_1 |
Test Suites: 1 passed, 1 total
app_1 | Tests: 1 passed, 1 total
app_1 | Snapshots: 0 total
app_1 | Time: 0.314 s
app_1 | Ran all test suites.
app_1 |
app_1 | Watch Usage
app_1 | › Press f to run only failed tests.
app_1 | › Press o to only run tests related to changed files.
app_1 | › Press p to filter by a filename regex pattern.
app_1 | › Press t to filter by a test name regex pattern.
app_1 | › Press q to quit watch mode.
app_1 | › Press Enter to trigger a test run.
aaaaaaaaaaaaaaaffffffffooooooo
ppppp
p
As you can see, it's ignoring my key presses, and just appends the letters to the output.
You can run your test suite from the host, connecting to a database running in Docker.
You need to add ports: to your database container to make it accessible from outside Docker:
services:
  mysql:
    ports:
      # The first number can be any unused port on your host.
      # The second number MUST be the standard MySQL port 3306.
      - '3306:3306'
You don't show how you configure your application to connect to the database, but you will need to set something like MYSQL_HOST=127.0.0.1 (required; the standard MySQL libraries misinterpret localhost to mean "a Unix socket") and MYSQL_PORT=3306 (the first number from ports:; optional if it is the default 3306, required otherwise).
Once you've done this, you can run your tests:
# Start the database, but not the application
docker-compose up -d mysql
# Run the tests from the host, outside of Docker
npx jest --watchAll
This last command is a totally normal test-runner invocation. You do not need to do anything to cause source code to sync with the test runner or to pass keypresses through, because you are actually running your local source code with a local process.
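For instance, assuming your test setup reads those two variables (the names follow the discussion above; adjust them to whatever your code actually reads):

MYSQL_HOST=127.0.0.1 MYSQL_PORT=3306 npx jest --watchAll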
What you're expecting won't work this way: that output is just the aggregated logs of your services, and with docker-compose up -d it is no longer shown at all.
To interact with your service, you must get inside its container:
docker exec -it [CONTAINER_NAME] bash
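Alternatively, since the compose file above already sets stdin_open: true and tty: true on the app service, attaching to the container should forward your key presses to the jest watcher (the service name app comes from the compose file; detach with Ctrl-p Ctrl-q rather than Ctrl-C, which would stop the container):

docker attach $(docker-compose ps -q app)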
I want to build a web application using Docker (I'm a Docker beginner). My application requires PostgreSQL, therefore I decided to use docker-compose.
The docker-compose.yaml looks like:
version: '3.8'
services:
  db:
    image: postgres
    command: "postgres -c listen_addresses='*'"
    environment:
      - POSTGRES_DB=CatsQMS
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=12345qwerty
  application:
    build: .
    ports:
      - "8080:8080"
    depends_on:
      - db
Also I have the file config.yaml which configures my web-application, it looks like this:
database_config:
  host: db
  user: postgres
  password: 12345qwerty
  port: 5432
  database: CatsQMS
  # Unimportant stuff
And when I launch it with docker-compose up, the output freezes at this point:
Recreating cats_queue_management_system_db_1 ... done
Recreating cats_queue_management_system_application_1 ... done
Attaching to cats_queue_management_system_db_1, cats_queue_management_system_application_1
db_1 |
db_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
db_1 |
db_1 | 2020-05-20 13:42:51.628 UTC [1] LOG: starting PostgreSQL 12.3 (Debian 12.3-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2020-05-20 13:42:51.628 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2020-05-20 13:42:51.628 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2020-05-20 13:42:51.635 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2020-05-20 13:42:51.660 UTC [24] LOG: database system was shut down at 2020-05-20 13:39:45 UTC
db_1 | 2020-05-20 13:42:51.673 UTC [1] LOG: database system is ready to accept connections
Perhaps it's important: here is my Dockerfile:
FROM python:3.8
RUN mkdir /docker-entrypoint-initdb.d/
COPY ./application/sources/database/init.sql /docker-entrypoint-initdb.d/
RUN mkdir /app
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt
RUN python3 setup.py develop
ENTRYPOINT ["start_app", "-d"]
Where have I made a mistake?
Your application and PostgreSQL database are running and should work as expected. Docker Compose simply attached to the containers' output after starting them.
You can avoid this by using the option -d or --detach on docker-compose up:
The docker-compose up command aggregates the output of each container (essentially running docker-compose logs -f). When the command exits, all containers are stopped. Running docker-compose up -d starts the containers in the background and leaves them running.
So your command looks like this:
docker-compose up -d
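If you still want to see the containers' output after detaching, you can follow the aggregated logs at any time, optionally narrowing to one service:

docker-compose logs -f
docker-compose logs -f db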
I don't think the issue is with the database container. It looks like it starts up normally and is listening for connections.
What is the ENTRYPOINT in your Dockerfile executing, "start_app"?
The -d argument looks like it may be starting the website as a daemon (background process) which doesn't give docker a process to hook into.
Maybe try
ENTRYPOINT ["start_app"]
Is the issue that your web app can't talk to the database? If so, it might be that a) they don't share a network, and b) you're trying to connect on port 5432, but the postgres db container is not exposing or publishing this port.
I've added a few things to your docker-compose file that might make it work...
version: '3.8'
networks:
  test:
services:
  db:
    image: postgres
    command: "postgres -c listen_addresses='*'"
    environment:
      - POSTGRES_DB=CatsQMS
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=12345qwerty
    ports:
      - 5432:5432
    networks:
      - test
  application:
    build: .
    ports:
      - 8080:8080
    networks:
      - test
    depends_on:
      - db
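As an aside, even without an explicit network, Compose creates a default network that every service joins, so db should already be resolvable by name. You can inspect what was actually created (the network name is prefixed with the Compose project name, which from the logs above appears to be cats_queue_management_system):

docker network ls
docker network inspect cats_queue_management_system_default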
I have an Nginx container set up which serves assets for a static website. The idea is for the webserver to always stay up, and overwrite the assets whenever they are recompiled. Currently the docker setup looks like this:
docker-compose.yml:
version: '3'
services:
  web:
    build: ./app
    volumes:
      - site-assets:/app/dist:ro
  nginx:
    build: ./nginx
    ports:
      - 80:80
      - 443:443
    volumes:
      - site-assets:/app:ro
      - https-certs:/etc/nginx/certs:ro
    depends_on:
      - web
volumes:
  site-assets:
  https-certs:
Web (asset-builder) Dockerfile:
FROM node:latest
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY ./ .
RUN npm run generate
Nginx Dockerfile:
FROM nginx:latest
RUN mkdir /app
COPY nginx.conf /etc/nginx/nginx.conf
The certbot container is managed separately and is not relevant to the problem I'm having, but the Nginx container does need to be able to mount the https-certs volume.
This setup seemed good, until I realized the site-assets volume would not be updated after first creation. The volume would need to be destroyed and re-created on each app deployment for this to work, requiring the Nginx container to be stopped to unmount the volume. So much for that approach.
Is there a way to manage application data in this setup without bringing the Nginx container down? Preferably, I would want to do this declaratively with a docker-compose file, avoid multiple application instances as this doesn't need to scale, and avoid using docker inspect to find the volume on the filesystem and modify it directly.
I hope there is a sane answer to this other than "It's a static site, why aren't you using Netlify or GitHub Pages?" :)
Here is an example that moves your npm run generate from image build time to container run time. It is a minimal example to illustrate how running the process at container run time makes the volume's contents available both to already-running containers and to future ones.
With the following docker-compose.yml:
version: '3'
services:
  web:
    image: ubuntu
    volumes:
      - site-assets:/app/dist
    command: bash -c "echo initial > /app/dist/file"
    restart: "no"
  nginx:
    image: ubuntu
    volumes:
      - site-assets:/app:ro
    command: bash -c "while true; do cat /app/file; sleep 5; done"
volumes:
  site-assets:
We can launch it with docker-compose up in a terminal. Our nginx server will initially miss the data but the initial web service will launch and generate our asset (with contents initial):
❯ docker-compose up
Creating network "multivol_default" with the default driver
Creating volume "multivol_site-assets" with default driver
Creating multivol_web_1 ... done
Creating multivol_nginx_1 ... done
Attaching to multivol_nginx_1, multivol_web_1
nginx_1 | cat: /app/file: No such file or directory
multivol_web_1 exited with code 0
nginx_1 | initial
nginx_1 | initial
nginx_1 | initial
nginx_1 | initial
In another terminal we can update our asset (your npm run generate command):
❯ docker-compose run web bash -c "echo updated > /app/dist/file"
And now we can see our nginx service serving the updated content:
❯ docker-compose up
Creating network "multivol_default" with the default driver
Creating volume "multivol_site-assets" with default driver
Creating multivol_web_1 ... done
Creating multivol_nginx_1 ... done
Attaching to multivol_nginx_1, multivol_web_1
nginx_1 | cat: /app/file: No such file or directory
multivol_web_1 exited with code 0
nginx_1 | initial
nginx_1 | initial
nginx_1 | initial
nginx_1 | initial
nginx_1 | updated
nginx_1 | updated
nginx_1 | updated
nginx_1 | updated
^CGracefully stopping... (press Ctrl+C again to force)
Stopping multivol_nginx_1 ... done
Hope this was helpful to illustrate a way to take advantage of volume mounting at container run time.
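Applied back to the original compose file, a rough sketch of the same idea (assuming npm run generate writes to /app/dist, and that RUN npm run generate is dropped from the web Dockerfile; note the web service's mount must not be :ro):

web:
  build: ./app
  volumes:
    - site-assets:/app/dist
  command: npm run generate
  restart: "no"

Each deployment then becomes docker-compose run web (or docker-compose up web), which refreshes the volume contents while the nginx container keeps running.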
How can I share users between services in docker-compose? I could create a volume and mount it at the container's /etc/ directory, but that would hide the other files/directories there. Is there a smarter way to achieve this?
You could use a named volume plus bind mounts to share one container's passwd and group files with another container.
Here is an example.
First, without any volumes, verify that there is originally no mysql user in the test service:
docker_compose.yaml:
version: "3"
services:
db:
image: mysql
command: --default-authentication-plugin=mysql_native_password
restart: always
environment:
MYSQL_ROOT_PASSWORD: example
test:
image: ubuntu:16.04
command: id mysql
depends_on:
- db
Execute it:
$ docker-compose up
Creating network "23_default" with the default driver
Creating 23_db_1 ... done
Creating 23_test_1 ... done
Attaching to 23_db_1, 23_test_1
test_1 | id: 'mysql': no such user
db_1 | Initializing database
23_test_1 exited with code 1
From the above, you can see the container from ubuntu:16.04 does not have the user mysql, which is a default user in the mysql image:
test_1 | id: 'mysql': no such user
Now use volumes to make the user mysql visible to the test container:
docker_compose.yaml:
version: "3"
services:
db:
image: mysql
command: --default-authentication-plugin=mysql_native_password
restart: always
environment:
MYSQL_ROOT_PASSWORD: example
volumes:
- my_etc:/etc
test:
image: ubuntu:16.04
command: id mysql
depends_on:
- db
volumes:
- /tmp/etc-data/passwd:/etc/passwd
- /tmp/etc-data/group:/etc/group
volumes:
my_etc:
driver: local
driver_opts:
type: 'none'
o: 'bind'
device: '/tmp/etc-data'
Execute as follows. NOTE: we need to create /tmp/etc-data before up:
$ mkdir -p /tmp/etc-data
$ docker-compose up
Creating network "23_default" with the default driver
Creating 23_db_1 ... done
Creating 23_test_1 ... done
Attaching to 23_db_1, 23_test_1
db_1 | Initializing database
test_1 | uid=999(mysql) gid=999(mysql) groups=999(mysql)
23_test_1 exited with code 0
From the above, you can see the test service now has the user mysql:
test_1 | uid=999(mysql) gid=999(mysql) groups=999(mysql)
A little explanation:
The solution above first uses a named volume to pop the /etc folder of the first container out to /tmp/etc-data on the Docker host, then the second container uses bind mounts to mount passwd and group into itself separately. As you can see, the second container mounts just those two files (passwd, group), so it won't hide any other files.
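If you want to double-check the host-side copy that the named volume produced, the files land at the paths used in the example above:

ls -l /tmp/etc-data/passwd /tmp/etc-data/group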
You can also mount just a single file in a Docker container:
volumes:
  - /etc/mysql.cnf:/etc/mysql.cnf
I have a docker-compose stack launched on a remote machine, through gitlab CI/CD (a runner connects to the docker engine on the remote machine and performs the deploy with docker-compose up -d).
When I connect to that machine from my laptop, using eval docker-machine env REMOTE_ADDRESS, I can see the docker processes running (with docker ps), while the services stack appears to be empty (docker-compose ps).
I am not able to use docker-compose down to stop the stack, and trying docker-compose up -d gives me the error
ERROR: for feamp_postgres Cannot create container for service postgres: Conflict. The container name "/feamp_postgres" is already in use by container "40586885...". You have to remove (or rename) that container to be able to reuse that name.
The reverse is also true, I can start the stack from my local laptop (using docker-machine), but then the CI/CD pipeline fails when trying to execute docker-compose up -d with the same error.
This happens using the latest versions of docker and docker-compose, both on the laptop (OSX) and on the runner (ubuntu 18.04).
In other circumstances (~10 other projects) this has worked smoothly.
This is the docker-compose.yml file I am using.
version: "3.7"
services:
web:
container_name: feamp_web
restart: always
image: guglielmo/fpa/feamp:latest
expose:
- "8000"
links:
- postgres:postgres
- redis:redis
environment:
- ...
volumes:
- public:/app/public
- data:/app/data
- uwsgi_spooler:/var/lib/uwsgi
- weblogs:/var/log
command: /usr/local/bin/uwsgi --socket=:8000 ...
nginx:
container_name: feamp_nginx
restart: always
...
postgres:
container_name: feamp_postgres
restart: always
image: postgres:11-alpine
...
redis:
container_name: feamp_redis
restart: always
image: redis:latest
...
volumes:
...
networks:
default:
external:
name: webproxy
Normally I can bring the stack up from my local laptop and manage it from the CI/CD pipeline on GitLab, or vice versa.
This diagram should help visualise the situation.
                           +-----------------+
                           |                 |
                           |  Remote server  |
                           |                 |
                           +----|--------|---+
                                |        |
                                |        |
              docker-compose ps |        | docker-compose up -d
                                |        |
                                |        |
+-------------------+           |        |         +--------------------+
|                   |           |        |         |                    |
|  Docker client 1  ------------+        +----------  Docker client 2   |
|                   |                               |                    |
+-------------------+                               +--------------------+
Connections to the remote server's docker engine are made through docker-machine.
It appears that specifying the project name when invoking docker-compose commands solves the issue.
This can be done using the -p parameter in the command line or the COMPOSE_PROJECT_NAME environment variable, on both clients.
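For example (the project name feamp is inferred from the container names above):

# Explicit project name on the command line
docker-compose -p feamp up -d

# Or via the environment, e.g. in .env or a CI/CD variable
export COMPOSE_PROJECT_NAME=feamp
docker-compose up -d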
For some reason, this was not needed in previous projects.
It may be a change in Docker (I moved from 18 to 19), or something else; I still do not know the details.
Instead of using docker-compose ps you can try docker ps -a and work from there.
Assuming you are ok simply discarding the containers, you can brute-force removal by calling:
docker rm -f 40586885
docker network rm webproxy
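Or, to remove every container from this stack in one go (a sketch; the feamp name filter matches the container_name values in the compose file):

docker ps -aq --filter "name=feamp" | xargs docker rm -f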
In my case, the problem was the project name in the .env file.
(Compose file version 3.7, latest Docker as of 2022-07-17.)
COMPOSE_PROJECT_NAME = demo.sitedomain-testing.com
The Compose project name does not support dots (.).
I just replaced them with hyphens and it worked well.
COMPOSE_PROJECT_NAME = demo-sitedomain-testing-com
In addition, container names do not support dots either.