I have this docker compose file:
version: "2.4"
services:
mysql:
image: mysql:8.0
environment:
- MYSQL_ROOT_PASSWORD=mypasswd
healthcheck:
test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
timeout: 20s
retries: 10
phpmyadmin:
image: phpmyadmin/phpmyadmin:latest
ports:
- 8080:80
environment:
- PMA_HOST=mysql
depends_on:
mysql:
condition: service_healthy
app:
stdin_open: true
tty: true
build:
context: .
dockerfile: Dockerfile.dev
volumes:
- ./src:/usr/app/src
depends_on:
mysql:
condition: service_healthy
The app service is just a node image running some tests with jest. The CMD of that image is jest --watchAll
I would like it to be interactive and respond to my key presses, but I cannot get it to work. This is the output I get when I spin up the containers with docker-compose up:
PASS src/test.test.ts
Can connect to the database
✓ Can connect to the database (1 ms)
app_1 |
Test Suites: 1 passed, 1 total
app_1 | Tests: 1 passed, 1 total
app_1 | Snapshots: 0 total
app_1 | Time: 0.314 s
app_1 | Ran all test suites.
app_1 |
app_1 | Watch Usage
app_1 | › Press f to run only failed tests.
app_1 | › Press o to only run tests related to changed files.
app_1 | › Press p to filter by a filename regex pattern.
app_1 | › Press t to filter by a test name regex pattern.
app_1 | › Press q to quit watch mode.
app_1 | › Press Enter to trigger a test run.
aaaaaaaaaaaaaaaffffffffooooooo
ppppp
p
As you can see, it's ignoring my key presses and just appending the letters to the output.
You can run your test suite from the host, connecting to a database running in Docker.
You need to add ports: to your database container to make it accessible from outside Docker:
services:
  mysql:
    ports:
      # The first number can be any unused port on your host.
      # The second number MUST be the standard MySQL port 3306.
      - '3306:3306'
You don't show how you configure your application to connect to the database, but you will need to set something like MYSQL_HOST=127.0.0.1 (required; the standard MySQL libraries misinterpret localhost to mean "a Unix socket") and MYSQL_PORT=3306 (the first number from ports:; optional if it is the default 3306, required otherwise).
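As a concrete sketch (the variable names and the .env file are assumptions; use whatever names your test setup actually reads), the host-side configuration could look like:
# .env read by the tests on the host
MYSQL_HOST=127.0.0.1
MYSQL_PORT=3306
MYSQL_USER=root
MYSQL_PASSWORD=mypasswd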
Once you've done this, you can run your tests:
# Start the database, but not the application
docker-compose up -d mysql
# Run the tests from the host, outside of Docker
npx jest --watchAll
This last command is a totally normal test-runner invocation. You do not need to do anything to cause source code to sync with the test runner or to pass keypresses through, because you are actually running your local source code with a local process.
That output is not an interactive session: it is just docker-compose showing you the aggregated logs of your services, and with docker-compose up -d it is not shown at all. To interact with a service you have to get inside its container:
docker exec -it [CONTAINER_NAME] bash
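For example, assuming Compose named the container something like myproject_app_1 (check docker-compose ps for the real name), either of the following gives jest a terminal it can actually read key presses from:
# Attach your terminal to the already-running app container
docker attach myproject_app_1
# Or open a shell in it and start jest yourself
docker exec -it myproject_app_1 sh
npx jest --watchAll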
Related
I am running a Rails 7 app through docker-compose. When I try to use binding.break in the code, my attached terminal shows something like the following:
web | Started POST "/media_uploads" for 172.19.0.1 at 2022-07-02 20:57:26 +0000
web | Cannot render console from 172.19.0.1! Allowed networks: 127.0.0.0/127.255.255.255, ::1
web | [29, 38] in /web/config/initializers/rack_attack.rb
web | 29| end
web | 30|
web | 31| # Intended to prevent bulk upload overloading, but may have other consequences
web | 32| throttle('posts/ip', limit: 1, period: 1) do |req|
web | 33| if req.post?
web | => 34| binding.break
web | 35| req.ip
web | 36| end
web | 37| end
web | 38| ### Prevent Brute-Force Login Attacks ###
web | =>#0 block {|req=#<Rack::Attack::Request:0x00007fa9990ffc...|} in <class:Attack> at /web/config/initializers/rack_attack.rb:34
web | #1 Rack::Attack::Throttle#discriminator_for(request=#<Rack::Attack::Request:0x00007fa9990ffc...) at /usr/local/bundle/gems/rack-attack-6.6.1/lib/rack/attack/throttle.rb:53
web | # and 50 frames (use `bt' command for all frames)
but doesn't provide me with an input buffer to enter commands. I have to kill the process in order to do anything on the server. My docker-compose includes
tty: true
stdin_open: true
but it still doesn't work. Any ideas what to try?
For example, I have this docker-compose.yml:
version: '3.3'
services:
  db:
    image: postgres
    container_name: anywork_db
    env_file:
      - .db.env
    ports:
      - "5432:5432"
  app:
    image: anywork:latest
    entrypoint:
      - bash
      - entrypoint.sh
    build: .
    container_name: anywork_app
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/AnyWork
    env_file:
      - .app.env
    links:
      - db
    ports:
      - "3000:3000"
    stdin_open: true
    tty: true
    depends_on:
      - db
    # command: puma
    # sudo chown -R $USER:$USER .
To debug, attach to the app container by its container_name with docker attach. For example: docker attach anywork_app. That should work for you.
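One caveat not mentioned above: in an attached terminal, Ctrl-C is forwarded to the Rails process and will usually stop the container. You can detach without stopping it with the default Ctrl-p Ctrl-q sequence, or pick your own detach key (the key choice below is just an example):
# Attach to the Rails container; detach later with Ctrl-p Ctrl-q
docker attach anywork_app
# Or choose an explicit detach key sequence
docker attach --detach-keys="ctrl-x" anywork_app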
I have the following docker compose file:
version: '3.9'
services:
  db-production:
    container_name: mysql-production
    image: mysql:latest
    restart: always
    environment:
      MYSQL_HOST: localhost
      MYSQL_DATABASE: dota2learning-db
      MYSQL_ROOT_PASSWORD: toor
    ports:
      - "3306:3306"
    volumes:
      - ./data/db-production:/home/db-production
  db-testing:
    container_name: mysql-testing
    image: mysql:latest
    restart: always
    environment:
      MYSQL_HOST: localhost
      MYSQL_ROOT_PASSWORD: toor
    ports:
      - "3307:3306"
    volumes:
      - ./data/db-testing:/home/db-testing
volumes:
  data:
I also have a SQL script containing a dump of my database. The problem is that Docker takes a long time to start MySQL, so the script doesn't work.
I tried adding the following command to the docker compose file:
command: mysql --user=root --password=toor dota2learning-db < /home/db-production/dumb-db-production.sql
This command does not work because it tries to run before the MySQL server is ready.
I know this because as soon as I created the container I went into it, tried to log into MySQL, and it wasn't available yet:
sudo docker exec -it mysql-production bash
on container:
mysql --user=root --password=toor
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
I also tried to start MySQL manually:
root@4c91b5407561:/# mysqld start
2022-06-20T14:56:18.448123Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.29) starting as process 97
2022-06-20T14:56:18.451281Z 0 [ERROR] [MY-010123] [Server] Fatal error: Please read "Security" section of the manual to find out how to run mysqld as root!
2022-06-20T14:56:18.451346Z 0 [ERROR] [MY-010119] [Server] Aborting
2022-06-20T14:56:18.451514Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.29) MySQL Community Server - GPL.
That is, adding the following command to docker compose doesn't work:
command: mysqld start
NOTE:
But I know that if I wait 1 or 2 minutes MySQL will be available to run the script. However, I want to run this script automatically, not manually.
When I add the commands to the docker compose file, the container keeps restarting forever, because it keeps trying to execute them while MySQL is not available yet.
I'm going through the getting-started tutorial (https://www.docker.com/101-tutorial - Docker Desktop) from Docker, and it has this docker-compose:
version: "3.7"
services:
app:
image: node:12-alpine
command: sh -c "yarn install && yarn run dev"
ports:
- 3000:3000
working_dir: /app
volumes:
- ./:/app
environment:
MYSQL_HOST: mysql
MYSQL_USER: root
MYSQL_PASSWORD: secret
MYSQL_DB: todos
mysql:
image: mysql:5.7
volumes:
- todo-mysql-data:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: secret
MYSQL_DATABASE: todos
volumes:
todo-mysql-data:
The problem is that MySQL is not creating the "todos" database.
And then my application can't connect to it giving me this error:
app_1 | Error: ER_HOST_NOT_PRIVILEGED: Host '172.26.0.2' is not allowed to connect to this MySQL server
app_1 | at Handshake.Sequence._packetToError (/app/node_modules/mysql/lib/protocol/sequences/Sequence.js:47:14)
app_1 | at Handshake.ErrorPacket (/app/node_modules/mysql/lib/protocol/sequences/Handshake.js:123:18)
app_1 | at Protocol._parsePacket (/app/node_modules/mysql/lib/protocol/Protocol.js:291:23)
app_1 | at Parser._parsePacket (/app/node_modules/mysql/lib/protocol/Parser.js:433:10)
app_1 | at Parser.write (/app/node_modules/mysql/lib/protocol/Parser.js:43:10)
app_1 | at Protocol.write (/app/node_modules/mysql/lib/protocol/Protocol.js:38:16)
app_1 | at Socket.<anonymous> (/app/node_modules/mysql/lib/Connection.js:91:28)
app_1 | at Socket.<anonymous> (/app/node_modules/mysql/lib/Connection.js:525:10)
app_1 | at Socket.emit (events.js:310:20)
app_1 | at addChunk (_stream_readable.js:286:12)
app_1 | --------------------
app_1 | at Protocol._enqueue (/app/node_modules/mysql/lib/protocol/Protocol.js:144:48)
app_1 | at Protocol.handshake (/app/node_modules/mysql/lib/protocol/Protocol.js:51:23)
app_1 | at PoolConnection.connect (/app/node_modules/mysql/lib/Connection.js:119:18)
app_1 | at Pool.getConnection (/app/node_modules/mysql/lib/Pool.js:48:16)
app_1 | at Pool.query (/app/node_modules/mysql/lib/Pool.js:202:8)
app_1 | at /app/src/persistence/mysql.js:35:14
app_1 | at new Promise (<anonymous>)
app_1 | at Object.init (/app/src/persistence/mysql.js:34:12)
app_1 | at processTicksAndRejections (internal/process/task_queues.js:97:5) {
app_1 | code: 'ER_HOST_NOT_PRIVILEGED',
app_1 | errno: 1130,
app_1 | sqlMessage: "Host '172.26.0.2' is not allowed to connect to this MySQL server",
app_1 | sqlState: undefined,
app_1 | fatal: true
app_1 | }
If I run this command alone to spin up MySQL, the "todos" database is created:
docker run -d --network todo-app --network-alias mysql -v todo-mysql-data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=todos mysql:5.7
Is there any command that was updated or that doesn't work properly on windows with docker-compose?
TL;DR;
Run the command
docker-compose down --volumes
to remove any problematic volume created during the tutorial early phases, then, resume your tutorial at the step Running our Application Stack.
I suppose that the tutorial you are following is this one.
If you did follow it piece by piece and tried some docker-compose up -d in step 1 or 2, then you've probably created a volume without your todos database.
Just running docker-compose down with your existing docker-compose.yml won't suffice, because this is exactly what volumes are made for: a volume is Docker's permanent storage layer.
By default all files created inside a container are stored on a writable container layer. This means that:
The data doesn’t persist when that container no longer exists, and it can be difficult to get the data out of the container if another process needs it.
A container’s writable layer is tightly coupled to the host machine where the container is running. You can’t easily move the data somewhere else.
Writing into a container’s writable layer requires a storage driver to manage the filesystem. The storage driver provides a union filesystem, using the Linux kernel. This extra abstraction reduces performance as compared to using data volumes, which write directly to the host filesystem.
Docker has two options for containers to store files in the host machine, so that the files are persisted even after the container stops: volumes, and bind mounts. If you’re running Docker on Linux you can also use a tmpfs mount. If you’re running Docker on Windows you can also use a named pipe.
Source: https://docs.docker.com/storage/
In order to remove that volume, which you probably created without your database, there is an extra flag you can add to docker-compose down: the flag --volumes or, in short, -v.
  -v, --volumes      Remove named volumes declared in the `volumes`
                     section of the Compose file and anonymous volumes
                     attached to containers.
Source: https://docs.docker.com/compose/reference/down/
So your fix should be as simple as:
docker-compose down --volumes
docker-compose up -d, which puts you back in the tutorial at the step Running our Application Stack
docker-compose logs -f, as prompted in the rest of the tutorial
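If you want to double-check which volumes exist before removing anything (the exact volume name depends on your project directory; the prefix below is an assumption):
# List volumes; Compose prefixes them with the project (directory) name
docker volume ls
# Inspect the tutorial's data volume (here assuming the project directory is named "app")
docker volume inspect app_todo-mysql-data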
Currently, your todos database is created inside your mysql container when you start docker-compose.
In fact, your issue comes from MySQL user permissions.
Add the line below at the end of the file that initializes the todos database:
CREATE USER 'newuser'@'%' IDENTIFIED BY 'user_password';
That line creates a user, newuser, and gives it access from any host (%) with the password user_password.
Follow it with this line:
GRANT ALL PRIVILEGES ON *.* TO 'newuser'@'%';
It grants newuser all permissions on every database and every table, from any host.
Finally, change your MySQL environment variables MYSQL_USER and MYSQL_PASSWORD to the new user you just created:
version: "3.7"
services:
app:
image: node:12-alpine
command: sh -c "yarn install && yarn run dev"
ports:
- 3000:3000
working_dir: /app
volumes:
- ./:/app
environment:
MYSQL_HOST: mysql
MYSQL_USER: newuser
MYSQL_PASSWORD: user_password
MYSQL_DB: todos
mysql:
image: mysql:5.7
volumes:
- todo-mysql-data:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: secret
MYSQL_DATABASE: todos
volumes:
todo-mysql-data:
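This assumes you already have a file that initializes the todos database. If you don't, one possible approach (a sketch; the file name create-user.sql is an assumption) is to put the two statements in a script and mount it into the MySQL image's init directory, which the official image runs only when the data volume is first initialized:
-- create-user.sql (example file name)
CREATE USER 'newuser'@'%' IDENTIFIED BY 'user_password';
GRANT ALL PRIVILEGES ON *.* TO 'newuser'@'%';
The mysql service's volumes would then also include the mount; note the script only runs when todo-mysql-data is empty, which ties back to docker-compose down --volumes above:
    volumes:
      - todo-mysql-data:/var/lib/mysql
      - ./create-user.sql:/docker-entrypoint-initdb.d/create-user.sql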
I currently have three docker containers running:
Docker container for the front-end web app (exposed on port 8080)
Docker container for the back-end server (exposed on port 5000)
Docker container for my MongoDB database.
All three containers are working perfectly and when I visit http://localhost:8080, I can interact with my web application with no issues.
I'm trying to set up a fourth Cypress container that will run my end to end tests for my app. Unfortunately, this Cypress container throws the below error, when it attempts to run my Cypress tests:
cypress | Cypress could not verify that this server is running:
cypress |
cypress | > http://localhost:8080
cypress |
cypress | We are verifying this server because it has been configured as your `baseUrl`.
cypress |
cypress | Cypress automatically waits until your server is accessible before running tests.
cypress |
cypress | We will try connecting to it 3 more times...
cypress | We will try connecting to it 2 more times...
cypress | We will try connecting to it 1 more time...
cypress |
cypress | Cypress failed to verify that your server is running.
cypress |
cypress | Please start this server and then run Cypress again.
First potential issue (which I've fixed)
The first potential issue is described by this SO post, which is that when Cypress starts, my application is not ready to start responding to requests. However, in my Cypress Dockerfile, I'm currently sleeping for 10 seconds before I run my cypress command as shown below. These 10 seconds are more than adequate since I'm able to access my web app from the web browser before the npm run cypress-run-chrome command executes. I understand that the Cypress documentation has some fancier solutions for waiting on http://localhost:8080 but for now, I know for sure that my app is ready for Cypress to start executing tests.
ENTRYPOINT sleep 10; npm run cypress-run-chrome
Second potential issue (which I've fixed)
The second potential issue is described by this SO post, which is that the Docker container's /etc/hosts file does not contain the following line. I've also rectified that issue and it doesn't seem to be the problem.
127.0.0.1 localhost
Does anyone know why my Cypress Docker container can't seem to connect to my web app that I can reach from my web browser on http://localhost:8080?
Below is my Dockerfile for my Cypress container
As mentioned by the Cypress documentation about Docker, the cypress/included image already has an existing entrypoint. Since I want to sleep for 10 seconds before running my own Cypress command specified in my package.json file, I've overridden ENTRYPOINT in my Dockerfile as shown below.
FROM cypress/included:3.4.1
COPY hosts /etc/
WORKDIR /e2e
COPY package*.json ./
RUN npm install --production
COPY . .
ENTRYPOINT sleep 10; npm run cypress-run-chrome
Below is the command within my package.json file that corresponds to npm run cypress-run-chrome.
"cypress-run-chrome": "NODE_ENV=test $(npm bin)/cypress run --config video=false --browser chrome",
Below is my docker-compose.yml file that coordinates all 4 containers.
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: ./docker/web/Dockerfile
    container_name: web
    restart: unless-stopped
    ports:
      - "8080:8080"
    volumes:
      - .:/home/node/app
      - node_modules:/home/node/app/node_modules
    depends_on:
      - server
    environment:
      - NODE_ENV=testing
    networks:
      - app-network
  db:
    build:
      context: .
      dockerfile: ./docker/db/Dockerfile
    container_name: db
    restart: unless-stopped
    volumes:
      - dbdata:/data/db
    ports:
      - "27017:27017"
    networks:
      - app-network
  server:
    build:
      context: .
      dockerfile: ./docker/server/Dockerfile
    container_name: server
    restart: unless-stopped
    ports:
      - "5000:5000"
    volumes:
      - .:/home/node/app
      - node_modules:/home/node/app/node_modules
    networks:
      - app-network
    depends_on:
      - db
    command: ./wait-for.sh db:27017 -- nodemon -L server.js
  cypress:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: cypress
    restart: unless-stopped
    volumes:
      - .:/e2e
    depends_on:
      - web
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
volumes:
  dbdata:
  node_modules:
Below is what my hosts file looks like which is copied into the Cypress Docker container.
127.0.0.1 localhost
Below is what my cypress.json file looks like.
{
  "baseUrl": "http://localhost:8080",
  "integrationFolder": "cypress/integration",
  "fileServerFolder": "dist",
  "viewportWidth": 1200,
  "viewportHeight": 1000,
  "chromeWebSecurity": false,
  "projectId": "3orb3g"
}
localhost in Docker is always "this container". Use the names of the service blocks in the docker-compose.yml as hostnames, i.e., http://web:8080
(Note that I copied David Maze's answer from the comments)
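A minimal way to apply that without editing cypress.json (a sketch; CYPRESS_baseUrl is Cypress's standard environment-variable override, and web is the service name from the compose file above):
  cypress:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      # Point Cypress at the web service instead of localhost
      - CYPRESS_baseUrl=http://web:8080
    depends_on:
      - web
    networks:
      - app-network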
I want to be able to restart a golang Docker container on failure to connect to rabbitmq, as outlined here: Docker Compose wait for container X before starting Y (see the answer by svenhornberg).
Unfortunately my golang container will exit but never restart and I don't know why.
Docker-compose:
version: '3.3'
services:
  mongo:
    image: 'mongo:3.4.1'
    container_name: 'datastore'
    ports:
      - '27017:27017'
  rabbitmq:
    restart: always
    tty: true
    image: rabbitmq:3.7-management-alpine
    hostname: "rabbit"
    ports:
      - "15672:15672"
      - "5672:5672"
    labels:
      NAME: "rabbitmq"
    volumes:
      - ./rabbitmq-isolated.conf:/etc/rabbitmq/rabbitmq.config
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:15672"]
      interval: 3s
      timeout: 5s
      retries: 20
  api:
    restart: always
    tty: true
    container_name: 'api'
    build: '.'
    working_dir: /go/src/github.com/patientplatypus/project
    ports:
      - '8000:8000'
    volumes:
      - './:/go/src/github.com/patientplatypus/project'
      - './uploads:/uploads'
      - './scripts:/scripts'
      - './templates:/templates'
    depends_on:
      - "mongo"
      - "rabbitmq"
Docker file:
FROM golang:latest
WORKDIR /go/src/github.com/patientplatypus/project
COPY . .
RUN go get github.com/imroc/req
<...more go gets...>
RUN go get github.com/joho/godotenv
EXPOSE 8000
ENTRYPOINT [ "fresh" ]
Here is my golang code:
package main

import (
    "fmt"
    "log"
    "net/http"
    "os"
    "os/exec"

    "github.com/joho/godotenv"
)

func main() {
    fmt.Println("Golang server started")
    godotenv.Load()
    fmt.Println("now doing healthcheck on rabbit")
    exec.Command("docker-compose restart api")
    os.Exit(1)
    <...>
And here is my terminal output (golang never restarts after rabbit called):
api | 23:23:00 app | Golang server started
api | 23:23:00 app | now doing healthcheck on rabbit
rabbitmq_1 |
rabbitmq_1 | ## ##
rabbitmq_1 | ## ## RabbitMQ 3.7.11. Copyright (C) 2007-2019 Pivotal Software, Inc.
rabbitmq_1 | ########## Licensed under the MPL. See http://www.rabbitmq.com/
rabbitmq_1 | ###### ##
rabbitmq_1 | ########## Logs: <stdout>
<...more rabbit logging...>
I'm very confused on how to get this to work. What am I doing wrong?
EDIT:
The exec.Command was incorrectly implemented, however os.Exit(1), log.Fatal, and log.Panic exit the container, but the container does not restart. Still confused.
The Docker documentation says:
A restart policy only takes effect after a container starts successfully. In this case, starting successfully means that the container is up for at least 10 seconds and Docker has started monitoring it. This prevents a container which does not start at all from going into a restart loop.
Since the Go code you show exits basically immediately, it never meets this 10-second-minimum rule.
You can force Go to wait until the process has been alive a minimum of 10 seconds by using time.After, something like:
ch := time.After(10 * time.Second)
defer (func() { fmt.Println("waiting"); <-ch; fmt.Println("waited") })()
That is, create a channel that will receive an event after 10 seconds, and then actually receive it (immediately if it's happened, waiting if not) before main returns. From playing with https://play.golang.org/p/zGY5jFWbXyk, the one trick is that there needs to be some observable effect after receiving from the channel or else it doesn't actually wait.
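Putting that together, here is a minimal sketch of a main that keeps the process alive for at least 10 seconds before exiting on failure (doHealthcheck is a hypothetical stand-in for the real RabbitMQ check, not code from the question):
package main

import (
    "fmt"
    "os"
    "time"
)

func main() {
    start := time.Now()
    fmt.Println("Golang server started")

    ok := doHealthcheck()

    // Ensure the container has been up at least 10 seconds so Docker's
    // restart policy treats it as "started successfully".
    if elapsed := time.Since(start); elapsed < 10*time.Second {
        time.Sleep(10*time.Second - elapsed)
    }

    if !ok {
        os.Exit(1) // with restart: always, Docker will now restart the container
    }
}

// doHealthcheck is a hypothetical placeholder for the real RabbitMQ connection attempt.
func doHealthcheck() bool {
    fmt.Println("now doing healthcheck on rabbit")
    return false
}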