This question already has answers here:
docker: executable file not found in $PATH
(14 answers)
Closed 1 year ago.
I am new to Docker. I am trying to containerise my Go application using docker-compose.
Technologies used: Golang, Docker 20.10.8, and Air (for live reloading).
My Dockerfile looks like this:
FROM base as dev
WORKDIR /opt/app/api
RUN apk update
RUN apk add git gcc musl-dev
RUN apk add curl
RUN curl -sSfL https://raw.githubusercontent.com/cosmtrek/air/master/install.sh | sh -s -- -b $(go env GOPATH)/bin
# RUN go get
# RUN go mod tidy
CMD ["air"]
My docker-compose.yml is this.
version: "3.9"
services:
app:
build:
dockerfile: Dockerfile.local
context: .
target: dev
container_name: 'server'
volumes:
- .:/opt/app/api
env_file:
- .env
ports:
- "8080:8080"
restart:
always
depends_on:
- db
- rabbitmq
db:
image: postgres:13-alpine
volumes:
- data:/var/lib/postgresql/data
container_name: 'postgres'
ports:
- 5432:5432
environment:
POSTGRES_DB: postgres
POSTGRES_USER: postgres
POSTGRES_HOST_AUTH_METHOD: trust
POSTGRES_PASSWORD: postgres
rabbitmq:
image: rabbitmq:3-management-alpine
container_name: 'rabbitmq'
ports:
- 5672:5672
- 15672:15672
volumes:
- rabbitmq:/var/lib/rabbitmq
- rabbitmq-log:/var/log/rabbitmq
migrate: &basemigrate
profiles: ["tools"]
image: migrate/migrate
entrypoint: "migrate -database postgresql://thursday:postgres@db/postgres?sslmode=disable -path /tmp/migrations"
command: up
depends_on:
- db
volumes:
- ./migrations:/tmp/migrations
create-migration:
<<: *basemigrate
entrypoint: migrate create -dir /tmp/migrations -ext sql
command: ""
depends_on:
- db
down-migration:
<<: *basemigrate
entrypoint: migrate -database postgresql://thursday:postgres@db/postgres?sslmode=disable -path /tmp/migrations
command: down
depends_on:
- db
volumes:
data:
rabbitmq:
rabbitmq-log:
On running the command sudo docker-compose up -d I am getting the following error:
Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "air": executable file not found in $PATH: unknown
As mentioned in "docker: executable file not found in $PATH":
When you use the exec format for a command (in your case: CMD ["air"], a JSON array with double quotes) it will be executed without a shell.
This means that most environment variables will not be present.
CMD air should work, provided:
air is an executable (chmod 755)
air was cross-compiled to Linux (unless your host running docker is already Linux)
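One way to make even the exec-form CMD work regardless of what PATH ends up being is to install air into a directory that is always on the default PATH, such as /usr/local/bin. A minimal sketch (the base stage is not shown in the question, so golang:1.17-alpine is only an assumption here):

FROM golang:1.17-alpine AS dev   # assumed stand-in for the unshown "base" stage
WORKDIR /opt/app/api
RUN apk update && apk add --no-cache git gcc musl-dev curl
# Install air into /usr/local/bin, which is on the default PATH,
# instead of $(go env GOPATH)/bin which the exec-form CMD may not see.
RUN curl -sSfL https://raw.githubusercontent.com/cosmtrek/air/master/install.sh \
    | sh -s -- -b /usr/local/bin
CMD ["air"]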
Related
I'm really confused about why I'm unable to make API requests to any site. For example, I want to run:
HTTParty.get("https://fakerapi.it/api/v1/persons")
It runs well on my machine (without Docker).
But if I run it inside Docker, I get:
SocketError (Failed to open TCP connection to fakerapi.it:443 (getaddrinfo: Name does not resolve))
It happens not only for this site, but for all sites.
So I guess there's something wrong with my Docker settings, but I'm not sure where to start.
I'm new to Docker, so any advice means a lot to me.
Below is my docker-compose.yaml
version: '3.4'
services:
db:
image: mysql:8.0.17 #using official mysql image from docker hub
restart: always
environment:
MYSQL_ROOT_PASSWORD: root
volumes:
- db_data:/var/lib/mysql
ports:
- "3307:3306"
backend:
build:
context: .
dockerfile: backend-dev.Dockerfile
ports:
- "3001:3001"
volumes:
#the host repos are mapped to the container's repos
- ./backend:/my-project
#volume to cache gems
- bundle:/bundle
depends_on:
- db
stdin_open: true
tty: true
env_file: .env
command: /bin/sh -c "rm -f tmp/pids/server.pid && rm -f tmp/pids/delayed_job.pid && bundle exec bin/delayed_job start && bundle exec rails s -p 3001 -b '0.0.0.0'"
frontend:
build:
context: .
dockerfile: frontend-dev.Dockerfile
ports:
- "3000:3000"
links:
- "backend:bb"
depends_on:
- backend
volumes:
#the host repos are mapped to the container's repos
- ./frontend/:/my-project
# env_file: .env
environment:
- NODE_ENV=development
command: /bin/sh -c "yarn dev --port 3000"
volumes:
db_data:
driver: local
bundle:
driver: local
How I try to run:
docker-compose run backend /bin/sh
rails c
HTTParty.get("https://fakerapi.it/api/v1/persons")
Any idea how can I fix this?
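A quick way to narrow this down is to test name resolution from inside the backend container; the sketch below assumes getent is available in the image, and uses compose's dns: option only as an illustration of something to try, not a confirmed fix:

# run a throwaway shell in the backend service and try to resolve the host
docker-compose run --rm backend /bin/sh -c "getent hosts fakerapi.it"

# if resolution fails, pointing the service at an explicit resolver in
# docker-compose.yaml is one thing to experiment with:
#   backend:
#     dns:
#       - 8.8.8.8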
I am trying to set up a Shopware Docker container for development. I set up a Dockerfile for the Shopware initialization process, but every time I run the build, Shopware returns this error message:
mysql -u 'root' -p'root' -h 'dbs' --port='3306' -e "DROP DATABASE IF EXISTS `shopware6dev`"
ERROR 2005 (HY000): Unknown MySQL server host 'dbs' (-2)
I think Docker sets up the default network only after all build processes are done, but I need to connect before all containers are ready. The depends_on option doesn't help. I hope someone has an idea how to solve this problem.
This is my docker-compose file:
version: '3'
services:
shopwaredev:
build:
context: ./docker/web
dockerfile: Dockerfile
volumes:
- ./log:/var/log/apache2
environment:
- VIRTUAL_HOST=shopware6dev.test,www.shopware6dev.test
- HTTPS_METHOD=noredirect
restart: on-failure:10
depends_on:
- dbs
adminer:
image: adminer
restart: on-failure:10
ports:
- 8080:8080
dbs:
image: "mysql:5.7"
volumes:
- ./mysql57:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=shopware6dev
restart: on-failure:10
nginx-proxy:
image: solution360/nginx-proxy
ports:
- "80:80"
- "443:443"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- ./ssl:/etc/nginx/certs
restart: on-failure:10
and this is my Dockerfile for the shopwaredev web container:
FROM solution360/apache24-php74-shopware6
WORKDIR /var/www/html
RUN rm index.html
RUN git clone https://github.com/shopware/development.git .
RUN cp .psh.yaml.dist .psh.yaml
RUN sed -i 's|DB_USER: "app"|DB_USER: "root"|g' .psh.yaml
RUN sed -i 's|DB_PASSWORD: "app"|DB_PASSWORD: "root"|g' .psh.yaml
RUN sed -i 's|DB_HOST: "mysql"|DB_HOST: "dbs"|g' .psh.yaml
RUN sed -i 's|DB_NAME: "shopware"|DB_NAME: "shopware6dev"|g' .psh.yaml
RUN sed -i 's|APP_URL: "http://localhost:8000"|APP_URL: "http://shopware6dev.test"|g' .psh.yaml
RUN ./psh.phar install
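One common workaround, sketched below, is to defer the database-dependent step from image build time to container start time, when the compose network exists and the dbs service is reachable. The Apache start command here is an assumption about the base image and may need adjusting:

FROM solution360/apache24-php74-shopware6
WORKDIR /var/www/html
# ... same git clone and .psh.yaml sed edits as above ...
# Do not run ./psh.phar install at build time: the compose network does not
# exist yet, so the host "dbs" cannot be resolved during docker build.
# Run it when the container starts instead, once the database is reachable.
CMD ["/bin/bash", "-c", "./psh.phar install && exec apachectl -D FOREGROUND"]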
I am on a Mac with Docker for Mac version 2.0.0.3 (31259).
docker-compose up -d
Removing ab-insight_postgres_1
Starting ab-insight_data_1 ... done
Recreating 31d36fb9c48a_ab-insight_postgres_1 ... error
ERROR: for 31d36fb9c48a_ab-insight_postgres_1 Cannot start service postgres: b'driver failed programming external connectivity on endpoint ab-insight_postgres_1 (5ed1c634dd3a43c2cd988ff7f14b5c1f3cde848e375c2915cf92420f819e21ac): Error starting userland proxy: Bind for 0.0.0.0:5432 failed: port is already allocated'
ERROR: for postgres Cannot start service postgres: b'driver failed programming external connectivity on endpoint ab-insight_postgres_1 (5ed1c634dd3a43c2cd988ff7f14b5c1f3cde848e375c2915cf92420f819e21ac): Error starting userland proxy: Bind for 0.0.0.0:5432 failed: port is already allocated'
ERROR: Encountered errors while bringing up the project.
Here is my docker-compose.yml
version: '2'
services:
web:
restart: always
build: ./web
expose:
- "8000"
volumes:
- /home/flask/app/web
command: /usr/local/bin/gunicorn -w 2 -b :8000 project:app
depends_on:
- postgres
nginx:
restart: always
build: ./nginx
ports:
- "80:80"
volumes:
- /www/static
volumes_from:
- web
depends_on:
- web
data:
image: postgres:11
volumes:
- /var/lib/postgresql
command: "true"
postgres:
restart: always
build: ./postgresql
volumes_from:
- data
expose:
- "5432"
and here is my Dockerfile
FROM python:3.6.1
MAINTAINER Ka So <kanel.soeng@kso.com>
# Create the group and user to be used in this container
RUN groupadd flaskgroup && useradd -m -g flaskgroup -s /bin/bash flask
# Create the working directory (and set it as the working directory)
RUN mkdir -p /home/flask/app/web
WORKDIR /home/flask/app/web
# Install the package dependencies (this step is separated
# from copying all the source code to avoid having to
# re-install all python packages defined in requirements.txt
# whenever any source code change is made)
COPY requirements.txt /home/flask/app/web
RUN pip install --no-cache-dir -r requirements.txt
# Copy the source code into the container
COPY . /home/flask/app/web
RUN chown -R flask:flaskgroup /home/flask
USER flask
Running docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
This is happening because Postgres is already running locally on your machine on the same port you have mapped in your docker-compose.yml for the postgres service.
Either stop the service running on your local machine (not recommended),
or map a different host port to port 5432 of the container. To do so, replace
expose:
  - "5432"
in the postgres service with the following:
ports:
- "5433:5432"
The whole docker compose file will look like:
version: '2'
services:
web:
restart: always
build: ./web
expose:
- "8000"
volumes:
- /home/flask/app/web
command: /usr/local/bin/gunicorn -w 2 -b :8000 project:app
depends_on:
- postgres
nginx:
restart: always
build: ./nginx
ports:
- "80:80"
volumes:
- /www/static
volumes_from:
- web
depends_on:
- web
data:
image: postgres:11
volumes:
- /var/lib/postgresql
command: "true"
postgres:
restart: always
build: ./postgresql
volumes_from:
- data
ports:
- "5433:5432"
I have below docker-compose.yml
version: "2"
services:
api:
build:
context: .
dockerfile: ./build/dev/Dockerfile
container_name: "project-api"
volumes:
# 1. mount your workdir path
- .:/app
depends_on:
- mongodb
links:
- mongodb
- mysql
nginx:
image: nginx:1.10.3
container_name: "project-nginx"
ports:
- 80:80
restart: always
volumes:
- ./build/dev/nginx.conf:/etc/nginx/conf.d/default.conf
- .:/app
links:
- api
depends_on:
- api
mongodb:
container_name: "project-mongodb"
image: mongo:latest
environment:
- MONGO_DATA_DIR=/data/db
- MONGO_LOG_DIR=/dev/null
ports:
- "27018:27017"
command: mongod --smallfiles --logpath=/dev/null # --quiet
mysql:
container_name: "gamestore-mysql"
image: mysql:5.7.23
ports:
- "3306:3306"
environment:
MYSQL_DATABASE: project_test
MYSQL_USER: user
MYSQL_PASSWORD: user
MYSQL_ROOT_PASSWORD: root
And below .gitlab-ci.yml
test:
stage: test
image: docker:latest
services:
- docker:dind
variables:
DOCKER_DRIVER: overlay2
before_script:
- apk add --no-cache py-pip
- pip install docker-compose
script:
- docker-compose up -d
- docker-compose exec -T api ls -la
- docker-compose exec -T api composer install
- docker-compose exec -T api php core/init --env=Development --overwrite=y
- docker-compose exec -T api vendor/bin/codecept -c core/common run
- docker-compose exec -T api vendor/bin/codecept -c core/rest run
When I run my GitLab pipeline it fails, because I think GitLab can't work with services started by docker-compose.
The error says that MySQL refused the connection.
I need this connection because my tests, written with Codeception, exercise my models and API actions.
I want to test my branches every time anyone pushes to them, and if the tests pass on develop, deploy to the test server, and on master, deploy to production.
What is the best way to run my tests in GitLab CI/CD and then deploy to my server?
You should use GitLab CI services instead of docker-compose.
You have to pick one image as your main image, in which your commands will run, and treat the other containers as services.
Sadly, CI services in GitLab cannot have files mounted into them; you have to be able to configure them with environment variables, or you need to create your own image with the files baked in (you can do that in a CI stage).
I would suggest not using nginx, and using the built-in PHP server for tests. If that's not possible (you have a specific nginx config), you will need to build your own nginx image with the files copied inside it.
Also, for PHP (the api service in docker-compose.yaml, I assume), you need to either build the image ahead of time or copy the commands from your Dockerfile into the script.
So the result should be something like:
test:
stage: test
image: custom-php-image #build from ./build/dev/Dockerfile
services:
- name: mysql:5.7.23
alias: gamestore-mysql
- name: mongo:latest
alias: project-mongodb
command: mongod --smallfiles --logpath=/dev/null
variables:
MYSQL_DATABASE: project_test
MYSQL_USER: user
MYSQL_PASSWORD: user
MYSQL_ROOT_PASSWORD: root
MONGO_DATA_DIR: /data/db
MONGO_LOG_DIR: /dev/null
script:
- ls -la
- composer install
- php core/init --env=Development --overwrite=y
- php -S localhost:8000 & # start the built-in PHP server in the background; you will probably need to configure it for your app here
- vendor/bin/codecept -c core/common run
- vendor/bin/codecept -c core/rest run
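If you go the prebuilt-image route mentioned above, a separate CI job can build and push that image; a sketch using GitLab's predefined registry variables (the image name php-api is only an example):

build-php-image:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    # authenticate against the project's GitLab container registry
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # build the PHP image from the same Dockerfile docker-compose uses
    - docker build -t "$CI_REGISTRY_IMAGE/php-api:latest" -f ./build/dev/Dockerfile .
    - docker push "$CI_REGISTRY_IMAGE/php-api:latest"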
I don't know your app, so you will probably have to make some tweaks.
More on that:
https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#define-image-and-services-from-gitlab-ciyml
https://docs.gitlab.com/ee/ci/services/
http://php.net/manual/en/features.commandline.webserver.php
I am trying to create a MySQL database schema while the docker-compose.yml file is being executed.
version: "2"
services:
web:
build: docker
ports:
- "8080:8080"
environment:
- MYSQL_ROOT_PASSWORD=root
mysql:
image: mysql:latest
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=test
ports:
- "3306:3306"
links:
- web
onrun:
command: "docker exec -i test_mysql_1 mysql -uroot -proot test <dummy1.sql"
I tried onrun but this is not working.
I am building the first image but pulling the second image from Docker Hub.
Kindly help with how to execute that command after docker-compose up.
There is nothing like onrun in docker-compose; it will only bring up the containers and execute their commands. You have a few possible options:
Use mysql Image Initialization
mysql:
image: mysql:latest
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=test
volumes:
- ./dummy1.sql:/docker-entrypoint-initdb.d/dummy1.sql
ports:
- "3306:3306"
You can mount your SQL files inside /docker-entrypoint-initdb.d in the container.
Use bash script
docker-compose up -d
# Give some time for mysql to get up
sleep 20
docker-compose exec mysql mysql -uroot -proot test <dummy1.sql
Use another docker service to initialize the DB
version: "2"
services:
web:
build: docker
ports:
- "8080:8080"
environment:
- MYSQL_ROOT_PASSWORD=root
mysql:
image: mysql:latest
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=test
ports:
- "3306:3306"
mysqlinit:
image: mysql:latest
volumes:
- ./dummy1.sql:/dump/dummy1.sql
command: bash -c "sleep 20 && mysql -h mysql -uroot -proot test < /dump/dummy1.sql"
You run another service which will initialize the DB for you, like mysqlinit in the example above.
When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d. Files will be executed in alphabetical order.
From https://hub.docker.com/_/mysql/
That is the convenient way many databases (PostgreSQL, MySQL, ...) initialize themselves on container creation. You should create a *.sql / *.sh file and bind it via a volume into the new container:
db:
image: mysql:latest
volumes:
- ./db/entrypoint:/docker-entrypoint-initdb.d
environment:
- MYSQL_ROOT_PASSWORD=iamgroot
- MYSQL_DATABASE=gotg
This mounts all your SQL / shell files into the container, where they are then executed automatically.
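For example, a file such as ./db/entrypoint/01-schema.sql (the filename is only an example; files in the directory run in alphabetical order) could contain plain SQL that is executed against the MYSQL_DATABASE database on first start:

-- executed automatically against the "gotg" database the first time the
-- container starts with an empty data directory
CREATE TABLE IF NOT EXISTS guardians (
  id INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(255) NOT NULL
);
INSERT INTO guardians (name) VALUES ('Groot');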