Permission denied when running docker-compose up on macOS

I'm having some issues with permissions in my docker-compose and Dockerfile scripts.
When I run docker-compose up I get a "Permission denied" error that prevents my API from starting.
This is what my docker-compose.yml file looks like (I skipped the database part because it's not relevant to the problem I have here):
version: '3'
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "1338:1337"
    links:
      - postgres
    environment:
      - DATABASE_URL=postgres://postgres:postgres@postgres:5432/postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - ./:/usr/src/app
      - /usr/src/app/node_modules
    command: [
      "docker/api/wait-for-postgres.sh",
      "postgres",
      "docker/api/start.sh"
    ]
And my Dockerfile:
FROM node:14
RUN apt-get update && apt-get install -y postgresql-client
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
RUN npm install -g nodemon
COPY . /usr/src/app
EXPOSE 1337
What I've tried so far is changing the permissions and switching to the root user inside my container, but it didn't change a thing (I still get the same "Permission denied" error):
FROM node:14
RUN apt-get update && apt-get install -y postgresql-client
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
RUN npm install -g nodemon
COPY . /usr/src/app
USER root
RUN chmod +x docker/api/start.sh
RUN chmod +x docker/api/wait-for-postgres.sh
EXPOSE 1337
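One thing worth checking, since the thread never resolves it: the bind mount ./:/usr/src/app replaces the image's /usr/src/app with the host directory at runtime, so a chmod +x done during the build has no effect on the files the container actually sees. A minimal sketch of a fix on the macOS host, assuming the scripts live under docker/api/ in the project:

    # Run on the host before docker-compose up; the bind mount
    # propagates host file modes into the container.
    chmod +x docker/api/wait-for-postgres.sh docker/api/start.sh
    ls -l docker/api/   # verify the executable bit is now set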
EDIT:
Content of wait-for-postgres.sh script:
#!/bin/sh
# wait-for-postgres.sh
set -e
host="$1"
shift
until PGPASSWORD=$POSTGRES_PASSWORD psql -h "$host" -U "postgres" -c '\q'; do
    >&2 echo "Postgres is unavailable - sleeping"
    sleep 10
done
>&2 echo "Postgres is up - executing command"
exec "$@"
Any thoughts on this? Thanks for your help!

Related

npm ERR! code ERR_SOCKET_TIMEOUT when installing node_modules with docker-compose

I'm using a Dockerfile to run and install node_modules for a Gatsby project. The Dockerfile has the below structure:
FROM node:alpine
EXPOSE 8000
RUN apk add --update --no-cache build-base python3-dev python3 libffi-dev libressl-dev bash git gettext curl \
    && curl -O https://bootstrap.pypa.io/get-pip.py \
    && python3 get-pip.py \
    && pip install --upgrade six awscli awsebcli
WORKDIR /app
COPY ./package.json .
RUN npm install
COPY . .
RUN yarn install && yarn cache clean
CMD ["yarn", "develop", "-H", "0.0.0.0" ]
And here is the docker-compose.yml:
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8000:8000"
    volumes:
      - app/node_modules
      - .:/app
After running the docker-compose build command I get the ERR_SOCKET_TIMEOUT error from the title.
How can I solve this problem and install node_modules with docker?
Usually these problems are related to internet censorship.
Run docker container prune and docker image prune, then use a proxy.
These commands remove stopped containers and dangling images, so be careful if you have an important image or container.
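A sketch of those commands, plus one hedged way to route the build through a proxy (the proxy address is a placeholder, not from the original answer):

    # Remove stopped containers and dangling images (destructive: review first).
    docker container prune
    docker image prune

    # If the registry is unreachable, pass proxy settings into the build
    # (replace proxy.example:3128 with your actual proxy).
    docker-compose build \
      --build-arg HTTP_PROXY=http://proxy.example:3128 \
      --build-arg HTTPS_PROXY=http://proxy.example:3128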

GCP Cloud Run error on deploying my Docker image to Google Container Registry / Cloud Run

I'm quite new to Docker and GCP and am trying to find a working way to deploy my Laravel app on GCP.
I already set up CI and selected "cloudbuild.yaml" as the build configuration. I followed innumerable tutorials and read the Google Container docs, so I created a "cloudbuild.yaml" which includes arguments to use my docker-compose.yaml to create the stack of my app (app code, database, server).
During the Google Cloud Build process I get:
Step #0: Creating workspace_app_1 ...
Step #0: Creating workspace_web_1 ...
Step #0: Creating workspace_db_1 ...
Step #0: Creating workspace_app_1 ... done
Step #0: Creating workspace_web_1 ... done
Step #0: Creating workspace_db_1 ... done
Finished Step #0
Starting Step #1
Step #1: Already have image (with digest): gcr.io/cloud-builders/docker
Step #1: unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /workspace/Dockerfile: no such file or directory
Finished Step #1
ERROR
ERROR: build step 1 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
docker-compose.yml:
version: "3.8"
volumes:
php-fpm-socket:
db-store:
services:
app:
build:
context: .
dockerfile: ./infra/docker/php/Dockerfile
volumes:
- php-fpm-socket:/var/run/php-fpm
- ./backend:/work/backend
environment:
- DB_CONNECTION=mysql
- DB_HOST=db
- DB_PORT=3306
- DB_DATABASE=${DB_NAME:-laravel_local}
- DB_USERNAME=${DB_USER:-phper}
- DB_PASSWORD=${DB_PASS:-secret}
web:
build:
context: .
dockerfile: ./infra/docker/nginx/Dockerfile
ports:
- ${WEB_PORT:-80}:80
volumes:
- php-fpm-socket:/var/run/php-fpm
- ./backend:/work/backend
db:
build:
context: .
dockerfile: ./infra/docker/mysql/Dockerfile
ports:
- ${DB_PORT:-3306}:3306
volumes:
- db-store:/var/lib/mysql
environment:
- MYSQL_DATABASE=${DB_NAME:-laravel_local}
- MYSQL_USER=${DB_USER:-phper}
- MYSQL_PASSWORD=${DB_PASS:-secret}
- MYSQL_ROOT_PASSWORD=${DB_PASS:-secret}
cloudbuild.yaml:
steps:
  # running docker-compose
  - name: 'docker/compose:1.28.4'
    args: ['up', '-d']
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/MY_PROJECT_ID/laravel-docker-1', '.']
  # Push the container image to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/MY_PROJECT_ID/laravel-docker-1']
  # Deploy container image to Cloud Run
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args: ['run', 'deploy', 'laravel-docker-1', '--image', 'gcr.io/MY_PROJECT_ID/laravel-docker-1', '--region', 'europe-west3', '--platform', 'managed']
images:
  - gcr.io/MY_PROJECT_ID/laravel-docker-1
What is wrong in this configuration?
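Judging from the log, the likely cause is that the build step runs docker build with the context '.' and no -f flag, so it expects /workspace/Dockerfile, while the project's Dockerfiles live under infra/docker/. A hedged sketch of a corrected build step, assuming the PHP image is the one meant for Cloud Run:

  # Build step only -- point -f at an existing Dockerfile; adjust
  # the path to whichever service you actually deploy.
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-f', 'infra/docker/php/Dockerfile', '-t', 'gcr.io/MY_PROJECT_ID/laravel-docker-1', '.']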
I solved this issue and deployed a running Laravel 8 application to Google Cloud with the following Dockerfile. PS: any optimizations regarding the FROM and RUN steps are appreciated:
#
# PHP Dependencies
#
FROM composer:2.0 as vendor
WORKDIR /app
COPY database/ database/
COPY composer.json composer.json
COPY composer.lock composer.lock
RUN composer install \
    --no-interaction \
    --no-plugins \
    --no-scripts \
    --no-dev \
    --prefer-dist
COPY . .
RUN composer dump-autoload
#
# Frontend
#
FROM node:14.9 as frontend
WORKDIR /app
COPY artisan package.json webpack.mix.js package-lock.json ./
RUN npm audit fix
RUN npm cache clean --force
RUN npm cache verify
RUN npm install -f
COPY resources/js ./resources/js
COPY resources/sass ./resources/sass
RUN npm run development
#
# Application
#
FROM php:7.4-fpm
WORKDIR /app
# Install PHP dependencies
RUN apt-get update -y && apt-get install -y build-essential libxml2-dev libonig-dev
RUN docker-php-ext-install pdo pdo_mysql opcache tokenizer xml ctype json bcmath pcntl
# Install Linux and Python dependencies
RUN apt-get install -y curl wget git file ruby-full locales vim
# Run definitions to make Brew work
RUN localedef -i en_US -f UTF-8 en_US.UTF-8
RUN useradd -m -s /bin/zsh linuxbrew && \
    usermod -aG sudo linuxbrew && \
    mkdir -p /home/linuxbrew/.linuxbrew && \
    chown -R linuxbrew: /home/linuxbrew/.linuxbrew
USER linuxbrew
RUN /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
USER root
#RUN chown -R $CONTAINER_USER: /home/linuxbrew/.linuxbrew
ENV PATH "$PATH:/home/linuxbrew/.linuxbrew/bin"
#Install Chrome
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN apt install -y ./google-chrome-stable_current_amd64.deb
# Install Python modules (dependencies) of scraper
RUN brew install python3
RUN pip3 install selenium
RUN pip3 install bs4
RUN pip3 install pandas
# Copy Frontend build
COPY --from=frontend app/node_modules/ ./node_modules/
COPY --from=frontend app/public/js/ ./public/js/
COPY --from=frontend app/public/css/ ./public/css/
COPY --from=frontend app/public/mix-manifest.json ./public/mix-manifest.json
# Copy Composer dependencies
COPY --from=vendor app/vendor/ ./vendor/
COPY . .
RUN cp /app/drivers/chromedriver /usr/local/bin
#COPY .env.prod ./.env
COPY .env.local-docker ./.env
# Copy the scripts to docker-entrypoint-initdb.d which will be executed on container startup
COPY ./docker/ /docker-entrypoint-initdb.d/
COPY ./docker/init_db.sql .
RUN php artisan config:cache
RUN php artisan route:cache
CMD php artisan serve --host=0.0.0.0 --port=8080
EXPOSE 8080
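To sanity-check the image locally before pushing (a suggested step, not part of the original answer; Cloud Run routes traffic to the port the container listens on, 8080 here):

    docker build -t laravel-docker-1 .
    docker run --rm -p 8080:8080 laravel-docker-1
    # then browse to http://localhost:8080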

Postgres database, Error: NetworkError when attempting to fetch resource

I am trying to build a Docker image but I have some problems. Here is my docker-compose.yml:
version: '3.7'
services:
  web:
    container_name: web
    build:
      context: .
      dockerfile: Dockerfile
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/usr/src/web/
    ports:
      - 8000:8000
      - 3000:3000
      - 35729:35729
    stdin_open: true
    depends_on:
      - db
  db:
    restart: always
    environment:
      - POSTGRES_USER=admin
      - POSTGRES_PASS=pass
      - POSTGRES_DB=mydb
      - POSTGRES_PORT=5432
      - POSTGRES_HOST=localhost
      - POSTGRES_HOST_AUTH_METHOD=trust
    container_name: db
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
volumes:
  postgres_data:
And here is my Dockerfile:
# pull official base image
FROM python:3.8.3-alpine
# set work directory
WORKDIR /usr/src/web
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
    && apk add postgresql-dev gcc python3-dev musl-dev
# install nodejs
RUN apk add --update nodejs nodejs-npm
RUN apk add zlib-dev jpeg-dev gcc musl-dev
# copy project
COPY . .
RUN python -m pip install -U --force-reinstall pip
RUN python -m pip install Pillow
# install dependencies
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
RUN pip install Pillow
# run entrypoint.sh
ENTRYPOINT ["sh", "./entrypoint.sh"]
And finally my entrypoint.sh:
#!/bin/sh
if [ "$DATABASE" = "postgres" ]
then
echo "Waiting for postgres..."
while ! nc -z $SQL_HOST $SQL_PORT; do
sleep 0.1
done
echo "PostgreSQL started"
fi
exec "$#"
When I run:
docker-compose up -d --build
it works perfectly. Then I type:
docker-compose exec web npm start --prefix ./front/
It looks OK, but when I open http://localhost:3000/ in my browser I get messages like: Error: NetworkError when attempting to fetch resource.
The front end seems fine, but it is not able to communicate with the back end, and so with the database.
Could you help me please? Thank you very much!
As I can see in the docker-compose.yml file, you did not define the environment variables for Postgres in the web container. Please define the environment variables below:
DATABASE
SQL_HOST
SQL_PORT
Then bring the stack down and back up again; hopefully that will help.
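A minimal sketch of that change, assuming the variable names used by entrypoint.sh and that the database is reachable under its service/container name db on the default Postgres port:

  web:
    environment:
      - DATABASE=postgres
      - SQL_HOST=db
      - SQL_PORT=5432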

Docker: ./entrypoint.sh not found

I am trying to set up a Django project and dockerize it.
I'm having trouble running the container.
As far as I can tell, it successfully builds, but fails to run.
This is the error I get:
ERROR: for web Cannot start service web: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"./entrpoint.sh\": stat ./entrpoint.sh: no such file or directory": unknown
ERROR: Encountered errors while bringing up the project.
This is the Dockerfile:
FROM python:3.6
RUN mkdir /backend
WORKDIR /backend
ADD . /backend/
RUN pip install -r requirements.txt
RUN apt-get update \
    && apt-get install -yyq netcat
RUN chmod 755 entrypoint.sh
ENTRYPOINT ["./entrpoint.sh"]
This is the compose file:
version: '3.7'
services:
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=django
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=database
  web:
    restart: on-failure
    build: .
    container_name: backend
    volumes:
      - .:/backend
    env_file:
      - ./api/.env
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    hostname: web
    depends_on:
      - db
volumes:
  postgres_data:
And there is an entrypoint file which runs automatic migrations, if any. Here is the script:
#!/bin/sh
if [ "$DATABASE" = "postgres" ]
then
echo "Waiting for postgres..."
while ! nc -z $SQL_HOST $SQL_PORT; do
sleep 0.1
done
echo "PostgreSQL started"
fi
python manage.py migrate
exec "$#"
Where am I going wrong?
The problem is that it's not entrypoint.sh that's missing but the nc command.
To solve this, you have to install the netcat package.
Since python:3.6 is based on Debian Buster, you can simply add the following command after the FROM directive:
RUN apt-get update \
    && apt-get install -yyq netcat
EDIT, for further improvements:
Copy only requirements.txt, install the packages, then copy the rest. This improves cache usage, so every build after the first will be faster (unless you touch requirements.txt).
Replace ADD with COPY unless you're exploding a tarball.
The result should look like this:
FROM python:3.6
RUN apt-get update \
    && apt-get install -yyq netcat
RUN mkdir /backend
WORKDIR /backend
COPY requirements.txt /backend/
RUN pip install -r requirements.txt
COPY . /backend/
ENTRYPOINT ["./entrypoint.sh"]
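After changing the Dockerfile, rebuild the image so the new layers are picked up; a short sketch using the service name from the compose file above:

    docker-compose build web
    docker-compose up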

ERROR: Service 'redis' failed to build when building the redis image with docker-compose

I'm dockerizing an application based on Node.js, Redis and MySQL. I already installed the Redis server and it's running fine, but I'm unable to dockerize all three using docker-compose.yml.
$ docker-compose up --build
Building redis
Step 1/11 : FROM node:alpine
---> e079048502ec
Step 2/11 : FROM redis:alpine
---> da2b86c1900b
Step 3/11 : RUN mkdir -p /usr/src/app
---> Using cache
---> 28b2f837b54c
Step 4/11 : WORKDIR /usr/src/app
---> Using cache
---> d1147321eec4
Step 5/11 : RUN apt-get install redis-server
---> Running in 2dccd5689663
/bin/sh: apt-get: not found
ERROR: Service 'redis' failed to build: The command '/bin/sh -c apt-get install redis-server' returned a non-zero code: 127
This is my Dockerfile:
FROM node:alpine
FROM redis:alpine
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
## Install Redis ##
RUN apt-get install redis-server
## Install nodejs on ubuntu ##
RUN sudo apt-get update && wget http://nodejs.org/dist/v0.6.9/node-v0.6.9.tar.gz \
    && tar -xvzf node-v0.6.9.tar.gz \
    && cd node-v0.6.9 \
    && ./configure && make && sudo make install \
    && mkdir myapp && cd myapp \
    && npm init \
    && npm install express --save \
    && npm install express \
    && npm install --save path serve-favicon morgan cookie-parser body-parser \
    && npm install --save express jade \
    && npm install --save debug \
COPY package.json /usr/src/app/
COPY redis.conf /usr/local/etc/redis/redis.conf
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 3000
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf", "npm", "start" ]
This is my docker-compose.yml file:
version: '2'
services:
  db:
    build: ./docker/mysql
    # image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
      #- ./mysql:/docker-entrypoint-initdb.d
    # restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root
      # MYSQL_DATABASE: cg_apiserver
      # MYSQL_USER: root
      # MYSQL_PASSWORD: root
  redis:
    build: ./docker/redis
    image: "redis:alpine"
  node:
    build: ./docker/node
    ports:
      - '3000:80'
    restart: always
    volumes:
      - .:/usr/src/app
    depends_on:
      - db
      - redis
    command: npm start
volumes:
  db_data:
It seems that you have tried to merge two Dockerfiles into one.
First, using multiple FROM statements makes no sense here; the basic concept is to build FROM a single base image.
Second, your docker-compose file looks good, but the Dockerfile shows that you are trying to build both applications (Redis and the Node app) in the same image.
So take the Redis stuff out of ./docker/node/Dockerfile:
FROM node:alpine
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
## Install nodejs on ubuntu ##
RUN wget http://nodejs.org/dist/v0.6.9/node-v0.6.9.tar.gz \
    && tar -xvzf node-v0.6.9.tar.gz \
    && cd node-v0.6.9 \
    && ./configure && make && sudo make install \
    && mkdir myapp && cd myapp \
    && npm init \
    && npm install express --save \
    && npm install express \
    && npm install --save path serve-favicon morgan cookie-parser body-parser \
    && npm install --save express jade \
    && npm install --save debug
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 3000
CMD ["npm", "start" ]
Use this ./docker/redis/Dockerfile:
FROM redis:alpine
COPY redis.conf /usr/local/etc/redis/redis.conf
# No need to set a custom CMD
And I recommend removing the "image:" line from the redis service in docker-compose.yml; it is not necessary:
  redis:
    build: ./docker/redis
    image: "redis:alpine"    <---- remove this line
Edit: also, you don't need apt-get update anymore, so I've removed sudo apt-get update && from the RUN instruction.
It is working now after making the below changes:
Create a folder docker in the project root.
Inside docker, create a folder redis.
Create a Dockerfile (docker >> redis >> Dockerfile) with the below contents:
FROM smebberson/alpine-base:1.0.0
# MAINTAINER Scott Mebberson <scott@scottmebberson.com>
VOLUME ["/data"]
# Expose the ports for redis
EXPOSE 6379
There was no change in the docker-compose.yml file.
Run this command to build the containers:
sudo docker-compose up --build -d
Run this command to check the running containers:
sudo docker ps
Run these commands to inspect the network and get the container IPs:
sudo docker inspect redis_container_name
sudo docker inspect node_container_name
I've solved this problem (COPY didn't work) easily in my project: just add "context", the path to the Dockerfile directory, in your YML file (version 3), for example:
build:
  context: Starkman.Backend.Storage/Redis
  dockerfile: Dockerfile
"Starkman.Backend.Storage/Redis" - its path to directory. And an unknown temporary directory for command "COPY" will be inside your "context".
This is my Dockerfile:
FROM redis
COPY redis.conf /usr/local/etc/redis/redis.conf
EXPOSE 6379
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf" ]
