I have a Laravel project that runs in a single Docker container.
I use docker-compose.yml to configure the container, and my Dockerfile is based on the nginx:latest image. For some reason, when I tried to launch the project with docker-compose up, I got this error:
web_1 | 2019-05-10 15:51:14,035 INFO spawnerr: can't find command '/usr/sbin/nginx'
web_1 | 2019-05-10 15:51:15,037 INFO spawnerr: can't find command '/usr/sbin/nginx'
web_1 | 2019-05-10 15:51:17,040 INFO spawnerr: can't find command '/usr/sbin/nginx'
web_1 | 2019-05-10 15:51:20,050 INFO spawnerr: can't find command '/usr/sbin/nginx'
web_1 | 2019-05-10 15:51:20,050 INFO gave up: nginx entered FATAL state, too many start retries too quickly
I was surprised, so I took a look inside the container with docker exec -ti mycontainername bash and couldn't find nginx anywhere. I tried nginx -v,
whereis nginx, and cd /etc/nginx (the directory didn't exist).
So I tried to create a simple container that contains only nginx. I should theoretically be able to go to localhost:80 and see the nginx welcome page, right?
docker run --rm -d -p 80:80 --name my-nginx nginx
Well, there was no welcome page, and when I took a look inside the container with
docker exec -it my-nginx bash
I couldn't find nginx anywhere, but running apt-get install nginx reported that nginx was already the latest version.
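For reference, the official nginx image does ship its binary at /usr/sbin/nginx, so a quick sanity check (assuming the my-nginx container from above is still running) would have been:
docker run --rm nginx:latest nginx -v        # print the version from a fresh container
docker exec my-nginx ls -l /usr/sbin/nginx   # the binary should be listed here
curl -I http://localhost:80                  # the welcome page should answer with HTTP 200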
Full docker-compose.yml:
version: '1'
services:
  web:
    build:
      context: ./
      # dockerfile: web.dockerfile
    working_dir: /var/www/html
    # volumes_from:
    #   - app
    ports:
      - 8080:80
    volumes:
      - ./:/var/www/html
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
Full Dockerfile:
FROM nginx:latest
RUN apt-get update && apt-get install -y php-gd
RUN apt-get -y install php7.2-zip
#COPY app /var/www/html/app
#COPY artisan /var/www/html/app/artisan
#COPY bootstrap /var/www/html/bootstrap
#COPY config /var/www/html/config
#COPY database /var/www/html/database
#COPY public /var/www/html/public
#COPY resources /var/www/html/resources
#COPY routes /var/www/html/routes
#COPY storage /var/www/html/storage
#COPY vendor /var/www/html/vendor
#COPY artisan /var/www/html/artisan
#COPY composer.json /var/www/html/composer.json
COPY entrypointcust.sh /entrypointcust.sh
RUN chmod +x /entrypointcust.sh
EXPOSE 80
WORKDIR /var/www/html/old
# Add crontab file in the cron directory
ADD cron /etc/cron.d/appcron
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/appcron
RUN /usr/bin/crontab /etc/cron.d/appcron
# Create the log file to be able to run tail
#RUN touch /var/log/cron.log
#RUN touch /var/www/html/storage/logs/laravel.log
#RUN chown -R www-data:www-data /var/www/html
#RUN chmod -R 777 /var/www/html/storage
ENTRYPOINT ["/bin/bash", "-c", "/entrypointcust.sh"]
What am I missing?
Related
I'm running a K8s deployment and trying to harden the security of one of my pods, and because of that I started using the following Docker image:
nginxinc/nginx-unprivileged:alpine
The problem is that I need to create a symlink and cannot get it done.
Here is the structure of my Dockerfile:
FROM nginxinc/nginx-unprivileged:alpine
ARG name
ARG ver
USER root
COPY ./outbox/${name}-${ver}.tgz ./
COPY ./nginx.conf /etc/nginx/nginx.conf
COPY ./mime.types /etc/nginx/mime.types
COPY ./about.md ./
RUN mv /${name}-${ver}.tgz /usr/share/nginx/html
WORKDIR /usr/share/nginx/html
RUN tar -zxf ${name}-${ver}.tgz \
&& mv ngdist/* . \
&& mv /about.md ./assets \
&& rm -fr ngdist web-ui-${ver}.tgz \
&& mkdir -p /tmp/reports
RUN chown -R 1001 /usr/share/nginx/html/
COPY ./entrypoint.sh.${name} /bin/entrypoint.sh
RUN chown 1001 /bin/entrypoint.sh
USER 1001
EXPOSE 8080
CMD [ "/bin/entrypoint.sh" ]
and here is my entrypoint.sh:
#!/bin/sh
ln -s /tmp/reports /usr/share/nginx/html/reports
and here is the container section of my pod deployment YAML file:
containers:
  - name: web-ui
    image: "myimage"
    imagePullPolicy: Always
    ports:
      - containerPort: 8080
        name: web-ui
    volumeMounts:
      - name: myvolume
        mountPath: /tmp/reports
I also tried running the entrypoint as root, but that did not help either. The error I'm getting is this:
Error: failed to start container "web-ui": Error response from daemon:
OCI runtime create failed: container_linux.go:380: starting container
process caused: exec: "/bin/entrypoint.sh": permission denied: unknown
Like other Linux commands, a Docker container's main CMD can't run if the program it names isn't executable.
Most source-control systems will track whether or not a file is executable, and Docker COPY will preserve that permission bit. So the best way to address this is to make the scripts executable on the host:
chmod +x entrypoint.sh.*
git add entrypoint.sh.*
git commit -m 'make entrypoint scripts executable'
docker-compose build
docker-compose up -d
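If the host shell can't set the bit (a Windows checkout, for example), git itself can record the executable bit; a hedged alternative to the chmod step above:
git update-index --chmod=+x entrypoint.sh.*
git commit -m 'make entrypoint scripts executable'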
If that's not an option, you can fix it up in the Dockerfile too:
COPY ./entrypoint.sh.${name} /bin/entrypoint.sh
RUN chmod 0755 /bin/entrypoint.sh
Like other things in /bin, the script should usually be owned by root, executable by everyone, and writable only by its owner; you do not generally want the application to have the ability to overwrite its own code.
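If you build with BuildKit (Dockerfile syntax 1.2 or newer), the mode can also be set at copy time, which saves the extra RUN layer; a sketch using the same paths as above:
# requires BuildKit; with the classic builder keep the separate RUN chmod shown above
COPY --chmod=0755 ./entrypoint.sh.${name} /bin/entrypoint.sh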
What I wanted to do is use "COPY script.sh script.sh" (copy the script from the host into the container and execute it there), but when the script is executed in the container, it seems to also take effect on the host machine.
Below is the Dockerfile:
FROM almalinux/almalinux:latest
RUN mkdir /opt/confluent
RUN mkdir /opt/confluent-hub
#Confluent Home
ENV CONFLUENT_HOME=/opt/confluent
ENV KAFKA_CONFIG=$KAFKA_CONFIG
ENV ZOOKEEPER_CONFIG=$ZOOKEEPER_CONFIG
ENV SCHEMA_REGISTRY_CONFIG=$ZOOKEEPER_CONFIG
ENV CONNECT_CONFIG=$CONNECT_CONFIG
# Zookeeper
ENV ZOOKEEPER_DATA_DIR=$ZOOKEEPER_DATA_DIR
ENV ZOOKEEPER_CLIENT_PORT=$ZOOKEEPER_CLIENT_PORT
#Kafka
ENV BOOTSTRAP_SERVERS=$BOOTSTRAP_SERVERS
ENV KAFKA_SERVER_BROKER_ID=$KAFKA_SERVER_BROKER_ID
ENV ZOOKEEPER_CONNECT_IP_PORT=$ZOOKEEPER_CONNECT_IP_PORT
ENV KAFKA_SERVER_LOG_DIR=$KAFKA_SERVER_LOG_DIR
# schmea registry
ENV KAFKASTORE_TOPIC=$KAFKASTORE_TOPIC
ENV PROTOCOL_BOOTSTRAP_SERVERS=$PROTOCOL_BOOTSTRAP_SERVERS
ENV SCHEMA_REGISTRY_GROUP_ID=$SCHEMA_REGISTRY_GROUP_ID
ENV SCHEMA_REGISTRY_LEADER_ELIGIBILITY=$SCHEMA_REGISTRY_LEADER_ELIGIBILITY
# Kafka connect
ENV CONNECT_REST_PORT=$CONNECT_REST_PORT
ENV CONNECT_OFFSETS=$CONNECT_OFFSETS
ENV CONNECT_KEY_CONVERTER=$CONNECT_KEY_CONVERTER
ENV SCHEMA_REGISTRY_URL=$SCHEMA_REGISTRY_URL
ENV CONNECT_VALUE_CONVERTER=$CONNECT_VALUE_CONVERTER
ENV SCHEMA_REGISTRY_LISTENER=$SCHEMA_REGISTRY_LISTENER
ENV CONNECT_PLUGIN_PATH=/usr/share/java/,$CONFLUENT_HOME/share/confluent-hub-components/
# install openjdk8
RUN dnf update -y && dnf install epel-release -y
RUN dnf install wget zip moreutils gettext unzip java-1.8.0-openjdk.x86_64 -y
# install confluent
WORKDIR $CONFLUENT_HOME
RUN wget https://packages.confluent.io/archive/6.1/confluent-community-6.1.1.tar.gz -P .
RUN tar -xvzf confluent-community-6.1.1.tar.gz
RUN mv confluent-6.1.1/* .
RUN rm -rf confluent-6.1.1 confluent-community-6.1.1.tar.gz
# install confluent hub
RUN wget http://client.hub.confluent.io/confluent-hub-client-latest.tar.gz -P /opt/confluent-hub
WORKDIR /opt/confluent-hub
RUN tar -xvzf confluent-hub-client-latest.tar.gz
RUN rm -rf confluent-hub-client-latest.tar.gz
ENV CONFLUENT_HUB /opt/confluent-hub/bin
# Export path
ENV PATH $PATH:$CONFLUENT_HOME:$CONFLUENT_HUB
# install jdbc connector
COPY confluentinc-kafka-connect-jdbc-10.1.0.zip $CONFLUENT_HOME/share/confluent-hub-components/
RUN unzip $CONFLUENT_HOME/share/confluent-hub-components/confluentinc-kafka-connect-jdbc-10.1.0.zip
RUN rm -rf confluentinc-kafka-connect-jdbc-10.1.0.zip
# Copy confluent config into the image
WORKDIR $CONFLUENT_HOME
COPY config/* config/
# startup
COPY startup.sh ./startup.sh
RUN chmod +x ./startup.sh
CMD ./startup.sh
Below is startup.sh, which substitutes environment variables into the config files and starts the Kafka services. However, when this script runs in the container, it is also replacing the values in the config files on the host:
#!/bin/bash
# Substitute environment variables in the actual $CONFLUENT_HOME/config files
envsubst < $CONFLUENT_HOME/config/zookeeper.properties | sponge $CONFLUENT_HOME/config/zookeeper.properties
envsubst < $CONFLUENT_HOME/config/server.properties | sponge $CONFLUENT_HOME/config/server.properties
envsubst < $CONFLUENT_HOME/config/schema-registry.properties | sponge $CONFLUENT_HOME/config/schema-registry.properties
envsubst < $CONFLUENT_HOME/config/connect-avro-standalone.properties | sponge $CONFLUENT_HOME/config/connect-avro-standalone.properties
# start zookeeper
$CONFLUENT_HOME/bin/zookeeper-server-start -daemon $ZOOKEEPER_CONFIG
sleep 2
# start kafka broker
$CONFLUENT_HOME/bin/kafka-server-start -daemon $KAFKA_CONFIG
sleep 2
# start schema registry
$CONFLUENT_HOME/bin/schema-registry-start -daemon $SCHEMA_REGISTRY_CONFIG
sleep 2
# start kafka connect
$CONFLUENT_HOME/bin/connect-standalone -daemon $CONNECT_CONFIG $CONFLUENT_HOME/etc/kafka/connect-file-sink.properties
sleep 2
while :
do
echo "Confluent Running "
sleep 5
done
docker-compose.yml:
version: "3.9"
services:
  confluent-community:
    build: ./
    environment:
      - KAFKA_CONFIG=$CONFLUENT_HOME/config/server.properties
      - ZOOKEEPER_CONFIG=$CONFLUENT_HOME/config/zookeeper.properties
      - SCHEMA_REGISTRY_CONFIG=$CONFLUENT_HOME/config/schema-registry.properties
      - CONNECT_CONFIG=$CONFLUENT_HOME/config/connect-avro-standalone.properties
      - CONNECT_REST_PORT=8083
      - CONNECT_OFFSETS=$CONFLUENT_HOME/data/connect/connect.offsets
      - CONNECT_KEY_CONVERTER=io.confluent.connect.avro.AvroConverter
      - SCHEMA_REGISTRY_URL=http://localhost:8081
      - CONNECT_VALUE_CONVERTER=io.confluent.connect.avro.AvroConverter
      - SCHEMA_REGISTRY_LISTENER=http://0.0.0.0:8081
      - KAFKASTORE_TOPIC=_schemas
      - SCHEMA_REGISTRY_GROUP_ID=SCHEMA_REGISTRY_A
      - SCHEMA_REGISTRY_LEADER_ELIGIBILITY=true
      - PROTOCOL_BOOTSTRAP_SERVERS=PLAINTEXT://localhost:9092
      - ZOOKEEPER_DATA_DIR=$CONFLUENT_HOME/data/zookeeper
      - ZOOKEEPER_CLIENT_PORT=2181
      - BOOTSTRAP_SERVERS=localhost:9092
      - KAFKA_SERVER_BROKER_ID=0
      - ZOOKEEPER_CONNECT_IP_PORT=localhost:2181
      - KAFKA_SERVER_LOG_DIR=$CONFLUENT_HOME/data/kafka-logs
    # ports:
    #   - "9092:9092"
    #   - "8081:8081"
    #   - "8083:8083"
    network_mode: "host"
    volumes:
      - ~/Documents/confluent/docker-logs:/opt/confluent/logs
      - ~/Documents/confluent/config:/opt/confluent/config
      - ~/Documents/confluent/docker-data:/opt/confluent/data
When you bind-mount configuration files into a container
volumes:
  - ~/Documents/confluent/config:/opt/confluent/config
the files in the container are the files on the host. When your startup script uses envsubst to rewrite the configuration files, there's not a separate copy in the container, so it rewrites the files on the host as well.
If you use a separate directory instead:
volumes:
  - ~/Documents/confluent/config:/opt/confluent/config-templates
Then your script can read the files in that directory, and write to a non-volume directory:
for f in "$CONFLUENT_HOME"/config-templates/*; do
  ff=$(basename "$f")
  envsubst <"$f" >"$CONFLUENT_HOME/config/$ff"
done
(Ideally, run the four processes in four separate containers, without the -daemon option, so that each is the single foreground process in its own container. You shouldn't need to configure any of the filesystem paths or inject them at run time; the *_CONFIG environment variables, for example, can safely be left at their default values, or, if they must be set, set them only in the Dockerfile.)
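A rough sketch of that one-process-per-container layout, reusing the image built from the Dockerfile above; the service split is the point here, and the exact commands and config paths are assumptions rather than values from the question:
version: "3.9"
services:
  zookeeper:
    build: ./
    command: /opt/confluent/bin/zookeeper-server-start /opt/confluent/config/zookeeper.properties
  kafka:
    build: ./
    command: /opt/confluent/bin/kafka-server-start /opt/confluent/config/server.properties
    depends_on:
      - zookeeper
  # schema-registry and connect-standalone would follow the same pattern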
I'm getting started with Docker and following the official Docker documentation.
When I execute the docker-compose run command, only a temp folder gets created and no other folders or files.
Dockerfile
FROM ruby:2.5
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client
RUN mkdir /payment-api
WORKDIR /payment-api
COPY Gemfile /payment-api/Gemfile
COPY Gemfile.lock /payment-api/Gemfile.lock
RUN bundle install
COPY . /payment-api
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0"]
docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
  web:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
The command I'm running to create the Rails app:
docker-compose run web rails new . --force --no-deps --database=postgresql
P.S.: I'm not getting any errors and the commands execute normally, but nothing except a temp folder gets created.
I know it's a bit late, but I faced a similar issue running Docker Toolbox on Windows 10 Home Edition along with VirtualBox. The key here is to allow "shared folder" access in VirtualBox. Navigate to VirtualBox -> Settings -> Shared Folders and add your application path to the shared folders. It should work fine now.
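If you prefer the command line, the same shared folder can be added with VBoxManage; a sketch, assuming Docker Toolbox's usual VM name "default" and an illustrative host path:
docker-machine stop default
VBoxManage sharedfolder add "default" --name "payment-api" --hostpath "C:\path\to\payment-api" --automount
docker-machine start default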
Hope this helps!
I'm trying to bring up my docker-compose project, but I'm getting this error:
ERROR: for indicaaquicombrold_mysqld_1 Cannot start service mysqld:
oci runtime error: container_linux.go:247: starting container process
caused "exec: \"/docker-entrypoint.sh\": permission denied"
ERROR: for mysqld Cannot start service mysqld: oci runtime error:
container_linux.go:247: starting container process caused "exec:
\"/docker-entrypoint.sh\": permission denied"
ERROR: Encountered errors while bringing up the project.
docker-compose.yml
version: '3'
services:
php:
build:
context: ./docker/php
image: indicaaqui.com.br:tag
volumes:
- ./src:/var/www/html/
- ./config/apache-config.conf:/etc/apache2/sites-enabled/000-default.conf
ports:
- "80:80"
- "443:443"
mysqld:
build:
context: ./docker/mysql
environment:
- MYSQL_DATABASE=db_indicaaqui
- MYSQL_USER=indicaqui
- MYSQL_PASSWORD=secret
- MYSQL_ROOT_PASSWORD=docker
volumes:
- ./config/docker-entrypoint.sh:/docker-entrypoint.sh
- ./database/db_indicaaqui.sql:/docker-entrypoint-initdb.d/db_indicaaqui.sql
Dockerfile (php)
FROM php:5.6-apache
MAINTAINER Limup <limup@outlook.com>
CMD [ "php" ]
RUN docker-php-ext-install pdo_mysql
# Enable apache mods.
# RUN a2enmod php5.6
RUN a2enmod rewrite
# Expose apache.
EXPOSE 80
EXPOSE 443
# Use the default production configuration
# RUN mv "$PHP_INI_DIR/php.ini-production" "$PHP_INI_DIR/php.ini"
RUN mv "$PHP_INI_DIR/php.ini-development" "$PHP_INI_DIR/php.ini"
# Override with custom opcache settings
# COPY ./../../config/php.ini $PHP_INI_DIR/conf.d/
# Manually set up the apache environment variables
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
# Update the PHP.ini file, enable <? ?> tags and quieten logging.
RUN sed -i "s/short_open_tag = Off/short_open_tag = On/" "$PHP_INI_DIR/php.ini"
RUN sed -i "s/error_reporting = .*$/error_reporting = E_ERROR | E_WARNING | E_PARSE/" "$PHP_INI_DIR/php.ini"
RUN a2dissite 000-default.conf
RUN chmod -R 777 /etc/apache2/sites-enabled/
WORKDIR /var/www/html/
# By default start up apache in the foreground, override with /bin/bash for interactive use.
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
Dockerfile (Mysql)
FROM mariadb:latest
RUN chmod -R 777 /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
EXPOSE 3306
CMD ["mysqld"]
Please, help me solve this problem!
Any ideas?
That is most likely a Linux file permission issue on config/docker-entrypoint.sh. If your host is Linux/Mac, you can run:
chmod 755 config/docker-entrypoint.sh
For more on Linux permissions, here's a helpful article: https://www.linux.com/learn/understanding-linux-file-permissions
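To double-check on the host that the bit actually got set (chmod 755 should produce an -rwxr-xr-x mode):
ls -l config/docker-entrypoint.sh
# expect something like: -rwxr-xr-x 1 you you 1234 ... config/docker-entrypoint.sh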
First, copy the entrypoint.sh file into a different directory rather than keeping it alongside your source code (e.g. /home/entrypoint.sh), then grant permission to execute the entrypoint script:
RUN ["chmod", "+x", "/home/entrypoint.sh"]
Solution
ENV USER root
ENV WORK_DIR_PATH /home
RUN mkdir -p $WORK_DIR_PATH && chown -R $USER:$USER $WORK_DIR_PATH
WORKDIR $WORK_DIR_PATH
Info
The USER instruction sets the user name (or UID) and optionally the user group (or GID) to use when running the image and for any RUN, CMD and ENTRYPOINT instructions that follow it in the Dockerfile.
The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile. If the WORKDIR doesn’t exist, it will be created even if it’s not used in any subsequent Dockerfile instruction.
Links
chown command
docker builder reference
A pretty common last resort, if nothing else works, is to reinstall Docker. That's what ended up working for me after spending about five hours trying everything under the sun in terms of permissions.
I have a Dockerfile that contains steps that create a directory and run an Angular build script that outputs into that directory. This all seems to run correctly. However, when the container runs, the built files and the directory are not there.
If I run a shell in the image:
docker run -it pnb_web sh
# cd /code/static
# ls
assets favicon.ico index.html main.js main.js.map polyfills.js polyfills.js.map runtime.js runtime.js.map styles.js styles.js.map vendor.js vendor.js.map
If I exec a shell in the container:
docker exec -it ea23c7d30333 sh
# cd /code/static
sh: 1: cd: can't cd to /code/static
# cd /code
# ls
Dockerfile api docker-compose.yml frontend manage.py mysite.log pnb profiles requirements.txt settings.ini web_variables.env
david#lightning:~/Projects/pnb$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ea23c7d30333 pnb_web "python3 manage.py r…" 13 seconds ago Up 13 seconds 0.0.0.0:8000->8000/tcp pnb_web_1_267d3a69ec52
This is my Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash -
RUN apt install nodejs
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
RUN mkdir /code/static
WORKDIR /code/frontend
RUN npm install -g @angular/cli
RUN npm install
RUN ng build --outputPath=/code/static
and associated docker-compose:
version: '3'
services:
  db:
    image: postgres
  web:
    build:
      context: .
      dockerfile: Dockerfile
    working_dir: /code
    env_file:
      - web_variables.env
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
In the second example, the static directory has never been created or built into. I thought that a container is an instance of an image. How can the container be missing files from the image?
You're confusing build time and run time, along with how volumes work.
Remember that a host mount takes priority over the filesystem provided by the image, so even though your built image contains the assets, they are hidden by .services.web.volumes: you're mounting the host directory over /code, which masks the build result.
If you avoid mounting that volume, you'll notice that everything works as expected.
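If the bind mount is still wanted for live code reloads, one common workaround is to shadow just the built directory with an anonymous volume; a sketch based on the compose file from the question, where the extra volume line is an assumption added for illustration (it is seeded from the image's /code/static only when the volume is first created):
web:
  build:
    context: .
    dockerfile: Dockerfile
  command: python3 manage.py runserver 0.0.0.0:8000
  volumes:
    - .:/code
    - /code/static   # anonymous volume: keeps the image's built assets visible under the bind mount
  ports:
    - "8000:8000"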