Can anybody help me? This docker-compose file worked for me a few days ago with the docker command available inside the container, but now it throws docker: not found inside the container.
The Docker daemon on the host is at /usr/local/bin/docker. It's a Mac.
Any ideas? Could you try this on your machines? Thanks.
version: '3'
services:
  jenkins:
    container_name: jenkins
    image: jenkins
    build:
      context: jenkins
    # entrypoint: /var/jenkins_home/entrypoint
    ports:
      - "8080:8080"
    volumes:
      - $PWD/jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - AWS_ACCESS_KEY_ID=xxxxx
      - AWS_SECRET_ACCESS_KEY=xxxxx
    networks:
      - net
  remote_host:
    container_name: remote-host
    image: remote-host
    build:
      context: centos
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - net
  db_host:
    container_name: db
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=1234
    networks:
      - net
networks:
  net:
The Dockerfile for the remote_host service is the following:
FROM centos
RUN yum install -y openssh-server
RUN useradd remote_user && \
echo "1234" | passwd remote_user --stdin && \
mkdir /home/remote_user/.ssh && \
chmod 700 /home/remote_user/.ssh
COPY remote-key.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user && \
chmod 600 /home/remote_user/.ssh/authorized_keys
RUN /usr/sbin/sshd-keygen > /dev/null 2>&1
RUN yum install -y mysql
RUN yum install -y epel-release && \
yum install -y python-pip && \
pip install --upgrade pip && \
pip install awscli
# CMD /usr/sbin/sshd-keygen -D
CMD tail -f /dev/null
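As far as I understand, mounting /var/run/docker.sock only shares the daemon socket with the container; the docker client binary still has to exist inside the image built from the jenkins context. A rough sketch of what that Dockerfile would need (the CLI version number here is only an example):
USER root
RUN curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-24.0.7.tgz -o /tmp/docker.tgz \
 && tar -xzf /tmp/docker.tgz -C /tmp \
 && mv /tmp/docker/docker /usr/local/bin/docker \
 && rm -rf /tmp/docker /tmp/docker.tgz
USER jenkins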
I'm following the MariaDB docs. They say that the database should be created if a .sql file is found in /docker-entrypoint-initdb.d.
I'm working on an Ubuntu Server in an Oracle VirtualBox VM.
My docker-compose.yml looks like this:
version: "3.9"
services:
db:
image: mariadb:10
container_name: mariadb
ports:
- 3306:3306
environment:
- MYSQL_USER=user
- MYSQL_ROOT_PASSWORD=password
- MYSQL_PASSWORD=password
- MARIADB_DATABASE=database // tried with MYSQL_DATABASE and without this line
volumes:
- "db_data:/var/lib/mysql"
- ".database/initdb/dump.sql:/docker-entrypoint-initdb.d/initdb.sql"
# networks:
# - network
volumes:
db_data:
My initdb.sql looks like this (the one that should work in the end looks different, but for simplicity I reduced it to the minimum and could not even get this simple one working):
CREATE DATABASE NEWDB;
I honestly don't know where to look or what to do now, because everywhere I looked for a possible solution I found that this is the bare minimum example that should work.
I tried restarting Docker, deleted all containers, images and volumes, and modified initdb.sql into:
CREATE USER user WITH PASSWORD 'password';
CREATE DATABASE IF NOT EXISTS database;
GRANT ALL PRIVILEGES ON DATABASE database TO user;
but the database is not initialized when I run docker compose up.
I looked inside the container and initdb.sql was there.
EDIT: It somehow worked when I ran docker compose up with MARIADB_DATABASE=database, but the initdb.sql script still doesn't run, and that's the most important part because it sets up the whole database.
(NOTE: On top of that I want to set up another PHP container that runs a PHP script to collect data that is stored in the MariaDB container above. The MariaDB instance is connected to a website that reads data from the container.)
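From the image documentation, the scripts in /docker-entrypoint-initdb.d are only executed while the data directory is being initialized for the first time; once the db_data volume already contains data they are skipped. So to retrigger them I would have to recreate the volume, roughly like this (this wipes the database):
docker compose down -v   # removes the containers and the db_data volume
docker compose up        # fresh volume, so the init scripts run again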
Well, I'm using the following stack and it works fine for me.
php-apache:
This is an Apache server that runs all my PHP scripts. You can place your scripts in the ./src directory and they will automatically be mounted into the DocumentRoot directory of the Apache server.
db:
This is the latest Docker image of MariaDB.
adminer:
This is a lightweight database browser which I use for creating and altering my databases. You can just visit localhost:8081 and then enter the following credentials. It makes managing the databases much simpler.
username: root
password: example
version: '3.8'
services:
  php-apache:
    container_name: php-apache
    build:
      context: .
      dockerfile: Dockerfile
    image: php:8.0-apache
    volumes:
      - ./src:/var/www/html/
    ports:
      - 8080:80
  db:
    image: mariadb
    restart: always
    environment:
      MARIADB_ROOT_PASSWORD: example
  adminer:
    image: adminer
    restart: always
    ports:
      - 8081:8080
Dockerfile:
This is a simple Docker image extended from the base php:8.0-apache image, with the MySQL extensions installed in it for PDO support.
FROM php:8.0-apache
RUN docker-php-ext-install pdo_mysql
RUN docker-php-ext-install mysqli && docker-php-ext-enable mysqli
RUN apt-get update && apt-get upgrade -y
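To build and start all three services (a minimal sketch, assuming the two files above are saved as docker-compose.yml and Dockerfile in the same directory):
docker compose up -d --build   # older installs: docker-compose up -d --build
After that the PHP scripts are served on localhost:8080 and Adminer on localhost:8081, as described above.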
P.S.:
Here you'll have to create all your databases manually via the Adminer GUI. But if you prefer SQL queries via an initdb.sql, be my guest. I've just provided this configuration as a suggestion.
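If you'd rather stay on the command line, the same thing can be done with the MariaDB client inside the db container (a sketch; mydb is just a placeholder name, and example is the root password from the compose file above):
docker compose exec db mariadb -uroot -pexample -e "CREATE DATABASE IF NOT EXISTS mydb;"   # 'mysql' instead of 'mariadb' also works on older images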
I came up with a solution. I used a Laravel base setup (I installed the Laravel project with curl -s "https://laravel.build/project-name?with=mariadb" | sudo bash) and modified it a little bit. So here's the docker-compose.yml:
# For more information: https://laravel.com/docs/sail
version: '3'
services:
  laravel.test:
    build:
      context: ./vendor/laravel/sail/runtimes/8.1
      dockerfile: Dockerfile
      args:
        WWWGROUP: '${WWWGROUP}'
    image: sail-8.1/app
    extra_hosts:
      - 'host.docker.internal:host-gateway'
    ports:
      - '${APP_PORT:-80}:80'
      - '${VITE_PORT:-5173}:${VITE_PORT:-5173}'
    environment:
      WWWUSER: '${WWWUSER}'
      LARAVEL_SAIL: 1
      XDEBUG_MODE: '${SAIL_XDEBUG_MODE:-off}'
      XDEBUG_CONFIG: '${SAIL_XDEBUG_CONFIG:-client_host=host.docker.internal}'
    volumes:
      - '.:/var/www/html'
    networks:
      - sail
    depends_on:
      - mariadb
  mariadb:
    image: 'mariadb:10'
    container_name: 'mariadb-10'
    ports:
      - '${FORWARD_DB_PORT:-3306}:3306'
    environment:
      MYSQL_ROOT_PASSWORD: '${DB_PASSWORD}'
      MYSQL_ROOT_HOST: "%"
      MYSQL_DATABASE: '${DB_DATABASE}'
      MYSQL_USER: '${DB_USERNAME}'
      MYSQL_PASSWORD: '${DB_PASSWORD}'
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    volumes:
      - 'sail-mariadb:/var/lib/mysql'
      - './vendor/laravel/sail/database/mysql/create-testing-database.sh:/docker-entrypoint-initdb.d/10-create-testing-database.sh'
    networks:
      - sail
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-p${DB_PASSWORD}"]
      retries: 3
      timeout: 5s
networks:
  sail:
    driver: bridge
volumes:
  sail-mariadb:
    driver: local
Here you can see that the "10-create-testing-database.sh" is executed on startup. I tested this container and it created a database, so I just had to modify it a little bit and now the container creates a database and tables on container startup. Here's the "10-create-testing-database.sh":
#!/usr/bin/env bash
mysql --user=root --password="$MYSQL_ROOT_PASSWORD" <<-EOSQL
CREATE DATABASE IF NOT EXISTS database_name;
GRANT ALL PRIVILEGES ON \`testing%\`.* TO '$MYSQL_USER'@'%';
USE database_name;
CREATE TABLE IF NOT EXISTS table_name(
table_entries ...
);
EOSQL
I still don't know why my initial setup did not work. The only difference I see is that the working file is a .sh and the non-working one is a .sql (this does not make sense to me, but it is what it is).
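In case it helps anyone debugging the same thing: the entrypoint logs each file it picks up from /docker-entrypoint-initdb.d, so the container logs show whether a .sql or .sh script was ever executed (mariadb is the service name from the compose file above):
docker compose logs mariadb | grep initdb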
Dockerfile:
FROM ubuntu:22.04
LABEL maintainer="Taylor Otwell"
ARG WWWGROUP
ARG NODE_VERSION=16
ARG POSTGRES_VERSION=14
WORKDIR /var/www/html
ENV DEBIAN_FRONTEND noninteractive
ENV TZ=UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update \
&& apt-get install -y gnupg gosu curl ca-certificates zip unzip git supervisor sqlite3 libcap2-bin libpng-dev python2 \
&& mkdir -p ~/.gnupg \
&& chmod 600 ~/.gnupg \
&& echo "disable-ipv6" >> ~/.gnupg/dirmngr.conf \
&& echo "keyserver hkp://keyserver.ubuntu.com:80" >> ~/.gnupg/dirmngr.conf \
&& gpg --recv-key 0x14aa40ec0831756756d7f66c4f4ea0aae5267a6c \
&& gpg --export 0x14aa40ec0831756756d7f66c4f4ea0aae5267a6c > /usr/share/keyrings/ppa_ondrej_php.gpg \
&& echo "deb [signed-by=/usr/share/keyrings/ppa_ondrej_php.gpg] https://ppa.launchpadcontent.net/ondrej/php/ubuntu jammy main" > /etc/apt/sources.list.d/ppa_ondrej_php.list \
&& apt-get update \
&& apt-get install -y php8.1-cli php8.1-dev \
php8.1-pgsql php8.1-sqlite3 php8.1-gd \
php8.1-curl \
php8.1-imap php8.1-mysql php8.1-mbstring \
php8.1-xml php8.1-zip php8.1-bcmath php8.1-soap \
php8.1-intl php8.1-readline \
php8.1-ldap \
php8.1-msgpack php8.1-igbinary php8.1-redis php8.1-swoole \
php8.1-memcached php8.1-pcov php8.1-xdebug \
&& php -r "readfile('https://getcomposer.org/installer');" | php -- --install-dir=/usr/bin/ --filename=composer \
&& curl -sLS https://deb.nodesource.com/setup_$NODE_VERSION.x | bash - \
&& apt-get install -y nodejs \
&& npm install -g npm \
&& curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | gpg --dearmor | tee /usr/share/keyrings/yarn.gpg >/dev/null \
&& echo "deb [signed-by=/usr/share/keyrings/yarn.gpg] https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list \
&& curl -sS https://www.postgresql.org/media/keys/ACCC4CF8.asc | gpg --dearmor | tee /usr/share/keyrings/pgdg.gpg >/dev/null \
&& echo "deb [signed-by=/usr/share/keyrings/pgdg.gpg] http://apt.postgresql.org/pub/repos/apt jammy-pgdg main" > /etc/apt/sources.list.d/pgdg.list \
&& apt-get update \
&& apt-get install -y yarn \
&& apt-get install -y mysql-client \
&& apt-get install -y postgresql-client-$POSTGRES_VERSION \
&& apt-get -y autoremove \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN setcap "cap_net_bind_service=+ep" /usr/bin/php8.1
RUN groupadd --force -g $WWWGROUP sail
RUN useradd -ms /bin/bash --no-user-group -g $WWWGROUP -u 1337 sail
COPY start-container /usr/local/bin/start-container
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY php.ini /etc/php/8.1/cli/conf.d/99-sail.ini
RUN chmod +x /usr/local/bin/start-container
EXPOSE 8000
ENTRYPOINT ["start-container"]
I can access the target with the SSH password and with the private key from the Jenkins bash. I configured SSH Sites on Jenkins with the same host, user and private key, and I get the following error:
Docker logs:
2022-09-23 05:06:52.357+0000 [id=71] SEVERE o.j.h.p.SSHBuildWrapper$DescriptorImpl#doLoginCheck: Auth fail
2022-09-23 05:06:52.367+0000 [id=71] SEVERE o.j.h.p.SSHBuildWrapper$DescriptorImpl#doLoginCheck: Can't connect to server
Docker-compose:
version: '3'
services:
  jenkins:
    container_name: jenkins
    image: jenkins/jenkins
    ports:
      - "8080:8080"
    volumes:
      - $PWD/jenkins_home:/var/jenkins_home
    networks:
      - net
  remote_host:
    container_name: remote-host
    image: remote-host
    build:
      context: fedora
      dockerfile: Dockerfile
    networks:
      - net
  db_host:
    container_name: db
    image: mysql:5.7
    environment:
      - "MYSQL_ROOT_PASSWORD=PASSWORD"
    volumes:
      - "$PWD/db:/var/lib/mysql"
    networks:
      - net
networks:
  net:
Dockerfile:
FROM fedora
RUN yum update -y
RUN yum -y install unzip
RUN yum -y install openssh-server
RUN useradd RemoteUser && \
echo "RemoteUser:Password"| chpasswd && \
mkdir /home/madchabelo/.ssh && \
chmod 700 /home/madchabelo/.ssh
COPY remote-ki.pub /home/madchabelo/.ssh/authorized_keys
RUN chown madchabelo:madchabelo -R /home/madchabelo/.ssh/ && \
chmod 600 /home/madchabelo/.ssh/authorized_keys
RUN ssh-keygen -A
RUN yum -y install mysql
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && \
unzip awscliv2.zip && \
sudo ./aws/install
RUN yum -y install vim
CMD /usr/sbin/sshd -D
I tried with the IP and I get the same error.
Regards
When creating the private key, you should generate it with the following command (on Ubuntu 20.04 and later):
ssh-keygen -t ecdsa -m PEM -f remote-key
For a more detailed explanation, see the link below:
https://community.jenkins.io/t/ssh-connection-auth-fail/4121/7
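In practice (a sketch based on the file names used in the question; adjust the paths to your layout) that means regenerating the key pair, putting the public key where the Dockerfile's COPY expects it, and rebuilding:
ssh-keygen -t ecdsa -m PEM -f remote-key
cp remote-key.pub fedora/remote-ki.pub   # the build context and file name from the Dockerfile above
docker-compose build remote_host
Then configure the new remote-key private key in the Jenkins SSH Sites settings.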
I have a development environment where I run npm run serve in my local terminal and then docker-compose up -d in a different terminal to run the services I need to start my system.
I have an instance where I am attempting to run front-end tests inside a running container using NightwatchJS, and for some reason the test runner is not seeing the files served by npm run serve. When I capture a screenshot from the test runner, the page looks as if I had stopped npm run serve; however, when I open 127.0.0.1 in my browser, everything loads as usual.
I think my issue is that the test is being run inside of a docker container like so:
docker-compose exec web bash -c "npx nightwatch ...file"
where that container is not running npm run serve, but I am confused as to why it works when I open the page in my browser. I have tried exposing ports in the Dockerfile, but that does not work.
Can anybody point me in the right direction?
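My assumption, not yet verified: inside the web container, 127.0.0.1 is the container itself rather than my machine, so the dev server started by npm run serve would only be reachable through the host (e.g. host.docker.internal on Docker Desktop, or via an extra_hosts: "host.docker.internal:host-gateway" entry on Linux). A quick reachability check from inside the container, with 8080 standing in for whatever port npm run serve actually uses:
docker-compose exec web curl -I http://host.docker.internal:8080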
Here is my Dockerfile:
FROM python:3.8.5-slim-buster
# the first 2 prevent Python from writing out pyc files or from buffering stdin/stdout
# the others are Node
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV NVM_DIR /usr/local/nvm
ENV NODE_VERSION 12.7.0
ENV NODE_PATH $NVM_DIR/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
# the man1 directory is not present for slim-buster so we add that and then install all of the default system based dependencies
# NOTE...TOP LAYERS ARE CACHED FIRST!!!!
RUN mkdir -p /usr/share/man/man1 \
&& apt-get clean && apt-get update -y && apt-get install pdftk-java curl git -y \
&& curl --silent -o- https://raw.githubusercontent.com/creationix/nvm/v0.31.2/install.sh | bash \
&& apt-get install zlib1g-dev libjpeg-dev python3-pythonmagick inkscape xvfb poppler-utils libfile-mimeinfo-perl qpdf libimage-exiftool-perl ufraw-batch ffmpeg gcc procps -y \
&& apt-get clean && apt-get autoclean
# SELENIUM
# get wget...
# Adding trusting keys to apt for repositories
RUN apt-get install gnupg -y && apt-get install wget -y \
&& wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list' \
&& apt-get update -y \
&& apt-get install google-chrome-stable -y \
&& apt-get install unzip -yqq
# Set up Chromedriver Env Vars
ENV CHROMEDRIVER_VERSION 87.0.4280.20
ENV CHROMEDRIVER_DIR /chromedriver
# make directory for it...
RUN mkdir $CHROMEDRIVER_DIR
# Download and install Chromedriver
RUN wget -q --continue -P $CHROMEDRIVER_DIR "http://chromedriver.storage.googleapis.com/$CHROMEDRIVER_VERSION/chromedriver_linux64.zip" \
&& unzip $CHROMEDRIVER_DIR/chromedriver* -d $CHROMEDRIVER_DIR \
&& rm "$CHROMEDRIVER_DIR/chromedriver_linux64.zip"
# Put Chromedriver into the PATH
ENV PATH $CHROMEDRIVER_DIR:$PATH
# Set display port as an environment variable
ENV DISPLAY=:99
# SELENIUM
## NIGHTMARE
#RUN apt-get install wget -y && wget http://selenium-release.storage.googleapis.com/2.44/selenium-server-standalone-2.44.0.jar -P /bin/
#RUN apt install default-jre -y
#RUN apt-get install -y xvfb x11-xkb-utils xfonts-100dpi xfonts-75dpi xfonts-scalable xfonts-cyrillic x11-apps clang libdbus-1-dev libgtk2.0-dev libnotify-dev libgconf2-dev libasound2-dev libcap-dev libcups2-dev libxtst-dev libxss1 libnss3-dev gcc-multilib g++-multilib
# ensure node is installed, and at the end, make the working directory
RUN . $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default \
&& mkdir /code
# set working directory to /code...it was just made for this purpose
WORKDIR /code
# possible that these will cache so separate them from COPY . /code/
COPY requirements.txt /code/
# now install, this will normally also cache
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
# place this at the end because the code will always change...this will almost never cache...
COPY . /code/
EXPOSE 8001
EXPOSE 8888
Here is my compose file:
version: '3.4'
services:
  redis:
    image: redis
    ports:
      - "6379"
    restart: unless-stopped
    networks:
      main:
        aliases:
          - redis
  postgres:
    image: postgres:12
    ports:
      - "5432:5432"
    env_file: ./.env
    restart: unless-stopped
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      main:
        aliases:
          - postgres
  # access by going to localhost:16543
  # when adding a server to the serve list
  # the hostname is postgres
  # the username is postgres
  # the password is postgres
  pgadmin:
    image: dpage/pgadmin4
    links:
      - postgres
    depends_on:
      - postgres
    env_file: ./.env
    restart: unless-stopped
    ports:
      - "16543:80"
    networks:
      main:
        aliases:
          - pgadmin
  celery:
    build:
      network: host
      context: .
      dockerfile: Dockerfile-dev # use docker-dev because production npm installs and npm builds
    command: python manage.py celery
    env_file: ./.env
    restart: unless-stopped
    volumes:
      - .:/code
      - tmp:/tmp
    links:
      - redis
    depends_on:
      - redis
    networks:
      main:
        aliases:
          - celery
  web:
    build:
      network: host
      context: .
      dockerfile: Dockerfile-dev
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
      - tmp:/tmp
    ports:
      - "8000:8000"
    env_file: ./.env
    restart: unless-stopped
    links:
      - postgres
      - redis
      - celery
      - pgadmin
    depends_on:
      - postgres
      - redis
      - celery
      - pgadmin
    networks:
      main:
        aliases:
          - web
volumes:
  pgdata:
  tmp:
networks:
  main:
I was trying to use Docker for my Flask backend. I've written a Dockerfile that uses python:3.8 as build-python and then installs all packages from the requirements.txt file. I added a PostgreSQL database in the docker-compose.yml file. When I put this command in the terminal:
sudo docker-compose run
it shows api_1 | Running on http://127.0.0.1:5000/ (Press CTRL+C to quit), which looks good. But I can't access it from my host PC; it shows "This site can't be reached".
Here is my docker-compose.yml file
version: "2"
services:
api:
ports:
- 5000:5000
build:
context: ./mymeds
dockerfile: ./Dockerfile
restart: unless-stopped
networks:
- mymeds-backend-tier
depends_on:
- db
volumes:
- ./mymeds/app/:/app/app:Z
command: python manage.py run
env_file: common.env
db:
image: library/postgres:11.1-alpine
ports:
- 5432:5432
restart: unless-stopped
networks:
- mymeds-backend-tier
volumes:
- mymeds-db:/var/lib/postgresql
environment:
- POSTGRES_USER=user
- POSTGRES_PASSWORD=user123
volumes:
mymeds-db:
driver: local
networks:
mymeds-backend-tier:
driver: bridge
Here is my Dockerfile for the Flask backend:
FROM python:3.8 as build-python
RUN apt-get -y update \
&& apt-get install -y gettext \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
COPY requirements.txt /app/
WORKDIR /app
RUN pip install -r requirements.txt
FROM python:3.8-slim
RUN apt-get update \
&& apt-get install -y \
libxml2 \
libssl1.1 \
libcairo2 \
libpango-1.0-0 \
libpangocairo-1.0-0 \
libgdk-pixbuf2.0-0 \
shared-mime-info \
mime-support \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
COPY . /app
COPY --from=build-python /usr/local/lib/python3.8/site-packages/ /usr/local/lib/python3.8/site-packages/
COPY --from=build-python /usr/local/bin/ /usr/local/bin/
WORKDIR /app
EXPOSE 5000
ENV PORT 5000
Flask must be bound to 0.0.0.0, not to 127.0.0.1.
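How exactly you pass that depends on how manage.py starts the app, but as a sketch: if it goes through the Flask CLI, the compose command could be something like
flask run --host=0.0.0.0 --port=5000
and if it calls app.run() directly, pass host="0.0.0.0" there instead. Then the published port 5000:5000 will be reachable from the host.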
I have nginx and php-fpm containers.
When I'm in my PHP container in a project and I execute a command that takes time (like vendor/bin/behat or composer update) and I press CTRL+C, I'm ejected from the container. I don't know why. When I press CTRL+C without executing a command, I don't have the problem.
Any idea?
This is my docker-compose.yml file:
version: '3'
services:
  nginx:
    image: nginx:latest
    restart: always
    ports:
      - "80:80"
    volumes:
      - ./nginx/conf:/etc/nginx/custom_conf
      - ./nginx/hosts:/etc/nginx/conf.d/
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./logs/nginx:/var/log/nginx
      - ..:/var/www
    networks:
      my_network:
        ipv4_address: 10.5.0.31
  web:
    build: .
    restart: always
    ports:
      - "9000:9000"
      - "5001:5001"
    volumes:
      - ./php/php.ini:/usr/local/etc/php/conf.d/30-php.ini
      - ./php/app2.conf:/usr/local/etc/php/conf.d/app2.conf
      - ./keys/:/var/www/.ssh
      - ./custom-hosts:/etc/custom-hosts
      - ..:/var/www
      - ./supervisor/supervisord.conf:/etc/supervisor/supervisord.conf
      - ./supervisor/conf/:/etc/supervisor/conf.d/
    networks:
      my_network:
        ipv4_address: 10.5.0.20
    tty: true
  db:
    build: mysql
    restart: always
    ports:
      - "3306:3306"
    volumes:
      - ./logs/mysql:/var/log/mysqld.log
      - ./mysql/sql:/var/dumps
      - data:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_USER=root
      - MYSQL_PASSWORD=root
    networks:
      my_network:
        ipv4_address: 10.5.0.23
volumes:
  data:
    driver: local
networks:
  my_network:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
My php-fpm Dockerfile:
FROM php:7.1-fpm
WORKDIR /var/www
RUN apt-get update && apt-get install -y wget git vim sudo unzip apt-utils
RUN apt-get install -y gnupg
RUN apt-get update
### composer
RUN cd /usr/src
RUN curl -sS https://getcomposer.org/installer | sudo php -- --install-dir=/usr/local/bin --filename=composer
# xdebug
RUN pecl install xdebug-2.5.0 \
&& docker-php-ext-enable xdebug
### php extension
RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
RUN apt-get clean && apt-get update && apt-get -y --fix-missing install libfreetype6-dev \
libjpeg62-turbo-dev \
libmcrypt-dev \
libpng-dev \
libicu-dev \
libxml2-dev \
g++ \
zlib1g-dev
RUN docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/
RUN docker-php-ext-install -j$(nproc) gd
RUN docker-php-ext-install exif
RUN docker-php-ext-install pdo_mysql
RUN docker-php-ext-configure intl
RUN docker-php-ext-install intl
RUN apt-get install -y libzip-dev
RUN docker-php-ext-install zip
### main
RUN usermod -u 1000 www-data
RUN chmod -R 777 /var/www/
RUN chown -R www-data:www-data /var/www
ADD bash_profile /var/www/.bash_profile
ADD script.sh /usr/bin/script.sh
RUN chmod 755 /usr/bin/script.sh
CMD ["bin/bash"]
ENTRYPOINT ["script.sh"]
EXPOSE 9000
And my script.sh:
#! /bin/bash
php-fpm &
echo "Serveur de développement Cartesia Education"
cat /etc/custom-hosts >> /etc/hosts
dpkg-reconfigure -f noninteractive tzdata
echo "LC_TIME=fr_FR.utf8" >> /etc/environment
service supervisor start
exec su -l www-data -s /bin/bash
Thank you for your help.
Have you tried running the container in detached mode (-d option)?
> docker run -d [IMAGE-NAME]
This will cause the container to run in the background. You can still get a shell inside the running container with:
> docker exec -it [CONTAINER-NAME] bash
Exiting the container once you are in will not terminate it.
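Since the setup above uses docker-compose, the equivalent (just a sketch, using the web service name from the compose file) would be:
> docker-compose up -d           # start everything in the background
> docker-compose exec web bash   # open a shell in the php-fpm container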