I have a Dockerfile that creates an image with Apache/PHP and Redis inside.
I am aware that it should be split into two containers, but I want to know whether it is possible to start both Apache and Redis during the run process.
For now I can run it in two different ways:
docker run --rm -p 80:80 -p 6379:6379 -v $MY_FULLPATH:/var/www/html -e REMOTE_HOST=$REMOTE_HOST my_img redis-server
docker run --rm -p 80:80 -p 6379:6379 -v $MY_FULLPATH:/var/www/html -e REMOTE_HOST=$REMOTE_HOST my_img apache2-foreground
If I run it the first way, I must open a terminal and start Apache manually.
If I run it the second way, I must start Redis manually.
From the documentation: "If you list more than one CMD then only the last CMD will take effect." So I know that only redis-server will be running at the start.
So, is there a way to start both automatically?
This is my Dockerfile:
FROM php:5-apache
## Update apt-get
RUN apt-get update
RUN apt-get install -y figlet
RUN figlet MV_Docker_Build
## UTILITIES
RUN figlet vim
RUN apt-get install -y vim
RUN figlet wget
RUN apt-get install -y wget
RUN figlet CURL
RUN apt-get install -y curl
## APACHE2 basic installation
RUN figlet APACHE2
RUN apachectl -M
RUN a2enmod rewrite
RUN a2enmod expires
RUN service apache2 restart
RUN apachectl -M
## ====================================================================== > PHP modules
## Note: when installing from php5, for some modules we need to copy from php5/mods-available to local/etc/php/conf.d and create a symbolic link
RUN figlet PHP_MODULES
RUN php -m
RUN apt-get install -y php5-common
RUN apt-get install -y php-calendar
#RUN cp /etc/php5/mods-available/calendar.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/calendar.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/calendar.so
#RUN docker-php-ext-install calendar
RUN docker-php-ext-install bcmath
RUN apt-get install -y php5-mhash
#RUN cp /etc/php5/mods-available/mhash.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/mhash.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/mhash.so
RUN apt-get install -y php5-intl
RUN cp /etc/php5/mods-available/intl.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/intl.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/intl.so
RUN apt-get install -y php5-mcrypt
RUN cp /etc/php5/mods-available/mcrypt.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/mcrypt.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/mcrypt.so
RUN apt-get install -y php5-redis
RUN cp /etc/php5/mods-available/redis.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/redis.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/redis.so
RUN apt-get install -y php5-mysql
RUN cp /etc/php5/mods-available/mysql.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/mysql.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/mysql.so
RUN cp /etc/php5/mods-available/opcache.ini /usr/local/etc/php/conf.d
RUN apt-get install -y php5-gd
RUN cp /etc/php5/mods-available/gd.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/gd.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/gd.so
RUN apt-get install -y php5-gdcm
RUN cp /etc/php5/mods-available/gdcm.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/gdcm.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/gdcm.so
RUN apt-get install -y php5-vtkgdcm
RUN cp /etc/php5/mods-available/vtkgdcm.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/vtkgdcm.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/vtkgdcm.so
RUN apt-get install -y php5-ldap
RUN cp /etc/php5/mods-available/ldap.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/ldap.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/ldap.so
RUN apt-get install -y php5-xsl
RUN cp /etc/php5/mods-available/xsl.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/xsl.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/xsl.so
RUN apt-get install -y php5-tidy
RUN cp /etc/php5/mods-available/tidy.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/tidy.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/tidy.so
RUN apt-get install -y php5-xmlrpc
RUN cp /etc/php5/mods-available/xmlrpc.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/xmlrpc.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/xmlrpc.so
RUN apt-get install -y php5-pgsql
RUN cp /etc/php5/mods-available/pgsql.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/pgsql.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/pgsql.so
RUN cp /etc/php5/mods-available/mysqli.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/mysqli.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/mysqli.so
RUN cp /etc/php5/mods-available/pdo.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/pdo.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/pdo.so
RUN cp /etc/php5/mods-available/pdo_mysql.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/pdo_mysql.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/pdo_mysql.so
RUN cp /etc/php5/mods-available/pdo_pgsql.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/pdo_pgsql.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/pdo_pgsql.so
RUN cp /etc/php5/mods-available/readline.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/readline.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/readline.so
#RUN apt-get install -y php5-snmp
#RUN cp /etc/php5/mods-available/snmp.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/snmp.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/snmp.so
RUN figlet PHP_MODULES
RUN php -m
## ====================================================================== > End of PHP modules
## ====================================================================== > REDIS
RUN figlet REDIS
RUN apt-get install -y telnet redis-server
RUN apt-get install -y redis-server
## ====================================================================== > NPM
RUN figlet NPM
RUN apt-get install -y npm
## ====================================================================== > COPYING php.ini
RUN figlet COPYING__php.ini
RUN cp /etc/php5/cli/php.ini /usr/local/etc/php/
RUN ls -l /usr/local/etc/
## ====================================================================== > XDEBUG
# XDEBUG EXTENSION FOR PHP | DOCUMENTATION => https://xdebug.org/docs/remote
#
# install xdebug and enable it. This block goes through the install-from-source and compile steps found
# on the xdebug website
# https://xdebug.org/docs/install
RUN figlet INSTALLING__XDEBUG
RUN cd /tmp \
&& wget http://xdebug.org/files/xdebug-2.5.4.tgz \
&& tar -xvzf xdebug-2.5.4.tgz \
&& cd xdebug-2.5.4 \
&& phpize \
&& ./configure \
&& make \
&& cp modules/xdebug.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/
RUN figlet INSIDE_no-debug-non-zts-20131226/
RUN ls -l /usr/local/lib/php/extensions/no-debug-non-zts-20131226/
#https://stackoverflow.com/questions/47596381/how-to-setup-an-variable-env-inside-dockerfile-to-be-overriden-in-a-docker-conta?noredirect=1#comment82150863_47596381
# ADD xdebug configurations
RUN figlet SETTING__XDEBUG__php.ini
RUN { \
echo '[xdebug]'; \
echo 'zend_extension=/usr/local/lib/php/extensions/no-debug-non-zts-20131226/xdebug.so'; \
echo 'xdebug.remote_enable=1'; \
echo 'xdebug.remote_port=9000'; \
echo 'xdebug.remote_autostart=1'; \
echo 'xdebug.remote_handler=dbgp'; \
echo 'xdebug.idekey=dockerdebug'; \
echo 'xdebug.profiler_output_dir="/var/www/html"'; \
echo 'xdebug.remote_connect_back=0'; \
echo 'xdebug.remote_host=$REMOTE_HOST'; \
} >> /usr/local/etc/php/php.ini
RUN figlet XDEBUG__IN__php.ini
RUN cat /usr/local/etc/php/php.ini
## ====================================================================== > COMPOSER
RUN figlet Escape_SUDO
RUN exit
RUN figlet Install__COMPOSER
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" \
&& php -r "if (hash_file('SHA384', 'composer-setup.php') === '544e09ee996cdf60ece3804abc52599c22b1f40f4323403c44d44fdfdd586475ca9813a858088ffbc1f233e9b180f061') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;" \
&& php composer-setup.php \
&& php -r "unlink('composer-setup.php');" \
&& mv composer.phar /usr/bin/composer
RUN composer
## ====================================================================== > PhpUnit
RUN figlet PhpUnit
RUN curl https://phar.phpunit.de/phpunit-5.6.0.phar -L -o phpunit.phar
RUN chmod +x phpunit.phar
RUN mv phpunit.phar /usr/local/bin/phpunit
RUN figlet COPYING_entrypoint.sh
COPY entrypoint.sh /usr/local/bin/
RUN figlet Permission_entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
ENTRYPOINT [ "entrypoint.sh" ]
# EXPOSE - PORTS
RUN figlet EXPOSE_PORTS
EXPOSE 80
#EXPOSE 6379
EXPOSE 9000
#CMD ["apache2-foreground","redis-server"]
#ADD run.sh /run.sh
COPY run.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/run.sh
CMD ["/bin/sh", "-c", "/run.sh"]
This is the run.sh:
#!/usr/bin/env bash
exec apache2-foreground &
exec redis-server &
This is the entrypoint.sh:
#!/bin/bash
set -e
# Check if our environment variable has been passed.
if [ -z "${REMOTE_HOST}" ]
then
echo "REMOTE_HOST has not been set."
exit 1
else
sed -i.bak "s/\$REMOTE_HOST/${REMOTE_HOST}/g" /usr/local/etc/php/php.ini
fi
exec "$#"
You can start more than one process in a couple of ways:
Start them as a service
Start them through a cron job (@reboot)
Start processes in the background (a minimal sketch follows)
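A minimal sketch of the third option, assuming both binaries are on the PATH: run Redis in the background and keep Apache in the foreground as the container's main process:
#!/bin/bash
# redis in the background, Apache in the foreground as PID 1
redis-server &
exec apache2-foreground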
UPDATE after your Dockerfile post
Before I try to answer, a couple of pointers:
Every RUN command in a Dockerfile creates a new layer, which makes the image bigger and the build slower (see the sketch after these pointers).
This container clearly tries to do too much. A container should do one thing and do it well.
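To illustrate the first pointer (package names taken from your Dockerfile; treat this as a sketch, not a drop-in replacement), many of those RUN lines can be collapsed into a single layer:
# one layer instead of a dozen; clean the apt lists to keep the image small
RUN apt-get update \
 && apt-get install -y figlet vim wget curl telnet redis-server npm \
 && rm -rf /var/lib/apt/lists/*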
Having said that, I think I have a solution :-)
Remove the run.sh.
Change your entrypoint to this:
#!/bin/bash
set -e
# Check if our environment variable has been passed.
if [ -z "${REMOTE_HOST}" ]
then
echo "REMOTE_HOST has not been set."
exit 1
else
sed -i.bak "s/\$REMOTE_HOST/${REMOTE_HOST}/g" /usr/local/etc/php/php.ini
fi
echo "Starting redis"
exec redis-server &
exec "$#"
and change the end of your Dockerfile to this:
RUN figlet EXPOSE_PORTS
EXPOSE 80
#EXPOSE 6379
EXPOSE 9000
CMD ["apache2-foreground"]
rebuild and have fun :-)
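After rebuilding, a single docker run (reusing the flags from your question) starts both services:
docker run --rm -p 80:80 -p 6379:6379 \
  -v $MY_FULLPATH:/var/www/html \
  -e REMOTE_HOST=$REMOTE_HOST \
  my_img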
Screenshot of my running console
Related
I installed FreeSWITCH and FusionPBX using Docker. Here is the Dockerfile:
FROM ubuntu:20.04
#ARG DEBIAN_FRONTEND=noninteractive
ENV TZ=Asia/Tehran
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
# Install Required Dependencies
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y wget
RUN wget -O - https://raw.githubusercontent.com/fusionpbx/fusionpbx-install.sh/master/ubuntu/pre-install.sh | sh \
&& cd /usr/src/fusionpbx-install.sh/ubuntu && ./install.sh
#RUN wget -O - https://raw.githubusercontent.com/fusionpbx/fusionpbx-install.sh/master/debian/pre-install.sh | sh \
#&& cd /usr/src/fusionpbx-install.sh/ && ./install.sh
USER root
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY start-freeswitch.sh /usr/bin/start-freeswitch.sh
VOLUME ["/var/lib/postgresql", "/etc/freeswitch", "/var/lib/freeswitch", "/usr/share/freeswitch", "/var/www/fusionpbx"]
CMD /usr/bin/supervisord -n
The container builds and runs fine; I start it with the command below:
docker run -dit --name fus5 -p 80:80 -p 443:443 -p 5060:5060 -p 5080:5080 fus_ew:0.5
Problem:
The problem is that when I try to connect to an external SIP gateway, I get the error below when I hit the start button:
Freeswitch Invalid profile [External]: 2002
Appreciate it if anyone can help.
Regards
When Chromium launches successfully, its debugging WebSocket URL should look like ws://127.0.0.1:9222/devtools/browser/ec261e61-0e52-4016-a5d7-d541e82ecb0a.
127.0.0.1:9222 should then be reachable from Chrome to inspect the headless Chromium. However, I cannot access the remote debugger URL from Chrome after I dockerize my application.
launchOption object for launching Chromium via Puppeteer:
{
"args": [
"--remote-debugging-port=9222",
"--window-size=1920,1080",
"--mute-audio",
"--disable-notifications",
"--force-device-scale-factor=0.8",
"--no-sandbox",
"--disable-setuid-sandbox"
],
"defaultViewport": {
"height": 1080,
"width": 1920
},
"headless": true
}
Dockerfile:
FROM node:10.16.3-slim
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' \
&& apt-get update \
&& apt-get install -y google-chrome-unstable fonts-ipafont-gothic fonts-wqy-zenhei fonts-thai-tlwg fonts-kacst fonts-freefont-ttf \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/* \
&& wget --quiet https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh -O /usr/sbin/wait-for-it.sh \
&& chmod +x /usr/sbin/wait-for-it.sh
WORKDIR /usr/app
COPY ./ ./
VOLUME ["......." ]
RUN groupadd -r pptruser && useradd -r -g pptruser -G audio,video pptruser \
&& mkdir -p /home/pptruser/Downloads \
&& chown -R pptruser:pptruser /home/pptruser \
&& chown -R pptruser:pptruser /usr/app \
&& npm install
USER pptruser
CMD npm run start
EXPOSE 3000 9222
Run the new container with:
docker run \
-p 3000:3000 \
-p 9222:9222 \
pptr
Port 9222 should be accessible on my host machine, but Chrome shows ERR_EMPTY_RESPONSE when I browse 127.0.0.1:9222, and DOCKER-INTERNAL-IP:9222 times out.
I managed to make this work with Puppeteer using the following Dockerfile, docker run command and Puppeteer config:
FROM ubuntu:18.04
RUN apt update \
&& apt install -y \
curl \
wget \
gnupg \
gcc \
g++ \
make \
&& curl -sL https://deb.nodesource.com/setup_10.x | bash - \
&& apt install -y nodejs \
&& rm -rf /var/lib/apt/lists/*
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' \
&& apt-get update \
&& apt-get install -y google-chrome-unstable fonts-ipafont-gothic fonts-wqy-zenhei fonts-thai-tlwg fonts-kacst fonts-freefont-ttf \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
RUN groupadd -r pptruser && useradd -r -g pptruser -G audio,video pptruser \
&& mkdir -p /home/pptruser/Downloads \
&& chown -R pptruser:pptruser /home/pptruser
ADD . /app
WORKDIR /app
RUN chown -R pptruser:pptruser /app
RUN rm -rf node_modules
RUN rm -rf build/*
USER pptruser
RUN npm install --dev
RUN chmod +x /app/entrypoint.sh
ENTRYPOINT /app/entrypoint.sh
Docker run:
docker run -p 9223:9222 -it myimage
Puppeteer launch:
this.browser = await puppeteer.launch(
{
headless: true,
args: [
'--remote-debugging-port=9222',
'--remote-debugging-address=0.0.0.0',
'--no-sandbox'
]
}
);
The entrypoint just launches the platform like: node build/main.js
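For completeness, that entrypoint is just a one-liner along these lines (build/main.js is from the description above; the rest is an assumption):
#!/bin/sh
# /app/entrypoint.sh: hand PID 1 straight to the Node process
exec node build/main.js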
After that I just had to connect to localhost:9223 on Chrome to see the browser. Hope it helps!
I know there is already an accepted answer, but let me add to it in the hope of greatly reducing your image size. One shouldn't add too many extras to the Dockerfile if one can help it. Ultimately, adding --remote-debugging-port=9222 and --remote-debugging-address=0.0.0.0 is what allows you to access it.
Dockerfile
FROM ubuntu:latest
LABEL maintainer="Full Name <email@email.com> https://yourwebsite.com"
WORKDIR /home/
COPY wrapper-script.sh wrapper-script.sh
# install chromium-browser and cleanup.
RUN apt update && apt install chromium-browser --no-install-recommends -y && apt autoremove && apt clean && apt autoclean && rm -rf /var/lib/apt/lists/*
# Run your commands and add environment variables from your compose file.
CMD ["sh", "wrapper-script.sh"]
I use a wrapper script so that I can include environment variables here. You can see URL and USERNAME set so that I can configure them from the compose file. Of course, I'm sure there is a better way to do this, but I do it so that I can scale my containers horizontally with ease.
wrapper-script.sh
#!/bin/bash
# Start the process
chromium-browser --headless --disable-gpu --no-sandbox --remote-debugging-port=9222 --remote-debugging-address=0.0.0.0 ${URL}${USERNAME}
status=$?
if [ $status -ne 0 ]; then
echo "Failed to start chromium-browser: $status"
exit $status
fi
# Naive check runs checks once a minute to see if either of the processes exited.
# This illustrates part of the heavy lifting you need to do if you want to run
# more than one service in a container. The container exits with an error
# if it detects that either of the processes has exited.
# Otherwise it loops forever, waking up every 60 seconds
while sleep 60; do
ps aux |grep chromium-browser | grep -q -v grep
PROCESS_1_STATUS=$?
# If the greps above find anything, they exit with 0 status
# If they are not both 0, then something is wrong
if [ $PROCESS_1_STATUS -ne 0 ]; then
echo "One of the processes has already exited."
exit 1
fi
done
Lastly, I have the docker-compose file. This is where I define all my settings so that I can configure my wrapper-script.sh with what I need and scale horizontally. Notice the environment section of the docker-compose file. USERNAME and URL are environment variables, and they can be called from the wrapper script.
docker-compose.yml
version: '3.7'
services:
chrome:
command: [ 'sh', 'wrapper-script.sh' ]
image: headless-chrome
build:
context: .
dockerfile: Dockerfile
environment:
- USERNAME=eaglejs
- URL=https://teamtreehouse.com/
ports:
- 9222:9222
If you are wondering what my folder structure looks like: all three files are at the root of the folder. For example:
My_Docker_Repo:
Dockerfile
docker-compose.yml
wrapper-script.sh
After all that is said and done, I simply run docker-compose up and I have one container running. Right now, because of the ports section, you'll have to do something extra to scale this: if you were to run docker-compose up --scale chrome=5, the host ports would clash (one workaround is sketched below). Let me know if you want to try that and I'll see what I can do for scaling; other than that, if it is for testing, this should work well the way it is. :) Happy coding!
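If you do want to try scaling, one sketch (untested here) is to publish only the container port so Docker assigns free host ports, then look them up per replica:
# in docker-compose.yml use  ports: ["9222"]  instead of "9222:9222"
docker-compose up -d --scale chrome=5
docker-compose port --index=1 chrome 9222   # prints the host port mapped for replica 1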
I'm working on a project that uses a Docker image for a specific feature; other than that I don't need Docker at all, so I don't understand much about it. The issue is that Docker doesn't find a file that is actually in the folder, and the build process breaks.
When I try to create the image using docker build -t project/render-worker . the error is this:
Step 18/23 : RUN bin/composer-install && php composer-setup.php --install-dir=/bin && php -r 'unlink("composer-setup.php");' && php /bin/composer.phar global require hirak/prestissimo
---> Running in 695db3bf2f02
/bin/sh: 1: bin/composer-install: not found
The command '/bin/sh -c bin/composer-install && php composer-setup.php --install-dir=/bin && php -r 'unlink("composer-setup.php");' && php /bin/composer.phar global require hirak/prestissimo' returned a non-zero code: 127
As mentioned, the file composer-install does exist and this is what's in it:
#!/bin/sh
EXPECTED_SIGNATURE="$(wget -q -O - https://composer.github.io/installer.sig)"
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
ACTUAL_SIGNATURE="$(php -r "echo hash_file('SHA384', 'composer-setup.php');")"
if [ "$EXPECTED_SIGNATURE" != "$ACTUAL_SIGNATURE" ]
then
echo 'ERROR: Invalid installer signature'
rm composer-setup.php
fi
Basically this is to get Composer, as you can see.
This is the Dockerfile:
FROM php:7.2-apache
RUN echo 'deb http://ftp.debian.org/debian stretch-backports main' > /etc/apt/sources.list.d/backports.list
RUN apt-get update
RUN apt-get install -y --no-install-recommends \
libpq-dev \
libxml2-dev \
ffmpeg \
imagemagick \
wget \
git \
zlib1g-dev \
libpng-dev \
unzip \
mencoder \
parallel \
ruby-dev
RUN apt-get -t stretch-backports install -y --no-install-recommends \
libav-tools \
&& rm -rf /var/lib/apt/lists/*
RUN docker-php-ext-install \
pcntl \
pdo_pgsql \
pgsql \
soap \
gd \
zip
RUN gem install compass
RUN a2enmod rewrite
ENV APACHE_RUN_USER root
ENV APACHE_RUN_GROUP root
EXPOSE 80
WORKDIR /app
COPY . /app
# Configuring apache to run the symfony app
COPY config/docker/apache.conf /etc/apache2/sites-enabled/000-default.conf
RUN echo "export DATABASE_URL" >> /etc/apache2/envvars \
&& echo ". /etc/environment" >> /etc/apache2/envvars
RUN wget -cqO- https://nodejs.org/dist/v10.15.3/node-v10.15.3-linux-x64.tar.xz | tar -xJ
RUN cp -a node-v10.15.3-linux-x64/bin /usr \
&& cp -a node-v10.15.3-linux-x64/include /usr \
&& cp -a node-v10.15.3-linux-x64/lib /usr \
&& cp -a node-v10.15.3-linux-x64/share /usr/ \
&& rm -rf node-v10.15.3-linux-x64 node-v10.15.3-linux-x64.tar.xz
RUN bin/composer-install \
&& php composer-setup.php --install-dir=/bin \
&& php -r "unlink('composer-setup.php');" \
# Install prestissimo for dramatically faster `composer install`
&& php /bin/composer.phar global require hirak/prestissimo
RUN APP_ENV=prod APP_SECRET= DATABASE_URL= AWS_KEY= AWS_SECRET= AWS_REGION= MEDIA_S3_BUCKET= \
GIPHY_API_KEY= FACEBOOK_APP_ID= FACEBOOK_APP_SECRET= \
GOOGLE_API_KEY= GOOGLE_CLIENT_ID= GOOGLE_CLIENT_SECRET= STRIPE_SECRET_KEY= STRIPE_ENDPOINT_SECRET= \
THEYSAIDSO_API_KEY= REV_CLIENT_API_KEY= REV_USER_API_KEY= REV_API_ENDPOINT= RENDER_QUEUE_URL= \
CLOUDWATCH_LOG_GROUP_NAME= \
php /bin/composer.phar install --no-interaction --no-dev --prefer-dist --optimize-autoloader --no-scripts \
&& php /bin/composer.phar clear-cache
RUN npm install \
&& node_modules/bower/bin/bower install --allow-root \
&& node_modules/grunt/bin/grunt
# Don't allow it to keep logs around; they're emitted on STDOUT and sent to AWS
# CloudWatch from there, so we don't need them on disk filling up the space
RUN mkdir -p var/cache/prod && chmod -R 777 var/cache/prod
RUN mkdir -p var/log && ln -s /dev/null var/log/prod.log \
&& ln -s /dev/null var/log/prod.deprecations.log && chmod -R 777 var/log
CMD ["/usr/bin/env", "bash", "./bin/start_render_worker"]
Like I said, unfortunately I don't have the slightest idea how Docker works or what's going on, just that I need it. I'm running Docker on Windows 10 Pro and, to make matters worse, it actually works for another dev running Windows 10. We tried a few things but we can't make it work. I tried cloning the repo in other locations with no success at all. Everything before this particular step runs correctly.
[EDIT]
As suggested, I ran RUN ls bin/ before the composer-install line and this is the result:
Step 18/24 : RUN ls bin/
---> Running in 6cb72090a069
append_captions
capture
composer-install
concat_project_video
console
encode_frames
encode_frames_to_gif
format_video_for_concatenation
generate_meme_bar
image_to_video
install.sh
phpcs
phpunit
process_render_queue
publish_docker_image
run_animation_worker
run_render_worker
run_render_worker_osx
start_render_worker
update
Removing intermediate container 6cb72090a069
As you can see, composer-install is there, so this is quite baffling.
Also, I checked and set the line-ending sequence to LF, and the result is the same error.
[SECOND EDIT]
I added COPY bin/composer-install /bin
Then RUN ls bin/
And the results are the same: the ls command finds the file but the error persists. Also, adding a slash before bin doesn't change anything :(
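One way to narrow this down (a debugging sketch, not part of the original build): a shell script whose shebang line ends in CRLF typically fails with exactly this kind of "not found" error, so inspecting the file inside the build can confirm or rule that out:
# temporary debug lines for the Dockerfile; remove once diagnosed
RUN head -1 bin/composer-install | od -c   # a \r before the \n means CRLF endings survived
RUN sh bin/composer-install                # runs the script without relying on the shebang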
I installed Elasticsearch in my image based on ubuntu:16.04.
I tried to start the service using
RUN service elasticsearch start
but it was not started.
If I go into the container and run it, it starts.
I want to run the service and dump the index when I create the image; below is part of my Dockerfile.
How do I start Elasticsearch in the Dockerfile?
#install OpenJDK-8
RUN apt-get update && apt-get install -y openjdk-8-jdk && apt-get install -y ant && apt-get clean
RUN apt-get update && apt-get install -y ca-certificates-java && apt-get clean
RUN update-ca-certificates -f
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64/
RUN export JAVA_HOME
#download ES
RUN wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
RUN apt-get install -y apt-transport-https
RUN echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-6.x.list
RUN apt-get update && apt-get install -y elasticsearch
RUN service elasticsearch start
A RUN command executes only during the build phase; whatever it starts does not keep running in the finished image. You should use CMD (or ENTRYPOINT) instead:
CMD service elasticsearch start && /bin/bash
It's better to wrap the start command in a file of your own and execute only that file:
CMD /start_elastic.sh
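A minimal sketch of such a start_elastic.sh (assuming the packaged service scripts from the question; adjust to your setup):
#!/bin/bash
set -e
service elasticsearch start
# keep PID 1 alive so the container does not exit; swap for tailing the ES logs if preferred
exec tail -f /dev/null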
I don't know why you don't take the official OSS image, but this Dockerfile based on Debian works:
FROM java:8-jre
ENV ES_NAME=elasticsearch \
ELASTICSEARCH_VERSION=6.6.1
ENV ELASTICSEARCH_URL=https://artifacts.elastic.co/downloads/$ES_NAME/$ES_NAME-$ELASTICSEARCH_VERSION.tar.gz
RUN apt-get update && apt-get install -y --assume-yes openssl bash curl wget \
&& mkdir -p /opt \
&& echo '[i] Start create elasticsearch' \
&& wget -T 15 -O /tmp/$ES_NAME-$ELASTICSEARCH_VERSION.tar.gz $ELASTICSEARCH_URL \
&& tar -xzf /tmp/$ES_NAME-$ELASTICSEARCH_VERSION.tar.gz -C /opt/ \
&& ln -s /opt/$ES_NAME-$ELASTICSEARCH_VERSION /opt/$ES_NAME \
&& useradd elastic \
&& mkdir -p /var/lib/elasticsearch /opt/$ES_NAME/plugins /opt/$ES_NAME/config/scripts \
&& chown -R elastic /opt/$ES_NAME-$ELASTICSEARCH_VERSION/
ENV PATH=/opt/elasticsearch/bin:$PATH
USER elastic
CMD [ "/bin/sh", "-c", "/opt/elasticsearch/bin/elasticsearch --E cluster.name=test --E network.host=0 $ELASTIC_CMD_OPTIONS" ]
I believe you'll be able to use most of these commands on Ubuntu as well.
Don't forget to run sudo sysctl -w vm.max_map_count=262144 on your host
I need to build a Dockerfile that downloads jenkins.war, and through it jenkins-cli.jar needs to be downloaded.
I also have conf.xml to configure it.
Then I need to run that image in bash, and it needs to run commands against that jar file.
Here is the code:
FROM ubuntu:14.04
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y software-properties-common && \
add-apt-repository ppa:webupd8team/java -y && \
apt-get update && \
echo oracle-java7-installer shared/accepted-oracle-license-v1-1 select true | /usr/bin/debconf-set-selections && \
apt-get install -f -y oracle-java8-installer && \
apt install -y default-jre curl wget git nano; \
apt-get clean
# Install dependencies
RUN apt-get -y update && \
apt-get -yqq --no-install-recommends install git bzip2 curl unzip && \
apt-get update
ENV JAVA_HOME /usr
ENV PATH $JAVA_HOME/bin:$PATH
# copy jenkins war file to the container
ADD http://mirrors.jenkins.io/war-stable/2.107.1/jenkins.war /opt/jenkins.war
RUN chmod 644 /opt/jenkins.war
ENV JENKINS_HOME /jenkins
# configure the container to run jenkins, mapping container port 8080 to that host port
RUN mkdir /jenkins/
RUN echo 2.107.1 > /jenkins/jenkins.install.UpgradeWizard.state
RUN echo 2.107.1 > /jenkins/jenkins.install.InstallUtil.lastExecVersion
CMD ["nohup","java", "-jar", "/opt/jenkins.war"]
EXPOSE 8080
VOLUME /jenkins
#COPY jenkins-cli.jar /jenkins/jenkins-cli.jar
#jenkins-cli installation
ENV JENKINS_URL "http://192.168.99.100:8080"
RUN curl --insecure http://192.168.99.100:8080/jnlpJars/jenkins-cli.jar \
--output /jenkins/jenkins-cli.jar
CMD ["java","-jar","/jenkins/jenkins-cli.jar","-noCertificateCheck","-noKeyAuth"]
Here is what I'm getting.
MY ASSUMPTIONS
Do I need to run it along with conf.xml? If yes, then how?
Do I need to run the jenkins.war instance in the background? How?
Thank you in advance.
If you look at the reference, you can find this comment:
There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
In your Dockerfile there are multiple CMD instructions; only the last one will be executed.
If you want to run multiple commands at once, try a bash script. Here is an example:
#!/bin/bash
echo "Starting sshd"
exec /usr/sbin/sshd -D &
if [ -z "$1" ];
then
tail -f $HADOOP_INSTALL/logs/*
fi
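Adapted to the Jenkins case in your Dockerfile (a sketch under assumptions: jenkins.war at /opt/jenkins.war and jenkins-cli.jar already downloaded to /jenkins/jenkins-cli.jar), a wrapper like this could replace the two conflicting CMD lines:
#!/bin/bash
# start Jenkins in the background
java -jar /opt/jenkins.war &
# wait until the web UI answers before using the CLI
until curl -sf -o /dev/null http://localhost:8080/login; do
  sleep 5
done
# run a CLI command against the running instance
java -jar /jenkins/jenkins-cli.jar -s http://localhost:8080 -noCertificateCheck help
# keep the container attached to the Jenkins process
wait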