How to build a Dockerfile with a few needed ports - docker

I want to learn Docker, so I decided to create all the files (Dockerfile, docker-compose) step by step on my own.
I need CentOS 8 with httpd and Webmin. I prepared a Dockerfile with httpd and it works very well, but when I try to add a RUN instruction that installs Webmin, I can't figure out how to open the Webmin panel. Port 10000 doesn't work, or it works but I don't know how to reach it.
Also, if I need CentOS 8 with phpMyAdmin, Webmin, Apache, etc., should I create a docker-compose setup with CentOS 8 and phpMyAdmin as separate services, or is there another way?
My Dockerfile:
FROM centos:8
RUN yum update -y && yum install -y \
httpd \
httpd-tools \
wget \
perl \
perl-Net-SSLeay \
openssl perl-Encode-Detect
RUN wget https://prdownloads.sourceforge.net/webadmin/webmin-1.930-1.noarch.rpm \
&& rpm -U webmin-1.930-1.noarch.rpm
EXPOSE 80
CMD ["/usr/sbin/httpd","-D","FOREGROUND"]

Related

Jest-DynamoDB connection gets refused inside a Docker container

I have a suite of Jest tests for DynamoDB that use the dynamodb-local instance, as explained here, using this dependency. I use a custom-built Docker image to build a container within which the tests are executed.
Here's the Dockerfile:
FROM openjdk:8-jre-alpine
RUN apk -v --no-cache add \
curl \
build-base \
groff \
jq \
less \
py-pip \
python \
openssl \
python3 \
python3-dev \
yarn \
&& \
pip3 install --upgrade pip awscli boto3 aws-sam-cli
EXPOSE 8000
I yarn install all of my dependencies and then run yarn test; at this point, after a long time, it outputs this:
Error
This is the command I am using:
docker run -it --rm -p 8000:8000 -v $(pwd):/data -w /data aws-cli-java8-v15:latest
The tests work completely fine on my own machine, but no matter what project I use or what I include in my Dockerfile, the connection always gets dropped.
I solved the issue: it turns out to be caused by Alpine Linux. Because Alpine uses musl instead of glibc, local DynamoDB can't start, and it crashes a few seconds after being executed without outputting any error message. The solution is either to use Oracle JDK on Alpine, which is hard enough given their new license, or to use any other OS that does use glibc with OpenJDK. You could also try to install glibc on Alpine and link it to your OpenJDK, but that's not a terribly good idea.
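For illustration, a minimal sketch of the second option: the same toolchain on a Debian-based (glibc) JRE image. The Debian package names (e.g. yarnpkg instead of yarn) are assumptions and may need adjusting:
FROM openjdk:8-jre-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
curl jq less python3 python3-pip yarnpkg \
&& pip3 install --upgrade pip awscli boto3 aws-sam-cli \
&& rm -rf /var/lib/apt/lists/*
EXPOSE 8000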

How to Fix Shiny App Local Browser Issue?

I have a Shiny app that works fine when I run it through RStudio. Everything loads fine on http://127.0.0.1:xxxx; however, I don't see the app when I click on "Open in Browser". Is this a generic issue that has a workaround? If you think the ui and server code is needed, I can provide that.
I am trying to fix the above issue because I think it could be the reason that the Docker image for this app is not showing up on http://localhost.
Here is the Dockerfile:
FROM r-base:3.5.0
# Install Ubuntu packages
RUN apt-get update && apt-get install -y \
sudo \
gdebi-core \
pandoc \
pandoc-citeproc \
libcurl4-gnutls-dev \
libcairo2-dev/unstable \
libxt-dev \
libssl-dev \
wget  # wget is used below to download Shiny Server
# Add shiny user
RUN groupadd shiny \
&& useradd --gid shiny --shell /bin/bash --create-home shiny
# Download and install ShinyServer
RUN wget --no-verbose https://download3.rstudio.org/ubuntu-14.04/x86_64/shiny-server-1.5.7.907-amd64.deb && \
gdebi shiny-server-1.5.7.907-amd64.deb
# Install R packages that are required
RUN R -e "install.packages(c('Benchmarking', 'plotly', 'DT'), repos='http://cran.rstudio.com/')"
RUN R -e "install.packages('shiny', repos='https://cloud.r-project.org/')"
# Copy configuration files into the Docker image
COPY shiny-server.conf /etc/shiny-server/shiny-server.conf
COPY /app /srv/shiny-server/
# Make the ShinyApp available at port 80
EXPOSE 80
# Copy further configuration files into the Docker image
COPY shiny-server.sh /usr/bin/shiny-server.sh
CMD ["/usr/bin/shiny-server.sh"]
When I run docker run -p 80:80 myimage, I don't get any error message; however, I don't see the app on http://localhost. I even went ahead and added a new port-forwarding rule in VirtualBox, where the guest and host ports are 80, Host IP = 127.0.0.1, and Guest IP = 192.168.99.100, and the app still doesn't show up on localhost.
Thanks.
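One thing worth checking (an assumption based on the 192.168.99.100 address, which suggests Docker Toolbox with VirtualBox): in that setup, published ports bind to the Docker Machine VM rather than to the host's localhost, so the app may already be reachable at the VM's IP:
docker-machine ip default   # prints the VM address, e.g. 192.168.99.100
docker run -p 80:80 myimage
Then browse to http://192.168.99.100 instead of http://localhost; the machine name default is the Docker Toolbox default and may differ on your setup.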

Is it possible to have a custom URL for a Docker container?

I have the following Dockerfile and was wondering what I would need to do in order to get access to it from my host machine by visiting myapp.dev:
FROM ubuntu:16.04
USER root
RUN apt-get update && apt-get -y upgrade && apt-get install apt-utils -y && DEBIAN_FRONTEND=noninteractive apt-get -y install \
apache2 php7.0 php7.0-mysql libapache2-mod-php7.0 curl lynx-cur git
EXPOSE 80
ADD www /var/www/site
RUN echo "ServerName localhost" >> /etc/apache2/apache2.conf
CMD /usr/sbin/apache2ctl -D FOREGROUND
I am using the following command to run the container:
docker run -d -p 8080:80 <image>
If you only want to be able to resolve it locally, you can add an alias for localhost in your hosts file.
Locate your hosts file.
Linux: /etc/hosts
MacOS: /private/etc/hosts
Windows: C:\Windows\System32\drivers\etc\hosts
Add this line at the end of the file:
127.0.0.1 myapp.dev
Now you can access your container using myapp.dev:8080.
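If you would rather reach the site as plain http://myapp.dev with no port suffix, one option (a sketch; it assumes port 80 is free on your host) is to publish the container on port 80 instead, since browsers default to port 80 for http:
docker run -d -p 80:80 <image>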

Dockerfile with LAMP running (Ubuntu)

I'm trying to create a Docker (LAMP) image with the following Dockerfile:
FROM ubuntu:latest
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y \
apache2 \
mysql-server \
php7.0 \
php7.0-bcmath \
php7.0-mcrypt
COPY start-script.sh /root/
RUN chmod +x /root/start-script.sh && /root/start-script.sh
start-script.sh:
#!/bin/bash
service mysql start
a2enmod rewrite
service apache2 start
I build it with:
docker build -t resting/ubuntu .
Then run it with:
docker run -it -p 8000:80 -p 5000:3306 -v $(pwd)/html:/var/www/html resting/ubuntu bash
The problem is that the MySQL and Apache2 services are not started.
If I run /root/start-script.sh manually in the container, port 80 maps fine to port 8000, but I can't connect to MySQL at 127.0.0.1:5000.
How can I ensure that the services are running when I spin up a container with the image, and map MySQL out to my host machine?
You need to change the execution of the script to a CMD instruction.
FROM ubuntu:latest
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y \
apache2 \
mysql-server \
php7.0 \
php7.0-bcmath \
php7.0-mcrypt
COPY start-script.sh /root/
RUN chmod +x /root/start-script.sh
CMD /root/start-script.sh
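One caveat worth adding to this answer: service ... start returns immediately, so the script, and with it the container, exits as soon as the services have been launched. A common fix, sketched here, is to end the script with a foreground process:
#!/bin/bash
service mysql start
a2enmod rewrite
# keep the container alive by running Apache in the foreground
exec apache2ctl -D FOREGROUND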
Although this works, it is not the right way to manage containers. You should have one container for Apache2 and another one for MySQL.
Take a look to this article that build a LAMP stack using Docker-Compose: https://www.kinamo.be/en/support/faq/setting-up-a-development-environment-with-docker-compose
You need multiple images, one for each service or app.
A Docker container is not a virtual machine in which you run an entire stack. It is a virtual application, running one primary process.
If you need PHP, Apache and MySQL, then you will need three Docker containers: one for your PHP app, one for Apache, and one for MySQL.
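For illustration, a minimal docker-compose sketch of that split; the image tags, host ports and password are placeholders rather than anything from the answers above, and php:7.0-apache bundles Apache with PHP in a single container:
version: '3'
services:
  web:
    image: php:7.0-apache
    ports:
      - "8000:80"
    volumes:
      - ./html:/var/www/html
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: secret   # placeholder credential
    ports:
      - "5000:3306"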

How to handle PHP project code in docker container

I ran into kind of a chicken-and-egg problem with my Docker setup. In my Dockerfile I install nginx, PHP and the needed configuration. I also install Composer there:
FROM ubuntu
RUN apt-get update && apt-get install -y \
curl \
nginx \
nodejs \
php7.0-fpm \
php-intl \
php-pgsql
RUN rm -rf /var/lib/apt/lists/* && \
echo "\ndaemon off;" >> /etc/nginx/nginx.conf && \
curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer && \
chown -R www-data:www-data /var/www/
COPY orocrm /etc/nginx/sites-available/
RUN ln -s /etc/nginx/sites-available/orocrm /etc/nginx/sites-enabled/orocrm
CMD nginx
Now, the next step would be to actually install all dependencies in the project directory via composer. And this is where the trouble starts: As this is my development machine, I don't want to copy my local project files over to the docker container. Instead, I mounted it in my docker-compose.yml:
version: '3'
services:
  web:
    ...
    volumes:
      - "./crm-application:/var/www/orocrm/"
I cannot put composer install in the Dockerfile, as the mount (declared in my docker-compose file) only takes place after the image has been built.
What is the best solution here? Another option that comes to mind is initially copying the files into the container and later using a file watcher to scp the changed files into it. Not a nice solution, though.
UPDATE I would like to emphasize what my actual problem is: I am on my development machine, and I want to continuously update the code and have the changes mirrored instantly without building the image again. Therefore, COPY is not an option.
My suggestion is to copy your content into the container using the COPY command, like this:
FROM ubuntu
COPY ./crm-application /var/www/orocrm/
RUN apt-get update && apt-get install -y \
curl \
nginx \
nodejs \
php7.0-fpm \
php-intl \
php-pgsql
RUN rm -rf /var/lib/apt/lists/* && \
echo "\ndaemon off;" >> /etc/nginx/nginx.conf && \
curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer && \
chown -R www-data:www-data /var/www/ && \
cd /var/www/orocrm && composer install
COPY orocrm /etc/nginx/sites-available/
RUN ln -s /etc/nginx/sites-available/orocrm /etc/nginx/sites-enabled/orocrm
CMD nginx
Why? This way you don't need docker-compose or any other system; you are able to run your single container.
Even if you do want to use docker-compose, the volume you are using still allows you to update the code inside your container.
Notice that I've added composer install in the Dockerfile, because at build time the code is already inside the container.
Regards,
Idir!
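Since the UPDATE rules out COPY, another common pattern (a sketch, not part of the answer above; the service name web comes from the compose snippet in the question) is to keep the bind mount and run Composer inside the running container, so the dependencies land in the mounted directory:
docker-compose up -d
docker-compose exec web composer install --working-dir=/var/www/orocrm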
