PHP and redis in same docker image

I'm trying to add redis to a php:7.0-apache image, using this Dockerfile:
FROM php:7.0-apache
RUN apt-get update && apt-get -y install build-essential tcl
RUN cd /tmp \
&& curl -O http://download.redis.io/redis-stable.tar.gz \
&& tar xzvf redis-stable.tar.gz \
&& cd redis-stable \
&& make \
&& make install
COPY php.ini /usr/local/etc/php/
COPY public /var/www/html/
RUN chown -R root:www-data /var/www/html
RUN chmod -R 1755 /var/www/html
RUN find /var/www/html -type d -exec chmod 1775 {} +
RUN mkdir -p /var/redis/6379
COPY 6379.conf /etc/redis/6379.conf
COPY redis_6379 /etc/init.d/redis_6379
RUN chmod 777 /etc/init.d/redis_6379
RUN update-rc.d redis_6379 defaults
RUN service apache2 restart
RUN service redis_6379 start
It builds and runs fine, but redis never starts. When I run /bin/bash inside my container and manually run "service redis_6379 start", it works, so I'm assuming my .conf and init.d files are okay.
I'm aware it would be much easier with docker-compose, but I'm deliberately trying to avoid using it for specific reasons.

There are multiple things wrong here:
Starting processes in a Dockerfile has no effect. A Dockerfile builds an image; the processes need to be started when the container is created. This is done with an entrypoint, defined in the Dockerfile using ENTRYPOINT. That entrypoint is typically a script that is executed when an actual container is started (see the sketch at the end of this answer).
There is no init process in docker by default, so service calls will fail without further work. If you need to start multiple processes, look at the docs of the supervisord program.
Running both redis and a webserver in one container is not best practice. For a PHP application using redis you'd typically have two containers, one running redis and one running apache, and let them interact over the network.
I suggest you read the docker documentation before continuing. All this is described in depth there.
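For example, a minimal entrypoint sketch based on the Dockerfile above (the file name docker-entrypoint.sh is just a convention; apache2-foreground is the foreground server command the php:apache images normally run):
#!/bin/bash
# start redis in the background via its init script
service redis_6379 start
# keep apache in the foreground so the container stays alive
exec apache2-foreground
wired up in the Dockerfile with:
COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]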

I agree with @Richard. Use two or more containers according to your needs, then --link them to get things working!
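A sketch of that approach using the official redis image (the container and image names here are placeholders):
docker run -d --name my-redis redis
docker run -d -p 80:80 --link my-redis:redis my-php-app
The PHP container can then reach redis under the hostname redis. Note that --link is a legacy feature; user-defined networks are the recommended replacement.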

Related

Why does my Dockerfile build but not work correctly, even though it works manually?

I've been trying to get this running for many, MANY hours. I've been scouring docker docs, github repos and other resources, but I can't get it working for some reason.
My dockerfile:
FROM mattrayner/lamp:latest-1804
WORKDIR /app
RUN wget -O /tmp/lwt.zip http://downloads.sourceforge.net/project/lwt/lwt_v_1_6_3.zip && \
yes A | unzip /tmp/lwt.zip &&\
rm /tmp/lwt.zip &&\
mv connect_xampp.inc.php connect.inc.php
EXPOSE 80
CMD ["/run.sh"]
It builds normally without any errors, but when I run the image nothing appears in the /app directory and I just get a basic "Welcome to LAMP" view in my browser.
However, if I do docker run -p "80:80" -it -v ${PWD}/app:/app mattrayner/lamp:latest-1804 /bin/bash, cd /app, and copy and paste
wget -O /tmp/lwt.zip http://downloads.sourceforge.net/project/lwt/lwt_v_1_6_3.zip && \
yes A | unzip /tmp/lwt.zip &&\
rm /tmp/lwt.zip &&\
mv connect_xampp.inc.php connect.inc.php
it still doesn't work, BUT if I exit and run the same docker run command again, it works.
The Docker LAMP instructions also tell you to do exactly what I have done:
FROM mattrayner/lamp:latest-1804
# Your custom commands
CMD ["/run.sh"]
As I followed these instructions, I thought everything would work nicely.
What's the catch here? It probably has something to do with the intermediate containers, but I can't quite comprehend it (I'm not a devops engineer or developer by trade, just a hobbyist).
That happens because you're doing this:
You download a file (wget ...) into the /app dir in your docker image.
After that, you overwrite this /app dir when you mount the volume, replacing it with the content of your ${PWD}/app.
If you install something into a directory during docker build, don't mount a volume onto that same path.
If you need something in the same path, mount some concrete files, not the whole dir, or it will override what you had in your docker image when the container is created.
You can run the wget somewhere else, or download the files into your ${PWD}/app on the host and then mount it.
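For example, a sketch reusing the names from the question (adjust the paths to your layout): mount just the one file you need to override,
docker run -p "80:80" -it -v ${PWD}/connect.inc.php:/app/connect.inc.php mattrayner/lamp:latest-1804
or run the wget/unzip on the host into ${PWD}/app first and keep mounting ${PWD}/app:/app as before.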

Running Kafka as a docker image

If someone can help me with this I would be very grateful. I have a docker image in which Kafka is deployed, and I intend to have 3 brokers. I would like the container to need nothing more when it is created: the script I have for bringing up Kafka should just be executed. I have tried many ways using the CMD and ENTRYPOINT instructions but without success; the container is created, but the script is not executed and I have to enter the container to start it manually.
Dockerfile
FROM ubuntu
RUN apt-get update
RUN apt-get install -y openjdk-8-jdk
RUN apt-get install -y wget \
&& wget http://apache.rediris.es/kafka/2.4.0/kafka_2.12-2.4.0.tgz \
&& tar -xzf kafka_2.12-2.4.0.tgz \
&& rm -R kafka_2.12-2.4.0.tgz
#WORKDIR /home
RUN chmod +x /kafka_2.12-2.4.0
### COPY ###
COPY server-1.properties /kafka_2.12-2.4.0/config/
COPY server-2.properties /kafka_2.12-2.4.0/config/
#ADD runzk-kf.sh .
COPY runzk-kf.sh /usr/local/bin/runzk-kf.sh
#COPY runzk-kf.sh .
RUN chmod +x /usr/local/bin/runzk-kf.sh
EXPOSE 2181
EXPOSE 9092
EXPOSE 9093
EXPOSE 9094
CMD ./bin/bash
script
#!/bin/sh
# turn on bash's job control
set -m
### RUN Zookeper
./kafka_2.12-2.4.0/bin/zookeeper-server-start.sh /kafka_2.12-2.4.0/config/zookeeper.properties &
### RUN Kafka brokers ###
./kafka_2.12-2.4.0/bin/kafka-server-start.sh /kafka_2.12-2.4.0/config/server.properties &
./kafka_2.12-2.4.0/bin/kafka-server-start.sh /kafka_2.12-2.4.0/config/server-1.properties &
./kafka_2.12-2.4.0/bin/kafka-server-start.sh /kafka_2.12-2.4.0/config/server-2.properties &
Sorry, but please don't do this.
Docker images should run one service, not 4. Use Compose or Minikube + Helm charts to orchestrate multiple services.
It's not clear what property files you changed for that to work properly.
JDK 8 is end of life, use 11 or 13, which Kafka supports.
Just use existing Docker images. If you want something minimal, I personally use bitnami/kafka. If you want something more fully featured, take a look at Confluent's repo on running 3 brokers via Docker Compose.
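As a starting point, a minimal single-broker Compose sketch with the bitnami images could look like this (the environment variable names below are from memory and should be checked against the bitnami image documentation):
version: '3'
services:
  zookeeper:
    image: bitnami/zookeeper
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: bitnami/kafka
    ports:
      - "9092:9092"
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
Going to 3 brokers is then a matter of adding two more kafka services with distinct broker ids, listener ports and port mappings.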

CentOS7: How to start the slapd service in a docker container?

I want to run an OpenLDAP server in a docker container using CentOS7.
I managed to get a container running with openldap installed in it. My problem is that I am using an entrypoint.sh script to start the slapd service and add a user to my directory. I would like these two steps to be in the Dockerfile, so that the password used by ldapadd is not stored in the script.
So far I have only found examples for Debian.
https://github.com/kanboard/docker-openldap/blob/master/memberUid/Dockerfile is what I would like to do, but using CentOS 7.
I tried to start the slapd service in my Dockerfile, without success.
My Dockerfile looks like this:
FROM centos:7
RUN yum -y update && yum -y install \
openldap-servers \
openldap-clients \
libselinux-python \
openssl \
; yum clean all
RUN chown ldap:ldap -R /var/lib/ldap
COPY slapd.conf /etc/openldap/slapd.conf
COPY base.ldif /etc/openldap/schema/base.ldif
COPY entrypoint.sh /entrypoint.sh
RUN chmod 500 /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
My entrypoint.sh script looks like this:
#!/bin/bash
exec /usr/sbin/slapd -f /etc/openldap/slapd.conf -h "ldapi:/// ldap:///" -d stats &
sleep 10
ldapadd -x -w mypassword -D "cn=ldapadm,dc=mydomain" -f /etc/openldap/schema/base.ldif
This does work; however, I want to start the ldap service and run the ldapadd command in the Dockerfile, so that mypassword is not stored in entrypoint.sh.
Hence I tried these commands:
RUN systemctl slapd start
RUN ldapadd -x -w password -D "cn=ldapadm,dc=mydomain" -f /etc/openldap/schema/base.ldif
Of course this does not work, as systemctl does not work in a Dockerfile. What is the best alternative? I was considering having one container start the ldap service, but then I do not know how to call it while building the image of the other container...
EDIT:
Thanks to Guido U. Draheim, I have an alternative to systemctl for starting the slapd service.
My Dockerfile now looks like this:
FROM centos:7
RUN yum -y update && yum -y install \
openldap-servers \
openldap-clients \
libselinux-python \
openssl \
; yum clean all
RUN chown ldap:ldap -R /var/lib/ldap
COPY slapd.conf /etc/openldap/slapd.conf
COPY base.ldif /etc/openldap/schema/base.ldif
COPY files/docker/systemctl.py /usr/bin/systemctl
RUN systemctl enable slapd
RUN systemctl start slapd;\
ldapadd -x -w password -D "cn=ldapadm,dc=sblanche" -f /etc/openldap/schema/base.ldif
COPY entrypoint.sh /entrypoint.sh
RUN chmod 500 /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
But I get the following error: ldap_bind: Invalid credentials (49)
(a) You could use the docker-systemctl-replacement to run your "systemctl.py start slapd", which takes care of the obvious first error.
(b) Each RUN in a Dockerfile runs in a new container, so a process started in an earlier RUN cannot survive into a later one anyway. That's why the referenced Dockerfile example combines the steps with "&&", as sketched after this answer.
And yeah (c) I am using an openldap centos container. So go ahead, try again.
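To illustrate point (b), here is a sketch that keeps the service start and the ldapadd in one RUN, reusing the names from the edited Dockerfile above (the sleep is a crude stand-in for properly waiting until slapd accepts connections):
RUN systemctl start slapd \
 && sleep 5 \
 && ldapadd -x -w password -D "cn=ldapadm,dc=sblanche" -f /etc/openldap/schema/base.ldif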

How to configure Dockerfile correctly to run on Google Cloud Run?

I'm trying to run a Go app using Docker on Google Cloud Run but I'm getting this error:
Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
I fixed my port to be 8080 as stated in the docs, but I think my Dockerfile is incorrect. Does anyone know what I'm missing?
FROM golang:1.12-alpine
RUN apk upgrade -U \
&& apk add \
ca-certificates \
git \
libva-intel-driver \
make \
&& rm -rf /var/cache/*
ENV GOOS linux
ENV GOARCH amd64
ENV CGO_ENABLED=0
ENV GOFLAGS "-ldflags=-w -ldflags=-s"
ENV PATH $GOPATH/bin:/usr/local/go/bin:$PATH
RUN echo $PATH
RUN mkdir -p "$GOPATH/src" "$GOPATH/bin" && chmod -R 777 "$GOPATH"
RUN go get -u github.com/cespare/reflex
# RUN reflex -h
# Setup modules after reflex install
ENV GO111MODULE=on \
GOFLAGS="$GOFLAGS -mod=vendor"
WORKDIR /go/src/bitbucket.org/team/app/
COPY . .
CMD [ "go", "run", "cmd/main.go" ]
Dockerfiles don't make your application listen on a specific port number.
The EXPOSE directive in a Dockerfile is purely documentation and doesn't do anything functional.
You have 2 options for a Go app:
Refactor your code to read the PORT env variable (os.Getenv("PORT")) and use it in the address the HTTP server listens on:
port := os.Getenv("PORT")
http.ListenAndServe(":"+port, nil)
Create a -port flag and read it on startup in the entrypoint of your app in the Dockerfile:
e.g. if you can make go run main.go -port=8080 work, change your Dockerfile's command to the shell form, so that $PORT is expanded:
CMD exec go run main.go -port=$PORT
These will get you what you want.
Ideally you should not use go run inside a container. Just do:
RUN go build -o /bin/my-app ./my/pkg
ENTRYPOINT /bin/my-app
to compile a Go program and run the binary directly. Otherwise, every time Cloud Run starts your container it is re-compiled from scratch, which is not fast and will increase your cold start times.
Aside from these, you have a number of inconsistencies in your Dockerfile. You set a lot of Go env vars like GOOS and GOARCH, but you never actually go build your app (go run is an on-the-fly compilation and, I believe, doesn't take the linker flags in GOFLAGS into account). Look at sample Go Dockerfiles to get a better idea of how to write them idiomatically.
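For instance, a minimal multi-stage sketch, assuming your main package is ./cmd and your dependencies are vendored, as the GOFLAGS in your Dockerfile suggest (adjust the paths and names to your layout):
FROM golang:1.12-alpine AS build
WORKDIR /src
COPY . .
# static binary, built from the vendored dependencies
RUN CGO_ENABLED=0 go build -mod=vendor -o /bin/app ./cmd
FROM alpine
RUN apk add --no-cache ca-certificates
COPY --from=build /bin/app /bin/app
ENTRYPOINT ["/bin/app"]
This way the compilation happens once, at build time, and the container starts the pre-built binary.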
It seems that you are missing an EXPOSE instruction in your Dockerfile. See https://docs.docker.com/engine/reference/builder/#expose

Creating first docker container: Can't find host system file on build

I'm trying to bundle my Jekyll blog as a docker container.
I found this Dockerfile, which seems to suit my use case, but I wanted to be more hands-on, so I copied it directly into my repo:
FROM ruby:latest
MAINTAINER Peter Etelej <peter@etelej.com>
RUN apt-get -qq update && \
apt-get -qq install nodejs -y && \
gem install -q bundler
RUN mkdir -p /etc/jekyll && \
printf 'source "https://rubygems.org"\ngem "github-pages"\ngem "execjs"\ngem "rouge"' > /etc/jekyll/Gemfile && \
printf "\nBuilding required Ruby gems. Please wait..." && \
bundle install --gemfile /etc/jekyll/Gemfile --clean --quiet
RUN apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
ENV BUNDLE_GEMFILE /etc/jekyll/Gemfile
EXPOSE 4000
ENTRYPOINT ["bundle", "exec"]
CMD ["jekyll", "serve","--host=0.0.0.0"]
When I run it I get an error:
jekyll 3.4.3 | Error: No such file or directory @ rb_sysopen - /etc/modules-load.d/modules.conf
The host system has this file, but my assumption was that the container didn't have access to it, so I tried to add it in the Dockerfile:
ADD /etc/modules-load.d/modules.conf /etc/modules-load.d/modules.conf
I then run docker build and get the error:
lstat etc/modules-load.d/: no such file or directory
I don't understand why the container is looking for this file in the first place but I'm even more confused by the fact that I can't add a file which is clearly there.
Docker builds run on the docker host, not necessarily the client where you run the command, and so all the files needed to run the build are sent in the build context to the host. That context is most often the current directory, or ., that you pass at the end of the docker build -t $image_name . command.
Everything that you try to include in the image with a COPY or ADD is done in reference to that build context, not the filesystem on your client or host machine. So if you need a modules.conf, you'll need to first copy that into your directory with the Dockerfile, and then COPY the file from there.
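For example, a sketch (the image tag is a placeholder; run the commands from the directory that holds the Dockerfile):
cp /etc/modules-load.d/modules.conf ./modules.conf
docker build -t my-jekyll-blog .
with the Dockerfile referencing the copy in the build context instead of the absolute host path:
COPY modules.conf /etc/modules-load.d/modules.conf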
As for why jekyll is looking for the file, I'm not familiar with jekyll, but it doesn't look promising for something running inside a container. Modules are kernel-specific, and containers are designed to be moved to different hosts with potentially different kernels.
