Can we create a Docker image with multiple services in it?

I want an image with Elasticsearch and Zipkin in it, but I don't want to download them from Docker Hub; instead, I have downloaded the tar.gz file of each and am creating the images from those. I am able to run both of them individually, but not simultaneously (via the docker run command).
Please see the Dockerfile below:
FROM openjdk:11
RUN groupadd -g 1000 elk-zipkin && useradd elk-zipkin -u 1000 -g 1000
RUN mkdir /usr/share/elasticsearch/
RUN mkdir /usr/share/zipkin
#RUN mkdir /usr/share/kibana
COPY /artifacts/elasticsearch-7.17.6.tar.gz /usr/share/elasticsearch
COPY artifacts/zipkin.jar /usr/share/zipkin
#COPY /artifacts/kibana-7.17.6.tar.gz /usr/share/kibana
COPY script.sh /usr/share/zipkin
WORKDIR /usr/share/elasticsearch
RUN tar xvf elasticsearch-7.17.6.tar.gz
#RUN tar xvf kibana-7.17.6.tar.gz
WORKDIR /usr/share/elasticsearch/elasticsearch-7.17.6
RUN set -ex && for path in data logs config config/scripts; do \
mkdir -p "$path"; \
chown -R elk-zipkin:elk-zipkin "$path"; \
done
USER elk-zipkin
ENV PATH=$PATH:/usr/share/elasticsearch/elasticsearch-7.17.6/bin
WORKDIR /usr/share/elasticsearch/elasticsearch-7.17.6/config
#RUN sed -i "s|#network.host: 192.168.0.1|network.host: 0.0.0.0|g" elasticsearch.yml
#RUN sed -i "s|#discovery.seed_hosts: ["host1", "host2"]|discovery.type: single-node|g" elasticsearch.yml
COPY /artifacts/elasticsearch.yml /usr/share/elasticsearch/elasticsearch-7.17.6/config
#CMD ["elasticsearch"]
#EXPOSE 9200 9300
#WORKDIR /usr/share/zipkin
#CMD ["java","-jar","zipkin.jar"]
#EXPOSE 9411
WORKDIR /usr/share/zipkin
CMD ["sh","script.sh"]
script.sh:
java -jar zipkin.jar elasticsearch
Run commands for them:
For Zipkin:
docker run -d --name=zipkin \
  -p=9411:9411 \
  --env=STORAGE_TYPE="elasticsearch" \
  --env=ES_HOSTS="someurl" \
  IMAGEID
For Elasticsearch:
docker run -d --name=elasticsearch1 -p=9200:9200 -p=9300:9300 IMAGEID
I have tried to run both of the services, i.e. Elasticsearch and Zipkin, individually and simultaneously.
I am expecting both to be in one image, and that a single docker run command should start both services.

Somehow I made it work: one can create a Dockerfile like the one in the question, and then add some sleep time to the script file to give the earlier services extra time to come up.
Example:
nohup elasticsearch &
sleep 10
nohup java -jar zipkin.jar
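A fixed sleep is fragile; a small polling helper is more robust. This is only a sketch, assuming curl is available in the image and Elasticsearch answers HTTP on localhost:9200:

```shell
# wait_for: poll a URL until it answers, or give up after N tries (1 second apart)
wait_for() {
  url="$1"; tries="${2:-30}"
  i=0
  until curl -fsS "$url" >/dev/null 2>&1; do
    i=$((i+1))
    if [ "$i" -ge "$tries" ]; then return 1; fi
    sleep 1
  done
  return 0
}

# script.sh would then become (paths as in the question):
#   nohup elasticsearch &
#   wait_for http://localhost:9200 60 || exit 1
#   java -jar zipkin.jar
```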
Note: as per the comments and the basics of containers, one should not run multiple services inside the same container.
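In line with that note, the usual pattern is one service per container, wired together with Docker Compose; a sketch (the image names and the ES_HOSTS URL here are assumptions, not taken from the question):

```yaml
version: "3.8"
services:
  elasticsearch:
    image: my-elasticsearch   # assumed: built from an Elasticsearch-only Dockerfile
    ports:
      - "9200:9200"
      - "9300:9300"
  zipkin:
    image: my-zipkin          # assumed: built from a Zipkin-only Dockerfile
    environment:
      - STORAGE_TYPE=elasticsearch
      - ES_HOSTS=http://elasticsearch:9200   # the service name resolves on the compose network
    ports:
      - "9411:9411"
    depends_on:
      - elasticsearch
```

With this layout a single docker-compose up starts both services, each in its own container.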

Related

local uaa docker image container not starting in windows docker

I have built a local UAA Docker image and tried to run it locally.
But I am getting an error when I try to start the image.
I built the image via the command below, and the build is successful too.
docker build -t uaa-local --build-arg uaa_yml_name=local.yml .
When I try to run the local UAA image, I get the error below. What am I doing wrong?
Content of the Dockerfile:
FROM openjdk:11-jre
ARG uaa_yml_name=local.yml
ENV UAA_CONFIG_PATH /uaa
ENV CATALINA_HOME /tomcat
ADD run.sh /tmp/
ADD conf/$uaa_yml_name /uaa/uaa.yml
RUN chmod +x /tmp/run.sh
RUN wget -q https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.57/bin/apache-tomcat-8.5.57.tar.gz
RUN tar zxf apache-tomcat-8.5.57.tar.gz
RUN rm apache-tomcat-8.5.57.tar.gz
RUN mkdir /tomcat
RUN mv apache-tomcat-8.5.57/* /tomcat
RUN rm -rf /tomcat/webapps/*
ADD dist/cloudfoundry-identity-uaa-74.22.0.war /tomcat/webapps/
RUN mv /tomcat/webapps/cloudfoundry-identity-uaa-74.22.0.war /tomcat/webapps/ROOT.war
RUN mkdir -p /tomcat/webapps/ROOT && cd /tomcat/webapps/ROOT && unzip ../ROOT.war
ADD conf/log4j2.properties /tomcat/webapps/ROOT/WEB-INF/classes/log4j2.properties
RUN rm -rf /tomcat/webapps/ROOT.war
EXPOSE 8080
CMD ["/tmp/run.sh"]
On further investigation, I think it is looking for the run.sh file in the /tmp/ folder, which is added on line 5 of the Dockerfile. But when I checked the /tmp/ folder, the file is not there. Is it because of that, and how do I resolve it? I already have run.sh in my current folder.

How do you use Docker build secrets with Docker Compose?

Using the docker build command line, I can pass in a build secret as follows:
docker build \
--secret=id=gradle.properties,src=$HOME/.gradle/gradle.properties \
--build-arg project=template-ms \
.
Then use it in a Dockerfile:
# syntax = docker/dockerfile:1.0-experimental
FROM gradle:jdk12 AS build
COPY *.gradle .
RUN --mount=type=secret,target=/home/gradle/gradle.properties,id=gradle.properties gradle dependencies
COPY src/ src/
RUN --mount=type=secret,target=/home/gradle/gradle.properties,id=gradle.properties gradle build
RUN ls -lR build
FROM alpine AS unpacker
ARG project
COPY --from=build /home/gradle/build/libs/${project}.jar /tmp
RUN mkdir -p /opt/ms && unzip -q /tmp/${project}.jar -d /opt/ms && \
mv /opt/ms/BOOT-INF/lib /opt/lib
FROM openjdk:12
EXPOSE 8080
WORKDIR /opt/ms
USER nobody
CMD ["java", "-Xdebug", "-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=0.0.0.0:8000", "-Dnetworkaddress.cache.ttl=5", "org.springframework.boot.loader.JarLauncher"]
HEALTHCHECK --start-period=600s CMD curl --silent --output /dev/null http://localhost:8080/actuator/health
COPY --from=unpacker /opt/lib /opt/ms/BOOT-INF/lib
COPY --from=unpacker /opt/ms/ /opt/ms/
I want to do a build using docker-compose, but I can't find how to pass the secret in the docker-compose.yml reference.
That way the developer just needs to type docker-compose up.
You can use environment or args to pass variables to the container in docker-compose:
args:
  - secret=id=gradle.properties,src=$HOME/.gradle/gradle.properties
environment:
  - secret=id=gradle.properties,src=$HOME/.gradle/gradle.properties
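Note that build args and environment variables are visible in the image metadata, unlike real build secrets. Newer Compose versions (with BuildKit) also support build secrets directly under the build key; a sketch, reusing the gradle.properties id from the question:

```yaml
services:
  app:
    build:
      context: .
      args:
        project: template-ms
      secrets:
        - gradle.properties

secrets:
  gradle.properties:
    file: ${HOME}/.gradle/gradle.properties
```

The RUN --mount=type=secret lines in the Dockerfile then work unchanged when the developer runs docker-compose up --build.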

Create a custom folder and assign user permission

I'm trying to customize a Dockerfile. I just want to create a folder and assign the user (PUID and PGID) to the new folder.
Here is my full Dockerfile :
FROM linuxserver/nextcloud
COPY script /home/
RUN /home/script
The content of the script file :
#!/bin/sh
mkdir -p /data/local_data
chown -R abc:abc /data/local_data
I gave it execute permission: chmod +x script
At the moment it doesn't create the folder, and I see no error in the logs.
Command to run the container :
docker run -d \
--name=nextcloud \
-e PUID=1000 \
-e PGID=1000 \
-e TZ=Europe/Paris \
-p 443:443 \
-p 8080:80 \
-v /home/foouser/nextcloud:/config \
-v /home/foouser/data:/data \
--restart unless-stopped \
nextcloud_custom
Logs from build :
Step 1/3 : FROM linuxserver/nextcloud
---> d1af592649f2
Step 2/3 : COPY script /home/
---> 0b005872bd3b
Step 3/3 : RUN /home/script
---> Running in 9fbd3f9654df
Removing intermediate container 9fbd3f9654df
---> 91cc65981944
Successfully built 91cc65981944
Successfully tagged nextcloud_custom:latest
You can try to run the commands directly:
RUN mkdir -p /data/local_data && chown -R abc:abc /data/local_data
You may also try changing your shebang to:
#!/bin/bash
For debugging, you may also add set -x to your script.
EDIT:
I noticed the Removing intermediate container line in your logs; a solution would be to use a volume with your docker run command:
-v /path/your/new/folder/HOST:/path/your/new/folder/container
You are trying to modify a folder which is specified as a VOLUME in your base image, but as per the Docker documentation on volumes:
Changing the volume from within the Dockerfile: If any build steps
change the data within the volume after it has been declared, those
changes will be discarded.
linuxserver/nextcloud does declare a volume /data, which you are trying to change afterwards; it's like doing:
VOLUME /data
...
RUN mkdir -p /data/local_data
The directory created will be discarded. You can, however, create your directory on container startup by overriding the entrypoint, so the directory is created when the container starts. Currently linuxserver/nextcloud uses /init as its entrypoint, so you can do the following.
Your script content, which you then define as the entrypoint:
#!/bin/sh
mkdir -p /data/local_data
chown -R abc:abc /data/local_data
# Call the base image entrypoint with parameters
/init "$@"
Dockerfile:
FROM linuxserver/nextcloud
# Copy the script and call it at entrypoint instead
COPY script /home/
ENTRYPOINT ["/home/script"]

Why am I bounced from the Docker container?

FROM docker.elastic.co/elasticsearch/elasticsearch:5.5.2
USER root
WORKDIR /usr/share/elasticsearch/
ENV ES_HOSTNAME elasticsearch
ENV ES_PORT 9200
RUN chown elasticsearch:elasticsearch config/elasticsearch.yml
RUN chown -R elasticsearch:elasticsearch data
# install security plugin
RUN bin/elasticsearch-plugin install -b com.floragunn:search-guard-5:5.5.2-16
COPY ./safe-guard/install_demo_configuration.sh plugins/search-guard-5/tools/
COPY ./safe-guard/init-sgadmin.sh plugins/search-guard-5/tools/
RUN chmod +x plugins/search-guard-5/tools/init-sgadmin.sh
ADD ./run.sh .
RUN chmod +x run.sh
RUN chmod +x plugins/search-guard-5/tools/install_demo_configuration.sh
RUN ./plugins/search-guard-5/tools/install_demo_configuration.sh -y
RUN chmod +x sgadmin_demo.sh
RUN yum install tree -y
#RUN curl -k -u admin:admin https://localhost:9200/_searchguard/authinfo
RUN usermod -aG wheel elasticsearch
USER elasticsearch
EXPOSE 9200
#ENTRYPOINT ["nohup", "./run.sh", "&"]
ENTRYPOINT ["/usr/share/elasticsearch/run.sh"]
#CMD ["echo", "hello"]
Once I add either CMD or ENTRYPOINT, the container exits with code 0.
run.sh:
#!/bin/bash
exec "$@"
If I comment out ENTRYPOINT or CMD, all is great.
What am I doing wrong?
If you take a look at official 5.6.9 elasticsearch Dockerfile, you will see the following at the bottom:
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["elasticsearch"]
If you do not know the difference between CMD and ENTRYPOINT, read this answer.
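In short: ENTRYPOINT sets the executable, and CMD supplies its default arguments, which docker run can override; a minimal sketch of how the two lines above combine:

```dockerfile
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["elasticsearch"]
# docker run image        -> runs: /docker-entrypoint.sh elasticsearch
# docker run image bash   -> runs: /docker-entrypoint.sh bash
```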
What you're doing is overwriting those two instructions with something else. What you really need is to extend CMD. What I usually do in my images is create an sh script that combines the different things I need, and then point CMD at that script. So, you need to run sgadmin_demo.sh, but you need to wait for Elasticsearch first. Create a start.sh script:
#!/bin/bash
elasticsearch &
sleep 15
sgadmin_demo.sh
wait # keep the container running after the setup script finishes
Now, add your script to your image and run it on CMD:
FROM ...
...
COPY start.sh /tmp/start.sh
CMD ["/tmp/start.sh"]
Now it should be executed once you start a container. Don't forget to build :)

Docker fedora hbase JAVA_HOME issue

My Dockerfile, on Fedora 22:
FROM java:latest
ENV HBASE_VERSION=1.1.0.1
RUN groupadd -r hbase && useradd -m -r -g hbase hbase
USER hbase
ENV HOME=/home/hbase
# Download'n extract hbase
RUN cd /home/hbase && \
wget -O - -q \
http://apache.mesi.com.ar/hbase/${HBASE_VERSION}/hbase-${HBASE_VERSION}-bin.tar.gz \
| tar --strip-components=1 -zxf -
# Upload local configuration
ADD ./conf/ /home/hbase/conf/
USER root
RUN chown -R hbase:hbase /home/hbase/conf
USER hbase
# Prepare data volumes
RUN mkdir /home/hbase/data
RUN mkdir /home/hbase/logs
VOLUME /home/hbase/data
VOLUME /home/hbase/logs
# zookeeper
EXPOSE 2181
# HBase Master API port
EXPOSE 60000
# HBase Master Web UI
EXPOSE 60010
# Regionserver API port
EXPOSE 60020
# HBase Regionserver web UI
EXPOSE 60030
WORKDIR /home/hbase
CMD /home/hbase/bin/hbase master start
As I understand it, when I set FROM java:latest, my Dockerfile is layered on top of that one, so JAVA_HOME should be set as it is in java:latest? Am I right? This Dockerfile builds, but when I docker run the image, it shows a "JAVA_HOME not found" error. How can I properly set it up?
Use the ENV directive, something like ENV JAVA_HOME /abc/def. See the docs: https://docs.docker.com/reference/builder/#env
Add to ~/.bashrc (or, globally, /etc/bashrc):
export JAVA_HOME=/usr/java/default
export PATH=$JAVA_HOME/bin:$PATH
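In a Dockerfile, the ENV route looks like the sketch below. The JDK path here is a hypothetical example; check the real location in your base image first (for instance with docker run --rm java:latest sh -c 'echo $JAVA_HOME'):

```dockerfile
FROM java:latest
# Hypothetical JDK path; verify the actual path inside the base image
ENV JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
ENV PATH=$JAVA_HOME/bin:$PATH
```

Unlike ~/.bashrc, ENV applies to every process in the container, including the CMD, so the hbase startup command will see it.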
