Difficulty starting Kafka from docker

Docker newb here - I've defined a simple image that grabs and extracts kafka, exposes the port and then tries to start the server.
For some reason it's not seeing the file as executable in the docker container.
My dockerfile is:
FROM openjdk:8u151-jre-alpine
COPY start-kafka.sh /
ENV PATH="${PATH}:/"
RUN chmod a+x start-kafka.sh
RUN wget http://apache.mirror.gtcomm.net/kafka/2.1.0/kafka_2.11-2.1.0.tgz
RUN gzip -d kafka_2.11-2.1.0.tgz
RUN tar -xvf kafka_2.11-2.1.0.tar
RUN ls -la
RUN echo $PATH
EXPOSE 9092
CMD ["start-kafka.sh"]
My start-kafka.sh is:
#!/bin/sh
cd /kafka_2.11-2.1.0
ls
cd bin
ls
cat kafka-server-start.sh
exec "/kafka_2.11-2.1.0/bin/kafka-server-start.sh" "/kafka_2.11-2.1.0/config/server.properties"
When running docker run -p 9092:9092 kafka1 I get the output of the cat command then the following...
/start-kafka.sh: exec: line 8: /kafka_2.11-2.1.0/bin/kafka-server-start.sh: not found
Help please!

Found the answer to this with help from a colleague - the kafka-server-start.sh requires bash, which the alpine image didn't provide. Added the installation of bash to my script and all was good!
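For anyone who hits the same error: kafka-server-start.sh declares #!/bin/bash, and when a shebang interpreter is missing the shell reports the script itself as "not found". A minimal sketch of that fix, assuming it goes into the Dockerfile rather than start-kafka.sh:
FROM openjdk:8u151-jre-alpine
# Alpine ships only the busybox /bin/sh; the Kafka scripts declare #!/bin/bash
RUN apk add --no-cache bash
COPY start-kafka.sh /
...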

Related

docker-compose stop / start my_image

Is it normal to lose all data, installed applications and created folders inside a container when executing docker-compose stop my_image and docker-compose start my_image?
I'm creating containers with docker-compose up --scale my_image=4
update no. 1
My containers have an sshd server running in them. When I connect to a container and execute touch test.txt, I see that the file was created.
However, after executing docker-compose stop my_image and docker-compose start my_image, the container is empty and ls -l shows that test.txt is gone.
update no. 2
my Dockerfile
FROM oraclelinux:8.5
RUN (yum update -y; \
     yum install -y openssh-server openssh-clients initscripts wget passwd tar crontabs unzip; \
     yum clean all)
RUN (ssh-keygen -A; \
     sed -i 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config; \
     sed -i 's/#UsePAM no/UsePAM no/g' /etc/ssh/sshd_config; \
     sed -i 's/#PermitRootLogin yes/PermitRootLogin yes/' /etc/ssh/sshd_config; \
     sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/' /etc/ssh/sshd_config)
RUN (mkdir -p /root/.ssh/; \
     echo "StrictHostKeyChecking=no" > /root/.ssh/config; \
     echo "UserKnownHostsFile=/dev/null" >> /root/.ssh/config)
RUN echo "root:oraclelinux" | chpasswd
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
EXPOSE 22
my docker-compose
version: '3.9'
services:
  my_image:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 30000-30007:22
when I connect to a container
Execute touch test.txt
Execute docker-compose stop my_image
Execute docker-compose start my_image
Execute ls -l
I see no file test.txt (in fact I see that the folder is empty)
update no. 3
entrypoint.sh
#!/bin/sh
# Start the ssh server
/usr/sbin/sshd -D
# Execute the CMD
exec "$@"
Other details
When containers are all up and running, I choose a container running
on a specific port, say port 30001, then using putty I connect to that specific container,
execute touch test.txt
execute ls -l
I do see that the file was created
I execute docker-compose stop my_image
I execute docker-compose start my_image
I connect via putty to port 30001
I execute ls -l
I see no file (folder is empty)
I try other containers to see if the file exists inside one of them, but I see no file present.
So, after a brutal brute-force debugging session, I realized that I lose data only when I fail to disconnect from ssh before stopping / restarting the container. When I do disconnect first, the data does not disappear after stopping / restarting.
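Independent of that ssh-disconnect behaviour, anything that must survive container re-creation (docker-compose down followed by up, or up recreating containers after a config change) is normally kept in a named volume. A minimal sketch of the compose file with one added, assuming the files of interest live under /data (the path is only illustrative):
version: '3.9'
services:
  my_image:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 30000-30007:22
    volumes:
      - my_data:/data
volumes:
  my_data: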

Docker Container is not running

Please help. When I want to go into a container it says
Error response from daemon: Container 90599013c666d332ff6560ccde5053d9127e72042ecc3887550aef90fa1d1eac is not running
My Dockerfile:
FROM ubuntu:16.04
MAINTAINER Anton Lapitski <a.lapitski@godeltech.com>
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ADD ./ /usr/src/app
EXPOSE 80
ENTRYPOINT ["/bin/sh", "-c", "/usr/src/app/entry.sh"]
Starting script - start.sh:
sudo docker build -t starter .
sudo docker run -t -v mounted-directory:/usr/src/app/mounted-directory -p 80:80 starter
entry.sh script:
echo "Hello World"
ls -l
pwd
if mountpoint -q /mounted-directory
then
echo "mounted"
else
echo "not mounted"
fi
sudo docker ps -a gives:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
90599013c666 starter "/bin/sh -c /usr/src…" 18 minutes ago Exited (0) 18 minutes ago thirsty_wiles
And most important:
sudo docker exec -it 90599013c666 bash
Error response from daemon: Container 90599013c666d332ff6560ccde5053d9127e72042ecc3887550aef90fa1d1eac is not running
Please could you tell what I am doing wrong?
P.S. Adding the -d flag when running didn't help.
Once the ENTRYPOINT completes (in any form), the container exits.
Once the container exits, you can't docker exec into it.
If you want to get a shell on the image you just built to poke around in it, you can
sudo docker run --rm -it --entrypoint /bin/sh starter
To make this slightly easier to run, you might change ENTRYPOINT to CMD in your Dockerfile. (Docker will run the ENTRYPOINT passing the CMD as command-line arguments; or if there is no entrypoint just run the CMD.)
...
RUN chmod +x ./app.sh
CMD ["./app.sh"]
Having done that, you can more easily override the command
sudo docker run --rm -it starter /bin/sh
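As a hedged illustration of how ENTRYPOINT and CMD combine (the --help argument here is made up purely for illustration):
ENTRYPOINT ["/usr/src/app/entry.sh"]
CMD ["--help"]
# docker run starter            runs: /usr/src/app/entry.sh --help
# docker run starter /bin/sh    runs: /usr/src/app/entry.sh /bin/sh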
You can try
docker start container_id and then docker exec -ti container_id bash for a stopped container.
You cannot execute the container, because your ENTRYPOINT script has been finished, and the container stopped. Try this:
Remove the ENTRYPOINT from your Dockerfile
Rebuild the image
run it with sudo docker run -it -v mounted-directory:/usr/src/app/mounted-directory -p 80:80 starter sh
The key is the -i flag and the sh at the end of the command.
I tried these two commands and it works:
sudo docker start <container_id>
docker exec -it <containerName> /bin/bash

dockerfile, how to support docker run options -d, -v and -p?

I have a very simple dockerfile:
FROM ubuntu:16.04
ADD node-v6.11.1 /usr/local
RUN ln -s /usr/local/bin/node /usr/local/bin/nodejs
RUN node -v
COPY server /server
RUN cd /server && npm install
EXPOSE 80 443
VOLUME ["/server/public"]
CMD cd /server && node server
sudo docker run server works as expected.
sudo docker run server -v /public:/server/public results in:
starting container process caused "exec: \"-v\": executable file not found in $PATH"
sudo docker run server -d results in:
starting container process caused "exec: \"-d\": executable file not found in $PATH"
sudo docker run server -p 80:80 gives a similar error.
You have to pass the options before the image name, as follows:
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
For example:
sudo docker run -v /public:/server/public server
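The three options from the question can be combined the same way, as long as they all come before the image name; for example (paths and ports taken from the question):
sudo docker run -d -p 80:80 -v /public:/server/public server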

Why does docker "--filter ancestor=imageName" find the wrong container?

I have a deployment script that builds new images, stops the existing containers with the same image names, then starts new containers from those images.
I stop the container by image name using the answer here: Stopping docker containers by image name - Ubuntu
But this command stops containers that don't have the specified image name. What am I doing wrong?
Here is the dockerfile:
FROM ubuntu:14.04
MAINTAINER j@eka.com
# Settings
ENV NODE_VERSION 5.11.0
ENV NVM_DIR /root/.nvm
ENV NODE_PATH $NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
# Replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Install libs
RUN apt-get update
RUN apt-get install curl -y
RUN curl https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh | bash \
    && chmod +x $NVM_DIR/nvm.sh \
    && source $NVM_DIR/nvm.sh \
    && nvm install $NODE_VERSION \
    && nvm alias default $NODE_VERSION \
    && nvm use default
RUN apt-get clean
# Install app
RUN mkdir /app
COPY ./app /app
#Run the app
CMD ["node", "/app/src/app.js"]
I build like so:
docker build -t "$serverImageName" .
and start like so:
docker run -d -p "3000:3000" -e db_name="$db_name" -e db_username="$db_username" -e db_password="$db_password" -e db_host="$db_host" "$serverImageName"
Why not use the container name to differentiate your environments?
docker run -d --rm --name nginx-dev nginx
40ca9a6db09afd78e8e76e690898ed6ba2b656f777b84e7462f4af8cb4a0b17d
docker run -d --rm --name nginx-qa nginx
347b32c85547d845032cbfa67bbba64db8629798d862ed692972f999a5ff1b6b
docker run -d --rm --name nginx nginx
3bd84b6057b8d5480082939215fed304e65eeac474b2ca12acedeca525117c36
Then use docker ps
docker ps -f name=nginx$
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3bd84b6057b8 nginx "nginx -g 'daemon ..." 30 seconds ago Up 28 seconds 80/tcp, 443/tcp nginx
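In a deployment script, the same name filter can then drive the stop step; a minimal sketch, reusing the nginx-dev container started above:
docker stop $(docker ps -q -f "name=nginx-dev$")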
According to the docs, --filter ancestor can find the wrong containers if their images are in any way children of other images.
So, to be sure my images are separate right from the start, I added this line to the start of my dockerfile, after the FROM and MAINTAINER commands:
RUN echo DEVTESTLIVE: This line ensures that this container will never be confused as an ancestor of another environment
Then in my build scripts after copying the dockerfile to the distribution folder I replace DEVTESTLIVE with the appropriate environment:
sed -i -e "s/DEVTESTLIVE/$env/g" ../dist/server/dockerfile
This seems to have worked; I now have containers for all three environments running simultaneously and can start and stop them automatically through their image names.
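With the images made distinct per environment, stopping by image name becomes a plain filter query; a sketch reusing $serverImageName from the build step above (it assumes at least one container from that image is running):
docker stop $(docker ps -q --filter "ancestor=$serverImageName")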

docker container volumes from directory access in CMD instruction

$ sudo docker run -d --name ext -v /external busybox /bin/sh
and
run.sh
#!/bin/bash
if [[ -f "/external" ]]
then
    echo 'success!'
else
    echo "Sorry, I can't find /external..."
fi
and
Dockerfile
FROM ubuntu:14.04
MAINTAINER newbie
ADD run.sh /run.sh
RUN chmod +x /run.sh
CMD ["bash", "/run.sh"]
and
$ sudo docker build -t app .
and
$ sudo docker run -d --volumes-from ext app
ac57afb95f923eeffd28e7d9d9cb76cb1b7699ebd
So
$ sudo docker logs ac57afb95f923eeffd28e7d9d9cb76cb1b7699ebd
Sorry, I can't find /external...
My question is: how can I access the /external directory from run.sh in the CMD instruction?
Is it impossible?
Thank you~
Modify your run.sh: -f checks whether a file exists; in this case use -d, which checks whether a directory exists.
Check if a directory exists in a shell script
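A sketch of run.sh with only that test changed:
#!/bin/bash
# -d is true when /external exists and is a directory (the volume is a directory, not a regular file)
if [[ -d "/external" ]]
then
    echo 'success!'
else
    echo "Sorry, I can't find /external..."
fi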
Furthermore, if you only want to make a volume container, you don't need the -d flag or the /bin/sh at the end. The volume container run command changes to this:
$ sudo docker run --name ext -v /external busybox

Resources