Docker run commands work but Dockerfile doesn't - docker

I have a problem running the image built from my Dockerfile. The CLI commands work fine, but when I use the Dockerfile I get an error from localhost:
localhost didn’t send any data.
What I am doing is simple. By CLI:
docker run -d --name mytomcat -p 8080:8080 tomcat:latest
docker exec -it mytomcat /bin/bash
mv webapps webapps2
mv webapps.dist/ webapps
exit
Which works fine.
My Dockerfile:
FROM tomcat:latest
CMD mv webapps webapps2 && mv webapps.dist/ webapps && /bin/bash
Build and run:
docker build -t myrepo/tomacat:1.00 .
docker run -d --name mytomcat -p 8080:8080 myrepo/tomacat:1.00
Doesn't work and shows the above error.
Note: I am using the mv commands because otherwise I get a 404 error!
Does anybody know the problem here?

When your Dockerfile has a CMD command, that runs instead of the command in the base image. With the tomcat image, the base image would run the Tomcat server; but with this Dockerfile, it's trying to run a bash shell instead, and without any input that just exits immediately.
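The "exits immediately" part is ordinary shell behaviour, not something Docker-specific; you can see it locally:

```shell
# A container's main process must keep running. bash with nothing on
# stdin exits immediately, which is why a detached container whose
# main process is a bare shell stops right after it starts.
bash </dev/null
echo "bash exited with status $?"
# prints: bash exited with status 0
```

A detached `docker run` gives the shell no terminal and no input, so this is exactly what happens inside the container.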
To just move files around, it's usually better to use COPY and RUN directives to set up the image once, rather than repeating these steps every time you run the container. For a setup where the base image already has a reasonable CMD, you don't need to repeat it in your own Dockerfile.
FROM tomcat:latest
RUN mv webapps webapps2 && mv webapps.dist/ webapps
# no particular mention of bash; use the `CMD` from the base image
It's not uncommon for a base image to include some sort of runtime that needs to be configured, but for the base image's CMD to still be correct. In addition to tomcat, nginx and php:fpm work similarly; so long as their configuration files and code are in the right place, you don't need to repeat the CMD.

executing a simple go .exe in a dockerfile

I have the following dockerfile and everything works fine except for running the .exe
FROM golang:latest
# Set the Current Working Directory inside the container
WORKDIR $GOPATH/src/github.com/user/goserver
# Copy everything from the current directory to the PWD (Present Working Directory) inside the container
COPY . .
# Download all the dependencies
RUN go get -d -v ./...
# Install the package
RUN GOOS=linux GOARCH=amd64 go build -o goserver .
# This container exposes port 8080 to the outside world
EXPOSE 8080
# Run the executable
CMD ./goserver
The problem is that it does not execute ./goserver. I need to manually go into the container and then execute it. Any idea what could be going wrong here?
The problem is the way you are running the container.
By running the container with the following:
docker run -it -p 8080:8080 goserver /bin/bash
you are overriding the command defined with CMD in the Dockerfile with the /bin/bash command.
You can start the container in detached mode by running it as:
docker run -d -p 8080:8080 goserver
Further, if you want to later exec into the container then you can use the docker exec command.
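The override rule can be sketched with a plain shell function (hypothetical names, just an analogy for how docker picks the container's pid 1):

```shell
# Sketch (not docker itself) of the rule: a command given after the
# image name on `docker run` replaces the image's CMD entirely, so the
# server binary never starts.
image_cmd="./goserver"          # stands in for CMD in the Dockerfile
run_container() {
  # "$1" plays the role of the optional command after the image name
  echo "container pid 1: ${1:-$image_cmd}"
}
run_container                   # prints: container pid 1: ./goserver
run_container /bin/bash         # prints: container pid 1: /bin/bash
```

So passing /bin/bash doesn't run bash "as well as" the server; it runs bash instead of it.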

A script copied through the dockerfile cannot be found after I run docker with -w

I have the following Dockerfile
FROM ros:kinetic
COPY . ~/sourceshell.sh
RUN ["/bin/bash","-c","source /opt/ros/kinetic/setup.bash"]
When I did this (after building it with docker build -t test):
docker run --rm -it test /bin/bash
I had a bash terminal and could clearly see there was a sourceshell.sh file that I could even execute from the host.
However I modified the docker run like this
docker run --rm -it -w "/root/afolder/" test /bin/bash
and now the file sourceshell.sh is nowhere to be seen.
Where do the files copied in the Dockerfile go when the working directory is reassigned with docker run?
The -w option tells your container to execute commands in, and start from, /root/afolder, while you COPYed sourceshell.sh relative to the build context. I'm not sure, but I think ~ is not valid in a COPY destination either; you can check the documentation. To see your file exactly where you land, write your Dockerfile like the one below; otherwise you have to navigate to the file with cd:
FROM ros:kinetic
WORKDIR /root/afolder
COPY . ./sourceshell.sh
RUN ["/bin/bash","-c","source /opt/ros/kinetic/setup.bash"]
Just in case you don't understand the difference between the build and the run process: the code above belongs to the build step, meaning you first build an image (the one you called test). Then the command:
docker run --rm -it -w "/root/afolder/" test /bin/bash
runs a container from the test image and uses /root/afolder as the working directory.
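On the ~ point above: COPY destinations are not shell-expanded, so the tilde becomes a literal path component rather than $HOME. A quoted ~ in an ordinary shell behaves the same way, which you can check locally:

```shell
# COPY does not expand ~; it becomes a literal directory name, the
# same way a quoted ~ behaves in the shell:
tmp=$(mktemp -d) && cd "$tmp"
mkdir '~'               # a directory literally named "~", not $HOME
ls -d '~'               # prints: ~
cd / && rm -rf "$tmp"
```

That is why the original COPY put the file somewhere unexpected instead of under /root.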

Run a repository in Docker

I am super new to Docker. I have a repository (https://github.com/hect1995/UBIMET_Challenge.git), developed on a Mac, that I want to test in an Ubuntu environment using Docker.
I have created a Dockerfile as:
FROM ubuntu:18.04
# Update aptitude with new repo
RUN apt-get update \
&& apt-get install -y git
RUN git clone https://github.com/hect1995/UBIMET_Challenge.git
WORKDIR /UBIMET_Challenge
RUN mkdir build
WORKDIR build
RUN cmake ..
RUN make
Now, following some examples I am running:
docker run --publish 8000:8080 --detach --name trial
But I do not see the terminal output from the container, so I can't tell what is going on. How can I run this container and check, from inside it, what things I need to add, and so on?
TLDR
add '-it' and remove '--detach'
or add ENTRYPOINT in Dockerfile and use docker exec -it to access your container
Longer explanation:
With this command
docker run --publish 8000:8080 --detach --name trial image_name
you tell docker to run the image image_name as a container named trial, publish container port 8080 on host port 8000, and detach (run in the background).
Your Dockerfile does not say which command should be executed (no CMD or ENTRYPOINT); however, your image extends the ubuntu:18.04 image, so docker will run the command defined there. It's bash.
Your container is in non-interactive mode by default, so bash has nothing to do and simply exits. Check this with the docker ps -a command.
Also, you have specified the --detach option, which tells docker to run the container in the background.
To avoid this situation, remove --detach and add -it (interactive, allocate a pseudo-TTY). Now you can execute commands in your container.
Next step
A better idea is to set the ENTRYPOINT to your application, or just keep the container alive with a sleep infinity command.
Try (sleep forever, or run /opt/my_app):
docker run --publish 8000:8080 --detach --name trial image_name sleep infinity
or
docker run --publish 8000:8080 --detach --name trial image_name /opt/my_app
You can also define ENTRYPOINT in your Dockerfile
ENTRYPOINT ["sleep", "infinity"]
or
ENTRYPOINT ["/opt/my_app"]
then use
docker exec -it trial bash #to run bash on container
docker exec trial cat /opt/app_logs #to see logs
docker logs trial # to see console output of your app
You want to provide an ENTRYPOINT or CMD layer in your Dockerfile, I believe.
Right now, it configures itself nicely when you build it, but I'm not seeing any component that points to an executable for the container to do something with.
You're probably not seeing any output because the container currently 'doesn't do anything'.
Check out this breakdown of CMD: Difference between RUN and CMD in a Dockerfile
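Putting the answers together, one way to give this image something to run is to add a CMD. A sketch, with two assumptions called out in comments: the original Dockerfile installed only git, so the build tools are added here, and the executable name produced by make is hypothetical:

```dockerfile
FROM ubuntu:18.04
# assumption: cmake and a compiler are needed for the build; the
# original Dockerfile only installed git
RUN apt-get update \
 && apt-get install -y git cmake g++
RUN git clone https://github.com/hect1995/UBIMET_Challenge.git
WORKDIR /UBIMET_Challenge/build
RUN cmake .. && make
EXPOSE 8080
# hypothetical executable name; use whatever `make` actually produces
CMD ["./UBIMET_Challenge"]
```

With a CMD in place, `docker run -it` gives you the program's output directly, and `docker logs trial` works for the detached case.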

Docker bind-mount not working as expected within AWS EC2 Instance

I have created the following Dockerfile to run a spring-boot app: myapp within an EC2 instance.
# Use an official java runtime as a parent image
FROM openjdk:8-jre-alpine
# Add a user to run our application so that it doesn't need to run as root
RUN adduser -D -s /bin/sh myapp
# Set the current working directory to /home/myapp
WORKDIR /home/myapp
#copy the app to be deployed in the container
ADD target/myapp.jar myapp.jar
#create a file entrypoint-dos.sh and put the project entrypoint.sh content in it
ADD entrypoint.sh entrypoint-dos.sh
#Get rid of windows characters and put the result in a new entrypoint.sh in the container
RUN sed -e 's/\r$//' entrypoint-dos.sh > entrypoint.sh
#set the file as an executable and set myapp as the owner
RUN chmod 755 entrypoint.sh && chown myapp:myapp entrypoint.sh
#set the user to use when running the image to myapp
USER myapp
# Make port 9010 available to the world outside this container
EXPOSE 9010
ENTRYPOINT ["./entrypoint.sh"]
Because I need to access myapp's logs from the EC2 host machine, I want to bind-mount a folder onto the logs folder inside the myapp container, at /home/myapp/logs.
This is the command I use to run the image in the EC2 console:
docker run -p 8090:9010 --name myapp myapp:latest -v home/ec2-user/myapp:/home/myapp/logs
The container starts without any issues, but the mount is not achieved as noticed in the following docker inspect extract:
...
"Mounts": [],
...
I have tried the followings actions but ended up with the same result:
--mount type=bind instead of -v
use volumes instead of bind-mount
I have even tried the --privileged option
In the Dockerfile: I tried to use the USER root instead of myapp
I believe this has nothing to do with the EC2 machine but with my container, since running other containers with bind-mounts on the same host works like a charm.
I am pretty sure I am messing up my Dockerfile.
But what am I doing wrong in that Dockerfile, or what am I missing?
Here you have the entrypoint.sh if needed:
#!/bin/sh
echo "The app is starting ..."
exec java ${JAVA_OPTS} -Djava.security.egd=file:/dev/./urandom -jar -Dspring.profiles.active=${SPRING_ACTIVE_PROFILES} "${HOME}/myapp.jar" "$@"
I think the issue might be the order of the options on the command line. Docker expects the last two arguments to be the image id/name and (optionally) a command/args to run as pid 1.
https://docs.docker.com/engine/reference/run/
The basic docker run command takes this form:
$ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
You have the mount option (-v in the example you provided) after the image name (myapp:latest). I'm not sure, but perhaps the -v ... is being interpreted as arguments to be passed to your entrypoint script (which are being ignored), and docker run isn't seeing it as a mount option.
Also, the source of the mount here (home/ec2-user/myapp) doesn't start with a leading forward slash (/). Docker treats a -v source that doesn't start with / as a named volume rather than a host path, so make sure the source path starts with a forward slash (i.e. /home/ec2-user/myapp) so that you're sure it will always mount the directory you expect. I.e. -v /home/ec2-user...
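The leading-slash distinction can be sketched with plain shell pattern matching (this is an illustration, not docker's actual parser):

```shell
# A -v source without a leading slash is read as a named volume, not a
# host path, so the bind mount you expect never happens.
src="home/ec2-user/myapp"
case "$src" in
  /*) echo "bind mount of host path: $src" ;;
  *)  echo "named volume, not a host path: $src" ;;
esac
# prints: named volume, not a host path: home/ec2-user/myapp
```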
Have you tried this order:
docker run -p 8090:9010 --name myapp -v /home/ec2-user/myapp:/home/myapp/logs myapp:latest

Docker add warfile to official Tomcat image

I pulled official Docker image for Tomcat by running this command.
docker run -it --rm tomcat:8.0
By using this as base image I need to build new image that contains my war file in the tomcat webapps folder. I created Dockerfile like this.
From tomcat8
ADD warfile /usr/local/tomcat
When I build and run an image from this Dockerfile, I am not able to see the Tomcat front page.
Can anybody tell me how to add my war file to the official Tomcat image's webapps folder?
Reading from the documentation of the repo, you would do something like this:
FROM tomcat
MAINTAINER xyz
ADD your.war /usr/local/tomcat/webapps/
CMD ["catalina.sh", "run"]
Then build your image with docker build -t yourName <path-to-dockerfile>
And run it with:
docker run --rm -it -p 8080:8080 yourName
--rm removes the container as soon as you stop it
-p forwards the port to your host (or if you use boot2docker to this IP)
-it allows interactive mode, so you see if something gets deployed
Building on @daniel's answer, if you want to deploy your WAR to the root of Tomcat, I did this:
FROM tomcat:7-jre7
MAINTAINER xyz
RUN ["rm", "-fr", "/usr/local/tomcat/webapps/ROOT"]
COPY ./target/your-webapp-1.0-SNAPSHOT.war /usr/local/tomcat/webapps/ROOT.war
CMD ["catalina.sh", "run"]
It deletes the existing root webapp, copies your WAR to the ROOT.war filename then executes tomcat.
docker run -it --rm --name MYTOMCAT -p 8080:8080 -v .../wars:/usr/local/tomcat/webapps/ tomcat:8.0
where wars folder contains war to deploy
How do you check the webapps folder?
The webapps folder is inside the docker container.
If you want to access the webapps folder of your container, you can mount a host directory inside the container and use it as the webapps folder. That way you can access the files without entering the container.
See here for details
To access your logs you could do that when you run your container e.g.
docker run --rm -it -p 8080:8080 **IMAGE_NAME** /path/to/tomcat/bin/catalina.sh && tail -f /path/to/tomcat/logs
or you start your docker container and then do something like:
docker exec -it **CONTAINER_ID** tail -f /path/to/tomcat/logs
If you are using a Spring MVC project, then you need a server to run your application. Suppose you use Tomcat; then you need the Tomcat base image, which you can specify through the FROM command.
You can set environment variables using the ENV command.
You can additionally use the RUN command, which executes while the Docker image is building,
e.g. to give read/write/execute permissions on the webapps folder so Tomcat can unzip the war file:
RUN chmod -R 777 $CATALINA_HOME/webapps
One more command is CMD. Whatever you specify in the CMD command executes when the container runs. You can specify options in the CMD command using double quotes (" ") separated by commas (,),
e.g.
CMD ["catalina.sh","run"]
(NOTE: remember that RUN executes while the image is building and CMD executes when the container runs; this is confusing for new users. Also use catalina.sh run rather than start in CMD: start backgrounds Tomcat, so the container's main process exits immediately.)
This is my Dockerfile -
FROM tomcat:9.0.27-jdk8-openjdk
VOLUME /tmp
RUN chmod -R 777 $CATALINA_HOME/webapps
ENV CATALINA_HOME /usr/local/tomcat
COPY target/*.war $CATALINA_HOME/webapps/myapp.war
EXPOSE 8080
CMD ["catalina.sh","run"]
Build your image using command
docker build -t imageName <path_of_Dockerfile>
check your docker image using command
docker images
Run image using command
docker run -p 9999:8080 imageName
here 8080 is the Tomcat port inside the container and the application is accessible on host port 9999
Try accessing your application at
localhost:9999/myapp/
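The -p mapping reads host:container, left to right; splitting the string shows which side is which:

```shell
# -p HOST:CONTAINER, e.g. 9999 on the host forwards to 8080 in the
# container where Tomcat listens.
mapping="9999:8080"
echo "host port ${mapping%%:*} -> container port ${mapping##*:}"
# prints: host port 9999 -> container port 8080
```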
