Run a repository in Docker

I am super new to Docker. I have a repository (https://github.com/hect1995/UBIMET_Challenge.git) that I developed on a Mac and want to test in an Ubuntu environment using Docker.
I have created a Dockerfile as:
FROM ubuntu:18.04
# Update apt and install the build prerequisites (git for cloning, cmake and build-essential for the build steps below)
RUN apt-get update \
 && apt-get install -y git cmake build-essential
RUN git clone https://github.com/hect1995/UBIMET_Challenge.git
WORKDIR /UBIMET_Challenge
RUN mkdir build
WORKDIR build
RUN cmake ..
RUN make
Now, following some examples, I am running:
docker run --publish 8000:8080 --detach --name trial
But I do not see any terminal output from the container, so I cannot tell what is going on. How can I create this container and check, from inside it, what things I still need to add?

TL;DR
Add -it and remove --detach,
or add an ENTRYPOINT in your Dockerfile and use docker exec -it to access your container.
Longer explanation:
With this command
docker run --publish 8000:8080 --detach --name trial image_name
you tell Docker to run the image image_name as a container named trial, publish container port 8080 on host port 8000, and detach (run in the background).
Your Dockerfile does not specify which command should be executed (no CMD or ENTRYPOINT); however, your image extends the ubuntu:18.04 image, so Docker will run the command defined in that image. It's bash.
By default your container runs in non-interactive mode, so bash has nothing to do and simply exits. Check this with the docker ps -a command.
You have also specified the --detach flag, which tells Docker to run the container in the background.
To avoid this situation, remove --detach and add -it (interactive, allocate a pseudo-TTY). Now you can execute commands in your container.
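Putting that together, the corrected commands would look like this (image_name stands for whatever tag you give the image at build time):
docker build -t image_name .
docker run -it --publish 8000:8080 --name trial image_name
This drops you into bash inside the container, where you can look around and test what else needs to be added.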
Next step
A better idea is to set the ENTRYPOINT to your application, or simply keep the container alive with a 'sleep infinity' command.
Try (sleep forever, or run /opt/my_app):
docker run --publish 8000:8080 --detach --name trial image_name sleep infinity
or
docker run --publish 8000:8080 --detach --name trial image_name /opt/my_app
You can also define an ENTRYPOINT in your Dockerfile:
ENTRYPOINT ["sleep", "infinity"]
or
ENTRYPOINT ["/opt/my_app"]
then use
docker exec -it trial bash    # run bash in the container
docker exec trial cat /opt/app_logs    # view log files
docker logs trial    # see the console output of your app

You want to provide an ENTRYPOINT or CMD layer in your Dockerfile, I believe.
Right now it configures itself nicely when you build it, but I don't see any component that points to an executable for the container to do something with.
You're probably not seeing any output because the container currently 'doesn't do anything'.
Check out this breakdown of CMD: Difference between RUN and CMD in a Dockerfile
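As a sketch, the question's Dockerfile with a CMD added could look like the following; the executable name challenge_app is a hypothetical placeholder for whatever the make step actually produces:
FROM ubuntu:18.04
RUN apt-get update \
 && apt-get install -y git cmake build-essential
RUN git clone https://github.com/hect1995/UBIMET_Challenge.git
WORKDIR /UBIMET_Challenge/build
RUN cmake .. && make
# Run the built executable when the container starts (placeholder name)
CMD ["./challenge_app"]
With a CMD in place, docker run prints the program's output directly, and docker logs trial shows it for a detached container.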

Related

Why do I keep seeing the nginx index.html on localhost when I run my Docker image?

I installed and ran nginx on my Linux machine to understand its configuration, etc. After a while I decided to remove it safely, following this thread, in order to use it in Docker.
Following this documentation, I ran this command:
sudo docker run --name ngix -d -p 8080:80 pillalexakis/myrestapi:01
and I saw nginx's homepage at localhost.
Then I deleted all the nginx images, stopped all containers, and also ran this command:
sudo docker system prune -a
But now I restarted my service with this command:
sudo docker run -p 192.168.2.9:7777:8085 phillalexakis/myfirstapi:01
and I keep seeing the nginx index.html at localhost.
How can I remove it completely?
Note: I'm new to Docker and might have missed a lot of things. Let me know what extra Docker commands I should run to provide better information.
Assuming your host has been prepared as below:
your files (index.html, js, etc.) under the folder /myhost/nginx/html
your nginx configuration at /myhost/nginx/nginx.conf
Solution
Map your files in on the fly (as a volume) from outside the Docker image via the Docker CLI.
This is the command:
docker run -it --rm -d -p 8080:80 --name web \
-v /myhost/nginx/html:/usr/share/nginx/html \
-v /myhost/nginx/nginx.conf:/etc/nginx/nginx.conf \
nginx
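With the container running, you can sanity-check what is actually served on the published port:
curl http://localhost:8080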
Alternatively, copy your files into the Docker image by building your own image via a Dockerfile.
This is your Dockerfile under /myhost/nginx:
FROM nginx:latest
COPY ./html/index.html /usr/share/nginx/html/index.html
This is the command to build your docker image
cd /myhost/nginx
docker build -t pillalexakis/nginx .
This is the command to run your docker image
docker run -it --rm -d -p 8080:80 --name web \
pillalexakis/nginx

Re-running a script in a Docker container

I have created a Docker image that includes some Python code and a shell script that can execute it. It is going to process a bunch of images from the host system.
This command should create a new container and run it:
sudo docker run -v /host/folder:/container/folder opencv:latest bash /extract-embeddings.sh
At the end, the container exits. If I type the same command, another container is created and exits on completion. But what is the correct usage of containers? Should I use restart, start, or run (and then clean up exited containers afterwards)? It just seems unnecessary to create a new container each time.
I basically just want a docker image containing some code and 3-4 different commands I can execute whenever needed.
And the docker start command doesn't seem to accept "bash /extract-embeddings.sh" as parameters; instead it thinks bash and extract-embeddings.sh are containers. So maybe I am misunderstanding the lifecycle of containers or their usage.
Edit:
Got it to work with:
docker run -t -d --name opencv -v /host/folder:/container/folder opencv:latest
docker exec -it opencv bash /extract-embeddings.sh
You can write a Dockerfile to create your Docker image and keep the scripts inside it.
Dockerfile:
FROM opencv:latest
COPY ./your-script /some_folder
Create image:
docker build -t my_image .
Run your container:
docker run -t -d --name my_container -v /host/folder:/container/folder my_image
Run the script inside the container:
docker exec -it <container_id_or_name> bash /some_folder/your-script
Build your own Docker image that starts from opencv:latest and give the command you run as the entrypoint. The Dockerfile could look like:
FROM opencv:latest
CMD ["/bin/bash", "/extract-embeddings.sh"]
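Then build the image and run it with the host folder mounted, as in the question (the tag my-embeddings is a hypothetical name):
docker build -t my-embeddings .
docker run --rm -v /host/folder:/container/folder my-embeddings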
Use docker create to create a named container.
sudo docker create --name=processmyimage -v /host/folder:/container/folder myopencv:latest
Then use docker start each time you want to run it.
sudo docker start processmyimage
This works well if there is only one command you want to run. If there is more than one command, I would take the approach of building an image that runs an unrelated command forever (like tail -f /dev/null). Then you can use
sudo docker exec -d processmyimage /bin/bash -c "<cmd-to-run>"
for each command
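A minimal sketch of such a keep-alive image (the processing script is already inside opencv:latest per the question, so only the default command changes):
FROM opencv:latest
# Keep the container running forever so individual commands can be exec'd into it
CMD ["tail", "-f", "/dev/null"]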

How to specify in Dockerfile that the image is interactive?

I have created a Docker container that runs a command-line tool. The container is supposed to be interactive. Can I somehow specify in the Dockerfile that the container is always started in interactive mode?
For reference, this is the Dockerfile:
FROM ubuntu:latest
RUN apt-get update && apt-get -y install curl
RUN mkdir adr-tools && \
cd adr-tools && \
curl -L https://github.com/npryce/adr-tools/archive/2.2.0.tar.gz --output adr-tools.tar.gz && \
tar -xvzf adr-tools.tar.gz && \
cp */src/* /usr/bin && \
rm -rf adr-tools
CMD ["/bin/bash"]
EDIT:
I know about the -it options of the run command. I'm explicitly asking for a way to do this in the Dockerfile.
EDIT2:
This is not a duplicate of Interactive command in Dockerfile: my question is about avoiding arguments to docker run by specifying them in the Dockerfile, whereas the supposed duplicate is about interactive input during the build of the image itself.
Many of the docker run options can only be specified at the command line or via higher-level wrappers (shell scripts, Docker Compose, Kubernetes, &c.). Along with port mappings and network settings, the “interactive” and “tty” options can only be set at run time, and you can’t force these in the Dockerfile.
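If you do use one of those wrappers, the two flags have file-level equivalents; for example, a minimal Docker Compose sketch (the service name adr-tools is an assumption):
# docker-compose.yml
services:
  adr-tools:
    build: .
    stdin_open: true   # equivalent of docker run -i
    tty: true          # equivalent of docker run -t
Then docker compose run adr-tools starts the container interactively.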
You can use the docker run command.
docker build -t curly .
docker run -it curly curl https://stackoverflow.com
The convention is:
docker run -it IMAGE_NAME [COMMAND] [ARG...]
Where [COMMAND] is curl and [ARG...] are the curl arguments, which is https://stackoverflow.com in my example.
-i enables interactive process mode. You can't specify this in the Dockerfile.
-t allocates a pseudo-TTY for the container.
Are you looking for the -it option?
From the Docker documentation:
For interactive processes (like a shell), you must use -i -t together
in order to allocate a tty for the container process.
So, for example you can run it like:
docker run -it IMAGE_NAME [COMMAND] [ARG...]
Actually, in Ubuntu, I am running an Apache server in the background.
But in your case, try the command below and you should be able to get inside the Docker container:
docker exec -i -t your_container_name bash

Docker: committing changes overrides the default start command

I have a Jupyter notebook Docker image from https://hub.docker.com/r/jupyter/datascience-notebook/.
Typically I run the notebook using this command:
docker run -it --rm -p 8888:8888 -v /home/folder/Projects/:/home/jovyan/Projects -e NB_UID=1000 jupyter/datascience-notebook
This works perfectly, and I am presented with the message that the notebook is running. I am able to create notebooks, run them, etc.
Now I want to install the Jupyter contrib extensions from https://github.com/ipython-contrib/jupyter_contrib_nbextensions. I followed the instructions at https://gist.github.com/glamp/74188691c91d52770807.
Using
docker run -it jupyter/datascience-notebook /bin/bash
command, I am able to enter the container. Then I use pip and bash to install the package. All this goes smoothly. I exit the container and commit the changes using the container ID:
docker commit containerid imagename
The problem is that after committing the changes, when I run the container I am presented with a bash prompt instead of the notebook start command.
Is there a way to commit package installation changes without changing the starting command of the image? Alternatively, is there a way to edit the container image without actually running it?
The problem is that you have committed a container that was started with the command /bin/bash.
What you need to do is start the container normally, using the command that you provided initially, adding the -d option to free the terminal:
docker run -it --rm -d --name datascience-notebook -p 8888:8888 -v /home/folder/Projects/:/home/jovyan/Projects -e NB_UID=1000 jupyter/datascience-notebook
Then, from the terminal, exec into the container and install the contrib extensions:
docker exec -it datascience-notebook /bin/bash
Exit the container and commit the image:
docker commit datascience-notebook <imagename>
Update:
In case the extensions can't be installed while the container is running, the solution is to build a custom Docker image using a Dockerfile:
FROM jupyter/datascience-notebook
RUN <installation commands>
Finally, build the image using docker build -t <image-name> . and run the newly built image.
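For the contrib extensions from this question, a minimal sketch of that Dockerfile might be (the two RUN commands are the package's documented install steps, not something specific to this image):
FROM jupyter/datascience-notebook
RUN pip install jupyter_contrib_nbextensions && \
    jupyter contrib nbextension install --user
Because CMD and ENTRYPOINT are inherited from the base image, the resulting image still starts the notebook server as before.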

Practically, what is the difference between docker run -dit(-itd) vs docker run -d?

I've used docker run -it to launch containers interactively and docker run -d to start them in the background. These two options seemed mutually exclusive. However, I've now noticed that docker run -dit (or docker run -itd) is quite common. So what is the difference? When is -it really needed together with -d?
Yes, sometimes it's necessary to include -it even when you use -d:
When the ENTRYPOINT is bash or sh,
docker run -d ubuntu:14.04 will stop immediately, because bash can't find a pseudo-terminal to be allocated to it. You have to specify -it so that bash or sh can be allocated to a pseudo-terminal:
docker run -dit ubuntu:14.04
If you want to use nano or vim with any container in the future, you have to specify -it when the image starts. Otherwise you'll get an error. For example:
docker run --name mongodb -d mongo
docker exec -it mongodb bash
apt-get update
apt-get install nano
nano somefile
The nano command will throw an error:
Error opening terminal: unknown.
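Per the answer above, starting the container with -dit avoids this; using the same mongo example:
docker run --name mongodb -dit mongo
docker exec -it mongodb bash    # nano now works inside this shell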
