I have built a Docker container system where the container contains a command-line application. I pass arguments to it and run it using the docker exec command from another application.
When I run the command-line application from inside the container, it takes 0.003s:
$ time comlineapp "hello"
But when I run it from outside the container using docker exec, it takes 0.500s:
$ time docker exec comlineapp "hello"
So clearly docker exec itself adds a lot of overhead. Any help reducing the docker exec time as much as possible would be appreciated.
Here is the Dockerfile:
FROM ubuntu:18.04
RUN adduser --disabled-password --gecos "" newuser
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get -y install time && \
    apt-get -y install gcc mono-mcs && \
    apt-get -y install pmccabe && \
    rm -rf /var/lib/apt/lists/*
All the required software is already installed.
When you send a request from outside Docker, there are multiple API requests over a Unix socket, plus a lot of extra setup for the process itself, such as applying a seccomp profile, setting up namespaces, dropping privileges, etc.
The proper way to leverage Docker here is to run a long-lived service inside the container and have its endpoints handle the requests, instead of paying the docker exec setup cost on every call. A simple Python service can cover this. We made the same change on our platform and saved thousands of milliseconds as a result.
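For illustration, here is a minimal sketch of such a service using only Python's standard library. The port 8080 and the exact request format are assumptions for the example; the point is that the container's main process stays up, so each call avoids the docker exec setup entirely:

# wrapper.py - minimal sketch; run this as the container's CMD.
# Assumes the CLI tool is on PATH as "comlineapp" and port 8080 is published.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class CliHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Treat the URL path (minus the leading "/") as the argument.
        arg = self.path.lstrip("/")
        result = subprocess.run(["comlineapp", arg],
                                capture_output=True, text=True)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(result.stdout.encode())

HTTPServer(("0.0.0.0", 8080), CliHandler).serve_forever()

A caller then pays only an HTTP round trip, e.g. curl http://localhost:8080/hello, rather than a full docker exec setup.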
I am new to Docker. I want to build an image with Ubuntu 20.04 and the bind9 service installed.
Below is my Dockerfile:
FROM ubuntu:20.04
ENV TZ=Asia
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update && apt-get install -y \
apt-utils \
systemctl \
bind9
RUN /usr/sbin/named -g -c /etc/bind/named.conf -u bind
RUN systemctl restart bind9
I am getting an error like the one below:
ERROR:systemctl:Unit bind9.service could not be found.
Can anyone help me understand why I am getting an error with the above command after installing bind9?
The error occurs only in Docker; if I run the same command on the host, which is also Ubuntu 20.04, it works fine.
You generally cannot use service management commands (like service or systemctl, etc) in a container, because there is no service manager running.
Additionally, even if there were a service manager running, it wouldn't make any sense to interact with it in a RUN command: these commands are part of the image build process, and there are no persistent services running at this point. A RUN command runs in an isolated environment that is completely torn down when the RUN command completes.
If you want bind to start when you run a container using your image, you would need to place the appropriate bind command line into the CMD option. For example, the official bind9 image includes:
CMD ["/usr/sbin/named", "-g", "-c", "/etc/bind/named.conf", "-u", "bind"]
(See the Dockerfile for details)
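Putting those pieces together, a sketch of a corrected version of the question's Dockerfile (same timezone trick, but no build-time service commands and no systemctl shim, which is no longer needed) might be:

FROM ubuntu:20.04
ENV TZ=Asia
ENV DEBIAN_FRONTEND=noninteractive
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update && apt-get install -y bind9
# Start named in the foreground when a container runs, not during the build.
CMD ["/usr/sbin/named", "-g", "-c", "/etc/bind/named.conf", "-u", "bind"]

docker run on this image then starts named directly, with no service manager involved.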
I'm trying to install SSH on a docker image using the command:
RUN apt install -y ssh
This seemingly installs SSH on the image; however, if I shell into a running container and run the command manually, I get prompted to set both region and timezone. Is it possible to pass these options to the install command?
The container in which I am able to manually install SSH can be started with the command below:
docker container run -it --rm -p 22:22 ubuntu:latest
My Dockerfile is as follows:
FROM ubuntu:latest
RUN apt update && \
    apt -y install ssh
Thanks
You can use DEBIAN_FRONTEND to disable interaction with the user (from the debconf documentation on DEBIAN_FRONTEND):
noninteractive
This is the anti-frontend. It never interacts with you at all,
and makes the default answers be used for all questions. It
might mail error messages to root, but that's it; otherwise it
is completely silent and unobtrusive, a perfect frontend for
automatic installs. If you are using this front-end, and require
non-default answers to questions, you will need to preseed the
debconf database; see the section below on Unattended Package
Installation for more details.
Like this:
FROM ubuntu:latest
ENV DEBIAN_FRONTEND=noninteractive
ENV TZ=Europe/London
RUN apt update && \
apt -y install ...
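If you would rather not bake DEBIAN_FRONTEND into the image permanently, a common alternative is to set it only for the install step, since a variable prefixed to a shell command applies to that command alone. A sketch (the package name ssh is carried over from the question):

FROM ubuntu:latest
ENV TZ=Europe/London
# Scope the frontend setting to this RUN instruction only.
RUN DEBIAN_FRONTEND=noninteractive apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y ssh

Containers started from this image will not carry DEBIAN_FRONTEND=noninteractive in their environment.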
This is my Dockerfile:
FROM ubuntu:latest
RUN apt-get update \
&& apt-get install -y git
RUN mkdir api
WORKDIR ./api
RUN git clone --branch develop https://link
WORKDIR ./api/api/
RUN apt-get install -y docker.io
RUN apt-get -y install curl
RUN curl -L "https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
RUN mv /usr/local/bin/docker-compose /usr/bin/docker-compose
RUN chmod +x /usr/bin/docker-compose
RUN docker-compose up
I want to run docker-compose up inside the Docker image build. However, it gives:
ERROR: Couldn't connect to Docker daemon at http+docker://localhost - is it running?
If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable
How can I solve this problem? I searched, but none of the solutions I found worked.
I'd suggest rethinking the entire approach of this Dockerfile: you can't run the Docker daemon in a Dockerfile and you can't start any sort of background process. A shell script that runs on the host might be a better match.
Running any sort of daemon inside a Dockerfile mostly doesn't work; at the end of each RUN instruction all running processes are terminated. Creating a Docker image doesn't preserve any running processes, just the final filesystem and metadata like the default CMD to run when you start a container. So even if docker-compose up worked, the results of that wouldn't be persisted in your image.
Running a Docker daemon inside a Docker container is difficult and generally discouraged. (Sharing the host's Docker socket has significant security implications, but is the preferred approach.) Either way requires additional permissions that, again, just aren't available inside a Dockerfile.
The other red flag for me here is the RUN git clone line. Because of Docker's layer caching, it will be happy to say "oh, I've already RUN git clone, so I don't need to repeat that step", and you won't wind up with current code. Feeding credentials for remote git repositories into a Dockerfile is also tricky. I'd recommend running source-control commands exclusively on the host, not in a Dockerfile.
The standard approach here would be to commit a docker-compose.yml file to the top of your repository, and run git clone and docker-compose up directly from the host. You can't use a Dockerfile as a general-purpose automation tool.
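As a sketch of that host-side approach (the clone URL and branch come from the question's Dockerfile; the directory name api and the use of -d are assumptions):

#!/bin/sh
# Run on the host, where the Docker daemon is actually reachable.
set -e
if [ -d api ]; then
    git -C api pull                 # update an existing checkout
else
    git clone --branch develop https://link api
fi
cd api
docker-compose up -d                # uses the docker-compose.yml committed to the repo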
I am new to Docker and I am learning how to build a new container. I ran into an issue building a container inherited from Ubuntu. I want to install Python 3 and some other packages on the Ubuntu container, with proper messages, but it does not work.
When I build an image from a Dockerfile containing:
FROM ubuntu
CMD echo "hello new Ubuntu"
RUN apt-get upgrade && apt-get update && apt-get install -y python3
CMD echo "installed python"
then running the built image with docker run -it my_new_ubuntu does not enter interactive mode; it only prints "installed python", and not even "hello new Ubuntu".
However, when I build from a Dockerfile without any messages:
FROM ubuntu
RUN apt-get upgrade && apt-get update && apt-get install -y python3
and run the built image with docker run -it my_new_ubuntu, it enters the Ubuntu root shell and I can call python. I am not sure why the first Dockerfile does not work. It seems that I cannot mix RUN and CMD commands together.
I appreciate any help or comment.
RUN specifies a command to run while building an image from your Dockerfile. You can have multiple RUN instructions, and each will apply to the image in the order specified.
CMD specifies the default command to run when the image is instantiated into a container and started. If there are multiple CMD instructions, only the last one applies.
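For example, a sketch of the first Dockerfile reworked so the messages appear at build time and the container still opens an interactive shell:

FROM ubuntu
# RUN executes at build time, so its output shows up in the build log.
RUN echo "hello new Ubuntu"
RUN apt-get update && apt-get install -y python3
RUN echo "installed python"
# No CMD here: the base image's default command (bash) applies, so
# "docker run -it my_new_ubuntu" drops into an interactive shell.

(apt-get update is placed before the install so the package lists exist; the upgrade step is dropped for brevity.)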
I am a newbie to Docker and trying to understand how to create Dockerfiles.
While attempting this, I created this sample file:
FROM debian
RUN apt-get update && apt-get upgrade -y
RUN apt-get install apache2 -y
COPY ./index.html /etc/www/html/
CMD service apache2 start && /bin/bash
The CMD part has always confused me, and I am using /bin/bash mostly because I read somewhere that we need to make sure there is some command running in the Docker image when we bring it up. I use this to run the image:
docker run -t -p 5000:8080 --name myfinal 912ccd578eae
where I'm using the ID of the built image. As you can see, I'm a novice, and even the minutest of details would help.
The usual CMD for apache2 should be
CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]
That way, you don't have to use the "bash" trick to keep a foreground process running, and any exit signal will correctly reach the apache2 process rather than the bash one.
No need for ENTRYPOINT here. If you wrote the CMD in shell form instead, e.g. CMD apachectl -D FOREGROUND, Docker would run it through the default shell, so it would be the same as:
/bin/sh -c "apachectl -D FOREGROUND"
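Putting it all together, a sketch of the revised Dockerfile might look like this (using the apachectl form from above; /var/www/html is Apache's standard document root on Debian, assuming the question's /etc/www/html was unintentional):

FROM debian
RUN apt-get update && apt-get upgrade -y
RUN apt-get install -y apache2
COPY ./index.html /var/www/html/
# Run Apache in the foreground as the container's main (PID 1) process.
CMD ["apachectl", "-D", "FOREGROUND"]

Since Apache listens on port 80 by default, you would then run it with something like docker run -p 5000:80 myfinal and browse to http://localhost:5000.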