Installing SSH on a Docker image

I'm trying to install SSH on a docker image using the command:
RUN apt install -y ssh
This seemingly installs SSH on the image; however, if I shell into a running container and run the command manually, I get prompted to set both the region and the timezone. Is it possible to pass these options to the SSH install command?
The container I am able to manually install SSH on can be started with the command below:
docker container run -it --rm -p 22:22 ubuntu:latest
My Dockerfile is as follows:
FROM ubuntu:latest
RUN apt update && \
    apt -y install ssh
Thanks

You can use DEBIAN_FRONTEND to disable interaction with the user (from the debconf documentation on DEBIAN_FRONTEND):
noninteractive
This is the anti-frontend. It never interacts with you at all,
and makes the default answers be used for all questions. It
might mail error messages to root, but that's it; otherwise it
is completely silent and unobtrusive, a perfect frontend for
automatic installs. If you are using this front-end, and require
non-default answers to questions, you will need to preseed the
debconf database; see the section below on Unattended Package
Installation for more details.
Like this:
FROM ubuntu:latest
ENV DEBIAN_FRONTEND=noninteractive
ENV TZ=Europe/London
RUN apt update && \
    apt -y install ...
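If you would rather not bake noninteractive into the final image (an ENV persists into running containers and suppresses prompts there too), a common variant is to set it as a build argument instead, since an ARG only exists while the image is being built. A minimal sketch of that variant, assuming the same ssh package as above:
FROM ubuntu:latest
ARG DEBIAN_FRONTEND=noninteractive
ENV TZ=Europe/London
RUN apt update && \
    apt -y install ssh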


docker compose inside docker in docker

What I have:
I am creating a Jenkins (Blue Ocean Pipeline) for CI/CD. I am using the Docker-in-Docker approach to run Jenkins, as described in the Jenkins docs tutorial.
I have tested the setup, and it is working fine. I can build and run Docker images in the Jenkins container. Now I am trying to use docker-compose, but it says docker-compose: not found.
Problem:
Unable to use docker-compose inside the container (Jenkins).
What I want:
I want to be able to use docker-compose inside the container using the dind (Docker in Docker) approach.
Any help would be very much appreciated.
Here is my working solution:
FROM maven:3.6-jdk-8
USER root
RUN apt update -y
RUN apt install -y curl
# Install Docker
RUN curl https://get.docker.com/builds/Linux/x86_64/docker-latest.tgz | tar xvz -C /tmp/ && mv /tmp/docker/docker /usr/bin/docker
# Install Docker Compose
RUN curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/bin/docker-compose
# Here your customizations...
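To sanity-check the result, build the image and ask a container for the compose version (the tag my-jenkins-agent is just an example name):
docker build -t my-jenkins-agent .
docker run --rm my-jenkins-agent docker-compose --version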
It seems docker-compose is not installed on that machine.
You can check whether docker-compose is installed using docker-compose --version. If it is not installed, you can install it in one of the following ways:
Using the apt package manager: sudo apt install -y docker-compose
OR
Using the Python package manager: sudo pip install docker-compose
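A quick way to combine the check and the pip fallback in one line (drop sudo if you are already root, as is typical inside the Jenkins container):
docker-compose --version || sudo pip install docker-compose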

Docker exec command is very slow

I have built a Docker-based system where a container hosts a command-line application. I pass arguments and run the application using a docker exec command issued from another application.
When I run the command-line application from inside the container, it takes 0.003s:
$ time comlineapp "hello"
But when I run it from outside the container using docker exec, it takes 0.500s:
$ time docker exec comlineapp "hello"
So clearly docker exec adds a lot of overhead. We need to reduce the time taken by the docker exec command as much as possible.
Here is the docker file
FROM ubuntu:18.04
RUN adduser --disabled-password --gecos "" newuser
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get -y install time && \
    apt-get -y install gcc mono-mcs && \
    apt-get -y install pmccabe && \
    rm -rf /var/lib/apt/lists/*
All the required software is already installed.
When you send a request from outside Docker, there are multiple API requests over a Unix socket, plus a lot of extra setup for the process itself: applying a seccomp profile, setting up namespaces, dropping privileges, and so on.
The better way to leverage Docker here is to run a long-lived service inside the container and have its endpoints handle the requests, instead of paying the docker exec setup cost on every call. A simple Python service is enough for this; we made the same change on our platform and saved thousands of milliseconds, as shown in the sketch below.
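A minimal sketch of that idea, using only the Python standard library; the comlineapp binary and port 8000 are assumptions for illustration:
# wrapper.py - long-lived HTTP wrapper around the CLI app; started once
# inside the container, so each request is just a local HTTP call
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Treat the request path (minus the leading slash) as the CLI argument
        arg = self.path.lstrip("/")
        result = subprocess.run(["comlineapp", arg], capture_output=True, text=True)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(result.stdout.encode())

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), Handler).serve_forever()
The caller then hits http://localhost:8000/hello on a published port instead of invoking docker exec, so the namespace and seccomp setup happens only once, at container start.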

Building a new container with mixed CMD and RUN command does not work

I am new to Docker and I am learning how to build a new container. I faced an issue building a container inherited from Ubuntu. I want to install Python 3 and some other packages on the Ubuntu container, with proper messages, but it does not work.
When I build a container with this Dockerfile:
FROM ubuntu
CMD echo "hello new Ubuntu"
RUN apt-get upgrade && apt-get update && apt-get install -y python3
CMD echo "installed python"
running the built image with docker run -it my_new_ubuntu does not enter interactive mode; it only prints installed python, and not even the "hello new Ubuntu".
Although, when I build a container with Dockerfile without any message:
FROM ubuntu
RUN apt-get upgrade && apt-get update && apt-get install -y python3
and call the built container with docker run -it my_new_ubuntu, it enters the Ubuntu root shell and I can call python. I am not sure why the first Dockerfile does not work. It seems that I cannot mix RUN and CMD commands together.
I appreciate any help or comment.
RUN specifies a command to run while building an image from your Dockerfile. You can have multiple RUN instructions, and each will apply to the image in the order specified.
CMD specifies the default command to run when the image is instantiated into a container and started. If there are multiple CMD instructions, only the last one applies.
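Applied to the Dockerfile from the question: the first CMD echo "hello new Ubuntu" is discarded, and the surviving CMD echo "installed python" replaces the image's default interactive shell, which is why docker run -it only prints that message and exits. A sketch that follows those rules, moving the messages into RUN and keeping a single CMD:
FROM ubuntu
# Build-time messages: RUN executes while the image is built,
# so these show up in the build log, not when the container runs
RUN echo "hello new Ubuntu"
RUN apt-get update && apt-get install -y python3
RUN echo "installed python"
# Run-time default: only this last CMD applies; a shell here lets
# docker run -it my_new_ubuntu drop into the root shell as expected
CMD ["/bin/bash"]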

Docker build error

I'm trying to build a Docker image, and am following the basic tutorial on Docker's own page. My Dockerfile looks like:
FROM docker/whalesay:latest
RUN apt-get -y update && apt-get install -y fortunes
CMD /usr/games/fortune -a | cowsay
That is exactly the same as what Docker provides.
I'm running Linux Mint 18, and Docker is installed. I'm able to run images, like hello-world or others that I've built earlier and pushed to Docker Hub. (I used Windows when I created them.)
If I try to rebuild images that I've created earlier, the same thing happens. It always crashes at RUN apt-get -y update && apt-get -y install.
Do anyone know how to solve this problem?
Thanks!
Picture of error message
As per the image, it fails to resolve "archive.ubuntu.com".
Do the following, as per the references below:
Uncomment the following line in /etc/default/docker
DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4"
Restart the Docker service sudo service docker restart
Delete any images which have cached the invalid DNS settings.
Build again and the problem should be solved.
Ref: Docker build "Could not resolve 'archive.ubuntu.com'" apt-get fails to install anything
Actual Ref: https://www.digitalocean.com/community/questions/docker-on-ubuntu-14-04-could-not-resolve-archive-ubuntu-com
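Note that DOCKER_OPTS in /etc/default/docker only takes effect on older installs that do not use systemd; on current systemd-based installs the equivalent fix is the daemon configuration file. A sketch of the same DNS settings in /etc/docker/daemon.json, followed by sudo systemctl restart docker:
{
    "dns": ["8.8.8.8", "8.8.4.4"]
}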

How to run yum inside a docker container on a partially isolated host?

I have a machine that is on a semi-isolated network. I have to use a proxy to connect to the internet. I have verified that on my host, after I set the http_proxy environment variable, I can receive updates from the public yum repository.
Now I am trying to do the same thing inside a docker container on my machine, but it doesn't seem to work.
$ docker run --rm -it --net=host rhel /bin/bash
[root@MyCont] http_proxy=http://myproxy:1234/
[root@MyCont] echo -e "[base] \nname=CentOS-7 - Base - centos.com\nbaseurl=http://mirror.centos.org/centos/7/os/\$basearch/\ngpgcheck=1\ngpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-7" > /etc/yum.repos.d/CentOS7-Base-centos.repo \
&& yum clean all \
&& yum install -y openssh-server openssh-clients
I brought up my container in host network mode, so I assume I have the same network stack in my container as on my host. Therefore, if I set http_proxy properly, I should get the same yum behavior inside and outside the container.
Is it possible to run yum inside a docker container whose host is on a semi-isolated network that needs http_proxy to access yum?
You should install software inside your image at build time, thus use yum install in your Dockerfile.
However, if for some reason you need to run yum inside the container, then pass the proxy when starting it:
docker run -e http_proxy=http://10.9.2.2:8080
Note: if yum install isn't working, then try
yum update
first; for some reason this helps!
Also use
docker exec -it containerName bash
to enter the container, then try to execute your yum command there, exporting any proxy env vars as required.
As danday74 said, you have to use export https_proxy=http... .
This works in my environment. Just to be safe, I export both http_proxy and https_proxy.
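For the session in the question, the missing piece is that a plain http_proxy=... line only sets a shell variable, which child processes such as yum never see; export makes it part of the environment. A sketch of the corrected session, reusing the proxy address from the question:
[root@MyCont] export http_proxy=http://myproxy:1234/
[root@MyCont] export https_proxy=http://myproxy:1234/
[root@MyCont] yum clean all && yum install -y openssh-server openssh-clients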
