Building and pushing a docker image from inside a container - docker

Context: I am using repo2docker to build images containing experiments and then push them to a private registry.
I am dockerizing this whole pipeline (cloning the code of the experiment, building the image, pushing it) with docker-compose.
This is what I tried:
FROM ubuntu:latest
RUN apt-get update && apt-get install -y python3-pip python3-dev git apt-transport-https ca-certificates curl software-properties-common
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
RUN add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
RUN apt-get update && apt-get install docker-ce --yes
RUN service docker start
# more setup
ENTRYPOINT rqworker -c settings image_build_queue
Then I pass the jobs to the rqworker (the rqworker part works well).
But Docker doesn't start in my container, so I can't log in to the registry or build the image.
(Note that I need docker to run, but I don't need to run containers.)

The solution was to share the host's Docker socket with the worker container, so the Docker CLI inside it talks to the host's daemon and the build actually happens on the host. (A service docker start in a RUN instruction only runs during that build step; no daemon is running when the worker container itself starts.)
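A minimal sketch of the docker-compose side, assuming a worker service built from the Dockerfile above (the service name is illustrative, not from the original setup):

# docker-compose.yml (sketch): mounting the host's socket lets docker login,
# docker build and docker push inside the container talk to the host's daemon,
# so no daemon has to run inside the worker itself.
version: "3"
services:
  image-builder:
    build: .
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock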

Related

Docker - New container keeps using old user data

I am using Docker to run a Home Assistant container; the host machine is Ubuntu.
After running the container I uploaded a snapshot from my RPi to restore the data, and it worked fine.
Now the problem is that I want a fresh install of HA, but every time I run the container (a new run) I still get the old user data from the initial container (the snapshot I uploaded).
I tried deleting the images, containers, volumes, and even the docker and container folders under /var/lib, and reinstalled Docker, but without any luck.
Here are the commands I used to install the container:
sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
sudo apt install jq
sudo su
sudo curl -sL https://raw.githubusercontent.com/home-assistant/supervised-installer/master/installer.sh | bash -s
docker container ls -a
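A useful first check here (a sketch, not from the question itself) is whether the old data lives in a host bind mount, since bind-mounted directories survive deleting images, containers and volumes:

# Container name is a guess; the Supervised installer usually names it "homeassistant".
docker inspect --format '{{json .Mounts}}' homeassistant | jq
# The Supervised installer typically keeps its data on the host (e.g. under
# /usr/share/hassio), so that directory also has to be removed for a truly fresh install.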

Running Docker commands inside Jenkins pipeline

Is there a proper way to run Docker commands through a containerized Jenkins service?
I see there are many plugins that support Docker commands in the Jenkins ecosystem, but all of them raise errors because Docker isn't installed in the Jenkins container.
I have a Dockerfile that provides a Jenkins image with a working Docker installation, but to work I have to mount the host's Docker socket:
FROM jenkins/jenkins:lts
USER root
RUN apt-get -y update && \
    apt-get -y install sudo \
        apt-transport-https \
        ca-certificates \
        curl \
        gnupg-agent \
        software-properties-common
RUN add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/debian \
    $(lsb_release -cs) \
    stable"
RUN apt-get -y update && \
    apt-get -y install --allow-unauthenticated \
        docker-ce \
        docker-ce-cli \
        containerd.io
RUN echo "jenkins:jenkins" | chpasswd && adduser jenkins sudo
RUN echo jenkins ALL= NOPASSWD: ALL >> /etc/sudoers
USER jenkins
It can be run like this:
docker run -d -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock <image-name>
This way it's possible to run Docker commands inside the Jenkins container. However, I am concerned about security: with the socket mounted, the Jenkins container can control every container running on the host machine, and the jenkins user effectively has root access (passwordless sudo), which I wouldn't want in production.
I want to run a Jenkins instance within a Kubernetes cluster to support CI and CD pipelines in that cluster, so I'm assuming Jenkins has to be containerized.
Am I missing something?
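One common way to soften the sudo/root concern (a sketch of my own, not from the question) is to let the jenkins user reach the mounted socket through the docker group instead of passwordless sudo; the GID has to match the host's docker group:

# Assumption: the host's docker group has GID 999 (check with: getent group docker).
# The docker-ce package may already have created a docker group, so create it only
# if missing, align its GID with the host, and add jenkins to it.
RUN groupadd -f docker && groupmod -g 999 docker && usermod -aG docker jenkins

This line would go before the final USER jenkins instruction, so that Docker commands work for the jenkins user without sudo.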

How to add/install cypress in dockerimage

How do I add/install Cypress in my Docker base image? This is my base-image Dockerfile, where I install common dependencies.
How can I install Cypress? I don't want to install it via package.json; I want it to be pre-installed.
FROM node:lts-stretch-slim
RUN apt-get update && apt-get install -y curl wget gnupg
RUN apt-get install python3-dev -y
RUN curl -O https://bootstrap.pypa.io/get-pip.py
RUN python3 get-pip.py
RUN pip3 install awscli --upgrade
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
RUN apt-key update && apt-get update && apt-get install -y google-chrome-stable
There are Docker images available with Cypress already installed in them.
CircleCI has one for its CI testing.
For convenience, CircleCI maintains several Docker images. These images are typically extensions of official Docker images and include tools especially useful for CI/CD. All of these pre-built images are available in the CircleCI org on Docker Hub. Visit the circleci-images GitHub repo for the source code for the CircleCI Docker images. Visit the circleci-dockerfiles GitHub repo for the Dockerfiles for the CircleCI Docker images.
https://circleci.com/docs/2.0/circleci-images/
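If you'd rather bake Cypress into your own base image instead of switching to a prebuilt one, a rough sketch looks like this (the library list follows Cypress's documented Linux prerequisites and may need trimming for your particular base image):

# System libraries Cypress needs on Debian-based images, then a global install
# so it is available without going through package.json.
RUN apt-get update && apt-get install -y \
    libgtk2.0-0 libgtk-3-0 libgbm-dev libnotify-dev libgconf-2-4 \
    libnss3 libxss1 libasound2 libxtst6 xauth xvfb
RUN npm install -g cypress && cypress verify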

How to install Docker inside my ubuntu container?

I installed Docker inside a container based on ubuntu:18.04 that runs my Node.js app. I need Docker installed inside this container because I need to dockerize another small app.
Here is my Dockerfile:
FROM ubuntu:18.04
WORKDIR /app
COPY package*.json ./
# Install Nodejs
RUN apt-get update
RUN apt-get -y install curl wget dirmngr apt-transport-https lsb-release ca-certificates software-properties-common gnupg-agent
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash -
RUN apt-get -y install nodejs
# Install Chromium
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
RUN apt-get update
RUN apt-get install -y google-chrome-unstable fonts-ipafont-gothic fonts-wqy-zenhei fonts-thai-tlwg fonts-kacst \
--no-install-recommends
RUN rm -rf /var/lib/apt/lists/*
# Install Docker
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
RUN apt-key fingerprint 0EBFCD88
RUN add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
RUN apt-get update -y
RUN apt-get install -y docker-ce docker-ce-cli containerd.io
RUN npm install
COPY . .
CMD [ "npm", "start" ]
EXPOSE 3000
When the container is up, I docker exec -it app bash.
If I run service docker start and then ps ax, I get this:
PID TTY STAT TIME COMMAND
115 ? Z 0:00 [dockerd] <defunct>
What can I do to be able to use Docker inside the container, or is there a Docker image that uses apt-get rather than apk? Because when I try to use it, I get this error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
First, it's better to start from one of the existing base images (either a Node image and install Docker, or a Docker image and install Node) instead of building everything up from a bare Ubuntu image. All you need is:
FROM node:buster
RUN apt-get update
RUN apt install docker.io -y
RUN docker --version
ENTRYPOINT nohup dockerd >/dev/null 2>&1 & sleep 10 && node /app/app.js
Second, about the error Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?: the reason is that you are not starting the Docker daemon in the Dockerfile. Also, running multiple processes in a container is not recommended, because if the Docker process dies you won't know its status; here you have to put one of the processes in the background.
CMD nohup dockerd >/dev/null 2>&1 & sleep 10 && node /app/app.js
and run
docker run --privileged -it -p 8000:8000 -v /var/run/docker.sock:/var/run/docker.sock your_image
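As a side note (my own sketch, not part of the answer above): when the host's /var/run/docker.sock is mounted as in that docker run command, the container only needs the Docker CLI and no daemon of its own, which makes the nohup dockerd workaround unnecessary:

FROM node:buster
# docker.io provides the CLI; builds go through the host's daemon via the
# mounted socket, so no dockerd is started inside the container.
RUN apt-get update && apt-get install -y docker.io
WORKDIR /app
COPY . .
RUN npm install
CMD ["node", "app.js"]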

Debian with nginx docker image won't update

I'm building nginx in a Debian-based Docker image. Every time I run it, it reports nginx version nginx/1.10.3. I need it to download the latest stable nginx.
This is my Dockerfile:
FROM debian:latest
RUN apt-get -y update
RUN apt-get install -yq gnupg2
RUN apt-get install -yq software-properties-common
RUN apt-get install -yq lsb-release
RUN apt-get install -yq curl
RUN add-apt-repository "deb http://archive.canonical.com/ $(lsb_release -sc) partner"
RUN add-apt-repository "deb http://nginx.org/packages/debian `lsb_release -cs` nginx"
RUN apt-get install -y nginx
RUN rm -rf /var/lib/apt/lists/
RUN echo "\ndaemon off;" >> /etc/nginx/nginx.conf
EXPOSE 80
CMD ["/usr/sbin/nginx"]
Docker image layers serve as a cache for subsequent builds. Without some sort of change in the Dockerfile, you're likely getting nginx 1.10.3 because it was cached from a previous build.
Instead of building your own nginx image, you should use the official nginx image, and choose the tag (e.g., 1.15.9) for the version you want.
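For example, a pinned official image is just (the tag and the copied config file are illustrative):

# Official nginx image at a specific stable version; bring your own config
# instead of patching the Debian package's nginx.conf.
FROM nginx:1.15.9
COPY nginx.conf /etc/nginx/nginx.conf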
First off, trivially, you need to apt-get update to fetch the index files from the repos you added before apt will find any packages there.
RUN add-apt-repository blah blah
RUN apt-get update -y # Add this
RUN apt-get install -y whatever
But also, you have invalid repos in the add-apt-repository section. The output of lsb_release -sc is a Debian code name like stretch, which of course the Canonical partner repo doesn't have a section for; and the nginx repo only supports Debian squeeze (though I would expect the packages to also work on newer versions of Debian).
Finally, you need to manage the keys of these repos, or otherwise mark them as safe. As a small bonus, I tried to condense your apt-get downloads slightly. Try this Dockerfile:
FROM debian:latest
RUN apt-get -y update
RUN apt-get install -yq gnupg2 \
    software-properties-common curl # lsb-release
# XXX FIXME: the use of [trusted=yes] is really quick and dirty
RUN add-apt-repository "deb [trusted=yes] http://archive.canonical.com/ bionic partner"
RUN add-apt-repository "deb [trusted=yes] http://nginx.org/packages/debian squeeze nginx"
RUN apt-get update -y
RUN apt-get install -y nginx
RUN rm -rf /var/lib/apt/lists/
RUN echo "\ndaemon off;" >> /etc/nginx/nginx.conf
EXPOSE 80
CMD ["/usr/sbin/nginx"]
