How to add/install Cypress in a Docker base image

How do I add/install Cypress in my Docker base image? This is my base image Dockerfile where I install common dependencies.
How can I install Cypress? I don't want to install it via package.json; I want it to be pre-installed.
FROM node:lts-stretch-slim
RUN apt-get update && apt-get install -y curl wget gnupg
RUN apt-get install python3-dev -y
RUN curl -O https://bootstrap.pypa.io/get-pip.py
RUN python3 get-pip.py
RUN pip3 install awscli --upgrade
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
RUN apt-key update && apt-get update && apt-get install -y google-chrome-stable

There are Docker images available with Cypress already installed.
CircleCI maintains some for its CI testing.
For convenience, CircleCI maintains several Docker images. These
images are typically extensions of official Docker images and include
tools especially useful for CI/CD. All of these pre-built images are
available in the CircleCI org on Docker Hub. Visit the circleci-images
GitHub repo for the source code for the CircleCI Docker images. Visit
the circleci-dockerfiles GitHub repo for the Dockerfiles for the
CircleCI Docker images
https://circleci.com/docs/2.0/circleci-images/
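If you would rather bake Cypress into your own base image, a minimal sketch (the package list is roughly what the Cypress docs recommend for Debian, and the cache folder location is an assumption about how you want to lay out the image) is to append something like this to the Dockerfile above:
# System libraries Cypress needs on Debian, then Cypress itself,
# pre-downloaded into a known cache folder so downstream projects
# don't have to fetch the binary at npm-install time.
RUN apt-get update && apt-get install -y \
    libgtk2.0-0 libgtk-3-0 libgconf-2-4 libnotify-dev libnss3 \
    libxss1 libasound2 libxtst6 xauth xvfb
ENV CYPRESS_CACHE_FOLDER=/root/.cache/Cypress
RUN npm install -g cypress && cypress verify
Cypress also publishes ready-made images on Docker Hub (cypress/base, cypress/browsers, cypress/included) if you prefer to start FROM one of those.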

Related

Copy file from GCP to docker container

I have to copy a file from a GCP location to a specific directory in the Docker image. I am using ubuntu:bionic as the parent image.
After installing Python and pip, I tried the following:
RUN pip install gsutil \
&& gsutil cp gs:<some location> /home/${USER}/<some other location>
When I build the Docker image, I get the following error:
13 19.84 /bin/sh: 1: gsutil: not found
Please let me know what I am doing wrong.
The best solution for your issue depends on whether you need to use gsutil for other purposes inside your container or just to copy the file.
If you just need to copy the file with gsutil, it would be a good idea to use a multi-stage build in Docker so that your final container does not have extra tools installed (Cloud SDK in this case). This way it would be much lighter. The Dockerfile would be:
FROM google/cloud-sdk:latest
RUN gsutil cp <src_location> <intermediate_location>
FROM ubuntu:bionic
COPY --from=0 <intermediate_location> <dst_location>
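As a small readability tweak (just a sketch of the same idea), you can name the first stage so the COPY line doesn't depend on stage ordering; note that for a non-public bucket, gsutil still needs credentials available at build time:
FROM google/cloud-sdk:latest AS downloader
RUN gsutil cp <src_location> <intermediate_location>
FROM ubuntu:bionic
COPY --from=downloader <intermediate_location> <dst_location>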
If you need gsutil for further actions in your container, the Dockerfile to install it in ubuntu is the following:
FROM ubuntu:bionic
RUN apt-get update && \
apt-get install -y curl gnupg && \
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && \
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key --keyring /usr/share/keyrings/cloud.google.gpg add - && \
apt-get update -y && \
apt-get install google-cloud-sdk -y
RUN gsutil cp <src_location> <dst_location>
A tip that will help you a lot: open a shell inside the container with docker exec -it <container name> /bin/bash, run each command one at a time, and only add a command to the Dockerfile once it succeeds.

Minimize size of docker image

I am building a docker image for running yarn jobs.
In order to install yarn, I need curl to fetch the package repository. After installing yarn, I am not really interested in curl anymore so I purge it again.
But this has no effect on the resulting Docker image size, since the layer with curl installed is still an underlying image layer (as far as I understand Docker images).
I am less interested in this specific case (curl and yarn) than in how to minimize my Docker image in such a scenario in general. How can I "purge" a no-longer-needed underlying layer in my Docker image?
Example Dockerfile for reference:
FROM ubuntu:focal
# Updating and installing curl (not required in final image)
RUN apt update && apt install -y curl
# Using curl to install yarn
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - &&\
echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list && \
apt update && apt install -y yarn
# Doing cleanup (no positive effect on image size)
RUN apt purge -y curl && rm -rf /var/lib/apt/lists/* && apt autoremove -y && apt clean -y
EDIT:
Just for clarification:
ubuntu:focal on its own is just 74 MB in image size.
After running apt update it's at 95 MB.
After apt install curl wget git it's at 198 MB.
Even purging all these installations doesn't bring me back to the 74 MB.
Multi-stage builds are a nice concept which I will look into.
This question, though, is about whether or not it is possible to reduce the size of a single image again.
You can build your image using a multi-stage Dockerfile.
For example:
FROM ubuntu:focal AS building_stage
RUN apt update && apt install -y curl
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - &&\
echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list && \
apt update && apt install -y yarn
RUN yarn install # or whatever you want to do with yarn
FROM ubuntu:focal AS running_stage
COPY --from=building_stage /root/node_modules .
After building this Dockerfile, the final image contains neither yarn nor curl, but it has the files necessary for your final image to run. I didn't know what you wanted to do with yarn, so I couldn't show an exact example from your sample, but multi-stage builds are probably the thing you want to use.
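If you specifically want to shrink a single image rather than split it into stages (the point raised in the EDIT), the other common pattern is to install, use, and purge the temporary tools inside one RUN instruction, so the purged files are never committed to any layer. A sketch based on the question's Dockerfile:
FROM ubuntu:focal
# Everything happens in one layer: curl and gnupg exist only while this RUN executes.
RUN apt update && apt install -y curl gnupg && \
    curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - && \
    echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list && \
    apt update && apt install -y yarn && \
    apt purge -y curl gnupg && apt autoremove -y && apt clean && \
    rm -rf /var/lib/apt/lists/*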

Dockerfile to create an image to be used as a container agent in Jenkins

I am trying to use a Node agent container in Jenkins to run npm instructions on it. To do that, I am creating a Dockerfile to get a valid image with SSH and Node.js. The executor runs fine, but when I use npm it says the command is not found.
The same problem happens when (after building the Dockerfile) I do docker exec -it af5451297d85 bash and then, inside the container, try npm -v (for example).
# This Dockerfile is used to build an image containing an node jenkins agent
FROM node:9.0
MAINTAINER Estefania Castro <estefania.castro#luceit.es>
# Upgrade and Install packages
RUN apt-get update && apt-get -y upgrade && apt-get install -y git openssh-server
# Install NGINX to test.
RUN apt-get install nginx -y
# Prepare container for ssh
RUN mkdir /var/run/sshd && adduser --quiet jenkins && echo "jenkins:jenkins" | chpasswd
RUN npm install
ENV CI=true
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
I would like to run npm instructions like npm install, npm publish, ... to manage my project in a Jenkinsfile. Could anyone help?
Thanks
I have already solved the problem (after two weeks haha).
FROM jenkins/ssh-slave
# Install selected extensions and other stuff
RUN apt-get update && apt-get -y --no-install-recommends install && apt-get clean
RUN apt-get install -y curl
# Install nodejs
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash -
RUN apt-get install -y nodejs && apt-get install -y nginx
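A quick way to sanity-check the resulting image before wiring it into Jenkins (the tag name is just an example) is to build it and confirm node and npm are on the PATH:
docker build -t node-ssh-agent .
docker run --rm --entrypoint bash node-ssh-agent -c "node -v && npm -v"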

Building and pushing a docker image from inside a container

Context: I am using repo2docker to build images containing experiments, then to push them to a private registry.
I am dockerizing this whole pipeline (cloning the code of the experiment, building the image, pushing it) with docker-compose.
This is what I tried:
FROM ubuntu:latest
RUN apt-get update && apt-get install -y python3-pip python3-dev git apt-transport-https ca-certificates curl software-properties-common
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
RUN add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
RUN apt-get update && apt-get install docker-ce --yes
RUN service docker start
# more setup
ENTRYPOINT rqworker -c settings image_build_queue
Then I pass the jobs to the rqworker (the rqworker part works well).
But Docker doesn't start in my container. Therefore I can't log in to the registry and can't build the image.
(Note that I need docker to run, but I don't need to run containers.)
The solution was to share the host's Docker socket, so the build actually happens on the host.
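In practice that means bind-mounting /var/run/docker.sock into the container and only installing the Docker CLI inside it; every command the CLI issues is then executed by the host's daemon, so no daemon needs to start in the container. A sketch with docker run (in docker-compose this becomes the equivalent volumes: entry; the image name is a placeholder):
# The docker CLI inside the container now talks to the host daemon.
docker run -v /var/run/docker.sock:/var/run/docker.sock my-build-image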

Create Jenkins Docker Image with pre-configured jobs

I have created a bunch of Local deployment pipeline jobs, these jobs do things like remove an existing container, build a service locally, build a docker image, run the container - etc. These are not CI/CD jobs, just small pipelines for deploying locally during dev.
What I want to do now is make this available to all our devs, so they can just simply spin up a local instance of jenkins that already contains the jobs.
My Dockerfile is reasonably straightforward...
FROM jenkins:latest
USER root
RUN apt-get update
RUN apt-get install -y sudo
RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers
# Docker
RUN apt-get update
RUN apt-get dist-upgrade -y
RUN apt-get install apt-transport-https ca-certificates -y
RUN sh -c "echo deb https://apt.dockerproject.org/repo debian-jessie main > /etc/apt/sources.list.d/docker.list"
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
RUN apt-get update
RUN apt-cache policy docker-engine
RUN apt-get install docker-engine -y
# .NET Core CLI dependencies
RUN echo "deb [arch=amd64] http://llvm.org/apt/jessie/ llvm-toolchain-jessie-3.6 main" > /etc/apt/sources.list.d/llvm.list \
&& wget -q -O - http://llvm.org/apt/llvm-snapshot.gpg.key|apt-key add - \
&& apt-get update \
&& apt-get install -y --no-install-recommends \
clang-3.5 \
libc6 \
libcurl3 \
libgcc1 \
libicu52 \
liblldb-3.6 \
liblttng-ust0 \
libssl1.0.0 \
libstdc++6 \
libtinfo5 \
libunwind8 \
libuuid1 \
zlib1g \
&& rm -rf /var/lib/apt/lists/*
#DotNetCore
RUN curl -sSL -o dotnet.tar.gz https://go.microsoft.com/fwlink/?linkid=847105
RUN mkdir -p /opt/dotnet && tar zxf dotnet.tar.gz -C /opt/dotnet
RUN ln -s /opt/dotnet/dotnet /usr/local/bin
# Minimal Jenkins Plugins
RUN /usr/local/bin/install-plugins.sh git matrix-auth workflow-aggregator docker-workflow blueocean credentials-binding
# Skip initial setup
ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
COPY LocallyDeployIdentityConfig.xml /var/jenkins_home/jobs/identity/config.xml
USER jenkins
What I thought I could do is simply copy a job config file into the /jobs/jobname folder and the job would appear, but not only does the job not appear, I now cannot manually create jobs either. I get a java.io.IOException "No such file or directory". Note that when I exec into the running container, the jobs and jobname directories exist and my config file is in there.
Any ideas?
For anyone who is interested - I found a better solution. I simply map the jobs folder to a folder on my host; that way I can put the created jobs into source control and edit or add them without having to build a new Docker image.
Sorted.
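A sketch of what that mapping looks like when starting the container (the host path and image tag are placeholders; in the official image Jenkins runs as the jenkins user with uid 1000, so the host folder has to be writable by that uid):
# Jenkins reads job definitions from $JENKINS_HOME/jobs, so mapping that
# directory to the host lets jobs be edited without rebuilding the image.
docker run -d -p 8080:8080 -v "$PWD/jobs:/var/jenkins_home/jobs" my-jenkins-image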
Jobs that need to be bootstrapped when Jenkins starts can be copied to the /usr/share/jenkins/ref/jobs/ folder.
But keep in mind that if the jobs (or anything else) already exist in the Jenkins home folder, updates from the /usr/share/jenkins/ref/jobs/ folder won't have any effect unless the file names end with .override.
https://github.com/jenkinsci/docker/blob/master/jenkins-support#L110
Dockerfile
# First time building of jenkins with the preconfigured job
COPY job_name/config.xml /usr/share/jenkins/ref/jobs/job_name/config.xml
# But if jobs need to be updated, suffix the file names with '.override'.
COPY job_name/config.xml.override /usr/share/jenkins/ref/jobs/job_name/config.xml.override
I maintain the jobs in a bootstrap folder together with configs etc.
To add a job (i.e. seedjob) I need to add the following to the Dockerfile:
# copy seedjob
COPY bootstrap/seedjob.xml /usr/share/jenkins/ref/jobs/seedjob/config.xml
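On first start with an empty Jenkins home, everything under /usr/share/jenkins/ref/ is copied into /var/jenkins_home by the image's startup script, so the seeded job should show up in the UI. A quick check (the tag name is just an example):
docker build -t jenkins-seeded .
docker run -d -p 8080:8080 jenkins-seeded
# Once Jenkins finishes starting, the config copied to
# /usr/share/jenkins/ref/jobs/seedjob/config.xml appears as the "seedjob" job.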
