pgAdmin on OpenShift using RedHat base image - docker

I am trying to create an image for OpenShift v4 using the Red Hat universal base image (registry.access.redhat.com/ubi8/ubi). Unfortunately this image comes with some limitations, at least for me, i.e. wget is missing, and on top of that I have a corporate proxy messing with the SSL certificates, so I am creating builds from a Dockerfile and running them directly in OpenShift.
So far my Dockerfile looks like:
FROM registry.access.redhat.com/ubi8/ubi
RUN \
dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-aarch64/pgdg-redhat-repo-latest.noarch.rpm && \
dnf install -y postgresql13-server
CMD [ "systemctl start postgresql-13" ]
This ends up with "Error: GPG check FAILED". I need some help with how to create a proper Dockerfile for Docker using an image from Red Hat and the rpm package. Any other ideas are very welcome.
Thanks in advance!

"Error: GPG check FAILED" is telling you that your system is not trusting that repo. You need to import it's key as rpm --import https://download.postgresql.org/pub/repos/yum/RPM-GPG-KEY-PGDG-AARCH64 or whichever key is right for your version
You don't want to start a postgres server with systemd; that's against the container philosophy of running a single process inside a container. Also, you can't have systemd as a proper PID 1 inside OpenShift without messing with SCCs, since the main idea of the OpenShift restrictions is to run unprivileged containers, so getting systemd to work might be impossible in your environment.
Look at the existing postgres Dockerfiles out there to gain inspiration, e.g. the very popular Bitnami postgres image. Notice that there is an entrypoint.sh, which checks whether the database is already initialized and creates it if it's not. Then it actually launches postgres "-D" "$POSTGRESQL_DATA_DIR" "--config-file=$POSTGRESQL_CONF_FILE" "--external_pid_file=$POSTGRESQL_PID_FILE" "--hba_file=$POSTGRESQL_PGHBA_FILE"
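A minimal sketch of such an entrypoint, assuming the POSTGRESQL_* environment variables from the command above are set and the postgres binaries are on the PATH:
#!/bin/bash
set -e
# initialize the data directory only on the very first start
if [ ! -s "$POSTGRESQL_DATA_DIR/PG_VERSION" ]; then
    initdb -D "$POSTGRESQL_DATA_DIR"
fi
# exec postgres in the foreground as the container's single main process (no systemd)
exec postgres -D "$POSTGRESQL_DATA_DIR" \
    --config-file="$POSTGRESQL_CONF_FILE" \
    --external_pid_file="$POSTGRESQL_PID_FILE" \
    --hba_file="$POSTGRESQL_PGHBA_FILE"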
Unless you really need a postgres 13 built upon the RHEL 8 UBI, I suggest you look at the official Red Hat images; here is the link if you want to build them yourself - https://github.com/sclorg/postgresql-container . As you can see, building a proper postgresql image is quite a task, and without working through all the quirks and knowing everything beforehand you may end up with an improperly configured or corrupted database.
You may also have postgres Helm charts, templates or even operators configured in your cluster, and deploying a database can be as easy as a couple of clicks.
TL;DR: Do not reinvent the wheel and do not create custom database images unless you have to. And if you have to, draw inspiration from existing Dockerfiles from reputable vendors.

Related

Install Python Wheel when starting Docker Containers with docker-compose

We are currently developing a Python package, which will be built via an Azure DevOps pipeline, and the resulting package will be stored in Azure Artifacts.
In production we install that package directly onto some Databricks clusters from Azure Artifacts. The benefit is that whenever a new version of the package is available, it gets installed when a cluster starts.
For development, I want to do something similar within a local Spark environment running in Docker containers. We have already set up Docker containers which are working fine, except for one thing.
When I run my docker-compose command, I want to install the latest version of my package from Azure Artifacts.
Because we need access tokens to get this package in our setup, I can't put these tokens in a git repo. Therefore I need a way to provide the token safely to a docker-compose command and install the package at startup.
Also, if using the Dockerfile for this, each time a new version of our package is released I have to build the Docker images again.
So, in my mind, these tasks need to be done by the user (assuming the Docker images are already built):
Have a local file where a token is stored
Use my docker-compose command to start up a local environment (by the way, with a Spark master, workers and a Jupyter notebook)
Automatic: get the token from the local file, provide it to some startup script in the Docker container and install the package from Azure Artifacts.
As I am no real expert on Docker, I found some topics regarding ENTRYPOINT and CMD, but didn't understand them or what exactly to do.
Does anyone have a hint as to how we can easily implement the above logic?
PS: For testing I tried to install the package via command: in docker-compose with a plaintext token; the installation worked, but the Jupyter notebook was not accessible anymore :-(
Hopefully somebody has an idea or a better approach for what I am aiming to do.
Best Regards
You can use build-args:
docker-compose build --build-arg ARTIFACTORY_USERNAME=<your_username> --build-arg ARTIFACTORY_PASSWORD=<your_password> <service_to_build>
then your Dockerfile might look like:
FROM <my_base_image>
ARG ARTIFACTORY_USERNAME
ARG ARTIFACTORY_PASSWORD
RUN pip install <your_package_name> --extra-index-url https://$ARTIFACTORY_USERNAME:$ARTIFACTORY_PASSWORD@pkgs.dev.azure.com/<org>/_packaging/<your_package_name>/pypi/simple/
...
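To cover the "token stored in a local file" requirement from the question, one option is a small wrapper script around docker-compose; a rough sketch, assuming the token lives in ~/.azure_artifacts_token (the file name and service name are placeholders):
# read the token from a local file that is not checked into git
ARTIFACTORY_PASSWORD=$(cat ~/.azure_artifacts_token)
# pass it to the build as a build argument, then start the environment
docker-compose build \
  --build-arg ARTIFACTORY_USERNAME=<your_username> \
  --build-arg ARTIFACTORY_PASSWORD="$ARTIFACTORY_PASSWORD" \
  <service_to_build>
docker-compose up -d
Note that build arguments can show up in the image history, so this is really only suitable for local development images.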

Run e2e test with simulation of k8s

We want to create e2e (integration) tests for our applications on k8s and we want to use
minikube, but it seems that there is no proper (maintained or official) Dockerfile for minikube, at least
I didn't find any… In addition I see k3s and I'm not sure which is better for running e2e tests on k8s?
I found this Dockerfile, but when I build it, it fails with an error:
https://aspenmesh.io/2018/01/building-istio-with-minikube-in-a-container-and-jenkins/
E: Invalid operation –no-install-recommends
Any idea?
Currently there's no official way to run minikube from within a container. Here's a two-month-old quote from one of minikube's contributors:
It is on the roadmap. For now, it is VM based.
If you decide to go with using a VM image containing minikube, there are some guides on how to do it out there. Here's one called "Using Minikube as part of your CI/CD flow".
Alternatively, there's a project called MicroK8s, backed by Canonical. In Kubernetes Podcast ep. 39 from February, Dan Lorenc mentions this:
MicroK8s is really exciting. That's based on some new features of recent Ubuntu distributions to let you run a Kubernetes environment in an isolated fashion without using a virtual machine. So if you happen to be on one of those Ubuntu distributions and can take advantage of those features, then I would definitely recommend MicroK8s.
I don't think he's referring to running minikube in a container though, but I am not fully sure: I'd enter an Ubuntu container, try to install microk8s as a package, and see what happens.
That said, unless there's a compelling reason you want to run Kubernetes from within a container and you are ready to spend the time going down that possible rabbit hole, I think these days running minikube, k3s or microk8s from within a VM is the safest bet if you want to get up and running with a CI/CD pipeline relatively quickly.
As to the problem you encountered when building the image from this particular Dockerfile...
I found this Dockerfile, but when I build it, it fails with an error:
https://aspenmesh.io/2018/01/building-istio-with-minikube-in-a-container-and-jenkins/
E: Invalid operation –no-install-recommends
Any idea?
notice that:
--no-install-recommends install
and
–no-install-recommends install
are two completely different strings, so the error you get:
E: Invalid operation –no-install-recommends
is the result of copying the content of your Dockerfile from that blog post; you should have rather copied it from GitHub (you can even click the Raw button there to be 100% sure you copy totally plain text, without any additional formatting, changed encoding, etc.).
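For reference, the corrected instruction with two plain ASCII hyphens would look something like this (the package list is just an example):
RUN apt-get update && apt-get -y --no-install-recommends install curl ca-certificates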

Building a docker image on EC2 for web application with many dependencies

I am very new to Docker and have some very basic questions. I was unable to get my doubts clarified elsewhere and hence am posting them here. Pardon me if the queries are very obvious. I know I lack some basic understanding regarding images, but I had a hard time finding an easy-to-understand explanation for the whole of it.
Problem at hand:
I have my application running on an EC2 node (r4.xlarge). It is a web application which has a LOT of dependencies (system dependencies + other libraries etc.). I would like to create a Docker image of my machine so that I can easily run it whenever I launch a new EC2 instance.
Questions:
Do I need to build the Docker image from scratch, or can I use some base image?
If I can use a base image, which one do I select? (It is hard to know the OS version on the EC2 machine and hence I am not sure which base image to start from.)
I referred this documentation-
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html#install_docker
But it builds from an Ubuntu base image.
The above example has instructions on installing apache (and other things needed for the application). Let's say my application needs server X to be installed + 20 system dependencies + 10 other libraries.
Ex:
yum install gcc
yum install gfortran
wget <abc>
When I create a Dockerfile, do I need to specify all the installation instructions like the above? I thought creating an image was like taking a copy of your existing machine. What is the Dockerfile supposed to contain in this case?
Pointing me to some good documentation on building a Docker image on EC2 for a web app with dependencies would be very useful too.
Thanks in advance.
First, if you want to move toward Docker then I suggest using AWS ECS, which is specially designed for Docker containers and has auto-scaling and load-balancing features.
As far as your questions are concerned:
You need a Dockerfile which installs all the packages and the application that are already installed on your EC2 instance. As far as the base image is concerned, I recommend Alpine; Docker's default image is Alpine.
Why Alpine?
Alpine describes itself as:
Small. Simple. Secure. Alpine Linux is a security-oriented,
lightweight Linux distribution based on musl libc and busybox.
https://nickjanetakis.com/blog/the-3-biggest-wins-when-using-alpine-as-a-base-docker-image
https://hub.docker.com/_/alpine/
Let's say my application needs server X to be installed + 20 system dependencies + 10 other libraries.
So you need to make a Dockerfile which installs all of the things you mentioned.
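A rough sketch of what that could look like on an Alpine base (the apk package names and the start script are assumptions standing in for your real dependencies):
FROM alpine:3
# system dependencies your application needs (illustrative list)
RUN apk add --no-cache gcc gfortran wget
# copy the application code and its other libraries into the image
COPY . /app
WORKDIR /app
# placeholder start command for your web server
CMD ["./run.sh"]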
Again, I suggest ECS for Docker-based applications, because it is ECS that is designed for Docker, not EC2.
CONTAINERIZE EVERYTHING
Amazon ECS lets you easily build all types of containerized applications, from long-running applications and microservices to batch jobs and machine learning applications. You can migrate legacy Linux or Windows applications from on-premises to the cloud and run them as containerized applications using Amazon ECS.
https://aws.amazon.com/ecs/
https://aws.amazon.com/getting-started/tutorials/deploy-docker-containers/
https://caylent.com/containers-kubernetes-docker-swarm-amazon-ecs/
You can use a base image; you specify it with the first line of your Dockerfile, with FROM.
The base OS of the EC2 instance doesn't matter for the container. That's the point of containers: you can run Linux on Windows, Arch on Debian, whatever you want.
Yes, dependencies that don't exist in your base image will need to be specified and installed. (Depending on the default package manager of the base image you are working from, you might use dpkg, yum or apt-get; see the sketch below.)
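For example, assuming a yum-based base image (the tag is only illustrative), the commands from the question map almost directly onto RUN instructions:
FROM centos:7
# dependencies that are not in the base image are installed explicitly
RUN yum install -y gcc gfortran wget && \
    yum clean all
# further downloads and libraries follow the same pattern, e.g. RUN wget <abc>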

How to make docker image of host operating system which is running docker itself?

I started using Docker and I can say, it is a great concept.
Everything is going fine so far.
I installed Docker on Ubuntu (my host operating system), played with images from the repository and made new images.
Question:
I want to make an image of the current (host) operating system. How can I achieve this using Docker itself?
I am new to docker, so please ignore any silly things in my questions, if any.
I was doing maintenance on a server, one of those we pray will not crash, and I came across a situation where I had to replace sendmail with postfix.
I could not stop the server, nor could I use the image available on Docker Hub, because I needed to be completely sure I would not have problems. That's why I wanted to make an image of the server.
I got to this thread and from it found a way to reproduce the procedure.
Below is a description of it.
We start by building a tar file of the entire filesystem of the machine we want to clone (as pointed out by @Thomasleveil in this thread), excluding some unnecessary and hardware-dependent directories. OK, it may not be as perfect as I intend, but it seems fine to me; you'll need to try whatever works for you.
$ sudo su -
# cd /
# tar -cpzf backup.tar.gz --exclude=/backup.tar.gz --exclude=/proc --exclude=/tmp --exclude=/mnt --exclude=/dev --exclude=/sys /
Then just download the file onto your machine, import the tar.gz as an image into Docker and start the container. Note that in the example I put the year-month-day of image generation as the image tag when importing the file.
$ scp user@server-uri:path_to_file/backup.tar.gz .
$ cat backup.tar.gz | docker import - imageName:20190825
$ docker run -t -i imageName:20190825 /bin/bash
IMPORTANT: This procedure generates a completely identical image, so if you are going to distribute the generated image among developers, testers and whoever else, it is of great importance that you remove from it, or change, any reference containing restricted passwords, keys or users, to avoid security breaches.
I'm not sure I understand why you would want to do such a thing, but that is not the point of your question, so here's how to create a new Docker image from nothing:
If you can come up with a tar file of your current operating system, then you can create a new docker image of it with the docker import command.
cat my_host_filesystem.tar | docker import - myhost
where myhost is the Docker image name you want and my_host_filesystem.tar is the archive file of your OS file system.
Also take a look at Docker, start image from scratch from superuser and this answer from stackoverflow.
If you want to learn more about this, searching for docker "from scratch" is a good starting point.
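If you prefer a Dockerfile over the docker import one-liner, the same tar archive can be used with a scratch base image; a minimal sketch, assuming the archive contains a usable root filesystem with bash:
FROM scratch
# ADD automatically extracts a local tar archive into the image's root filesystem
ADD my_host_filesystem.tar /
CMD ["/bin/bash"]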

How do I create docker image from an existing CentOS?

I am new to docker.io and not sure if this is beyond the scope of docker. I have an existing CentOS 6.5 system. I am trying to figure out how to create a docker image from a CentOS Linux system I already have running. I would like to basically clone this existing system; so I can port it to another cloud provider. I was able to create a docker image from a base CentOS image but I want to basically clone my existing system and use docker.io going forward.
Am I stuck with creating a base CentOS from scratch and configuring it for Docker from there? This might be more of a VirtualBox/Vagrant thing, but I am interested in docker.io.
Looks like I need to start with a base CentOS image and create a Dockerfile with all the add-ons I need... I think I'm getting there now...
Cloning a system that is up and running is certainly not what Docker is intended for. Instead, Docker is meant to develop your OS and server installation together with the app or service, making DevOps even more DevOpsy. By starting with a clean CentOS image, you will be sure to install only what you need for the service and have everything under control. You actually don't want all the other stuff that might produce incompatibilities. So the answer here is that you definitely should approach the problem the other way around, for example as sketched below.
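A minimal sketch of that approach, starting from a clean CentOS base (the base tag, package, content path and start command are placeholders for whatever your service actually needs):
FROM centos:7
# install only the packages the service really needs
RUN yum install -y httpd && yum clean all
COPY ./site/ /var/www/html/
EXPOSE 80
# run the service in the foreground as the container's main process
CMD ["httpd", "-DFOREGROUND"]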
