Error on OpenShift when trying to use Dockerfile - docker

I just started using OpenShift and am currently on the 60-day free trial. I was hoping to test some of my in-development Dockerfiles in it, but when I try to use any Dockerfile I get this error:
admission webhook "validate.build.create" denied the request: Builds with docker strategy are prohibited on this cluster
To recreate:
Developer view -> Topology -> From Dockerfile ->
GitHub Repo URL = https://github.com/alpinelinux/docker-alpine -> Defaults for everything else -> Create
This example just uses the official Alpine Dockerfile and it does not work.

Based on this answer by Graham Dumpleton:
If you are using OpenShift Online, it is not possible to enable the docker build type. For OpenShift Online your options are to build your image locally and then push it up to an external image registry such as Docker Hub, or log in to the internal OpenShift registry and push your image directly into it. The image can then be used in a deployment.
If you have set up your own OpenShift cluster, my understanding is that docker build type should be enabled by default. You can find more details at:
https://docs.openshift.com/container-platform/3.11/admin_guide/securing_builds.html
If you are after a way to deploy a site using a httpd web server, there is a S2I builder image available that can do that. See:
https://github.com/sclorg/httpd-container
OpenShift Online provides the source build strategy (S2I). Neither the docker nor the custom build strategy is enabled. So you can build images in OpenShift Online, but only using the source build strategy.
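A minimal sketch of the Docker Hub route described above, assuming your project contains a Dockerfile and that <user> and the image name are placeholders:

docker build -t <user>/myapp:latest .
docker login
docker push <user>/myapp:latest
oc new-app docker.io/<user>/myapp:latest   # deploy the pushed image

For the internal-registry variant, the usual pattern is to log in with your OpenShift token, e.g. docker login -u $(oc whoami) -p $(oc whoami -t) <registry-route>, then tag and push the image as <registry-route>/<project>/myapp:latest.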

Related

Automatically deploy new container to Google Cloud Compute Engine from Google Container Registry

I have a Docker container which I push to GCR with gcloud builds submit --tag gcr.io/<project-id>/<name>. Every time I deploy it on a GCE instance, a new instance is created and I have to remove the old one manually. The question is: is there a way to deploy containers and force the GCE instances to fetch new containers? I need exactly GCE, not Google Cloud Run or anything else, because it is not an HTTP service.
I deploy the container from Google Console using the Deploy to Cloud Run button
I'm posting this as a Community Wiki answer for better visibility. There were already a few good solutions in the comment section, but in the end the OP went with Cloud Run.
First, I'd like to clarify a few things.
I have a docker container which I push to GCR like gcloud builds submit
gcloud builds submit is a command to build using Google Cloud Build.
Cloud Build is a service that executes your builds on Google Cloud Platform infrastructure. Cloud Build can import source code from Cloud Storage, Cloud Source Repositories, GitHub, or Bitbucket, execute a build to your specifications, and produce artifacts such as Docker containers or Java archives.
In this question, the OP is referring to Container Registry; however, GCP recommends using Artifact Registry, which will soon replace Container Registry.
Pushing and pulling images from Artifact Registry is explained in the Pushing and pulling images documentation. It is done with the docker push and docker pull commands; beforehand you have to create an Artifact Registry repository and tag the image, for example as sketched below.
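A hedged sketch of that flow; the region, repository name, and project ID below are placeholders:

gcloud artifacts repositories create my-repo --repository-format=docker --location=us-central1
gcloud auth configure-docker us-central1-docker.pkg.dev
docker tag myapp:latest us-central1-docker.pkg.dev/<project-id>/my-repo/myapp:latest
docker push us-central1-docker.pkg.dev/<project-id>/my-repo/myapp:latest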
Deploying on different GCP products
Regarding deploying on GCE, GKE and Cloud Run: those are GCP products which are quite different from each other.
GCE is IaaS: you specify the amount of resources, and you maintain the installation of all software yourself (you would need to install Docker, Kubernetes, programming libraries, etc.).
GKE is a hybrid: you specify the amount of resources you need, but the cluster is customized to run containers. After creation you already have Docker, Kubernetes, and the other software needed to run containers.
Cloud Run is a serverless GCP product: you don't need to calculate the amount of needed resources or install software and libraries; it's a fully managed serverless platform.
When you deploy a container app from Artifact Registry / Container Registry, you are creating another VM (GCE and GKE) or a new service (Cloud Run).
If you would like to deploy a new version of the app on the same VM:
On GCE, you would need to pull the image and redeploy it on that VM using Docker or Kubernetes (kubeadm); see the sketch after this list.
On GKE you would need to deploy a new deployment using a command like
kubectl create deployment test --image=<location>-docker.pkg.dev/<projectname>/<artifactRegistryName>/<imageName>
and delete the old one.
In Cloud Run you can deploy an app without worrying about resources or hardware; the steps are described here. You can create revisions for specific changes in the image. Cloud Run also allows CI/CD using GitHub, Bitbucket, or Cloud Source Repositories; this process is well described in the GCP documentation - Continuous deployment.
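For the GCE case mentioned above, a minimal sketch, assuming the VM runs the app as a plain Docker container and the image lives in Artifact Registry (all names are placeholders):

docker pull us-central1-docker.pkg.dev/<project-id>/my-repo/myapp:latest
docker rm -f myapp 2>/dev/null          # drop the old container, if any
docker run -d --name myapp us-central1-docker.pkg.dev/<project-id>/my-repo/myapp:latest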
Possible solutions:
Write a cloudbuild.yaml file that does that for you on each CI/CD pipeline run.
Write a small application on GCE that subscribes to the Pub/Sub notifications created by Cloud Build. You can then either pull the new container or launch a new instance; a sketch follows this list.
Use Cloud Run with CI/CD.
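For the Pub/Sub option, a rough sketch of the GCE side. Cloud Build publishes build status messages to the cloud-builds topic; the subscription name is a placeholder, and the assumption that message.data comes back base64-encoded (hence the decode) should be verified against your gcloud version:

gcloud pubsub subscriptions create gce-redeploy --topic=cloud-builds
while true; do
  data=$(gcloud pubsub subscriptions pull gce-redeploy --auto-ack --limit=1 --format="value(message.data)" | base64 --decode)
  if echo "$data" | grep -q '"status": *"SUCCESS"'; then
    # re-create the container with the freshly built image
    docker pull us-central1-docker.pkg.dev/<project-id>/my-repo/myapp:latest
    docker rm -f myapp && docker run -d --name myapp us-central1-docker.pkg.dev/<project-id>/my-repo/myapp:latest
  fi
  sleep 30
done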
Based on one of the OP's comments, the chosen solution was to use Cloud Run with CI/CD.

How to manage my application in a container and deploy with no downtime on gcloud

I have a monolithic application that I am hosting on google cloud.
I am using Cloud Build, which builds my Docker image when I push to my repository.
Other than using Kubernetes, what other options do I have to push my latest Docker image to my web instances in a rolling update, so as not to bring my website down?
I can't seem to find any documentation other than Kubernetes-related.
I believe I should be building an instance template that has my latest Docker image. I am not sure how to make this happen in an automated fashion.
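One hedged way to make that instance-template idea concrete: managed instance groups support rolling updates natively, so a pipeline step (e.g. in Cloud Build) could create a new template per image and start a rolling update. The template, group, zone, and image names below are placeholders:

gcloud compute instance-templates create-with-container myapp-template-v2 --container-image=gcr.io/<project-id>/<name>:latest
gcloud compute instance-groups managed rolling-action start-update myapp-mig --version=template=myapp-template-v2 --zone=us-central1-a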

Installing Confluent Plugins on Kubernetes

Our team is developing Kafka Connect Source Connector-plugins.
Do you have any ideas on how to install/upgrade the plugins? How is the flow (git -> Jenkins -> running Source Connector) supposed to look on-prem?
We use Confluent on Kubernetes which complicates things further.
PS. We are required by law to not use cloud solutions.
To store custom connectors, use Nexus, Artifactory, S3, or some plain HTTP/file server.
If you are using Kubernetes, then you probably have a release policy around your Docker images.
Therefore, you can extend the Confluent Connect Docker images by adding additional RUN statements to the Dockerfile, build and tag the images with Jenkins, and upgrade your Kubernetes services to use the new image tag; a sketch follows.
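A hedged sketch of that image-extension flow; the base image tag, connector file, internal registry, and deployment names are placeholders, and it assumes the connector zip was already fetched from your artifact server into the build context (e.g. by a Jenkins stage):

cat > Dockerfile <<'EOF'
FROM confluentinc/cp-kafka-connect:7.5.0
# install the custom connector zip shipped in the build context
COPY my-connector-1.2.0.zip /tmp/
RUN confluent-hub install --no-prompt /tmp/my-connector-1.2.0.zip
EOF
docker build -t registry.internal/kafka-connect:1.2.0 .
docker push registry.internal/kafka-connect:1.2.0
kubectl set image deployment/kafka-connect kafka-connect=registry.internal/kafka-connect:1.2.0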
For a bare-metal (or cloud) installation, my answer on managing Kafka Connect would be to use Ansible or another orchestration tool to push out the new files and restart the services.

Using docker the way Openshift does?

I read this How does docker compare to openshift?
But I have a question:
This is an extremely simplified description of what devs usually do with OpenShift:
Select a "pod" (let's say a JBoss/Wildfly container)
From within OpenShift you point to your GitHub repo
OpenShift would clone the repo, build it and deploy it
OpenShift presents you with a web URL to access the app on port 8080
There's of course a lot more going on but that's as simple as it gets
Is this setup doable on my own Linux box, a VM, or a cloud instance (Docker container --> clone, build and deploy from a git repo)? What would I need, without messing too much with networking, domains, etc.?
from my research I see the following tools:
Kubernetes
Dokku: I have seen it described as "your own Heroku"
I also keep hearing about CaaS (Containers as a Service)
I understand I would need another tool or process for the build (CI/CD) capability and for triggering builds with git push.
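For reference, a minimal sketch of that clone-build-deploy loop with plain Docker on a single Linux box; the repo URL and port are placeholders, and it assumes the repo ships its own Dockerfile:

git clone https://github.com/<user>/<repo>.git app && cd app
docker build -t myapp:latest .
docker rm -f myapp 2>/dev/null          # drop the previous deployment, if any
docker run -d --name myapp -p 8080:8080 myapp:latest

Re-running these lines after each git push is exactly the part that tools like Dokku, or a Jenkins job behind a webhook, automate for you.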

Docker hub/store doesn't show build information

I'm having problems with docker continuous integration.
I set up automated builds in cloud.docker.com for my project, but there is no information at all, either on their sites (Hub/Store) or in their API, which shows my build as not automated.
Docker Cloud looks like this: [screenshot omitted]
But in the registry there is no "builds" section: [screenshot omitted]
I guess it should look like other members' projects, something like this: [screenshot omitted]
Also, like I said, using the endpoint: https://registry.hub.docker.com/v2/repositories/{user}/{project}/ shows me "automated build: false"
I just realized that, in some way, there is no link between the Docker Cloud automatic builds and Docker Hub ones.
If you create an automated build in Docker Hub, everything works. I don't understand the logic of this, because when you create a repo in either Docker Cloud or Docker Hub they are synchronized as one, yet automated builds created in Docker Cloud don't show up correctly in Docker Hub/Store.
Both the Docker Hub and Docker Store repositories will be updated whenever you push to your source repo or send a new build with docker push, but the automated-build information will only be shown in Docker Cloud if you created the build there.
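For reference, the flag the question asks about can be inspected directly from the endpoint mentioned above; <user> and <project> are placeholders, and jq is only used for pretty-printing:

curl -s https://registry.hub.docker.com/v2/repositories/<user>/<project>/ | jq .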
