Can't deploy docker image on openshift - docker

I am trying to deploy my Docker image from Docker Hub on OpenShift, using both the OpenShift web console and minishift. All the steps seem to succeed, including the Deployment, Service and Route. However, when I click on the link to test my service, it shows an error. I have tested my Docker image locally and it works perfectly fine.
(Screenshots: OpenShift console, OpenShift error page, working localhost.)
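For reference, the same flow can be driven from the oc CLI; this is only a rough sketch, and the image, service and pod names are placeholders rather than the ones from the question:

    # Deploy an image from Docker Hub and expose it (names are hypothetical)
    oc new-app --docker-image=docker.io/myuser/myservice --name=myservice
    oc expose service/myservice      # creates the Route
    oc status                        # shows the Deployment, Service and Route
    oc get pods                      # find the pod behind the Service
    oc logs myservice-1-abcde        # example pod name; check why the app fails behind the Route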

Related

GitLab runner gets stuck while pulling docker image

I was trying to run my gitlab-ci pipeline on my self-hosted GitLab server and I picked Docker as the gitlab-runner executor, but the pipeline gets stuck and doesn't work.
What should I do to fix this?
This looks like the same issue: the machine on which Docker is running sits behind a proxy server, which is why it gets stuck when trying to pull the image.
If you are able to log in to the machine, check its internet access.
Check whether you are using some kind of proxy or not.
Your ID may have SSO to the proxy, and hence your ID works; if the gitlab-runner service runs under a different account, that account may not have internet access.
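If a proxy is indeed in the way, the Docker daemon (which performs the actual pulls) needs to know about it. A minimal sketch, assuming a systemd-based host; the proxy host and port are placeholders:

    # Tell the Docker daemon about the proxy via a systemd drop-in
    sudo mkdir -p /etc/systemd/system/docker.service.d
    sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
    [Service]
    Environment="HTTP_PROXY=http://proxy.example.com:3128"
    Environment="HTTPS_PROXY=http://proxy.example.com:3128"
    Environment="NO_PROXY=localhost,127.0.0.1"
    EOF
    sudo systemctl daemon-reload
    sudo systemctl restart docker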

Deploy docker to on-premise using azure CI-CD

I have created a .NET Core application and successfully deployed it to a Docker container on my local PC.
Now I am trying to build it from Azure DevOps and publish it to one of my servers hosted on-premise.
Now I have no idea how to host it. I am also not sure what the Docker Registry Service Connection and Container Registry Type are.
My DevOps server is also hosted on-premise with no docker installed on it.
I have a docker account with one private repository.
Please suggest how to continue, as I am getting the error below while building the image:
open //./pipe/docker_engine: The system cannot find the file specified. In the default daemon configuration on Windows, the docker client must be run elevated to connect. This error may also indicate that the docker daemon is not running.
Thanks
If you want to deploy the app to the local PC Docker container, you can use a Self-hosted Agent (build pipeline and release pipeline) or a Deployment Group (release pipeline).
Note: the self-hosted agent needs to be set up on a server that has Docker installed.
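That also explains the //./pipe/docker_engine error: the docker CLI on the current agent cannot reach a running Docker daemon. A quick sanity check on whichever machine actually runs the build step (standard Docker commands, nothing project-specific):

    docker version   # shows client AND server; the server part fails if no daemon is reachable
    docker info      # daemon details; confirms the engine is actually running on this machine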
Then you could try the following pipeline settings.
Here is a blog about ASP.Net Application Deployment in Docker for Windows.
You could use a Command Line Task to run the docker commands. In this way, you can move the local build and deploy process to Azure DevOps.
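The commands in such a Command Line Task could look roughly like this; the image name, registry account and port are placeholders, and $(dockerPassword) is assumed to be a pipeline variable:

    # On the self-hosted build agent (which has Docker installed)
    docker build -t mydockerid/myapp:latest .
    docker login -u mydockerid -p $(dockerPassword)
    docker push mydockerid/myapp:latest

    # On the target on-premises server (deployment group job)
    docker pull mydockerid/myapp:latest
    docker rm -f myapp        # remove the old container; ignore the error on first deploy
    docker run -d --name myapp -p 8080:80 mydockerid/myapp:latest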

Can't access certain services running on host machine from inside docker container

We're trying to set up a GitLab Runner, which is responsible for building and testing our web application. For running the jobs we use the Docker executor with DinD.
Our problem: when trying to access certain services from inside the runner container (Docker image), we get a timeout and no response back. This includes:
logging in to our own Docker registry, which is hosted on the same system
wget on our domain (which is hosted on the same system)
What we can do:
ping our domain as well as the registry
ping other domains
wget other domains
Logging into the registry and wget on our domain both succeed when tried natively on the server, not in a Docker container.
So it looks like it may be a Docker problem.
Hope someone can help us.
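One thing that may be worth checking (an assumption based on similar setups, not a confirmed diagnosis): whether the container can reach the host-published services through the Docker bridge at all, and whether pointing the domain at the bridge gateway instead of the public address behaves differently. Hostnames and addresses below are placeholders:

    # On the host: gateway IP of the default bridge network (often 172.17.0.1)
    docker network inspect bridge --format '{{(index .IPAM.Config 0).Gateway}}'

    # From a throwaway container: public name vs. bridge gateway
    docker run --rm alpine sh -c 'wget -T 5 -qO- https://registry.example.com/v2/ || echo public name: timeout'
    docker run --rm --add-host registry.example.com:172.17.0.1 alpine \
        sh -c 'wget -T 5 -qO- https://registry.example.com/v2/ || echo bridge gateway: timeout'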

How to build docker images on AWS EC2 Windows Server instance?

We use Team City to build C# applications on a Windows server in AWS EC2.
Now there is a requirement to build Docker containers using the same system. The build steps have been tested locally and are able to produce a docker image.
Docker does not install correctly on the server, which leads to the builds failing.
Docker Edge supports Windows Server but fails on EC2 due to Hyper-V not functioning correctly.
Docker Toolbox also fails because VT-X/AMD-v are not enabled.
Is there any way to build docker images on an AWS EC2 Windows Server instance?
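One possible workaround, offered as an assumption rather than a tested recipe: keep TeamCity on the Windows agent but point the docker CLI at a separate Linux host that runs the Docker daemon, so neither Hyper-V nor VT-x is needed on the EC2 Windows instance. Host name, port and image name are placeholders, and exposing the daemon should really be done over TLS (port 2376) rather than plain TCP:

    # Build and push on a remote Linux Docker host from the Windows build agent
    docker -H tcp://docker-build-host:2375 build -t myorg/myapp:latest .
    docker -H tcp://docker-build-host:2375 push myorg/myapp:latest

    # Or set DOCKER_HOST once for the whole build step
    set DOCKER_HOST=tcp://docker-build-host:2375
    docker build -t myorg/myapp:latest .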

Docker in docker and docker compose block one port for no reason

Right now I am setting up an application that has a deployment based upon docker images.
I use GitLab CI to (see the sketch after this list):
Test each service
Build each service
Dockerize each service (create a Docker image)
Run integration tests (docker-compose starts all services on special ports, then the integration tests run)
Stop prod images and run new images
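Roughly, the commands behind those steps look like this on the runner; the build tool, file names, registry and service names are placeholders for illustration only:

    # Test and build one service
    ./gradlew test build

    # Dockerize it
    docker build -t registry.example.com/myservice:latest .
    docker push registry.example.com/myservice:latest

    # Integration tests: start everything on the special ports, run the tests, tear down
    docker-compose -f docker-compose.integration.yml up -d
    ./gradlew integrationTest
    docker-compose -f docker-compose.integration.yml down

    # Deployment: stop prod images and run the new ones
    docker-compose -f docker-compose.prod.yml pull
    docker-compose -f docker-compose.prod.yml up -d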
I did this for each service, but I ran into an issue.
When I start my Docker containers for the integration tests, this happens within a GitLab CI task. For each task a Docker-based runner is used. I also mount my host's Docker socket to be able to use Docker in Docker.
So my Gradle Docker image is started by the GitLab runner. Then Docker is installed and all images are started using docker-compose.
One microservice listens on port 10004. Within the docker-compose file there is an 11004:10004 port mapping.
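For context, the relevant piece of the compose file would look roughly like this; the service name and file name are made up, only the port mapping matches the description:

    cat > docker-compose.integration.yml <<'EOF'
    services:
      myservice:
        image: registry.example.com/myservice:latest
        ports:
          - "11004:10004"   # host port 11004 -> container port 10004
    EOF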
My integration tests try to connect to port 11004, but this does not work right now.
When I attach to the container that runs docker-compose while it is executing the integration tests, I cannot connect manually either, by calling
wget ip:port
I just get the message that it connected and is waiting for a response. Neither do my tests connect successfully. My service does not log any message about a new connection.
When I execute this wget command in my host shell, it works.
It's a public IP, and from within my container I can also connect to other ports using telnet and wget. Just one port of one service is broken when I try to connect from my Docker-in-Docker instance.
When I do not use docker-compose, it works. docker-compose seems to set up a special default network that does something weird.
Setting network to host also works...
So has anyone had a similar experience when using docker-compose?
The same setup works flawlessly in Docker for Mac, but my server runs on Debian 8.
My solution for now is to use a shell runner to avoid Docker-in-Docker issues. It works there as well.
So Docker in Docker combined with docker-compose seems to have an ugly bug.
I'm writing this while sitting on the subway, but I hope the description of my issue is sufficient to talk about experiences. I don't think we need source code to find bad configuration, because it works without Docker in Docker and on a Mac.
I figured out that Docker in Docker still has some weird behaviors. I fixed my issue by adding a new GitLab CI runner that is a shell runner. That way docker-compose runs on my host and everything works flawlessly.
I can reuse the same runner for starting docker images in production as I do for integration testing. So the easy fix has another benefit for me.
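For reference, registering such an additional shell runner is a one-liner; the URL and registration token are placeholders:

    # Jobs on this runner execute directly on the host shell, so docker-compose
    # talks to the host Docker daemon without Docker in Docker.
    sudo gitlab-runner register \
      --non-interactive \
      --url https://gitlab.example.com/ \
      --registration-token REGISTRATION_TOKEN \
      --executor shell \
      --description "shell runner for docker-compose jobs"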
The result is a best practice to avoid pitfalls:
Only use Docker in Docker when there is a real need, for example to ensure fast I/O communication between your host Docker image and the Docker image of interest.
Have fun using docker (in docker (in docker)) :]
