OpenShift privileged container access to services from the OpenShift namespace - docker

I'm trying to run my custom Jenkins on OpenShift, and to run dockerized pipelines using privileged containers and an SCC so that my Jenkins can run Docker. So far I've managed to run the job, and it creates a new Docker container successfully. But since the new Docker container is created by Jenkins, it doesn't have access to the Nexus service in my project. How can I fix this? I was thinking the solution would be for Jenkins to run Docker in the same namespace as my Jenkins.

I'm assuming that you want to run your container in Kubernetes.
On your Deployment I would advise using either a ConfigMap or, if you want to keep it encrypted in the cluster, a Secret to store your Nexus credentials.
Then you can mount the ConfigMap or Secret under ~/.ivy2/.credentials, for example.
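A minimal sketch of what that could look like, assuming a Secret named nexus-credentials mounted into a Jenkins agent Deployment (every name, path, and credential value below is a placeholder, not something from the question):

apiVersion: v1
kind: Secret
metadata:
  name: nexus-credentials          # placeholder name
type: Opaque
stringData:
  .credentials: |
    realm=Sonatype Nexus Repository Manager
    host=nexus.myproject.svc.cluster.local
    user=deployment
    password=changeme
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-agent              # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins-agent
  template:
    metadata:
      labels:
        app: jenkins-agent
    spec:
      containers:
        - name: agent
          image: jenkins/inbound-agent
          volumeMounts:
            - name: nexus-credentials
              mountPath: /home/jenkins/.ivy2/.credentials   # the ~/.ivy2/.credentials path from above
              subPath: .credentials
      volumes:
        - name: nexus-credentials
          secret:
            secretName: nexus-credentials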

Related

How to handle "docker-in-docker" problem when using Jenkins inside K8S

I'm new to Kubernetes, and I have a somewhat complex question that needs help.
Background
Using Jenkins in GKE (Google Kubernetes Engine)
Want to use the jenkins-docker plugin to provide a specific test environment for each type of test
Don't want to mix the docker binary into the Jenkins image (because it is large)
Don't want docker-in-docker
More specifically, I don't want the Jenkins Pod to be a new Docker server
What I want
Each test environment can create a new pod in the GKE cluster, rather than creating containers inside the Jenkins Pod (see the sketch after this question)
P.S.
I have just read some articles, but half of them are about how to use K8S to scale up Jenkins (using jenkins-slave + the jenkins-kubernetes plugin), and the other half are about how to use the docker plugin in a dockerized Jenkins container on a bare-metal machine (you can use /var/run/docker.sock to communicate between the host and the docker container), but I cannot find how to use the docker plugin (to provide a specific environment) in a dockerized Jenkins container inside K8S.
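As a hedged illustration of what the question is after (this is not from the original posts): the Jenkins Kubernetes plugin can describe each build agent as a plain pod spec, so every test run gets its own pod in the cluster instead of a container inside the Jenkins Pod. A minimal sketch, with placeholder image names and labels:

# Pod spec that a Jenkins Kubernetes-plugin pod template could use;
# each build gets its own pod with the test image it needs.
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins: agent
spec:
  restartPolicy: Never
  containers:
    - name: jnlp                   # the Jenkins inbound agent container
      image: jenkins/inbound-agent
    - name: test-env               # placeholder test-environment image
      image: python:3.11
      command: ["sleep"]
      args: ["infinity"]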

connecting to an insecure local docker registry in uncontrolled CI environment

I'm building a microservice that performs operations on a docker registry.
The microservice I'm building has a test which starts a docker-registry via the docker-registry image from Docker Hub, so the microservice can connect to it, set it up, work on it, etc.
The test fails in CI: the Docker client can't connect to the test registry because it's insecure. This is in CI and dynamic, with a different random IP/port each time, and the Docker daemon is shared by other parallel tests... so having the test edit the global JSONs and restart the Docker daemon seems like a bad solution.
Has anyone solved this? How do you test integration with a docker-registry in CI? Am I doomed to modify the global Docker JSONs and restart/trigger a reload of the config?
Some specifics:
The build tool is Bazel and runs in GCB, so the test itself runs in RBE workers on Google Cloud, which are isolated and don't have network access when running the tests, and I can't really configure much; it's not my machine, it's a random machine each time for each test, etc.
We ended up starting another container that has a Docker daemon in it (without mounting the external Docker daemon socket, so it's actually another Docker daemon instance).
We do this lazily, only after we know the private registry address, so we can configure that Docker daemon to start up with the insecure-registry flag.
In order for the containers to communicate, we gave each container a name and had them share a network.
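A hedged sketch of that setup as a compose file (the service names, port, and network name are placeholders I've chosen; the original answer does not include one): a docker:dind service is started with --insecure-registry pointing at the registry container, and all three services share a network.

# Second Docker daemon (dind) configured to trust the test registry.
services:
  registry:
    image: registry:2
    networks: [testnet]
  dind:
    image: docker:dind
    privileged: true                        # dind needs a privileged container
    environment:
      DOCKER_TLS_CERTDIR: ""                # expose the plain TCP socket on 2375
    command: ["--insecure-registry=registry:5000"]
    networks: [testnet]
  test:
    image: docker:cli
    environment:
      DOCKER_HOST: tcp://dind:2375          # talk to the inner daemon, not the host's
    networks: [testnet]
networks:
  testnet: {}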

gitlab ci/cd deploy docker to aws ec2

We are developing a Spring Boot application which is currently deployed to AWS manually. For that, we first build a Docker image through a Dockerfile, then connect to the AWS EC2 instance from a laptop, pull the image, and use docker run to start it. But we want to automate the process using GitLab CI/CD.
We created a .gitlab-ci.yml; the build stage builds the Spring Boot application and generates a jar file. The package stage then builds the Docker image using the Dockerfile from the source code and pushes the image to the registry.
Now I don't know how to finish the deploy stage. Most of the tutorials only explain deploying to Google Cloud. I use the steps below to deploy the Docker image...
ssh -i "spring-boot.pem" ubuntu@ec2-IP_address.compute-2.amazonaws.com
sudo docker pull username/spring-boot:v1
sudo docker run -d -p 80:8080 username/spring-boot:v1
Can anybody help me add the above steps to the deploy stage? Do I need to add the pem file to the source to connect to the EC2 instance?
Or is there an easier way to deploy Docker to EC2 from GitLab CI/CD?
First thing: if there is SSH, it means you must provide a key or password, unless you allow access to everyone.
Do I need to add the pem file to the source to connect to the EC2 instance?
Yes, you should provide the key for ssh.
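A minimal sketch of what that deploy stage could look like, assuming the private key is stored as a GitLab CI/CD variable rather than committed to the source (the variable name SSH_PRIVATE_KEY and the alpine image are my placeholders, not from the original answer):

deploy:
  stage: deploy
  image: alpine:latest
  before_script:
    - apk add --no-cache openssh-client
    - echo "$SSH_PRIVATE_KEY" > spring-boot.pem   # the key comes from a CI/CD variable, not the repo
    - chmod 600 spring-boot.pem
  script:
    - ssh -o StrictHostKeyChecking=no -i spring-boot.pem ubuntu@ec2-IP_address.compute-2.amazonaws.com "sudo docker pull username/spring-boot:v1 && sudo docker run -d -p 80:8080 username/spring-boot:v1"
  only:
    - master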
Or is there an easier way to deploy Docker to EC2 from GitLab CI/CD?
Yes, there is an easier way to do that, but for that you need to use ECS, which is specially designed for Docker containers, and you can manage your deployment through an API instead of SSHing into the EC2 server.
ECS is designed for running Docker containers. One of the big advantages of ECS over plain EC2 is that you do not need to worry about container management, scalability, and availability; ECS takes care of it. AWS also provides ECR, which is like a Docker registry but private and in-network.
See the AWS "deploy-docker-containers" getting started tutorial.

Kubernetes on Docker for Windows -> AKS/EKS

With the Kubernetes orchestrator now available in the stable version of Docker Desktop for Win/Mac, I've been playing around with running an existing compose stack on Kubernetes locally.
This works fine, e.g., docker stack deploy -c .\docker-compose.yml myapp.
Now I want to go to the next step of running this same application in a production environment using the likes of Amazon EKS or Azure AKS. These services expect proper Kubernetes YAML files.
My question is: what's the best way to get these files? More specifically:
Presumably, docker stack is performing some conversion from Compose YAML to Kubernetes YAML 'under the hood'. Is there documentation/source code links as to what is going on here and can that converted YAML be exported?
Or should I just be using Kompose?
It seems that running the above docker stack deploy command against a remote context (e.g., AKS/EKS) is not possible and that one must do a kubectl deploy. Can anyone confirm?
docker stack deploy with a Compose file to Kube only works on Docker's Kubernetes distributions - Docker Desktop and Docker Enterprise.
With the recent federation announcement you'll be able to manage AKS and EKS with Docker Enterprise, but using them directly means you'll have to use Kubernetes manifest files and kubectl.
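As a hedged illustration of the Kompose route raised in the question (not part of the original answer), converting a simple compose service typically yields a Deployment and a Service along these lines; the service name, image, and port here are placeholders:

# Roughly the shape of what `kompose convert` emits for a compose service named "web"
# (generated labels/annotations trimmed).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: web
  template:
    metadata:
      labels:
        io.kompose.service: web
    spec:
      containers:
        - name: web
          image: myapp/web:latest
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    io.kompose.service: web
  ports:
    - port: 8080
      targetPort: 8080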

Deploying multiple docker containers to AWS ECS

I have created Docker containers using docker-compose. In my local environment, I am able to bring up my application without any issues.
Now I want to deploy all my Docker containers to AWS EC2 (ECS). After going over the ECS documentation, I found out that we can make use of the same docker-compose to deploy to ECS using the ECS CLI. But the ECS CLI is not available for Windows instances as of now. So now I am not sure how to use my docker-compose to build all my images using a single command and deploy them to ECS from a Windows instance.
It seems like I have to deploy my Docker containers one by one to ECS with steps like the ones below:
From the ECS Control Panel, create a Docker Image Repository.
Connect your local Docker client with your Docker credentials in ECS:
Copy and paste the Docker login command from the previous step. This will log you in for 24 hours
Tag your image locally ready to push to your ECS repository – use the repo URI from the first step
Push the image to your ECS repository
create tasks with the web UI, or manually as a JSON file
create a cluster, using the web UI.
Run your task specifying the EC2 cluster to run on
Is there any other way of running the docker containers in ECS ?
Docker Compose is the wrong tool here when you're using ECS.
You can configure multiple containers within a task definition, as seen here in the CloudFormation docs:
ContainerDefinitions is a property of the AWS::ECS::TaskDefinition resource that describes the configuration of an Amazon EC2 Container Service (Amazon ECS) container
Type: "AWS::ECS::TaskDefinition"
Properties:
  Volumes:
    - Volume Definition
  Family: String
  NetworkMode: String
  PlacementConstraints:
    - TaskDefinitionPlacementConstraint
  TaskRoleArn: String
  ContainerDefinitions:
    - Container Definition
Just list multiple containers there and all will be launched together on the same machine.
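A hedged sketch of such a task definition with two containers (the family, names, images, memory values, and ports are placeholders, not from the original answer):

Type: "AWS::ECS::TaskDefinition"
Properties:
  Family: myapp
  ContainerDefinitions:
    - Name: web                        # first container in the task
      Image: myaccount/web:latest
      Memory: 512
      Essential: true
      PortMappings:
        - ContainerPort: 8080
          HostPort: 80
    - Name: worker                     # second container, launched on the same instance
      Image: myaccount/worker:latest
      Memory: 256
      Essential: true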
I was in the same situation as you. One way to resolve this was using Terraform to deploy our containers as task definitions on AWS ECS.
So we use the docker-compose.yml to run locally, and the Terraform configuration is a kind of mirror of our docker-compose on AWS.
Another option is Kubernetes: you can translate your docker-compose file into Kubernetes resources.
