Docker installation without internet (RHEL) - docker

I am planning to install Docker CE under RHEL on an offline (Intranet) box.
Please share the procedure for creating such a repository.

You need to set up (or gain access to) a self-hosted local Yum repository on a machine inside your internal network that does have internet access, sync it with the external (public) Docker CE Yum repo, and then add that local repo to your offline client's repository list.
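A minimal sketch of that workflow, assuming one internet-connected RHEL box that mirrors the repo and one offline client (hostnames and paths are placeholders; `reposync`/`createrepo` come from yum-utils/createrepo packages):

```shell
# --- On the internet-connected machine ---
# Add the public Docker CE repo definition, then mirror its packages locally
sudo yum-config-manager --add-repo https://download.docker.com/linux/rhel/docker-ce.repo
sudo reposync --repoid=docker-ce-stable -p /var/www/html/repos/
sudo createrepo /var/www/html/repos/docker-ce-stable/   # generate repo metadata

# Serve /var/www/html/repos over HTTP (e.g. with Apache), or copy the
# directory to the offline box via removable media.

# --- On the offline client ---
# Point yum at the local mirror (baseurl is a placeholder for your intranet host)
sudo tee /etc/yum.repos.d/docker-ce-local.repo <<'EOF'
[docker-ce-local]
name=Docker CE (local mirror)
baseurl=http://repo.intranet.example/repos/docker-ce-stable/
enabled=1
gpgcheck=0
EOF

sudo yum install docker-ce docker-ce-cli containerd.io
```

With `gpgcheck=0` disabled for brevity; in production you would import Docker's GPG key onto the offline box and enable it.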

Related

How to use conda to install packages in a docker container which has no access to the internet via ssh tunnel

I am able to ssh to a docker container running on a remote server.
The container has no access to the internet.
Request:
I want to set up my deep learning environment in the container.
Things I can do now
Data transfer between my host and the docker container is fast.
Things I do not know how to do
Using conda to install packages.
Using pip to install packages.
Immature solution
Using conda proxy to install packages directly inside the docker container.
I do not know how to forward the http/https request through ssh tunnel from the docker container to the destination.
I think it might be:
First configure a proxy server for conda by modifying ~/.condarc as:
proxy_servers:
http: http://localhost:9998
https: https://localhost:9998
Then ssh from my PC to the container using command like:
ssh -p port kd#mlp -R 9998:someIP:80
I tried to replace someIP with localhost or https://repo.anaconda.com/pkgs/main. Neither of them helps.
Is the above solution possible? What would the ssh command look like?
Noticed that:
Since there are some packages not provided by pip, conda is necessary.
I have no access to the remote host and am only able to ssh to the docker container.
I can not modify the docker image or relaunch the docker container.
update:
There are some ways to install packages offline. However, they are not good solutions, since they mean I have to manually download conda/pip packages and transfer them to the corresponding docker container, and I have to manually deal with each environment. I hope some solution could set up an ssh tunnel so that my conda could access the internet.
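One hedged sketch of the tunnel idea: an `ssh -R` forward only carries traffic to a single host:port, so forwarding straight to one repo host won't cover conda's many endpoints. Running a real forward proxy on the PC and exposing *that* through the tunnel should work; the proxy choice, ports, and `user@container` address below are assumptions:

```shell
# On your PC: run any HTTP forward proxy that can reach the internet,
# e.g. squid listening on its default port 3128.

# Open the reverse tunnel: port 9998 inside the container now reaches
# the proxy running on your PC.
ssh -p <port> user@container -R 9998:localhost:3128

# Inside the container, point conda (and pip) at the tunnelled proxy.
# Note: the https entry still uses an http:// URL -- the proxy itself
# speaks plain HTTP/CONNECT even for https destinations.
cat >> ~/.condarc <<'EOF'
proxy_servers:
  http: http://localhost:9998
  https: http://localhost:9998
EOF
export http_proxy=http://localhost:9998
export https_proxy=http://localhost:9998

conda install <package>
pip install <package>
```

The `https: https://localhost:9998` line in the question is likely the bug: for a plain forward proxy, both entries should use the `http://` scheme.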

Install Confluent Platform inside docker Ubuntu container on Windows 10

I am trying to install Confluent Platform on my Windows machine. As far as I know, installing Confluent Platform will give me access to KSQL, which is not available in the Apache Kafka package. The other hurdle is: KSQL can't run on Windows directly; it requires a Unix environment. As I am on Windows, my options are limited.
I explored the options below:
I tried to use Windows Subsystem for Linux, but installing anything from the Windows App Store is restricted in my environment, so it's not possible to install Ubuntu from the app store.
As I have Docker installed on my system, I am planning to pull an Ubuntu image and run Kafka inside it. I pulled the Ubuntu image from Docker Hub. Now I need to download the Confluent Platform. I am planning to download it using wget, but I am not sure about the URL or path that I need to provide to wget.
Please suggest the URL for downloading the Confluent package.
You could use this image instead of the bare-bones ubuntu image.
It runs a Debian base, so apt-get will still work if you want to extend it.
Please suggest me the path to download confluent package
Try the Confluent website.
As far as I know, installing Confluent Platform will give me access to KSQL which is not available in Apache Kafka package.
First, it's now referred to as ksqlDB. Second, it works with any Kafka provider; Confluent provides Apache Kafka as part of their distribution.
You can use the ksqlDB container with Apache Kafka running on Windows.
Or you can run everything in containers, as shown in the quickstart - https://ksqldb.io/quickstart.html
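If you do want the tarball inside a plain Ubuntu container, Confluent publishes archives under packages.confluent.io; the exact version in the path below is an assumption, so check Confluent's download page for the current release:

```shell
# Inside the Ubuntu container: install prerequisites, then fetch and
# unpack the Confluent Platform archive (version path is an example)
apt-get update && apt-get install -y wget openjdk-11-jre-headless
wget https://packages.confluent.io/archive/7.0/confluent-7.0.1.tar.gz
tar xzf confluent-7.0.1.tar.gz

# Put the Confluent CLI tools (including ksql) on the PATH
export CONFLUENT_HOME=$PWD/confluent-7.0.1
export PATH=$CONFLUENT_HOME/bin:$PATH
```

A JRE is required because Kafka and ksqlDB are JVM applications; the docker-compose quickstart avoids all of this by shipping everything pre-packaged in images.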

How to deploy an open-source Docker local registry on Windows WITH a web interface AND manage user permissions?

I'm a novice in Docker, and I would like to deploy a private Docker registry on my host (Windows 10, using Docker for Windows) with user permissions, so I used TLS to secure it according to the doc at https://docs.docker.com/registry/deploying/
I have the private registry deployed, and to push, the user must run the docker login command.
Now I would like to connect a UI to my private registry and make it read-only so it can only be pulled from. For that I tried to set up Harbor, Portus, and many other examples, but they are not documented for Windows.
I tried to use the project https://github.com/kwk/docker-registry-frontend, but ran into the same problem.
All of these projects bind files as volumes (docker run -v pathToFiles:pathToFiles:ro), but on Windows this is not supported.
I tried to modify the images, put the files into them, and build new images with docker commit, but the UI still does not work or does not connect to my server.
So, what is the best way to deploy a private Docker registry with the open-source docker registry on Docker for Windows AND manage user permissions with auth? Should I use a reverse proxy? But how, on Windows?
I'm not using docker EE.
Thank you.
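For reference, the open-source registry:2 image can enforce basic auth on its own via an htpasswd file, in addition to TLS; a minimal sketch following the official deployment doc (cert paths, username, and password are placeholders):

```shell
# Create an htpasswd file with one user. The registry requires bcrypt,
# and recent registry:2 images no longer ship the htpasswd binary,
# so generate the file with the httpd image instead.
docker run --rm --entrypoint htpasswd httpd:2 -Bbn myuser mypassword > htpasswd

# Run the registry with TLS and basic auth
# (./certs must contain domain.crt / domain.key)
docker run -d -p 5000:5000 --name registry \
  -v "$PWD/certs:/certs" \
  -v "$PWD/htpasswd:/auth/htpasswd" \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  -e REGISTRY_AUTH=htpasswd \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  registry:2

docker login localhost:5000   # prompts for myuser / mypassword
```

htpasswd auth is all-or-nothing (no read-only users); per-user pull/push permissions require fronting the registry with a reverse proxy or a token-auth server.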

What are the dependencies for running containers on a system (Linux)?

I am new to the world of containers, Docker, and Kubernetes, and I am investigating the requirements for implementing my distributed middleware project. I took some key container courses on Docker and Kubernetes.
But I would like to ask those with more experience: in a production environment (or just for execution and instantiation of modules, where each module would be a container), what are the dependencies for running a container?
Is it mandatory to have the dependency packages for Docker, and Docker itself, installed for this? Just to bring up the pods and services with Kubernetes, is it also mandatory to have kubectl installed on my host?
Note: for local development, and for deployment using Google Cloud, I have already done some testing and I know it is necessary there.
To set up Docker on your system you need the things below.
If you are going to set up K8s with Docker:
docker-ce/docker
kubelet
kubectl
curl & wget
If you are going to set up K8s with minikube, you will need:
minikube
a hypervisor such as VirtualBox
I feel you need to be more specific about what exactly you want to know.
Multiple container technologies exist currently. To install Docker specifically, your Linux machine should have kernel version > 3.10.
If you want to install Kubernetes on your Linux machines:
you need to modify OS-level settings (like firewall, swap, etc.)
you need to install a container runtime and the other Kubernetes packages (kubelet, kubeadm, kubectl), then set up container networking.
Here you can find clear instructions to install Kubernetes via kubeadm.
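The OS-level preparation mentioned above typically looks like this on a RHEL/CentOS-style host (package names follow the official kubeadm install guide; adjust for your distro and pod network):

```shell
# Disable swap -- kubelet refuses to start with swap enabled
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab

# Open the ports kubeadm needs, or stop firewalld while testing
sudo systemctl disable --now firewalld

# Install a container runtime (Docker here) and the Kubernetes packages
sudo yum install -y docker-ce kubelet kubeadm kubectl
sudo systemctl enable --now docker kubelet

# Initialise the control plane, then install a pod network add-on
# (the CIDR below matches Flannel's default)
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```

Note that kubectl is only needed on whatever machine you administer the cluster from; worker nodes need just the runtime, kubelet, and kubeadm.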

How to monitor Docker containers using an OMD user

OMD User
# omd create docker-user
# su - docker-user
How do I monitor a Docker container?
How do I see microservices' memory usage inside the Docker container?
How do I configure a Docker container as a check_mk agent?
I am using Check_MK for monitoring my servers and now want to monitor Docker as well.
Here are two options:
When you deploy your container, add the check_mk_agent at/during provisioning, and then use the Check_MK Web API to add your host, do discovery, etc.
You can use the following plugin to monitor Docker containers.
Alternatively, if you are using the enterprise version, you can use the current innovation release (1.5.x), which has native Docker support.
This is a late answer, but since this came up at the top of my Google search results, I will take some time to add to Marius Pana's answer. As of now, the raw version of Check_MK also natively supports Docker. However, if you want dedicated checks inside your docker container, you will need to actually install a Check_MK agent inside it. To do that, you need to start some sort of shell (generally sh or bash) inside the container with docker exec -it <id> sh. You can get your docker ID with docker ps.
Now that's the easy part. The hard part is to figure out which package manager you are dealing with inside the container (if any) and how to install inetd/xinetd, or your preferred means of communication for your agent (unless it's already installed). If it's an Ubuntu-based image, you will generally need to start with an apt-get update and apt-get install xinetd, and then you can install your packaged Check_MK agent, or install it manually if you prefer. If it's a CentOS-based image, you will instead use yum. If the image is based on Arch Linux, you will probably want to use pacman.
Once you managed to install everything in your docker, you can test by adding your docker IP to Check_MK as a host. Please note that if your docker is using the host IP, you will need to forward port 6556 from your docker to another port on your host since I assume you're already monitoring the host through port 6556.
After you've checked everything is working, 2 more things. If you stop there, a simple restart of your container will cancel every change you've made, so you need to do a docker commit to save your changes to your container image. And lastly, you will want to plan container updates ahead: you can reinstall the agent every time a new version of the container is pulled (you could even script this), or you could add instructions to your cont-init.d which would be executed every time you launch the container.
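Condensed into commands, the procedure above looks roughly like this for an Ubuntu-based container (the check-mk-agent package name and port 6556 are the usual defaults, but verify against your Check_MK site):

```shell
# Find the running container and open a shell inside it
docker ps                       # note the container ID
docker exec -it <id> sh

# Inside the container (Ubuntu-based image assumed):
apt-get update
apt-get install -y xinetd check-mk-agent   # agent listens on TCP 6556 via xinetd

# Back on the host: persist the changes, otherwise a restart loses them
docker commit <id> myimage:with-cmk-agent

# If the container shares the host's network, forward the agent to a
# different host port (since 6556 is taken by the host's own agent),
# e.g. publish it as 6557 when you (re)launch the container:
#   docker run -p 6557:6556 ... myimage:with-cmk-agent
```

You would then add the container to Check_MK as a host using that IP/port pair.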
