Steps required to run a Docker image using Kubernetes

I have developed a simple Docker image. It can be run with:
docker run -e VOLUMEDIR=agentsvolume -v /c/Users/abcd/config:/agentsvolume app-agent
How do I run the same thing using Kubernetes? Do I have to create Pods, a controller, or a service? I haven't been able to find clear steps for running it with Kubernetes.

This Kubernetes command is the equivalent of your docker run command:
kubectl run --image=app-agent app-agent --env="VOLUMEDIR=agentsvolume"
This will create a deployment called app-agent (on newer kubectl versions, kubectl run creates a single Pod instead of a Deployment).
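Note that kubectl run covers the image and the environment variable but not the volume mount. A minimal sketch of a Pod manifest that also mounts a host directory might look like the following; the hostPath is only an illustration and assumes the directory exists on the Kubernetes node (it is not the same as your local /c/Users/abcd/config):
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: app-agent
spec:
  containers:
  - name: app-agent
    image: app-agent
    env:
    - name: VOLUMEDIR
      value: agentsvolume
    volumeMounts:
    - name: config
      mountPath: /agentsvolume
  volumes:
  - name: config
    hostPath:
      path: /data/app-agent/config   # assumption: a config directory present on the node
EOF
For anything beyond a one-off Pod you would normally wrap this in a Deployment and, if it needs to be reachable, expose it with a Service.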

Related

Dev environment for gcc with Docker?

I would like to create a minimalist dev environment for occasional developers who only need Docker installed.
The ecosystem would have:
code-server image to run Visual Studio Code
gcc image to build the code
git to push/commit the code
ubuntu with some modifications to run the code
I looked at Docker-in-Docker, which could be a solution:
Docker
code-server
docker run -it -v ... gcc make
docker run -it -v ... git git commit ...
docker run -it -v ... ubuntu ./program
But that seems a bit overkill. What would be the proper way to have a well-separated, full dev environment that only requires Docker to be installed on the host machine (Linux, Windows, macOS, Chromium)?
I suggest using a Dockerfile.
This file specifies a few steps used to build an image.
The first line of the file specifies a base image (in your case, I would use Ubuntu):
FROM ubuntu:latest
Then, you can e.g. copy files to the image or select commands to run:
RUN apt update && apt install -y gcc make
RUN apt install -y git
and so on.
At the end, you may want to specify the program that is run when you start the container:
CMD /bin/bash
Then you can build it with the command docker build -f Dockerfile -t devenv:latest . (note the trailing dot, which is the build context). This builds a new image named devenv:latest (latest is the tag) from the file Dockerfile.
Then you can create and run a container from the image using docker run devenv:latest.
If you want to work inside the container interactively and reuse it later, give it a name, e.g. docker run -it --name devenv devenv:latest, and start it again with docker start -ai devenv.
If you want to, you can also use the code-server base image instead of ubuntu:latest.
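Putting those steps together, a minimal Dockerfile sketch could look like this (the package list and the /workspace directory are assumptions; adjust them to your project):
# Base image
FROM ubuntu:latest

# Build tools and git in a single layer; -y keeps the build non-interactive
RUN apt update && apt install -y gcc make git && rm -rf /var/lib/apt/lists/*

# Directory into which you bind-mount your sources at run time
WORKDIR /workspace

# Drop into a shell by default
CMD ["/bin/bash"]
You would then build and use it with something like:
docker build -t devenv:latest .
docker run -it -v "$PWD":/workspace devenv:latest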

How to translate args from docker to singularity?

I am trying to pull a docker image but have to use singularity. How can I do this? Here is the script I am running.
cp -rp ~/adversarial-policies/ $SLURM_TMPDIR
cd adversarial-policies/
singularity pull docker://humancompatibleai/adversarial_policies:latest
singularity run -it --env MUJOCO_KEY=~/.mujoco/mjkey.txt ./adversarial_policies-latest.simg
source ./modelfreevenv/bin/activate
python -m modelfree.multi.train with paper --path $SLURM_TMPDIR --data-path $SLURM_TMPDIR
cp $SLURM_TMPDIR/job-output.txt /network/tmp1/gomrokma/
cp $SLURM_TMPDIR/error.txt /network/tmp1/gomrokma/
The errors I get are with ERROR: Unknown option: --build-arg
ERROR: Unknown option: -it.
Any help would be appreciated. I am new to using Singularity containers instead of Docker.
Singularity and Docker are both container platforms, but they are not drop-in replacements for each other. I strongly recommend reading the documentation for the version of Singularity you're using; the latest version has a good section on using Docker and Singularity together.
If you are using Singularity v3 or newer, the file created by singularity pull will be named adversarial_policies_latest.sif, not adversarial_policies-latest.simg. If v2 is the only version available on your cluster, ask the admins to install v3; 2.6.1 is the only v2 release without known security issues, and it is no longer getting updates.
As for singularity run ..., the -it Docker options force an interactive TTY session rather than running in the background. singularity exec and singularity run both always run in the foreground, so there is no equivalent option needed with Singularity. Passing environment variables is also handled differently: since the container runs as your user, your environment is passed through to it. You can either set export MUJOCO_KEY=~/.mujoco/mjkey.txt further up the script or set it just for that command: MUJOCO_KEY=~/.mujoco/mjkey.txt singularity run ./adversarial_policies_latest.sif.
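Putting the two fixes together, the Singularity part of the script might look roughly like this (assuming Singularity v3, which names the pulled file adversarial_policies_latest.sif):
# Pull the Docker image; v3 writes it as a .sif file
singularity pull docker://humancompatibleai/adversarial_policies:latest

# No -it needed: singularity run stays in the foreground.
# The environment variable is set on the command line instead of via --env.
MUJOCO_KEY=~/.mujoco/mjkey.txt singularity run ./adversarial_policies_latest.sif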

Docker restarts container every time

I am just learning Docker. I ran my first container using:
docker run -it debian:latest /bin/bash
After installing some services like systemd, openssh, etc., I exited the container using CTRL+D, and the next time I start the container (using the same command) I get a fresh install of Debian without my configs.
I tried using docker run -it --restart no debian:buster without success.
How can I prevent this from happening?
Each time you use
docker run
command, you create a new container from an existing docker image. With
docker start $containerName
command, you can start an existing container (replace $containerName with your container's actual name). Otherwise, to have a custom Debian image, it is better to write a Dockerfile and build an image from it. Here are the best practices for writing a Dockerfile: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
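For example, a named container can be created once and then restarted with its state intact (the name mydebian is just an illustration):
# Create the container once, with a name you can refer to later
docker run -it --name mydebian debian:latest /bin/bash

# ... install packages inside it, then exit with CTRL+D ...

# Later: start the same container again, keeping everything you installed
docker start -ai mydebian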

How to redeploy a docker image using a Jenkins pipeline?

I created a pipeline for a Spring Boot microservice project. I am automating the deployment process using a Jenkins pipeline.
The steps I use in the pipeline are as follows:
The Jenkins script first checks out code from Bitbucket.
Build the project using Maven.
Create a Docker image.
Push the Docker image to Docker Hub.
Then run this Docker image by pulling it from Docker Hub.
It works perfectly the first time. It doesn't work a second time, because I need to stop the Docker container and then remove the image.
I used docker run --rm. According to the documentation, --rm is used to remove the image from Docker, but this is not working. Can anyone help me out in this case?
docker run --rm -p 8761:8761 -d --name ccpserviceregistry mydockerRepo/ccpserviceregistry:1.0
I want to redeploy the image with the latest one.
Follow these steps:
Checkout code from bitbucket
Build project using maven
Create docker image
Push docker image to dockerhub
Remove any Docker container that is already running: docker rm -f container-name
Remove the old Docker image if you want to: docker rmi -f image-name
Run the Docker image (use the --name option in docker run so it is easier to remove the container later; there is no need for the --rm option), for example as sketched below.
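A minimal sketch of that redeploy step as a shell block, reusing the names from your question (adjust the tag if you push a new one each build):
# Stop and remove the old container if it exists; ignore the error if it doesn't
docker rm -f ccpserviceregistry || true

# Optionally remove the old image so the next pull fetches the latest one
docker rmi -f mydockerRepo/ccpserviceregistry:1.0 || true

# Pull and run the freshly pushed image
docker pull mydockerRepo/ccpserviceregistry:1.0
docker run -d -p 8761:8761 --name ccpserviceregistry mydockerRepo/ccpserviceregistry:1.0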
Hope this helps.

How to get Container Id of Docker in Jenkins

I am using Docker Custom Build Environment Plugin to build my project inside "jpetazzo/dind" docker image. After building, in console output it shows:
Docker container 212ad049dfdf8b7180b8c9b185ddfc586b301174414c40969994f7b3e64f68bc started to host the build
$ docker exec --tty 212ad049dfdf8b7180b8c9b185ddfc586b301174414c40969994f7b3e64f68bc env
[workspace] $ docker exec --tty --user 122:docker 4aea29fff86ba4e50dbcc7387f4f23c55ff3661322fb430a099435e905d6eeef env BUILD_DISPLAY_NAME=#73
Here the Docker container that was started has container ID 212ad049dfdf8b7180b8c9b185ddfc586b301174414c40969994f7b3e64f68bc.
Now I want to execute some commands in the "Execute shell" part of the "Build" section in Jenkins, and there I want to use this container ID. I tried using ${BUILD_CONTAINER_ID} as mentioned on the plugin page, but that doesn't work.
The documentation tells you to use docker run, but you're trying to do docker exec. The exec subcommand only works on a currently running container.
I suppose you could do a docker run -d to start the container in the background, and then make sure to docker stop it when you're done. I suspect this will leave you with some orphaned running containers when things go wrong, though.
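For example, in the "Execute shell" step you could capture the ID yourself instead of relying on ${BUILD_CONTAINER_ID} (the image and the command run here are only placeholders):
# Start a container in the background and capture its ID
CONTAINER_ID=$(docker run -d ubuntu sleep infinity)

# Use the ID for whatever you need during the build
docker exec "$CONTAINER_ID" env

# Clean up so containers do not linger after the build
docker stop "$CONTAINER_ID"
docker rm "$CONTAINER_ID"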
