I need to run a Docker action in OpenWhisk. Inside the Docker Container, I execute a Java program.
Now I pulled the docker skeleton from Openwhisk and installed Java on it.
I also put my Java program inside the container and replaced the exec.
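Roughly, the Dockerfile for such an image looks like the following sketch (the JRE package and the jar/exec file names are just placeholders, not my exact files):
# Sketch: extend the OpenWhisk docker skeleton with a JRE and the action code
FROM openwhisk/dockerskeleton
# The skeleton is Alpine-based, so a JRE can be added via apk
RUN apk add --no-cache openjdk8-jre
# Copy the program and replace the skeleton's /action/exec stub
COPY my-program.jar /action/my-program.jar
COPY exec /action/exec
RUN chmod +x /action/exec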
I can create the action with:
wsk action create NAME --docker myDockerHub/repo:1 -i
This is not optimal, since my code should not be on Docker Hub. Does OpenWhisk support using my local registry?
wsk action create ImportRegionJob --docker server.domain.domain:5443/import-region-job:v0.0.2 -i
error: Unable to create action 'ImportRegionJob': The request content was malformed:
image prefix not is not valid (code qXB0Tu65zOfayHCqVgrYJ33RMewTtph9)
Run 'wsk --help' for usage.
I know you can provide a .zip file to a docker action when creating it, but that does not work because the default image used does not have Java installed.
I achieved this for a distributed OpenWhisk environment. The docker actions are hosted in GitLab, built by GitLab CI and deployed to a custom container registry in their respective GitLab repositories. Pulls from the local registry are significantly faster than pulls from docker hub.
In OpenWhisk, create an action using the full path including registry url.
wsk action create NAME --docker YourRegistry:port/namespace/image[:tag]
On invocation, the pull command for the action is carried out inside the invoker containers on the compute nodes. The following table shows in the first column an example setup of invoker hosts (configured in openwhisk/ansible/environments/distributed/hosts, section [invokers]), and in the second column the respective invoker container running on that host. The invoker container in the second column should show up when you run docker ps on the host from the first column:
invoker-host-0 invoker0
invoker-host-1 invoker1
...
invoker-host-N invokerN
You can check this mapping for all invoker hosts in one go:
for I in $(seq 0 N); do ssh invoker-host-$I docker ps | grep invoker$I; done
Now, you can do a docker login for all invokers in one command.
for I in $(seq 0 N); do ssh invoker-host-$I docker exec invoker$I docker login YourRegistry:port -u username -p TokenOrPassword; done
As a prerequisite, inside all invoker containers, I had to add the root certificates for the registry, run update-ca-certificates and restart the Docker daemon.
An improvement might be to do this already in the invoker container image that is built when deploying OpenWhisk (before running invoker.yml, which is imported by openwhisk.yml), as sketched below.
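Something along these lines could be added to that image build (a sketch only; the base image name and the certificate file name are placeholders for your deployment):
# Sketch: bake the registry's root CA into the invoker image
FROM whisk/invoker
COPY YourRegistry-root-ca.crt /usr/local/share/ca-certificates/
RUN update-ca-certificates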
Docker images can be specified when deploying actions from zip files. This allows you to use the existing Java runtime image, which has Java installed, with your zip file, removing the need for a custom image.
wsk action update --docker openwhisk/java8action action_name action_files.zip
I am using Ubuntu 20.04 while following this book -> https://www.oreilly.com/library/view/kubeflow-for-machine/9781492050117/
On page 17, it says the following (only the relevant parts), which I don't understand:
"You will want to store container images in what is called a container registry. The container registry will be accessed by your Kubeflow cluster."
(I am going to use Docker Hub as my container registry.)
"Next we'll assume that you've set your container registry via an environment variable $CONTAINER_REGISTRY in your shell." NOTE: If you use a registry that isn't on Google Cloud Platform, you will need to configure the Kubeflow Pipelines container builder to have access to your registry by following the Kaniko configuration guide -> https://oreil.ly/88Ep-
First, I do not understand how to set the container registry through an environment variable. Am I supposed to give it a link?
Second, I've gone into the Kaniko config guide and did everything as told -> creating config.json with "auth": "my password for dockerhub". After that, the book says:
"To make sure your docker installation is properly configured, you can write a one-line Dockerfile and push it to your registry."
Example 2.7 Specify the new container is built on top of Kubeflow's container
FROM gcr.io/kubeflow-images-public/tensorflow-2.1.0-notebook-cpu:1.0.0
Example 2.8 Build new container and push to registry for use
IMAGE="${CONTAINER_REGISTRY}/kubeflow/test:v1"
docker build -t "${IMAGE}" -f Dockerfile . docker push "${IMAGE}"
I've created a Dockerfile with the code from Example 2.7 inside it, then ran the code from Example 2.8, but it's not working.
Make sure that:
You set the environment variable using export CONTAINER_REGISTRY=docker.io/your_username in your terminal (or in your ~/.bash_profile and run source ~/.bash_profile).
Your .docker/config.json contains your credentials not in plain text but base64-encoded, for example the output of echo -n 'username:password' | base64
The docker build and the docker push are two separate commands; in your example they appear as one command, unlike in the book (see the corrected sketch below).
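For reference, a minimal sketch of setting the variable and running Example 2.8 as separate commands (docker.io/your_username is a placeholder for your Docker Hub namespace):
export CONTAINER_REGISTRY=docker.io/your_username
IMAGE="${CONTAINER_REGISTRY}/kubeflow/test:v1"
docker build -t "${IMAGE}" -f Dockerfile .
docker push "${IMAGE}"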
One question: how do you handle secrets inside a Dockerfile without using Docker Swarm? Say you have a private repo on npm and you restore it using an .npmrc inside the Dockerfile by providing credentials. After the package restore, I obviously delete the .npmrc file from the container. The same goes for a NuGet.config when restoring private repos inside the container. Currently, I am supplying these credentials as --build-arg while building the Dockerfile.
But a command like docker history --no-trunc will show the password in the log. Is there any decent way to handle this? Currently, I am not on Kubernetes, so I need to handle this in Docker itself.
One way I can think of is mounting /run/secrets/ and storing the password there, either in a text file or via a .env file. But then this .env file has to be part of the pipeline to complete the CI/CD process, which means it has to be part of source control. Is there any way to avoid this, or can something be done via the pipeline itself, or can some encryption/decryption logic be applied here?
Thanks.
First, keep in mind that files deleted in one layer still exist in previous layers. So deleting files doesn't help either.
There are three ways that are secure:
Download all code in advance outside of the Docker build, where you have access to the secret, and then just COPY in the stuff you downloaded.
Use BuildKit, which is an experimental Docker feature that enables secrets in a secure way (https://docs.docker.com/develop/develop-images/build_enhancements/#new-docker-build-secret-information); a sketch follows after this list.
Serve secrets from a network server running locally (e.g. in another container). See here for detailed explanation of how to do so: https://pythonspeed.com/articles/docker-build-secrets/
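As a rough sketch of the BuildKit option for the .npmrc case (the base image, secret id and paths are illustrative, not a definitive setup):
# syntax=docker/dockerfile:1
FROM node:14
WORKDIR /app
COPY package*.json ./
# The .npmrc is mounted only for this RUN step and is never written into an image layer
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm install
COPY . .
Build it with BuildKit enabled, passing the secret from a file outside the image:
DOCKER_BUILDKIT=1 docker build --secret id=npmrc,src=$HOME/.npmrc -t myapp .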
Let me try to explain docker secret here.
Docker secret works with docker swarm. For that you need to run
$ docker swarm init --advertise-addr=$(hostname -i)
This makes the node a swarm manager. Now you can create your secret like this:
Create a file /db_pass and put your password in this file.
$ docker secret create db_pass /db_pass
This creates your secret. Now if you want to list the secrets you created, run:
$ docker secret ls
Let's use the secret while running a service:
$ docker service create --name mysql-service \
    --secret source=db_pass,target=mysql_root_password \
    --secret source=db_pass,target=mysql_password \
    -e MYSQL_ROOT_PASSWORD_FILE="/run/secrets/mysql_root_password" \
    -e MYSQL_PASSWORD_FILE="/run/secrets/mysql_password" \
    -e MYSQL_USER="wordpress" \
    -e MYSQL_DATABASE="wordpress" \
    mysql:latest
In the above command, /run/secrets/mysql_root_password and /run/secrets/mysql_password are file locations inside the container that hold the data of the source secret (db_pass):
source=db_pass,target=mysql_root_password (this creates the file /run/secrets/mysql_root_password inside the container with the db_pass value)
source=db_pass,target=mysql_password (this creates the file /run/secrets/mysql_password inside the container with the db_pass value)
You can check the secret file data from inside the service's container.
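For example (a sketch; the container ID below is whatever docker ps shows for the running mysql-service task):
$ docker ps --filter name=mysql-service --format "{{.ID}}  {{.Names}}"
$ docker exec <container_id> cat /run/secrets/mysql_root_password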
I have a situation where I need to wait for an image to show up on a given docker registry (this is an OpenShift external registry) before continuing my script with additional oc-commands.
Before that happens, the external docker registry has no knowledge whatsoever of this image. Afterwards, it is available as :latest.
Can this be done programmatically? (Preferably without trying to download the image.)
My order of preference:
oc command
docker command
A REST api (using oc or docker credentials)
Assuming the repository from OpenShift works similarly to Docker Hub, you can do this:
curl --silent -f -lSL https://index.docker.io/v1/repositories/$1/tags/$2 > /dev/null
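Building on that, a small polling sketch that waits for the tag to appear (the repository and tag names are placeholders, and the Docker Hub URL would need to be swapped for your OpenShift registry's equivalent endpoint):
REPO=mynamespace/myimage
TAG=latest
until curl --silent -f -lSL "https://index.docker.io/v1/repositories/${REPO}/tags/${TAG}" > /dev/null; do
    echo "waiting for ${REPO}:${TAG} ..."
    sleep 10
done
echo "${REPO}:${TAG} is available"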
I am new to Docker. I know the default registry is Docker Hub, and there are tutorials on navigating Docker Hub, e.g. searching for images. But those kinds of operations are performed in the Docker Hub UI via the web.
I was granted a private Docker registry. After I log in using a command like docker login someremotehost:8080, I do not know what commands to use to navigate around inside the registry. I do not know what images are available or what their tags are.
Could anyone share some info/link on what command to use to explore private remote registry after user login?
Also, to use images from the private registry, the name I need to use becomes something like 'my.registry.address:port/repositoryname'.
Is there a way to change the configuration of my docker application, so that it will make my.registry the default registry, and I can just use repositoryname, without specifying registry name in every docker command?
There are no standard CLI commands to interact with remote registries beyond docker pull and docker push. The registry itself might provide some sort of UI (for example, Amazon ECR can list images through the standard AWS console), or your local development team might have a wiki that lists out what's generally available.
You can't change the default Docker registry. You have a pretty strong expectation that e.g. ubuntu is really docker.io/library/ubuntu and not something else.
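That said, if your private registry implements the standard Docker Registry HTTP API v2 (the open-source registry image does), you can at least list repositories and tags with plain curl. A sketch, with the host, port and credentials as placeholders:
curl -u username:password https://someremotehost:8080/v2/_catalog
curl -u username:password https://someremotehost:8080/v2/repositoryname/tags/list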
For Docker, there are only two commands for talking to a registry:
docker pull and docker push
As for a private Docker registry, there is no setting in Docker to pull only from one specific registry by default. The reason is the image naming convention: official images have plain names like centos, while images created by other organisations or people carry the user or organisation name as a prefix, like pivotaldata/centos. This naming convention is what lets Docker find an image, whether in the public registry or in a private one (after docker login).
In case you want to interact more with a private repo, you can write your own batch or bash script. For example, I created a batch script that pulls an image and, if the user gives a wrong tag, lists all the available tags for it.
@echo off
setlocal enabledelayedexpansion
docker login --username=xxxx --password=xxxx
docker pull %1:%2
IF NOT %ERRORLEVEL%==0 (
echo "Specified Version is Not Found "
echo "Available Version for this image is :"
for /f %%i in (' curl -s -H "Content-Type:application/json" -X POST -d "{\"username\":\"user\",\"password\":\"password\"}" https://hub.docker.com/v2/users/login ^|jq -r .token ') do set TOKEN=%%i
curl -sH "Authorization: JWT !TOKEN!" "https://hub.docker.com/v2/repositories/%1/tags/" | jq .results[].name
)
I have followed this guide from Google documentation in order to be able to push a custom Docker image to Google Container Registry and then be able to start a new GCE instance with this image. At first I wanted to try using an anaconda3 public image from docker hub without any modification (in order to test).
So here are the steps I have done so far after installing gcloud and docker:
gcloud auth configure-docker -> configured docker with my gcloud credentials
docker pull continuumio/anaconda3 -> pulled the public image
docker tag continuumio/anaconda3 eu.gcr.io/my-project-id/anaconda3 -> tagged the local image with the registry name as specified in the doc
docker push eu.gcr.io/my-project-id/anaconda3 -> pushed the image to GCR
Good! I am now able to see my image through the GCR interface, and also able to deploy it with GCE. I chose to deploy it on an f1-micro instance, Container-Optimized OS 67-10575.62.0 stable, 10 GB boot disk, Allow HTTP traffic, etc.
But when I connect with ssh to the freshly created VM instance, I can't find the anaconda3 libraries (which are supposed to be in /opt/conda). Instead, I can see an /opt/google directory, which makes me think that the image has not been deployed correctly and GCE is using a default image...
So I tried to check if the image was pushed correctly in GCR, so I decided to delete my local image and pull it once again from GCR:
docker rmi -f eu.gcr.io/my-project-id/anaconda3
docker pull eu.gcr.io/my-project-id/anaconda3:latest
I run the image
docker run -t -i eu.gcr.io/my-project-id/anaconda3
and I can see that everything is fine, I have anaconda3 installed correctly inside /opt/conda with all the tools needed (Pandas, Numpy, Jupyter Notebook, etc.)
I tried to find people with the same problem as me without any success... maybe I have done something wrong in my process?
Thanks !
TL;DR My problem is that I have pushed an anaconda3 image on Google GCR, but when I launch a virtual instance with this image, I do not have anaconda on it
It's normal that you can't find anaconda libraries installed directly on the GCE instance.
Actually, when you choose to deploy a container image on a GCE VM instance, a Docker container is started from the image you provide (in your example, eu.gcr.io/my-project-id/anaconda3). The libraries are not installed on the host, but rather inside that container (run docker ps to see it; normally it has the same name as your VM instance). If you run something like:
docker exec -it <docker_container_name> ls /opt/conda
Then you should see the anaconda libraries, only existing inside the container.
When you run docker run -t -i eu.gcr.io/my-project-id/anaconda3, you're actually starting the container and running an interactive bash session inside that container (see the CMD in the image's Dockerfile). That's why you can find the anaconda libraries: you are inside the container!
Containerization software (Docker here) provides isolation between your host and your containers. I suggest you read the documentation about containerization, Docker, and how to run containers on Container-Optimized OS.