I'd like to pull a private image from Docker Hub in a Paperspace Deployment.
It uses a YAML file, in which (I assumed) the command section can override the default pull command.
This is the YAML file:
image: image_name/ref
port: xxxx
command:
  - docker login -u 'docker_user' -p 'docker_password'
  - docker pull image_name/ref:latest
resources:
  replicas: 1
  instanceType: C4
I get the following error:
Node State: errored
Error: An error occurred when pulling image:[image_name/ref] from deployment
Note: the commands
- docker login -u 'docker_user' -p 'docker_password'
- docker pull image_name/ref:latest
work fine from my PC.
command in docker-compose.yaml is used to:
override the default command declared by the container image
It does not override the default pull command, as you assumed.
So, to let docker-compose pull a private Docker image, you need to do an initial docker login before running compose; for details, see docker login.
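For plain docker-compose, that flow looks roughly like this (a sketch reusing the credentials from the question; passing -p on the command line works, though --password-stdin is the safer variant):
# log in once on the host; the credentials are cached in ~/.docker/config.json
docker login -u 'docker_user' -p 'docker_password'
# after that, compose can pull the private image without any command: override
docker-compose up -d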
In fact, there is a containers menu in Paperspace where you can specify a registry user and password (see team settings in the top-right menu -> Containers).
Then you have to use the containerRegistry option to pull the image properly:
image: image_name/ref
containerRegistry: name_in_paperspace
port: 5000
resources:
  replicas: 1
  instanceType: C4
Everything is explained in this video:
https://www.youtube.com/watch?v=kPQ7AKwNlWU
Related
I have pushed a linux/arm64 image to a Docker registry, and I am trying to use this image with the container tag in an Azure pipeline. The image seems to be pulled correctly, but it can't be executed because the virtual machine is ubuntu-20.04 (not the same architecture: it is linux/amd64). However, I can't seem to be able to run an emulator before the container tries to execute in the Azure job. When I am on my local computer and want to execute this Docker image, I simply need to run the following command first:
docker run --privileged --rm tonistiigi/binfmt:qemu-v6.2.0 --install all
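After that one-off step, QEMU handles the foreign architecture, and the image runs on the amd64 host through emulation; roughly:
# binfmt handlers registered above; now the arm64 image can execute
docker run --rm --platform linux/arm64 my_arm_image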
Here is the azure pipeline that I am trying to run:
resources:
  containers:
  - container: build_container_arm64
    image: my_arm_image
    endpoint: my_endpoint

jobs:
- job:
  pool:
    vmImage: ubuntu-20.04
  timeoutInMinutes: 240
  container: build_container_arm64
  steps:
  - bash: |
      echo "Hello world"
I am wondering if there is a way that I could install or run an emulator before the container tries to execute.
Thanks
I am running Locust (the official image from Docker Hub) locally, using the docker-compose file below:
version: '3'
services:
  locust:
    image: locustio/locust
    ports:
      - "8089:8089"
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py -H https://my-host-url.com
I have done the stress testing locally with docker-compose up. The next step is to push this setup to another registry. I am following the steps given in the Docker Hub documentation. However, I need some help with also getting the necessary locustfile.py into my other registry (let's say Artifactory).
To upload an image to your custom registry it has to be properly tagged (named), and it is not necessary to use docker build for that. You can do it with docker tag:
# pull the image
docker pull locustio/locust
# rename it for your registry
docker tag locustio/locust:latest my-registry.com:5000/locust:latest
# push it to your registry using its new name
docker push my-registry.com:5000/locust:latest
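To also get locustfile.py into the registry, one option is to bake it into a small derived image instead of bind-mounting it. A minimal sketch, assuming locustfile.py sits next to the Dockerfile:
# derive from the official image and copy the test script to the path
# the compose file already points at
FROM locustio/locust
COPY locustfile.py /mnt/locust/locustfile.py
Then build, tag, and push that image exactly as above, and drop the volumes: section from the compose file.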
I've used helm create helloworld-chart to create an application using a local Docker image I created. I think the issue is that I have the ports all messed up.
DOCKER PIECES
--------------------------
Docker File
FROM busybox
ADD index.html /www/index.html
EXPOSE 8008
CMD httpd -p 8008 -h /www; tail -f /dev/null
(I also have an index.html file in the same directory as my Dockerfile)
Create Docker Image (and publish locally)
docker build -t hello-world .
I then ran this with docker run -p 8080:8008 hello-world and verified I am able to reach it from localhost:8080. (I then stopped that docker container)
I also verified this image was in docker locally with docker image ls and got the output:
REPOSITORY TAG IMAGE ID CREATED SIZE
hello-world latest 8640a285e98e 20 minutes ago 1.23MB
HELM PIECES
--------------------------
Created a helm chart via helm create helloworld-chart.
Edited the files:
values.yaml
# ...elided because left the same as default...
image:
  repository: hello-world
  tag: latest
  pullPolicy: IfNotPresent
# ...elided because left the same as default...
service:
  name: hello-world
  type: NodePort # Chose this because MiniKube doesn't have LoadBalancer installed
  externalPort: 30007
  internalPort: 8008
  port: 80
service.yaml
# ...elided because left the same as default...
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.internalPort }}
      nodePort: {{ .Values.service.externalPort }}
deployment.yaml
# ...elided because left the same as default...
spec:
  # ...elided because left the same as default...
  containers:
    ports:
      - name: http
        containerPort: {{ .Values.service.internalPort }}
        protocol: TCP
I verified this "looked" correct with both helm lint helloworld-chart and helm template ./helloworld-chart
HELM AND MINIKUBE COMMANDS
--------------------------
# Packaging my helm
helm package helloworld-chart
# Installing into Kubernetes (Minikube)
helm install helloworld helloworld-chart-0.1.0.tgz
# Getting an external IP
minikube service helloworld-helloworld-chart
When I do that, it gives me an external IP like http://172.23.13.145:30007 and opens it in a browser, but the browser just says the site cannot be reached. What do I have mismatched?
UPDATE/MORE INFO
---------------------------------------
When I check the pod, it's in a CrashLoopBackOff state. However, I see nothing in the logs:
kubectl logs -f helloworld-helloworld-chart-6c886d885b-grfbc
Logs:
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
I'm not sure why it's exiting.
The issue was that Minikube was actually looking in the public Docker Hub repository and finding something else also called hello-world. It was not finding my Docker image, since "local" to Minikube is not local to the host computer's Docker: Minikube runs its own Docker daemon internally.
You have to add your image to Minikube's local repo: minikube cache add hello-world:latest.
You also need to change the pull policy: imagePullPolicy: Never
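Putting both fixes together, a sketch using the chart values from the question:
# load the locally built image into Minikube's internal Docker
minikube cache add hello-world:latest
and in values.yaml:
image:
  repository: hello-world
  tag: latest
  pullPolicy: Never # never fall back to the hello-world image on Docker Hub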
I have a simple docker-compose file like the following:
version: "3.7"
services:
mongo:
image: asia.gcr.io/myproj/mymongo:latest
hostname: mongo
volumes:
- type: bind
source: $MONGO_DB_DATA
target: /data/db
command: [ "--bind_ip_all", "--replSet", "rs0", "--wiredTigerCacheSizeGB", "1.5"]
I am launching it in Kubernetes using the following command
docker-compose config | docker stack deploy --orchestrator kubernetes --compose-file - mystack
However, the pod fails with this error:
Failed to pull image "asia.gcr.io/myproj/mymongo:latest": rpc error: code = Unknown desc = Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
My private registry is the Google Cloud one (gcr.io). I have already logged in to Docker using the service account keyfile, like this:
docker login -u _json_key -p "$(cat keyfile.json)" https://asia.gcr.io
The image is pulled correctly when I run
docker-compose pull
From this link, https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/, I found that I need to create imagePullSecrets.
I have two questions.
How can I write the imagePullSecrets syntax in my docker-compose file so that it is referenced correctly?
The method that the link mentions asks you to use the .docker/config.json file. However, my config.json only has
"auths": {
  "asia.gcr.io": {}
}
It doesn't include the username and password, since I configured it using the keyfile. How can I do this?
Or is there any simpler way to do this?
I solved this issue by first creating a secret like this:
kubectl create secret docker-registry regcred --docker-server https://<docker registry> --docker-username _json_key --docker-password <json key> --docker-email=<email>
and then adding it to the default service account:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regcred"}]}'
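If you prefer not to patch the default service account, the same secret can be referenced explicitly per pod instead; a sketch of the equivalent pod spec fragment:
spec:
  imagePullSecrets:
    - name: regcred
  containers:
    - name: mongo
      image: asia.gcr.io/myproj/mymongo:latest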
I tested a gitlab-runner on a virtual machine and it worked perfectly. I followed this tutorial, at the part Use Docker-in-Docker executor:
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
When I register a runner with exactly the same configuration on my dev server, the runner is called when there is a commit, but I get a lot of errors:
*** WARNING: Service runner-XXX-project-XX-concurrent-X-docker-X probably didn't start properly.
ContainerStart: Error response from daemon: Cannot link to a non running container: /runner-XXX-project-XX-concurrent-X-docker-X AS /runner-XXX-project-XX-concurrent-X-docker-X-wait-for-service/service (executor_docker.go:1337:1s)
DEPRECATION: this GitLab server doesn't support refspecs, gitlab-runner 12.0 will no longer work with this version of GitLab
$ docker info
error during connect: Get http://docker:2375/v1.39/info: dial tcp: lookup docker on MY.DNS.IP:53: no such host
ERROR: Job failed: exit code 1
I believe all these errors are due to the first warning. I tried to:
- Add a second DNS with the 8.8.8.8 IP to my machine: same error
- Add privileged = true manually in /etc/gitlab-runner/config.toml: same error, so it's not due to the privileged = true parameter
- Replace tcp://docker:2375 with tcp://localhost:2375: then docker info can't find the docker daemon on the machine
gitlab-ci.yml content:
image: docker:stable

stages:
  - build

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

before_script:
  - docker info

build-folder1:
  stage: build
  script:
    - docker build -t image1 folder1/
    - docker run --name docker1 -p 3001:5000 -d image1
  only:
    refs:
      - dev
    changes:
      - folder1/**/*

build-folder2:
  stage: build
  script:
    - docker build -t image2 folder2/
    - docker run --name docker2 -p 3000:3000 -d image2
  only:
    refs:
      - dev
    changes:
      - folder2/**/*
If folder1 of branch dev is modified, we build image1 and run docker1.
If folder2 of branch dev is modified, we build image2 and run docker2.
Docker version on dev server:
docker -v
Docker version 17.03.0-ce, build 3a232c8
gitlab-runner version on dev server:
gitlab-runner -v
Version: 11.10.1
I will try to provide an answer, as I came across this same problem when trying to run DinD.
This message:
*** WARNING: Service runner-XXX-project-XX-concurrent-X-docker-X probably didn't start properly.
means that either you have not properly configured your runner, or it is not linked by the gitlab-ci.yml file. You should be able to check the ID of the runner used in the job log page on GitLab.
To start with, verify that you entered the gitlab-runner register command right, with the proper registration token.
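For reference, a non-interactive registration along these lines gives a DinD-capable runner (URL and token are placeholders):
gitlab-runner register \
  --non-interactive \
  --url https://gitlab.example.com/ \
  --registration-token <registration-token> \
  --executor docker \
  --docker-image docker:stable \
  --docker-privileged \
  --tag-list build_docker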
Second, since you are setting up a specific runner manually, verify that you have assigned some unique tag to it (e.g. build_docker), and call it from your gitlab-ci.yml file. For example:
...
build-folder1:
  stage: build
  script:
    - docker build -t image1 folder1/
    - docker run --name docker1 -p 3001:5000 -d image1
  tags:
    - build_docker
...
That way it should work.
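If registration worked, /etc/gitlab-runner/config.toml should contain something along these lines (a sketch; only the relevant keys are shown, and the name is illustrative):
[[runners]]
  name = "dind-runner"
  url = "https://gitlab.example.com/"
  executor = "docker"
  [runners.docker]
    image = "docker:stable"
    privileged = true # required for the docker:dind service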