I have images on Google Container Registry that I moved from Docker Hub. My docker-compose.yml successfully pulls the images from Docker Hub, but I can't pull them from Google Container Registry.
Steps to log in to Container Registry:
gcloud auth revoke --all
gcloud auth login
gcloud config set project projectId
gcloud auth activate-service-account deploy@projectId.iam.gserviceaccount.com --key-file=service-account.json
gcloud auth configure-docker
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://asia.gcr.io
The login succeeds, but then:
docker-compose up
ERROR: pull access denied for [my_image_name], repository does not exist or may require 'docker login': denied: requested access to the resource is denied
However, I can pull the image directly with the command below:
docker pull asia.gcr.io/projectid/myimagename/data-api:latest
docker-compose.yml:
version: "3.3"
services:
  data_api:
    container_name: myimagename-data-api
    image: myimagename/data-api
    expose:
      - 4000
    ports:
      - "4001:4000"
    depends_on:
      - db
    environment:
      DATABASE_URL: mysql://root:root@db:3306/myimagename
      ACCESS_TOKEN_SECRET: xxxxxxxxxx
      REFRESH_TOKEN_SECRET: xxxxxxxxx
    networks:
      - db-api
  db:
    container_name: myimagename-db
    image: myimagename/db
    restart: always
    volumes:
      - ./db/data/:/var/lib/mariadb/data
    environment:
      MARIADB_ROOT_PASSWORD: root
      MARIADB_DATABASE: myimagename
    expose:
      - 3306
    ports:
      - "3307:3306"
    networks:
      - db-api
networks:
  db-api:
If you look at the service-account.json file, you will see that it's not your "password" in the traditional sense, so piping it in as a stdin password will not work. EDIT: TIL that you can in fact pipe a credentials file in as a password, per the documentation.
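For reference, a sketch of the documented key-file login: the username must be the literal string `_json_key`, and the JSON key file is piped in as the password (hostname shown for asia.gcr.io; adjust to your registry):

```shell
# Log in to GCR with a service-account key file as the password.
# "_json_key" is a special username understood by Container Registry.
cat service-account.json | docker login -u _json_key --password-stdin https://asia.gcr.io
```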
I would recommend using the gcloud credential helper: you can log in as yourself if you have the permissions, or use a service account with its credentials JSON file, which appears to be your case. Be sure the service account has the correct IAM permissions:
Pull (read) only:
roles/storage.objectViewer
Push (write) and Pull:
roles/storage.legacyBucketWriter
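As a sketch, a project-level grant of the pull-only role might look like this (PROJECT_ID and the service-account email are placeholders):

```shell
# Grant read-only access to the registry's underlying storage (placeholder IDs).
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:deploy@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"
```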
OK, I finally found the issue: it's the image name. You cannot use the same short image name as on Docker Hub; you need the full registry path:
image: asia.gcr.io/projectid/myimagename/data-api:latest
instead of myimagename/data-api.
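Applied to the compose file from the question, the service would reference the fully qualified image (a sketch; the tag is assumed to be latest):

```yaml
services:
  data_api:
    container_name: myimagename-data-api
    # Full registry path so compose pulls from GCR, not Docker Hub:
    image: asia.gcr.io/projectid/myimagename/data-api:latest
```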
Related
I connected doctl to my account, logged into the private registry, and validated that it's successfully authorized, but when I try to docker-compose up -d an image, it says that pull access is denied. What could be the reason?
> doctl account get
User Email Team Droplet Limit Email Verified User UUID Status
user@domain.name My Team 25 true aa11a5d9-1913-4f8d-b427-005fa9e11be6 active
Docker daemon is logged into the registry:
> docker login registry.digitalocean.com
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /home/admin/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Connection with the registry is established:
> doctl registry login
Logging Docker in to registry.digitalocean.com
My linux user is part of the docker group:
> groups
sudo docker
Docker images are present locally:
> docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.digitalocean.com/project/strategy latest 60d4796574fc 26 hours ago 1.96GB
registry.digitalocean.com/project/redis latest ee373138aeec 47 hours ago 177MB
But I'm unable to execute containers:
(strategy) admin@ubuntu-s-2vcpu-2gb-fra1:/var/www/strategy$ docker-compose up -d
[+] Running 0/5
⠿ flower Error 1.6s
⠿ celery_worker Error 1.6s
⠿ celery_beat Error 1.6s
⠿ django Error 1.5s
⠿ redis-4 Error 1.5s
Error response from daemon: pull access denied for strategy, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
docker-compose.yml
version: '3.8'
services:
  django:
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: strategy
    command: /start
    volumes:
      - .:/app
    ports:
      - "8004:8004"
    env_file:
      - strategy/.env
    depends_on:
      - redis-4
    networks:
      - mynetwork
  redis-4:
    build:
      context: .
      dockerfile: ./compose/local/redis/Dockerfile
    container_name: redis-4
    image: redis
    expose:
      - "6375"
    networks:
      - mynetwork
  celery_worker:
    image: strategy
    command: /start-celeryworker
    volumes:
      - .:/app:/strategy
    env_file:
      - strategy/.env
    depends_on:
      - redis-4
      - strategy
    networks:
      - mynetwork
  celery_beat:
    image: strategy
    command: /start-celerybeat
    volumes:
      - .:/app:/strategy
    env_file:
      - strategy/.env
    depends_on:
      - redis-4
      - strategy
    networks:
      - mynetwork
  flower:
    image: strategy
    command: /start-flower
    volumes:
      - .:/app:/strategy
    env_file:
      - strategy/.env
    depends_on:
      - redis-4
      - strategy
    networks:
      - mynetwork
networks:
  mynetwork:
    name: mynetwork
The names of the images are not correct:
strategy -> registry.digitalocean.com/project/strategy
redis -> registry.digitalocean.com/project/redis
...
If you don't specify the registry then docker assumes it's Docker Hub.
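In the compose file above, that would look roughly like this (a sketch; it assumes both images were pushed under the project registry path, and that you intend to pull them rather than rebuild locally):

```yaml
services:
  django:
    # Fully qualified name so the pull targets the DigitalOcean registry:
    image: registry.digitalocean.com/project/strategy
  redis-4:
    image: registry.digitalocean.com/project/redis
```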
I've tried to pull an image through the GitLab Dependency Proxy, following the documentation: https://docs.gitlab.com/14.10/ee/user/packages/dependency_proxy/
# .gitlab-ci.yml
image: docker:19.03.12

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_TLS_CERTDIR: ""

services:
  - docker:19.03.12-dind

build:
  image: docker:19.03.12
  before_script:
    - docker login -u $TOKEN_USERNAME -p $TOKEN_PASSWORD $CI_DEPENDENCY_PROXY_SERVER
  script:
    - docker pull ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/php:7-fpm-alpine3.15
I used a token created in my group, but the console shows this error:
Error response from daemon: unauthorized: authentication required
Are $TOKEN_USERNAME and $TOKEN_PASSWORD defined? The documentation says to use the predefined variables $CI_DEPENDENCY_PROXY_USER and $CI_DEPENDENCY_PROXY_PASSWORD.
docker login -u $CI_DEPENDENCY_PROXY_USER -p $CI_DEPENDENCY_PROXY_PASSWORD $CI_DEPENDENCY_PROXY_SERVER
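Dropped into the job from the question, that looks like the following sketch (image and tag kept from the question):

```yaml
build:
  image: docker:19.03.12
  before_script:
    # Use GitLab's predefined Dependency Proxy credentials instead of a group token:
    - docker login -u $CI_DEPENDENCY_PROXY_USER -p $CI_DEPENDENCY_PROXY_PASSWORD $CI_DEPENDENCY_PROXY_SERVER
  script:
    - docker pull ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/php:7-fpm-alpine3.15
```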
When running Corda in Docker with an external PostgreSQL database configuration, I get an "insufficient privileges to access" error.
Note:
Corda: 4.6, PostgreSQL: 9.6
Docker Engine: 20.10.6
docker-compose: version 1.29.1, build c34c88b2
docker-compose.yml file:
version: '3.3'
services:
  partyadb:
    hostname: partyadb
    container_name: partyadb
    image: "postgres:9.6"
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
      POSTGRES_DB: partyadb
    ports:
      - 5432
  partya:
    hostname: partya
    # image: corda/corda-zulu-java1.8-4.7:RELEASE
    image: corda/corda-zulu-java1.8-4.6:latest
    container_name: partya
    ports:
      - 10006
      - 2223
    command: /bin/bash -c "java -jar /opt/corda/bin/corda.jar run-migration-scripts -f /etc/corda/node.conf --core-schemas --app-schemas && /opt/corda/bin/run-corda"
    volumes:
      - ./partya/node.conf:/etc/corda/node.conf:ro
      - ./partya/certificates:/opt/corda/certificates:ro
      - ./partya/persistence.mv.db:/opt/corda/persistence/persistence.mv.db:rw
      - ./partya/persistence.trace.db:/opt/corda/persistence/persistence.trace.db:rw
      # - ./partya/logs:/opt/corda/logs:rw
      - ./shared/additional-node-infos:/opt/corda/additional-node-infos:rw
      - ./shared/cordapps:/opt/corda/cordapps:rw
      - ./shared/drivers:/opt/corda/drivers:ro
      - ./shared/network-parameters:/opt/corda/network-parameters:rw
    environment:
      - ACCEPT_LICENSE=${ACCEPT_LICENSE}
    depends_on:
      - partyadb
Error:
[ERROR] 12:41:24+0000 [main] internal.NodeStartupLogging. - Exception during node startup. Corda started with insufficient privileges to access /opt/corda/additional-node-infos/nodeInfo-5B........................................47D
The corda/corda-zulu-java1.8-4.6:latest image runs as the user corda, not root. This user has uid 1000 and is in a group called corda, also with gid 1000:
corda@5bb6f196a682:~$ id -u corda
1000
corda@5bb6f196a682:~$ groups corda
corda : corda
corda@5bb6f196a682:~$ id -G corda
1000
The problem here seems to be that the file you are mounting into the container (./shared/additional-node-infos/nodeInfo-5B) does not have permissions set up in such a way as to allow this user to access it. I'm assuming the user needs read and write access. A very simple fix would be to give others read and write access to this file:
$ chmod o+rw ./shared/additional-node-infos/nodeInfo-5B
There are plenty of other ways to manage this kind of permissions issue in docker, but remember that the permissions are based on uid/gid which usually do not map nicely from your host machine into the docker container.
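For example, one common alternative is to change the ownership of the host files to match the container user's uid/gid (1000:1000 here, per the `id` output above; the path is taken from the question):

```shell
# Hand the mounted files to uid/gid 1000, which maps to the
# container's "corda" user. Run on the host before docker-compose up.
sudo chown -R 1000:1000 ./shared/additional-node-infos
```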
So the error itself describes that it's a permission problem.
I don't know if you crafted this Dockerfile yourself; you may want to look at generating nodes with the Dockerform task (https://docs.corda.net/docs/corda-os/4.8/generating-a-node.html#use-cordform-and-dockerform-to-create-a-set-of-local-nodes-automatically).
This permission problem could be because you've granted only read/write within the container:
- ./shared/additional-node-infos:/opt/corda/additional-node-infos:rw
or it could be that you need to change the permissions on the shared folder. Try changing the permissions of shared to 777 and see if that works, then restrict your way back down to permissions you're comfortable with.
I just configured the image to run as root. This works but may not be safe. Simply add
services:
  cordaNode:
    user: root
to the service configuration.
Ref: How to configure docker-compose.yml to up a container as root
I am trying to deploy my backend to Google Cloud Run. I'm using Docker Compose with two components: a Golang server and a Postgres DB.
When I run Docker Compose locally, everything works great! But when I deploy to GCloud with
gcloud builds submit . --tag gcr.io/BACKEND_NAME
gcloud run deploy --image gcr.io/BACKEND_NAME --platform managed
Gcloud's health check fails, getting stuck on Deploying... Revision deployment finished. Waiting for health check to begin. and throws Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
I understand that Google Cloud Run provides a PORT env variable, which I tried to account for in my docker-compose.yml, but the command still fails. I'm out of ideas; what could be wrong here?
Here is my docker-compose.yml
version: '3'
services:
  db:
    image: postgres:latest  # use latest official postgres version
    container_name: db
    restart: "always"
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=db
    volumes:
      - database-data:/var/lib/postgresql/data/  # persist data even if container shuts down
  api:
    container_name: api
    depends_on:
      - db
    restart: on-failure
    build: .
    ports:
      # Bind the GCR-provided incoming PORT to port 8000 of our api
      - "${PORT}:8000"
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=db
volumes:
  database-data:  # named volumes can be managed more easily using docker-compose
and the api container is a Golang binary that waits for a connection to the Postgres DB before calling http.ListenAndServe(":8000", handler).
I am trying to set up a Docker registry with a frontend and am having problems.
I use the following docker-compose file and am unable to see the repos that are present in the registry. I assume the most basic setup, as follows:
web:
  container_name: registry-frontend
  image: hyper/docker-registry-web
  ports:
    - "8085:8080"
  links:
    - registry
  environment:
    REGISTRY_HOST: registry
registry:
  container_name: registry
  image: registry:2
  ports:
    - 5000:5000
  environment:
    REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /registry
  volumes:
    - /data/docker-registry/:/registry
When I check for repositories present in the registry service directly, I can see the repos. Querying the following endpoint:
http://<host>:5000/v2/_catalog
returns the following result:
{"repositories":["baseimage","myfirstimage"]}
However, these are not visible in the frontend; the integration does not seem to be working. Can anyone help figure out what could be wrong?