docker --add-host flag equivalent in remote API?

I want to use the --add-host flag via the Docker remote API.
https://docs.docker.com/reference/run/#network-settings
--add-host="" : Add a line to /etc/hosts (host:IP)
This is an option for docker run, so I assumed it would be possible to pass it to /containers/create in the remote API.
https://docs.docker.com/reference/api/docker_remote_api_v1.16/#create-a-container
Is there a remote API equivalent for this flag yet?

By looking in the Docker source, I can see it's called ExtraHosts.
Update: now in the docs also.
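For reference, a minimal sketch of such a create call (assuming a daemon listening on tcp://localhost:2375; the image name and host:IP pair are placeholders):
curl -s -X POST http://localhost:2375/containers/create \
  -H "Content-Type: application/json" \
  -d '{"Image": "ubuntu", "Cmd": ["cat", "/etc/hosts"], "HostConfig": {"ExtraHosts": ["somehost:162.242.195.82"]}}'
The "host:IP" strings go into HostConfig.ExtraHosts, mirroring what docker run --add-host does.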

Related

Adding ghcr (GitHub Container Registry) to Synology Docker results in "Registry returned bad result"

When trying to add the Github Registry to Synology Docker, I always get a prompt saying "Registry returned bad result".
The URL I try to connect to is: https://ghcr.io
I'm trying to do the same (DS920+, latest DSM 7.1). According to this Reddit thread:
https://www.reddit.com/r/portainer/comments/u1vf1s/how_to_add_ghcr_as_a_registry/
it used to work with 'docker.pkg.github.com' as the registry URL, but according to the current GitHub docs that was the old namespace and the registry is now 'https://ghcr.io':
https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry
The docs imply authentication throughout, so it may not be possible to use the registry without authentication (I tried with access tokens, without success).
I opened a Synology support ticket, let's see what they can say.
2022-10-27 - Synology Support replied, and the official statement is that the token authentication currently used by the GitHub Container Registry is not supported by the DSM Docker package's GUI. It is possible to SSH into the DSM and use docker from the command line.
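For example, once logged in over SSH (username, token, and image are placeholders; the token needs at least the read:packages scope):
$ echo "$GHCR_PAT" | docker login ghcr.io -u GITHUB_USERNAME --password-stdin
$ docker pull ghcr.io/OWNER/IMAGE:TAG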

Docker Hub: Remote Build Trigger doesn't work

I am trying to trigger an image build via the remote build trigger URL.
I have followed the Docker Hub documentation, but the actual Docker Hub UI doesn't have the options described in the docs for the remote build trigger.
Docker Hub interface as shown in the docs: (screenshot)
My Docker Hub interface: (screenshot)
I don't see a token option anywhere.
I also tried hitting the trigger URL directly in the browser, but that doesn't help either.
I guess I haven't understood this correctly, or there is a serious bug in Docker Hub's remote build trigger.
It seems you are following unofficial, outdated documentation; Docker Hub redesigned this part some time ago. You no longer need a separate token, because it is already included in the trigger URL. But opening the URL in a browser is not enough: it must be a POST request, so try it from the command line with curl, for example:
curl -X POST "<the-trigger-url-here>"
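The old build-trigger docs also showed POSTing a JSON body to build a specific branch or tag, along these lines (trigger URL and branch name are placeholders):
curl -X POST -H "Content-Type: application/json" \
  --data '{"source_type": "Branch", "source_name": "master"}' \
  "<the-trigger-url-here>"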

Does the Docker remote API support creating a container using a docker-compose file?

I have an app that creates docker containers using the docker remote api, which is done using this library.
So far it is working fine with simple configuration options for container creation. Now I need to create containers with many more config options, so I am wondering if I can use a docker-compose file. The library is built against v1.23 of the Docker remote API spec; does the remote API support creating a container from a compose file?
I cannot find such an option in the documentation, but I am wondering if I am looking in the wrong place.
No; Docker Compose itself is an application that uses the API. You'd need to run docker-compose up or something similar as a shell command if you wanted to use it directly.
(You might be able to hack into its internals if you have a Python program, but not from Java.)
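In practice that means running Compose alongside your app and letting it translate the YAML into the individual API calls for you; a minimal sketch (file name assumed):
docker-compose -f docker-compose.yml up -d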

Kong in Docker: Configuring API endpoints without curl

Is there a way to add API endpoints in Kong without using curl? I have Kong up and running in a Docker container using docker-compose, and I would like to be able to pass in a configuration file (or what-have-you) on container spin-up that outlines the endpoints I would like set up. Is this possible? This is the closest I have found to a solution: http://blog.toast38coza.me/kong-up-and-running-part-2-defining-our-api-gateway-with-ansible/
One option could be to use the YAML-driven Kongfig tool to manage the config of the machine. You could run it externally to the container, e.g. via a CI process (Jenkins etc.), or in theory add a bootstrap action that runs Kongfig locally within the container.
You can use Kongfig as Mark said, or the Konga GUI.
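A rough sketch of the Kongfig approach (the API name, upstream URL, and admin port are placeholders; the exact attribute names depend on your Kong version):
$ cat > config.yml <<'EOF'
apis:
  - name: example-api
    attributes:
      upstream_url: "http://backend:3000"
      uris:
        - /example
EOF
$ kongfig apply --path config.yml --host localhost:8001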

Google Container Registry access denied when pushing docker container

I am trying to push my docker container to the Google Container Registry, following this tutorial, but when I run
gcloud docker push b.gcr.io/my-bucket/image-name
I get the error:
The push refers to a repository [b.gcr.io/my-bucket/my-image] (len: 1)
Sending image list
Error: Status 403 trying to push repository my-bucket/my-image: "Access denied."
I couldn't find any further explanation (no -D, --debug, or --verbose arguments were recognized); gcloud auth list and docker info tell me I'm connected to both services.
Is there anything I'm missing?
You need to make sure the VM instance has sufficient access rights. You can set these at the time you create the instance, or edit them later (see the note below). There are two ways to manage this access:
Option 1
Under Identity and API access, select Allow full access to all Cloud APIs.
Option 2 (recommended)
Under Identity and API access, select Set access for each API, then choose Read Write for Storage.
Note that you can also change these settings even after you have already created the instance. To do this, you'll first need to stop the instance, and then edit the configuration as mentioned above.
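With a recent gcloud this can be scripted, for example (instance name and zone are placeholders):
$ gcloud compute instances stop my-instance --zone us-central1-a
$ gcloud compute instances set-service-account my-instance --zone us-central1-a --scopes storage-rw
$ gcloud compute instances start my-instance --zone us-central1-a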
Use gsutil to check the ACL and make sure you have permission to write to the bucket:
$ gsutil acl get gs://<my-bucket>
You'll need to check which group the account you are using is in ('owners', 'editors', 'viewers', etc.).
EDIT: I have experienced a very similar problem to this myself recently and, as @lampis mentions in his post, it's because the correct permission scopes were not set when I created the VM I was trying to push the image from. Unfortunately, at the time there was no way of changing the scopes once a VM had been created, so you had to delete the VM (making sure the disks are set to auto-delete!) and recreate it with the correct scopes ('compute-rw' and 'storage-rw' seem sufficient). It doesn't take long though ;-).
See the --scopes section here: https://cloud.google.com/sdk/gcloud/reference/compute/instances/create
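For example (instance name is a placeholder; compute-rw and storage-rw are gcloud's scope aliases):
$ gcloud compute instances create my-build-vm --scopes compute-rw,storage-rw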
I am seeing this too, but on an intermittent basis; e.g. I may get the error denied: Permission denied for "latest" from request "/v2/...", but when trying again it will work.
Is anyone else experiencing this?
For me, I had forgotten to prepend gcloud to the command (and I was wondering how docker would authenticate):
$ gcloud docker push <image>
In your terminal, run the command below:
$ sudo docker login -u oauth2accesstoken -p "$(gcloud auth print-access-token)" https://[HOSTNAME]
where [HOSTNAME] is your container registry location (either gcr.io, us.gcr.io, eu.gcr.io, or asia.gcr.io; check your tagged images to be sure by running $ sudo docker images).
If this doesn't fix it, try reviewing the VM's access scopes.
If you are using Docker 1.7.0, there was a breaking change in how it handles authentication, which affects users who mix gcloud docker and docker login.
Be sure you are using the latest version of gcloud via gcloud components update.
So far this seems to affect gcloud docker, docker-compose, and other tools that read/write the Docker auth file.
Hopefully this helps.
Same problem here; the troubleshooting section at https://cloud.google.com/tools/container-registry/#access_denied wasn't very helpful. I have Docker and gcloud fully updated. I don't know what else to do.
BTW, I'm trying to push to "gcr.io".
Fixed. I was using a VM in Compute Engine as my development machine, and it looks like I hadn't given it enough rights to Storage.
I had the same access denied problem, and I resolved it by creating a new image using docker tag:
docker tag IMAGE_WITH_ACCESS_DENIED gcr.io/my-project/my-new-image:test
After that I could push it to the Container Registry:
gcloud docker -- push gcr.io/my-project/my-new-image:test
Today I also got this error inside Jenkins running on Google Kubernetes Engine when pushing a docker container. The cause was a node pool upgrade from 1.9.6-gke.1 to 1.9.7-gke.0 in GCP that I had done beforehand. It worked again after the downgrade.
You need to log in to gcloud from the machine you are on:
gcloud auth login
