All my team members use the same server as a Docker remote context. I have set up a project using the VS Code Dev Containers extension with a devcontainer.json like this:
{
  "name": "MyProject - DevContainer",
  "dockerFile": "../Dockerfile",
  "context": "..",
  "workspaceMount": "source=vsc-myprojekt-${localEnv:USERNAME},target=/workspace,type=volume",
  "workspaceFolder": "/workspace",
  "extensions": [
    "ms-python.python",
    "ms-python.vscode-pylance"
  ],
  "postCreateCommand": "/opt/entrypoint.sh",
  "mounts": [
    "source=/media/Pool/,target=/Pool,type=bind",
    "source=cache,target=/cache,type=volume"
  ]
}
This worked fine for me, but now that my colleagues are starting their devcontainers, we have the problem that a newly started devcontainer kills other devcontainers that are already running.
We found that the local folder of the project seems to be how already running devcontainers are identified:
[3216 ms] Start: Run: docker ps -q -a --filter label=devcontainer.local_folder=d:\develop\myproject
[3839 ms] Start: Run: docker inspect --type container 8ca7d3a44662
[4469 ms] Start: Removing Existing Container
As we all use the same path, this identification based on the local folder is problematic. Is there a way to use other labels?
This seems to be a bug; the issue I opened was accepted as a bug report.
I am deploying the Vault Docker image on Ubuntu 16.04. I can initialize Vault successfully from inside the container itself, but I can't get any REST responses, and even curl does not work.
I am doing the following:
Create a config file local.json:
{
  "listener": [{
    "tcp": {
      "address": "127.0.0.1:8200",
      "tls_disable": 1
    }
  }],
  "storage": {
    "file": {
      "path": "/vault/data"
    }
  },
  "max_lease_ttl": "10h",
  "default_lease_ttl": "10h"
}
under the /vault/config directory.
Run the command to start the container:
docker run -d -p 8200:8200 -v /home/vault:/vault --cap-add=IPC_LOCK vault server
Enter a shell in the container:
docker exec -it containerId /bin/sh
Inside the container, run:
export VAULT_ADDR='http://127.0.0.1:8200' and then vault init
It works fine, but when I try to send a REST request to check whether Vault is initialized:
GET request to the following URL: http://Ip-of-the-docker-host:8200/v1/sys/init
I get no response.
Even the curl command fails:
curl http://127.0.0.1:8200/v1/sys/init
curl: (56) Recv failure: Connection reset by peer
I didn't find a proper explanation anywhere online of what the problem is, or whether I am doing something wrong.
Any ideas?
If a server running in a Docker container binds to 127.0.0.1, it's unreachable from anything outside that specific container (and since containers usually only run a single process, that means it's unreachable by anyone). Change the listener address to 0.0.0.0:8200; if you need to restrict access to the Vault server, bind it to a specific host address in the docker run -p option.
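For illustration, a minimal sketch of the corrected listener block in local.json (only the address changes; the rest of the file from the question stays the same):
{
  "listener": [{
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": 1
    }
  }],
  ...
}
After recreating the container, curl http://Ip-of-the-docker-host:8200/v1/sys/init should get a response, because the published port now reaches a listener that accepts connections from outside the container.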
Background:
For development purposes I do a lot of docker-compose up -d and docker-compose stop.
To view logs of a container I do either
- docker logs --details --since=1m -t -f container_name
or
- docker inspect --format='{{.LogPath}}' container_name
cat path-from-previous
The problem is that when I want to view logs from 10 days ago, there are none; the logs only contain today's entries.
When I do a docker inspect container_name I get the following:
"Created": "todays-timestamp"
My logging is the default config:
"LogConfig": {
  "Type": "json-file",
  "Config": {}
},
The reason behind this is that there is no rotation of your Docker logs.
If you are using a Linux system, go to:
/etc/logrotate.d/
and create the file docker-container there, i.e. /etc/logrotate.d/docker-container.
Write this into the file:
/var/lib/docker/containers/*/*.log {
  rotate 7
  daily
  compress
  missingok
  delaycompress
  copytruncate
}
This takes the logs of all containers and rotates and compresses them daily.
You can test it with:
logrotate -fv /etc/logrotate.d/docker-container
Go into your Docker folder /var/lib/docker/containers/[CONTAINER ID]/ and you can see the rotated files.
Reference: https://sandro-keil.de/blog/logrotate-for-docker-container/
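As a side note beyond the original reference, the json-file driver can also rotate logs by itself; a minimal sketch of /etc/docker/daemon.json using its max-size and max-file options (the values are only examples) looks like this:
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "7"
  }
}
This only affects containers created after the Docker daemon is restarted with this configuration; existing containers keep their old log settings.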
I am trying to get Docker user namespaces to work with SELinux enabled on CentOS 7.5. However, I get this error every time:
docker run -itd --name temp -p 80:80 httpd
1a83588651b407e547881e15190b6d39692a7a2cf2df73dcaf4f37730ebdca65
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:301: running exec setns process for init caused \"exit status 40\"": unknown.
This does not happen if I turn off SELinux.
Here is my /etc/docker/daemon.json:
{
  "userns-remap": "dockerspace",
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"],
  "tls": true,
  "tlscacert": "/etc/pki/tls/certs/docker-ca.pem",
  "tlscert": "/etc/pki/tls/certs/docker-cert.pem",
  "tlskey": "/etc/pki/tls/private/docker-key.pem",
  "tlsverify": true,
  "selinux-enabled": true
}
uname -a output:
Linux atlantis.newtarget.net 3.10.0-862.9.1.el7.x86_64 #1 SMP Mon Jul 16 16:29:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
cat /proc/cmdline output:
BOOT_IMAGE=/vmlinuz-3.10.0-862.9.1.el7.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet LANG=en_US.UTF-8 namespace.unpriv_enable=1 user_namespace.enable=1
Any help is greatly appreciated. Thanks.
You've got a bit more going on than I did when I was getting that error, but here goes!
Based on your cat /proc/cmdline output it looks like you have already done:
sudo grubby --args="namespace.unpriv_enable=1" --update-kernel=/boot/vmlinuz-$(uname -r)
You might need to restart for this to take effect (if you haven't already).
You also need to make sure the value in /proc/sys/user/max_user_namespaces is greater than 0:
echo 12345 > /proc/sys/user/max_user_namespaces
With these settings, along with configuring /etc/subuid, /etc/subgid, and /etc/docker/daemon.json correctly, it worked for me with SELinux enabled. See the Docker documentation on user namespacing for how to configure those files (this must be done manually for each of them on CentOS/RHEL).
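For reference, a minimal sketch of the /etc/subuid and /etc/subgid entries for the dockerspace remap user from the question (the 100000:65536 range is just an example and must not overlap ranges already in use):
# /etc/subuid
dockerspace:100000:65536
# /etc/subgid
dockerspace:100000:65536
After editing these files and /etc/docker/daemon.json, restart the Docker daemon so the remapping takes effect.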
I'm not sure whether I have already logged in to a Docker registry on the command line using docker login. How can you test or see whether you are logged in or not, without trying to push?
Edit 2020
Referring back to the (closed) GitHub issue, where it is pointed out that there is no actual session or state:
docker login actually isn't creating any sort of persistent session, it is only storing the user's credentials on disk so that when authentication is required it can read them to login
As others have pointed out, an auths entry/node is added to the ~/.docker/config.json file (this also works for private registries) after you successfully log in:
{
  "auths": {
    "https://index.docker.io/v1/": {}
  },
  ...
When logging out, this entry is then removed:
$ docker logout
Removing login credentials for https://index.docker.io/v1/
Content of docker config.json after:
{
  "auths": {},
  ...
This file can be parsed by your script or code to check your login status.
Alternative method (re-login)
You can log in to Docker with docker login <repository>:
$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If
you don't have a Docker ID, head over to https://hub.docker.com to
create one.
Username:
If you are already logged in, the prompt will look like:
$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If
you don't have a Docker ID, head over to https://hub.docker.com to
create one.
Username (myusername): # <-- "myusername"
For the original explanation of ~/.docker/config.json, see the question: how can I tell if I'm logged into a private docker registry
I use one of the following two ways for this check:
1: View config.json file:
If you are logged in to "private.registry.com", you will see an entry like the following in ~/.docker/config.json:
"auths": {
  "private.registry.com": {
    "auth": "gibberishgibberishgibberishgibberishgibberishgibberish"
  }
}
2: Try docker login once again:
If you are trying to see whether you already have an active session with private.registry.com, try to log in again:
bash$ docker login private.registry.com
Username (logged-in-user):
If you get output like the above, it means logged-in-user already had an active session with private.registry.com. If you are just prompted for a username instead, that indicates there is no active session.
You can use the following command to see the username you are logged in with and the registry used:
docker system info | grep -E 'Username|Registry'
The answers here so far are not so useful:
docker info no longer provides this info
docker logout is a major inconvenience - unless you already know the credentials and can easily re-login
docker login responses seem quite unreliable and not so easy to parse programmatically
The solution that worked for me builds on #noobuntu's comment: if I already know the image that I want to pull, but I'm not sure whether the user is already logged in, I can do this:
try pulling target image
-> on failure:
     try logging in
     -> on failure: throw CannotLogInException
     -> on success:
          try pulling target image
          -> on failure: throw CannotPullImageException
          -> on success: (continue)
-> on success: (continue)
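A minimal shell sketch of that flow (the image and registry names are placeholders):
#!/bin/sh
IMAGE="registry.example.com/myorg/myimage:latest"   # placeholder

if ! docker pull "$IMAGE"; then
  # First pull failed: try logging in (prompts for credentials).
  docker login registry.example.com || { echo "cannot log in" >&2; exit 1; }
  # Retry the pull now that we are logged in.
  docker pull "$IMAGE" || { echo "cannot pull image" >&2; exit 1; }
fi
# continue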
The Docker CLI credential scheme is unsurprisingly uncomplicated; just take a look:
cat ~/.docker/config.json
{
  "auths": {
    "dockerregistry.myregistry.com": {},
    "https://index.docker.io/v1/": {}
This exists on Windows too (use Get-Content ~\.docker\config.json), and you can also poke around the credential tool, which also lists the username ... and I think you can even retrieve the password:
. "C:\Program Files\Docker\Docker\resources\bin\docker-credential-wincred.exe" list
{"https://index.docker.io/v1/":"kcd"}
For private registries, nothing is shown in docker info. However, the logout command will tell you if you were logged in:
$ docker logout private.example.com
Not logged in to private.example.com
(Though this will force you to log in again.)
At least in "Docker for Windows" you can see whether you are logged in to Docker Hub via the UI. Just right-click the Docker icon in the Windows notification area:
Just checked, today it looks like this:
$ docker login
Authenticating with existing credentials...
Login Succeeded
NOTE: this is on macOS with the latest versions of Docker CE and docker-credential-helper, both installed with Homebrew.
If you want a simple true/false value, you can pipe your ~/.docker/config.json to jq:
is_logged_in() {
  cat ~/.docker/config.json | jq -r --arg url "${REPOSITORY_URL}" '.auths | has($url)'
}
if [[ "$(is_logged_in)" == "false" ]]; then
  # do stuff, log in
fi
My AWS ECR build-script has:
ECR_HOSTNAME="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"
TOKEN=$(jq -r '.auths["'$ECR_HOSTNAME'"]["auth"]' ~/.docker/config.json)
curl --fail --header "Authorization: Basic $TOKEN" https://$ECR_HOSTNAME/v2/
If accessing ECR fails, a login is done:
aws ecr get-login-password --region ${REGION} | docker login --username AWS --password-stdin https://$ECR_HOSTNAME
For this to work, a separate Docker credential store cannot be used; the default credential store of ~/.docker/config.json is assumed.
Use a command like the one below:
docker info | grep 'name'
WARNING: No swap limit support
Username: jonasm2009
On Windows you can inspect the login "authorizations" (auths) by looking at this file:
[USER_HOME_DIR]\.docker\config.json
Example:
c:\USERS\YOUR_USERNAME\.docker\config.json
It will look something like this for Windows credentials:
{
  "auths": {
    "HOST_NAME_HERE": {},
    "https://index.docker.io/v1/": {}
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/18.09.0 (windows)"
  },
  "credsStore": "wincred",
  "stackOrchestrator": "swarm"
}
On Linux, if you have secretservice enabled via the credsStore option in your ~/.docker/config.json like below:
"credsStore": "secretservice",
then you will not see the credentials in config.json. Instead you need to query them using docker-credential-desktop; see the answer below for more details:
How to know if docker is already logged in to a docker registry server
In Azure Container Registry (ACR) the following works as a login check:
registry="contosoregistry.azurecr.io"
curl -v --header "Authorization: Bearer $access_token" https://$registry/v2/_catalog
If the access token has expired, an HTTP 401 will be returned.
Options for getting an access token are reading it from ~/.docker/config.json, or requesting one from https://$registry/oauth2/token using a refresh token stored in the Docker credStore: echo $registry | docker-credential-desktop get.
More information about refresh tokens and access tokens is in the ACR integration docs.
Too many answers above are just about how to check the login status manually. To do it from the command line you can use the command below:
cat ~/.docker/config.json | jq '.auths["<MY_REGISTRY_HOSTNAME>"]' -e > /dev/null && echo "OK" || echo "ERR"
Ensure you have the jq command on your machine. To test that, run jq --version. If you don't get a version output, follow the directions here to install it: https://stedolan.github.io/jq/download/
Replace <MY_REGISTRY_HOSTNAME> with your registry address.
When you run it, it returns OK if you have already logged in successfully, otherwise ERR.
NOTE: if you used a credential helper to log in (e.g. the Google Cloud auth tool for Container Registry), replace the .auths keyword with .credHelpers.
As pointed out by #Christian, it is best to try the operation first and then log in only if necessary. The problem is that "if necessary" is not that obvious to do robustly. One approach is to compare the stderr of the Docker operation with some strings that are known (by trial and error). For example:
try "docker OPERATION"
if it failed:
capture the stderr of "docker OPERATION"
if it ends with "no basic auth credentials":
try docker login
else if it ends with "not found":
fatal error: image name/tag probably incorrect
else if it ends with <other stuff you care to trap>:
...
else:
fatal error: unknown cause
try docker OPERATION again
if this fails: you're SOL!
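A rough shell sketch of that approach, using docker pull as the OPERATION and a simple substring match instead of a strict "ends with" check (image and registry names are placeholders):
IMAGE="registry.example.com/myorg/myimage:latest"   # placeholder

# Capture only stderr of the first attempt; stdout is discarded.
if ! err=$(docker pull "$IMAGE" 2>&1 >/dev/null); then
  case "$err" in
    *"no basic auth credentials"*)
      docker login registry.example.com ;;
    *"not found"*)
      echo "fatal error: image name/tag probably incorrect" >&2; exit 1 ;;
    *)
      echo "fatal error: unknown cause: $err" >&2; exit 1 ;;
  esac
  # Try the operation again after logging in.
  docker pull "$IMAGE" || { echo "still failing, giving up" >&2; exit 1; }
fi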
Here's a PowerShell command to check whether you have previously logged into the registry, making use of the $HOME/.docker/config.json file that others have mentioned:
(Get-Content $HOME/.docker/config.json | ConvertFrom-Json).auths.PSobject.Properties.name -Contains "<registry_url>"
This returns a True/False boolean, so you can use it as follows:
if ((Get-Content $HOME/.docker/config.json | ConvertFrom-Json).auths.PSobject.Properties.name -Contains "<registry_url>") {
  Write-Host Already logged into docker registry
} else {
  Write-Host Logging into docker registry
  docker login
}
If you want it not to fail when the file doesn't exist, you need an extra check; since the condition is now negated, the branches are swapped:
if ( (-Not (Test-Path $HOME/.docker/config.json)) -Or (-Not ((Get-Content $HOME/.docker/config.json | ConvertFrom-Json).auths.PSobject.Properties.name -Contains "<registry_url>")) )
{
  Write-Host Logging into docker registry
  docker login
} else {
  Write-Host Already logged into docker registry
}
I chose to use the -Not statements because, for some reason, when you chain a command after a failed condition with -And instead of -Or, the command errors out.